The Iterative Design Process in Research and Development: A Work Experience Paper
NASA Technical Reports Server (NTRS)
Sullivan, George F. III
2013-01-01
The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.
The Effect of Iteration on the Design Performance of Primary School Children
ERIC Educational Resources Information Center
Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.
2015-01-01
Iteration is an essential element of the design process: engineers optimize their designs through iteration. Research on iteration in Primary Design Education is, however, scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…
Conjecture Mapping to Optimize the Educational Design Research Process
ERIC Educational Resources Information Center
Wozniak, Helen
2015-01-01
While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volumes of data generated, and the context bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…
Campbell, Megan M; Susser, Ezra; Mall, Sumaya; Mqulwana, Sibonile G; Mndini, Michael M; Ntola, Odwa A; Nagdee, Mohamed; Zingela, Zukiswa; Van Wyk, Stephanus; Stein, Dan J
2017-01-01
Obtaining informed consent is a great challenge in global health research. There is a need for tools that can screen for and improve potential research participants' understanding of the research study at the time of recruitment. Limited empirical research has been conducted in low- and middle-income countries evaluating informed consent processes in genomics research. We sought to investigate the quality of informed consent obtained in a South African psychiatric genomics study. A Xhosa-language version of the University of California, San Diego Brief Assessment of Capacity to Consent Questionnaire (UBACC) was used to screen for capacity to consent and improve understanding through iterative learning in a sample of 528 Xhosa people with schizophrenia and 528 controls. We address two questions: firstly, whether research participants' understanding of the research study improved through iterative learning; and secondly, what predictors were associated with better understanding of the research study at the initial screening. During screening, 290 (55%) cases and 172 (33%) controls scored below the 14.5 cut-off for acceptable understanding of the research study elements; however, after iterative learning, only 38 (7%) cases and 13 (2.5%) controls continued to score below this cut-off. Significant variables associated with increased understanding of the consent information included the psychiatric nurse recruiter conducting the consent screening, higher participant level of education, and being a control. The UBACC proved an effective tool to improve understanding of research study elements during consent, for both cases and controls. The tool holds utility for complex studies such as those involving genomics, where iterative learning can be used to make significant improvements in understanding of research study elements. The UBACC may be particularly important in groups with severe mental illness and lower education levels. Study recruiters play a significant role in managing the quality of the informed consent process.
NASA Astrophysics Data System (ADS)
Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin
2016-09-01
Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process can help to improve material removal accuracy. The removal-function correction principle can effectively compensate for the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with long machining times, so removing a small amount of material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which will also generate machining errors and suppress the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur planar is performed, which shows that, in similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.
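As an illustration of the trade-off described above, a minimal Python sketch (all parameters invented, not from the paper) of why several shallow figuring passes converge more gently than one deep pass:

```python
import numpy as np

# Minimal 1D sketch of iterative figuring: each pass computes a dwell map from
# the residual error and removes material via convolution with a Gaussian
# removal function. Beam width, efficiency, and the error profile are assumed.
x = np.linspace(-50, 50, 501)                            # mm across a 100 mm part
error = 50e-9 * np.exp(-(x / 30) ** 2) * np.cos(x / 5)   # initial figure error (m)

sigma = 2.5                                              # removal-function width (mm), assumed
beam = np.exp(-x**2 / (2 * sigma**2))
beam /= beam.sum()                                       # normalized removal footprint

for i in range(3):                                       # three "free-measurement" passes
    dwell = np.clip(error, 0.0, None)                    # dwell longer where material is high
    error = error - 0.8 * np.convolve(dwell, beam, mode="same")  # 0.8: assumed efficiency
    print(f"pass {i + 1}: rms error = {np.std(error) * 1e9:.2f} nm")
```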
Learning Objects: A User-Centered Design Process
ERIC Educational Resources Information Center
Branon, Rovy F., III
2011-01-01
Design research systematically creates or improves processes, products, and programs through an iterative progression connecting practice and theory (Reinking, 2008; van den Akker, 2006). Developing new instructional systems design (ISD) processes through design research is necessary when new technologies emerge that challenge existing practices…
ERIC Educational Resources Information Center
McClain, Arianna D.; Hekler, Eric B.; Gardner, Christopher D.
2013-01-01
Background: Previous research from the fields of computer science and engineering highlight the importance of an iterative design process (IDP) to create more creative and effective solutions. Objective: This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall--based…
WESTERN RESEARCH INSTITUTE CONTAINED RECOVERY OF OILY WASTES (CROW) PROCESS - ITER
This report summarizes the findings of an evaluation of the Contained Recovery of Oily Wastes (CROW) technology developed by the Western Research Institute. The process involves the injection of heated water into the subsurface to mobilize oily wastes, which are removed from the ...
ERIC Educational Resources Information Center
Jones, Sarah-Louise; Procter, Richard; Younie, Sarah
2015-01-01
Research alone does not inform practice; rather, a process of knowledge translation is required to enable research findings to become meaningful for practitioners in their contextual settings. However, the translational process needs to be an iterative cycle so that the practice itself can be reflected upon and thereby inform the ongoing research…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cristescu, I.; Cristescu, I. R.; Doerr, L.
2008-07-15
The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performance of various components for WDS and ISS processes under various working conditions and configurations as needed for ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data related to the relevant ITER operating modes. The operational availability and performance of the ISS-WDS have an impact on ITER fuel cycle subsystems, with consequences for the design integration. Preliminary experimental data on the TRENTA facility are presented. (authors)
NASA Technical Reports Server (NTRS)
Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-performing components with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint, and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.
Defense Advanced Research Projects Agency (DARPA) Network Archive (DNA)
2008-12-01
therefore decided for an iterative development process even within such a small project. The first iteration consisted of conducting specific...
NASA Astrophysics Data System (ADS)
Addor, Nans; Ewen, Tracy; Johnson, Leigh; Ćöltekin, Arzu; Derungs, Curdin; Muccione, Veruska
2015-08-01
In the context of climate change, both climate researchers and decision makers deal with uncertainties, but these uncertainties differ in fundamental ways. They stem from different sources, cover different temporal and spatial scales, might or might not be reducible or quantifiable, and are generally difficult to characterize and communicate. Hence, a mutual understanding between current and future climate researchers and decision makers must evolve for adaptation strategies and planning to progress. Iterative two-way dialogue can help to improve the decision-making process by bridging current top-down and bottom-up approaches. One way to cultivate such interactions is by providing venues for these actors to interact and exchange views on the uncertainties they face. We use a workshop-seminar series involving academic researchers, students, and decision makers as an opportunity to put this idea into practice and evaluate it. Seminars, case studies, and a round table allowed participants to reflect upon and experiment with uncertainties. An opinion survey conducted before and after the workshop-seminar series allowed us to qualitatively evaluate its influence on the participants. We find that the event stimulated new perspectives on research products and communication processes, and we suggest that similar events may ultimately contribute to the midterm goal of improving support for decision making in a changing climate. Therefore, we recommend integrating bridging events into university curricula to foster interdisciplinary and iterative dialogue among researchers, decision makers, and students.
Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh
2017-04-01
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Improvement of tritium accountancy technology for ITER fuel cycle safety enhancement
NASA Astrophysics Data System (ADS)
O'hira, S.; Hayashi, T.; Nakamura, H.; Kobayashi, K.; Tadokoro, T.; Nakamura, H.; Itoh, T.; Yamanishi, T.; Kawamura, Y.; Iwai, Y.; Arita, T.; Maruyama, T.; Kakuta, T.; Konishi, S.; Enoeda, M.; Yamada, M.; Suzuki, T.; Nishi, M.; Nagashima, T.; Ohta, M.
2000-03-01
In order to improve the safe handling and control of tritium for the ITER fuel cycle, effective in situ tritium accounting methods have been developed at the Tritium Process Laboratory in the Japan Atomic Energy Research Institute under one of the ITER-EDA R&D tasks. The remote, multilocation analysis of process gases by laser Raman spectroscopy that was developed and tested could provide a measurement of hydrogen isotope gases with a detection limit of 0.3 kPa and analytical periods of 120 s. An in situ tritium inventory measurement by application of a `self-assaying' storage bed with 25 g tritium capacity could provide a measurement with the required detection limit of less than 1%, along with a design proof of a bed with 100 g tritium capacity.
Parallel computation of multigroup reactivity coefficient using iterative method
NASA Astrophysics Data System (ADS)
Susmikanti, Mike; Dewayatna, Winter
2013-09-01
One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets take the form of stainless steel tubes containing layers of high-enriched uranium. The FPM irradiation tube is intended to produce fission products, which are widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can interfere with core performance; one such disturbance comes from changes in flux or reactivity. It is therefore necessary to study a method for calculating safety margins as configuration changes occur over the life of the reactor, which makes faster code an absolute necessity. The neutron safety margin for the research reactor can be reassessed without modifying the reactivity calculation, which is an advantage of the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model is computationally complex, and several parallel algorithms with iterative methods have been developed for solving large sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. This research developed a code for reactivity calculation, one element of safety analysis, using parallel processing; the calculation can be done more quickly and efficiently by exploiting the parallelism of multicore computers. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.
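For concreteness, a minimal Python sketch of the power iteration mentioned above; the matrices are small positive stand-ins, not actual multigroup diffusion operators:

```python
import numpy as np

# Power iteration for a criticality calculation: the dominant eigenvalue of
# A = M^{-1} F (loss operator vs. fission source) is k_eff, and reactivity
# follows as rho = (k_eff - 1) / k_eff. Matrices below are invented stand-ins.
rng = np.random.default_rng(0)
n = 50
M = np.diag(rng.uniform(1.0, 2.0, n)) + 0.01 * rng.random((n, n))  # loss operator
F = np.diag(rng.uniform(0.5, 1.5, n)) + 0.02 * rng.random((n, n))  # fission source

A = np.linalg.solve(M, F)              # dense stand-in for M^{-1} F
phi = np.ones(n) / np.sqrt(n)          # initial flux guess
k = 0.0
for _ in range(500):
    w = A @ phi                        # apply the iteration operator
    k_new = np.linalg.norm(w)          # dominant-eigenvalue estimate
    phi = w / k_new                    # renormalize the flux
    if abs(k_new - k) < 1e-12:
        break
    k = k_new

rho = (k - 1.0) / k                    # reactivity from k_eff
print(f"k_eff = {k:.6f}, reactivity = {rho:.4e}")
```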
Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities
2009-02-01
levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability...research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows...FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Strawhacker, C.; Pulsifer, P. L.; Thurmes, N.
2016-12-01
The United States National Science Foundation funded PermaData project led by the National Snow and Ice Data Center (NSIDC) with a team from the Global Terrestrial Network for Permafrost (GTN-P) aimed to improve permafrost data access and discovery. We developed a Data Integration Tool (DIT) to significantly speed up the time of manual processing needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the GTN-P. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets. Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs. Originally it was written to capture a scientist's personal, iterative, data manipulation and quality control process of visually and programmatically iterating through inconsistent input data, examining it to find problems, adding operations to address the problems, and rerunning until the data could be translated into the GTN-P standard format. Iterative development of this tool led to a Fortran/Python hybrid then, with consideration of users, licensing, version control, packaging, and workflow, to a publicly available, robust, usable application. Transitioning to Python allowed the use of open source frameworks for the workflow core and integration with a JavaScript graphical workflow interface. DIT is targeted to automatically handle 90% of the data processing for field scientists, modelers, and non-discipline scientists. It is available as an open source tool in GitHub packaged for a subset of Mac, Windows, and UNIX systems as a desktop application with a graphical workflow manager. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
Action Research: Enhancing Classroom Practice and Fulfilling Educational Responsibilities
ERIC Educational Resources Information Center
Young, Mark R.; Rapp, Eve; Murphy, James W.
2010-01-01
Action Research is an applied scholarly paradigm resulting in action for continuous improvement in our teaching and learning techniques, offering faculty immediate classroom payback and providing documentation of meeting our educational responsibilities as required by AACSB standards. This article reviews the iterative action research process of…
Optical Computing Based on Neuronal Models
1988-05-01
walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled...collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is...with architectures, implementations, and programming; and material research is called for. Our future research in neurodynamics will continue to
Interdisciplinary Research: Performance and Policy Issues.
ERIC Educational Resources Information Center
Rossini, Frederick A.; Porter, Alan L.
1981-01-01
Successful interdisciplinary research performance, it is suggested, depends on such structural and process factors as leadership, team characteristics, study bounding, iteration, communication patterns, and epistemological factors. Appropriate frameworks for socially organizing the development of knowledge such as common group learning, modeling,…
Inside the Black Box: Tracking Decision-Making in an Action Research Study
ERIC Educational Resources Information Center
Smith, Cathryn
2017-01-01
Action research has been described as "designing the plane while flying it" (Herr & Anderson, 2005, p. 69). A black box documented the researcher's decisions while facilitating leadership development sessions with teacher leaders. Ten process folio steps informed the study through six iterations. Planning steps included a design…
Rater variables associated with ITER ratings.
Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin
2013-10-01
Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4 % of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log transformed time taken to complete the ITER [β = -0.06, 95 % confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.
The Primary Physical Education Curriculum Process: More Complex Than You Might Think!!
ERIC Educational Resources Information Center
Jess, Mike; Carse, Nicola; Keay, Jeanne
2016-01-01
In this paper, we present the curriculum development process as a complex, iterative and integrated phenomenon. Building on the early work of Stenhouse [1975, "An Introduction to Curriculum Research and Development". London: Heinemann Educational], we position the teacher at the heart of this process and extend his ideas by exploring how…
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key steps of the IDR algorithms consist in the construction of the embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to some fixed subspace. Other independent approaches to the study and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures that provide various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces presents an original interpretation of the modified algorithms in the Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
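For a concrete point of reference, IDR(1) is mathematically closely related to BiCGSTAB, so a standard Krylov solver call gives a feel for the problem class; a minimal SciPy sketch on a generic nonsymmetric sparse system (the matrix is an invented stand-in, not from the paper):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Nonsymmetric, diagonally dominant tridiagonal SLAE as a stand-in for the
# large nonsymmetric systems discussed above.
n = 1000
A = diags([-1.2, 2.5, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)   # info == 0 signals convergence
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```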
NASA Astrophysics Data System (ADS)
Fu, Linyun; Ma, Xiaogang; Zheng, Jin; Goldstein, Justin; Duggan, Brian; West, Patrick; Aulenbach, Steve; Tilmes, Curt; Fox, Peter
2014-05-01
This poster will show how we used a case-driven iterative methodology to develop an ontology to represent the content structure and the associated provenance information in a National Climate Assessment (NCA) report of the US Global Change Research Program (USGCRP). We applied the W3C PROV-O ontology to implement a formal representation of provenance. We argue that the use case-driven, iterative development process and the application of a formal provenance ontology help efficiently incorporate domain knowledge from earth and environmental scientists in a well-structured model interoperable in the context of the Web of Data.
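A minimal sketch of recording provenance with the W3C PROV-O vocabulary using rdflib; the URIs are invented placeholders, not actual USGCRP identifiers:

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

# Record that a report figure was derived from a dataset and attributed to an
# agent, using PROV-O terms (prov:wasDerivedFrom, prov:wasAttributedTo).
PROV = Namespace("http://www.w3.org/ns/prov#")
g = Graph()
g.bind("prov", PROV)

figure  = URIRef("http://example.org/nca/report3/figure2-1")     # placeholder
dataset = URIRef("http://example.org/nca/datasets/temperature")  # placeholder
author  = URIRef("http://example.org/people/jane-doe")           # placeholder

g.add((figure, RDF.type, PROV.Entity))
g.add((dataset, RDF.type, PROV.Entity))
g.add((figure, PROV.wasDerivedFrom, dataset))   # figure derived from dataset
g.add((figure, PROV.wasAttributedTo, author))   # agent responsible for it

print(g.serialize(format="turtle"))
```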
Stevenson, Fiona A; Gibson, William; Pelletier, Caroline; Chrysikou, Vasiliki; Park, Sophie
2015-05-08
UK-based research conducted within a healthcare setting generally requires approval from the National Research Ethics Service. Research ethics committees are required to assess a vast range of proposals, differing in both their topic and methodology. We argue the methodological benchmarks with which research ethics committees are generally familiar and which form the basis of assessments of quality do not fit with the aims and objectives of many forms of qualitative inquiry and their more iterative goals of describing social processes/mechanisms and making visible the complexities of social practices. We review current debates in the literature related to ethical review and social research, and illustrate the importance of re-visiting the notion of ethics in healthcare research. We present an analysis of two contrasting paradigms of ethics. We argue that the first of these is characteristic of the ways that NHS ethics boards currently tend to operate, and the second is an alternative paradigm, that we have labelled the 'iterative' paradigm, which draws explicitly on methodological issues in qualitative research to produce an alternative vision of ethics. We suggest that there is an urgent need to re-think the ways that ethical issues are conceptualised in NHS ethical procedures. In particular, we argue that embedded in the current paradigm is a restricted notion of 'quality', which frames how ethics are developed and worked through. Specific, pre-defined outcome measures are generally seen as the traditional marker of quality, which means that research questions that focus on processes rather than on 'outcomes' may be regarded as problematic. We show that the alternative 'iterative' paradigm offers a useful starting point for moving beyond these limited views. We conclude that a 'one size fits all' standardisation of ethical procedures and approach to ethical review acts against the production of knowledge about healthcare and dramatically restricts what can be known about the social practices and conditions of healthcare. Our central argument is that assessment of ethical implications is important, but that the current paradigm does not facilitate an adequate understanding of the very issues it aims to invigilate.
Learner Centred Design for a Hybrid Interaction Application
ERIC Educational Resources Information Center
Wood, Simon; Romero, Pablo
2010-01-01
Learner centred design methods highlight the importance of involving the stakeholders of the learning process (learners, teachers, educational researchers) at all stages of the design of educational applications and of refining the design through an iterative prototyping process. These methods have been used successfully when designing systems…
Comparisons of Observed Process Quality in German and American Infant/Toddler Programs
ERIC Educational Resources Information Center
Tietze, Wolfgang; Cryer, Debby
2004-01-01
Observed process quality in infant/toddler classrooms was compared in Germany (n = 75) and the USA (n = 219). Process quality was assessed with the Infant/Toddler Environment Rating Scale (ITERS) and parent attitudes about ITERS content with the ITERS Parent Questionnaire (ITERS-PQ). The ITERS had comparable reliabilities in the two countries and…
Convergence of Proximal Iteratively Reweighted Nuclear Norm Algorithm for Image Processing.
Sun, Tao; Jiang, Hao; Cheng, Lizhi
2017-08-25
Nonsmooth, nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for nonsmooth, nonconvex matrix minimization. In this paper, we aim to investigate the convergence of the algorithm. Using the Kurdyka-Łojasiewicz property, we prove that the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings.
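A minimal NumPy sketch of the algorithm's core step, weighted singular-value thresholding with weights recomputed from the previous iterate, on an invented low-rank denoising toy problem (all parameters are assumptions, not from the paper):

```python
import numpy as np

# Toy proximal iteratively reweighted nuclear-norm minimization for
# min_X 0.5*||X - Y||_F^2 + lam * sum_i w_i * sigma_i(X), with weights
# w_i = 1/(sigma_i + eps) recomputed from the previous iterate so the penalty
# approximates a nonconvex (log-like) surrogate of the rank.
rng = np.random.default_rng(1)
L_true = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))  # rank 5
Y = L_true + 0.1 * rng.standard_normal((60, 60))                      # noisy data

lam, eps = 1.0, 1e-2
X = Y.copy()
for _ in range(30):
    w = 1.0 / (np.linalg.svd(X, compute_uv=False) + eps)  # reweighting step
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    X = U @ np.diag(np.maximum(s - lam * w, 0.0)) @ Vt    # weighted SVT (prox)

print("relative error:", np.linalg.norm(X - L_true) / np.linalg.norm(L_true))
```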
ERIC Educational Resources Information Center
Thein, Amanda Haertling; Barbas, Patricia; Carnevali, Christine; Fox, Ashleigh; Mahoney, Amanda; Vensel, Scott
2012-01-01
This paper details a teacher-researcher effort to investigate effective instructional practices for teaching multicultural literature through a collaborative, iterative process of inquiry driven by tentative, theoretical principles. The study began with a distillation of recent scholarship on multicultural literature response into a set of…
Using Design-Based Research in Informal Environments
ERIC Educational Resources Information Center
Reisman, Molly
2008-01-01
Design-Based Research (DBR) has been a tool of the learning sciences since the early 1990s, used as a way to improve and study learning environments. Using an iterative process of design with the goal of refining theories of learning, researchers and educators now use DBR to identify "how" to make a learning environment work. They then draw…
Development of a practice-based research program.
Hawk, C; Long, C R; Boulanger, K
1998-01-01
To establish an infrastructure to collect accurate data from ambulatory settings. The program was developed through an iterative model governed by a process of formative evaluation. The three iterations were a needs assessment, a feasibility study, and a pilot project. Necessary program components were identified as infrastructure, practitioner-researcher partnership, centralized data management, and standardized quality assurance measures. Volunteer chiropractors and their staff collected data on patients in their practices in ambulatory settings in the U.S. and Canada. Evaluative measures were counts of participants, patients, and completed forms. Standardized, validated, and reliable measures collected by patient self-report were used to assess treatment outcomes. These included the SF-36 or SF-12 Health Survey, the Pain Disability Index, and the Global Well-Being Scale. For characteristics for which appropriate standardized instruments were not available, questionnaires were designed and pilot-tested before use. Information was gathered on practice and patient characteristics and treatment outcomes, but for this report, only those data concerning process evaluation are reported. Through the three program iterations, 65 DCs collected data on 1360 patients, 663 of whom were new patients. Follow-up data recorded by doctors were obtained for more than 70% of patients; a maximum of 50% of patient-completed follow-up forms were collected in the three iterations. This program is capable of providing data for the descriptive epidemiology of ambulatory patients and, with continued effort to maximize follow-up, may have utility in providing insight into utilization patterns and patient outcomes.
ERIC Educational Resources Information Center
MacKinnon, Kim
2012-01-01
While design research can be useful for designing effective technology integrations within complex social settings, it currently fails to provide concrete methodological guidelines for gathering and organizing information about the research context, or for determining how such analyses ought to guide the iterative design and innovation process. A…
NASA Technical Reports Server (NTRS)
Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.
2013-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-performing components with smaller components custom-designed for the power system. Mechanical placement collaboration reduced potential electromagnetic interference (EMI). Through application of newly selected electrical components and thermal analysis data, a total electronic chassis redesign was accomplished. An innovative forced-convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern in making efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.
e-Learning Application for Machine Maintenance Process using Iterative Method in XYZ Company
NASA Astrophysics Data System (ADS)
Nurunisa, Suaidah; Kurniawati, Amelia; Pramuditya Soesanto, Rayinda; Yunan Kurnia Septo Hediyanto, Umar
2016-02-01
XYZ Company is a manufacturer of airplane parts; one of the machines categorized as a key facility in the company is the Millac 5H6P. As a key facility, the machine must be kept working well and in peak condition, so periodic maintenance is needed. Data gathering revealed a lack of competency among maintenance staff when maintaining machine types not assigned to them by the supervisor, indicating that the knowledge possessed by maintenance staff is uneven. The purpose of this research is to create a knowledge-based e-learning application as a realization of the externalization step in the knowledge transfer process for maintaining the machine. The application features are tailored to maintenance purposes using an e-learning framework for the maintenance process, and the application content supports multimedia for learning. QFD is used in this research to understand user needs. The application is built with Moodle, using an iterative method for the software development cycle and UML diagrams. The result of this research is an e-learning application serving as a knowledge-sharing medium for maintenance staff in the company. Testing showed that the application makes it easy for maintenance staff to understand the required competencies.
Evaluating the iterative development of VR/AR human factors tools for manual work.
Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna
2012-01-01
This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work applications - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project that is working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements reached throughout the design cycles, observable through the trending of the quantitative results from three successive trials of the applications and the investigation of the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence of the effectiveness of the use of the particular set of complementary evaluation methods incorporating a common inquiry structure used for the evaluation - particularly in facilitating triangulation of the data.
Iterative categorization (IC): a systematic technique for analysing qualitative data
2016-01-01
Abstract The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155
Planning as an Iterative Process
NASA Technical Reports Server (NTRS)
Smith, David E.
2012-01-01
Activity planning for missions such as the Mars Exploration Rover mission presents many technical challenges, including oversubscription, consideration of time, concurrency, resources, preferences, and uncertainty. These challenges have all been addressed by the research community to varying degrees, but significant technical hurdles still remain. In addition, the integration of these capabilities into a single planning engine remains largely unaddressed. However, I argue that there is a deeper set of issues that needs to be considered, namely the integration of planning into an iterative process that begins before the goals, objectives, and preferences are fully defined. This introduces a number of technical challenges for planning, including the ability to more naturally specify and utilize constraints on the planning process, the ability to generate multiple qualitatively different plans, and the ability to provide deep explanation of plans.
A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images
NASA Astrophysics Data System (ADS)
Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.
2015-07-01
Radiographic images, as any experimentally acquired ones, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations, in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied into a Fortran program capable of plotting the 1st derivative of G as the processing progresses and stopping automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
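A minimal Python sketch of the stopping rule's idea, transplanted from the paper's Fortran setting (bin count, tolerance, and test data are assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

# Richardson-Lucy deconvolution with a histogram-based stopping rule: track
# the global histogram difference G between successive iterates and halt when
# its discrete derivative is ~zero.

def rl_deconvolve(obs, psf, max_iter=200, tol=1e-4):
    psf_m = psf[::-1, ::-1]                                # mirrored PSF
    est = np.full_like(obs, obs.mean())
    g_prev = None
    for it in range(1, max_iter + 1):
        ratio = obs / (fftconvolve(est, psf, mode="same") + 1e-12)
        new = est * fftconvolve(ratio, psf_m, mode="same") # RL multiplicative update
        h_old, edges = np.histogram(est, bins=256)
        h_new, _ = np.histogram(new, bins=edges)
        g = np.abs(h_old - h_new).sum() / obs.size         # global difference G
        if g_prev is not None and abs(g - g_prev) < tol:   # dG/d(iteration) ~ 0
            return new, it
        g_prev, est = g, new
    return est, max_iter

# Tiny synthetic demo: blur a point-like scene, then deconvolve.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 4.0); psf /= psf.sum()
truth = np.zeros((64, 64)); truth[20, 20] = truth[40, 45] = 1.0
obs = fftconvolve(truth, psf, mode="same")
restored, n_iter = rl_deconvolve(obs, psf)
print("stopped after", n_iter, "iterations")
```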
Iterative development of visual control systems in a research vivarium.
Bassuk, James A; Washington, Ida M
2014-01-01
The goal of this study was to test the hypothesis that reintroduction of Continuous Performance Improvement (CPI) methodology, a lean approach to management at Seattle Children's (Hospital, Research Institute, Foundation), would facilitate engagement of vivarium employees in the development and sustainment of a daily management system and a work-in-process board. Such engagement was implemented through reintroduction of aspects of the Toyota Production System. Iterations of a Work-In-Process Board were generated using Shewhart's Plan-Do-Check-Act process improvement cycle. Specific attention was given to the importance of detecting and preventing errors through assessment of the following 5 levels of quality: Level 1, customer inspects; Level 2, company inspects; Level 3, work unit inspects; Level 4, self-inspection; Level 5, mistake proofing. A functioning iteration of a Mouse Cage Work-In-Process Board was eventually established using electronic data entry, an improvement that increased the quality level from 1 to 3 while reducing wasteful steps, handoffs and queues. A visual workplace was realized via a daily management system that included a Work-In-Process Board, a problem solving board and two Heijunka boards. One Heijunka board tracked cage changing as a function of a biological kanban, which was validated via ammonia levels. A 17% reduction in cage changing frequency provided vivarium staff with additional time to support Institute researchers in their mutual goal of advancing cures for pediatric diseases. Cage washing metrics demonstrated an improvement in the flow continuum in which a traditional batch and queue push system was replaced with a supermarket-type pull system. Staff engagement during the improvement process was challenging and is discussed. The collective data indicate that the hypothesis was found to be true. The reintroduction of CPI into daily work in the vivarium is consistent with the 4P Model of the Toyota Way and selected Principles that guide implementation of the Toyota Production System.
Some error bounds for K-iterated Gaussian recursive filters
NASA Astrophysics Data System (ADS)
Cuomo, Salvatore; Galletti, Ardelio; Giunta, Giulio; Marcellino, Livia
2016-10-01
Recursive filters (RFs) have achieved a central role in several research fields over the last few years. For example, they are used in image processing, in data assimilation, and in electrocardiogram denoising. In particular, among RFs, the Gaussian RFs are an efficient computational tool for approximating Gaussian-based convolutions and are suitable for digital image processing and applications of scale-space theory. As is common knowledge, the Gaussian RFs, applied to signals with support in a finite domain, generate distortions and artifacts, mostly localized at the boundaries. Heuristic and theoretical improvements have been proposed in the literature to deal with this issue (namely boundary conditions). They include the case in which a Gaussian RF is applied more than once, i.e. the so-called K-iterated Gaussian RFs. In this paper, starting from a summary of the comprehensive mathematical background, we consider the case of the K-iterated first-order Gaussian RF and provide a study of its numerical stability and some component-wise theoretical error bounds.
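A minimal Python sketch of a K-iterated first-order Gaussian RF, a causal/anti-causal exponential smoother applied K times (the coefficient and K are illustrative; the paper's exact filter coefficients are not reproduced here):

```python
import numpy as np

# First-order recursive pass: forward (causal) then backward (anti-causal)
# exponential smoothing, each with unit DC gain. Iterating the pass K times
# sharpens the Gaussian approximation, but on a finite domain it also
# accumulates the boundary distortions the paper bounds.

def first_order_pass(x, alpha):
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):                 # causal (forward) recursion
        y[i] = alpha * y[i - 1] + (1 - alpha) * x[i]
    z = np.empty_like(y)
    z[-1] = y[-1]
    for i in range(len(y) - 2, -1, -1):        # anti-causal (backward) recursion
        z[i] = alpha * z[i + 1] + (1 - alpha) * y[i]
    return z

def k_iterated_gaussian_rf(x, alpha=0.8, K=3):
    for _ in range(K):
        x = first_order_pass(x, alpha)
    return x

signal = np.zeros(101); signal[50] = 1.0       # impulse: output ~ effective kernel
kernel = k_iterated_gaussian_rf(signal, alpha=0.8, K=3)
print("kernel sum ~ 1:", kernel.sum())
```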
ERIC Educational Resources Information Center
McCaffery, Juliet
2014-01-01
In this article the author reflects on some of the methodological issues of conducting research in a local marginalised community in the UK. Her research was on attitudes to literacy in the Gypsy and Traveller community in southern England. This article describes some of the challenges and how she, as an outsider and not a member of their…
Research on error control and compensation in magnetorheological finishing.
Dai, Yifan; Hu, Hao; Peng, Xiaoqiang; Wang, Jianmin; Shi, Feng
2011-07-01
Although magnetorheological finishing (MRF) is a deterministic finishing technology, the machining results always fall short of simulation precision in the actual process, and the precision requirements cannot be met in a single treatment, only after several iterations. We investigate the reasons for this problem through simulations and experiments. By controlling and compensating for the chief errors in the manufacturing procedure, such as the removal function calculation error, the positioning error of the removal function, and the dynamic performance limitations of the CNC machine, the residual error convergence ratio (the ratio of figure error before and after processing) in a single process is markedly increased, and higher figure precision is achieved. Finally, an improved technical process is presented based on this research, and a verification experiment was accomplished on the experimental device we developed. The part is a circular plane mirror of fused silica material, and the surface figure error was improved from the initial λ/5 [peak-to-valley (PV), λ=632.8 nm], λ/30 [root-mean-square (rms)] to the final λ/40 (PV), λ/330 (rms) through just one iteration in 4.4 min. Results show that a higher convergence ratio and processing precision can be obtained by adopting error control and compensation techniques in MRF.
Using Analytics to Transform a Problem-Based Case Library: An Educational Design Research Approach
ERIC Educational Resources Information Center
Schmidt, Matthew; Tawfik, Andrew A.
2018-01-01
This article describes the iterative design, development, and evaluation of a case-based learning environment focusing on an ill-structured sales management problem. We discuss our processes and situate them within the broader framework of educational design research. The learning environment evolved over the course of three design phases. A…
MACBETH: Development of a Training Game for the Mitigation of Cognitive Bias
ERIC Educational Resources Information Center
Dunbar, Norah E.; Wilson, Scott N.; Adame, Bradley J.; Elizondo, Javier; Jensen, Matthew L.; Miller, Claude H.; Kauffman, Abigail Allums; Seltsam, Toby; Bessarabova, Elena; Vincent, Cindy; Straub, Sara K.; Ralston, Ryan; Dulawan, Christopher L.; Ramirez, Dennis; Squire, Kurt; Valacich, Joseph S.; Burgoon, Judee K.
2013-01-01
This paper describes the process of rapid iterative prototyping used by a research team developing a training video game for the Sirius program funded by the Intelligence Advanced Research Projects Activity (IARPA). Described are three stages of development, including a paper prototype, and builds for alpha and beta testing. Game development is…
ERIC Educational Resources Information Center
McNamara, Lauren
2013-01-01
This article describes the first two years of an ongoing, collaborative action research project focused on the troubled recess environment in 4 elementary schools in southern Ontario. The project involves an iterative, dynamic process of inquiry, planning, action, and reflection among students, teachers, university researchers, university student…
Predicting Silk Fiber Mechanical Properties through Multiscale Simulation and Protein Design.
Rim, Nae-Gyune; Roberts, Erin G; Ebrahimi, Davoud; Dinjaski, Nina; Jacobsen, Matthew M; Martín-Moldes, Zaira; Buehler, Markus J; Kaplan, David L; Wong, Joyce Y
2017-08-14
Silk is a promising material for biomedical applications, and much research is focused on how application-specific, mechanical properties of silk can be designed synthetically through proper amino acid sequences and processing parameters. This protocol describes an iterative process between research disciplines that combines simulation, genetic synthesis, and fiber analysis to better design silk fibers with specific mechanical properties. Computational methods are used to assess the protein polymer structure as it forms an interconnected fiber network through shearing and how this process affects fiber mechanical properties. Model outcomes are validated experimentally with the genetic design of protein polymers that match the simulation structures, fiber fabrication from these polymers, and mechanical testing of these fibers. Through iterative feedback between computation, genetic synthesis, and fiber mechanical testing, this protocol will enable a priori prediction capability of recombinant material mechanical properties via insights from the resulting molecular architecture of the fiber network based entirely on the initial protein monomer composition. This style of protocol may be applied to other fields where a research team seeks to design a biomaterial with biomedical application-specific properties. This protocol highlights when and how the three research groups (simulation, synthesis, and engineering) should be interacting to arrive at the most effective method for predictive design of their material.
Archiving California’s historical duck nesting data
Ackerman, Joshua T.; Herzog, Mark P.; Brady, Caroline; Eadie, John M.; Yarris, Greg S.
2015-07-14
With the conclusion of this project, most duck nest data have been entered, but all nest-captured hen data and other breeding waterfowl data that were outside the scope of this project have still not been entered and electronically archived. Maintaining an up-to-date archive will require additional resources to archive and enter the new duck nest data each year in an iterative process. Further, data proofing should be conducted whenever possible, and also should be considered an iterative process as there was sometimes missing data that could not be filled in without more direct knowledge of specific projects. Despite these disclaimers, this duck data archive represents a massive and useful dataset to inform future research and management questions.
ITER Central Solenoid Module Fabrication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, John
The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak, providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO), and General Atomics (GA). GA's responsibility includes completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors and a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months, followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7 K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built, with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.
Research infrastructure support to address ecosystem dynamics
NASA Astrophysics Data System (ADS)
Los, Wouter
2014-05-01
Predicting the evolution of ecosystems under climate change or human pressures is a challenge. Even understanding past or current processes is complicated as a result of the many interactions and feedbacks that occur within and between components of the system. This talk will present an example of current research on changes in landscape evolution, hydrology, soil biogeochemical processes, zoological food webs, and plant community succession, and how these affect feedbacks to components of the systems, including the climate system. Multiple observations, experiments, and simulations provide a wealth of data, but not necessarily understanding. Model development for the coupled processes on different spatial and temporal scales is sensitive to variations in data and parameters. Fast high-performance computing may help to visualize the effect of these changes and the potential stability (and reliability) of the models. This may then allow for iteration between data production and models towards stable models, reducing uncertainty and improving the prediction of change. The role of research infrastructures becomes crucial in overcoming barriers to such research. Environmental infrastructures cover physical site facilities, dedicated instrumentation, and e-infrastructure. The LifeWatch infrastructure for biodiversity and ecosystem research will provide services for data integration, analysis, and modeling. But it has to cooperate intensively with the other kinds of infrastructures in order to support the iteration between data production and model computation. The cooperation in the ENVRI project (Common operations of environmental research infrastructures) is one of the initiatives to foster such multidisciplinary research.
Iterative near-term ecological forecasting: Needs, opportunities, and challenges
Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.
2018-01-01
Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.
Inter-hospital communications and transport: turning one-way funnels into two-way networks.
Rokos, Ivan C; Sanddal, Nels D; Pancioli, Arthur M; Wolff, Catherine; Gaieski, David F
2010-12-01
The Inter-hospital Communications and Transport workgroup was charged with exploring the current status, barriers, and data necessary to optimize the initial destination and subsequent transfer of patients between and among acute care settings. The subtitle, "Turning Funnels Into Two-way Networks," is descriptive of the approach that the workgroup took by exploring how and when smaller facilities in suburban, rural, and frontier areas can contribute to the daily business of caring for emergency patients across the lower-acuity spectrum, in some instances with consultant support from academic medical centers. It also focused on the need to identify high-acuity patients and expedite triage and transfer of those patients to facilities with specialty resources. Draft research recommendations were developed through an iterative writing process and presented to a breakout session of Academic Emergency Medicine's 2010 consensus conference, "Beyond Regionalization: Integrated Networks of Emergency Care." Priority research areas were determined by informal consensus of the breakout group. A subsequent iterative writing process was undertaken to complete this article. A number of broad research questions are presented. © 2010 by the Society for Academic Emergency Medicine.
Bredenoord, Albert J; Fox, Mark; Kahrilas, Peter J; Pandolfino, John E; Schwizer, Werner; Smout, AJPM; Conklin, Jeffrey L; Cook, Ian J; Gyawali, Prakash; Hebbard, Geoffrey; Holloway, Richard H; Ke, Meiyun; Keller, Jutta; Mittal, Ravinder K; Peters, Jeff; Richter, Joel; Roman, Sabine; Rommel, Nathalie; Sifrim, Daniel; Tutuian, Radu; Valdovinos, Miguel; Vela, Marcelo F; Zerbib, Frank
2011-01-01
Background: The Chicago Classification of esophageal motility was developed to facilitate the interpretation of clinical high resolution esophageal pressure topography (EPT) studies, concurrent with the widespread adoption of this technology into clinical practice. The Chicago Classification has been, and will continue to be, an evolutionary process, molded first by published evidence pertinent to the clinical interpretation of high resolution manometry (HRM) studies and secondarily by group experience when suitable evidence is lacking. Methods: This publication summarizes the state of our knowledge as of the most recent meeting of the International High Resolution Manometry Working Group in Ascona, Switzerland in April 2011. The prior iteration of the Chicago Classification was updated through a process of literature analysis and discussion. Key Results: The major changes in this document from the prior iteration are largely attributable to research studies published since the prior iteration, in many cases research conducted in response to prior deliberations of the International High Resolution Manometry Working Group. The classification now includes criteria for subtyping achalasia, EGJ outflow obstruction, motility disorders not observed in normal subjects (Distal esophageal spasm, Hypercontractile esophagus, and Absent peristalsis), and statistically defined peristaltic abnormalities (Weak peristalsis, Frequent failed peristalsis, Rapid contractions with normal latency, and Hypertensive peristalsis). Conclusions & Inferences: The Chicago Classification is an algorithmic scheme for diagnosis of esophageal motility disorders from clinical EPT studies. Moving forward, we anticipate continuing this process with increased emphasis placed on natural history studies and outcome data based on the classification. PMID:22248109
Remix as Professional Learning: Educators' Iterative Literacy Practice in CLMOOC
ERIC Educational Resources Information Center
Smith, Anna; West-Puckett, Stephanie; Cantrill, Christina; Zamora, Mia
2016-01-01
The Connected Learning Massive Open Online Collaboration (CLMOOC) is an online professional development experience designed as an openly networked, production-centered, participatory learning collaboration for educators. Addressing the paucity of research that investigates learning processes in MOOC experiences, this paper examines the situated…
Loutfy, Mona; Greene, Saara; Kennedy, V Logan; Lewis, Johanna; Thomas-Pavanel, Jamie; Conway, Tracey; de Pokomandy, Alexandra; O'Brien, Nadia; Carter, Allison; Tharao, Wangari; Nicholson, Valerie; Beaver, Kerrigan; Dubuc, Danièle; Gahagan, Jacqueline; Proulx-Boucher, Karène; Hogg, Robert S; Kaida, Angela
2016-08-19
Community-based research has gained increasing recognition in health research over the last two decades. Such participatory research approaches are lauded for their ability to anchor research in lived experiences, ensuring cultural appropriateness, accessing local knowledge, reaching marginalized communities, building capacity, and facilitating research-to-action. While having these positive attributes, the community-based health research literature is predominantly composed of small projects, using qualitative methods, and set within geographically limited communities. Its use in larger health studies, including clinical trials and cohorts, is limited. We present the Canadian HIV Women's Sexual and Reproductive Health Cohort Study (CHIWOS), a large-scale, multi-site, national, longitudinal quantitative study that has operationalized community-based research in all steps of the research process. Successes, challenges and further considerations are offered. Through the integration of community-based research principles, we have been successful in: facilitating a two-year long formative phase for this study; developing a novel survey instrument with national involvement; training 39 Peer Research Associates (PRAs); offering ongoing comprehensive support to PRAs; and engaging in an ongoing iterative community-based research process. Our community-based research approach within CHIWOS demanded that we be cognizant of challenges managing a large national team, inherent power imbalances and challenges with communication, compensation and volunteering considerations, and extensive delays in institutional processes. It is important to consider the iterative nature of community-based research and to work through tensions that emerge given the diverse perspectives of numerous team members. Community-based research, as an approach to large-scale quantitative health research projects, is an increasingly viable methodological option. Community-based research has several advantages that go hand-in-hand with its obstacles. We offer guidance on implementing this approach, such that the process can be better planned and result in success.
Export Control Requirements for Tritium Processing Design and R&D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollis, William Kirk; Maynard, Sarah-Jane Wadsworth
2015-10-30
This document will address requirements of export control associated with tritium plant design and processes. Los Alamos National Laboratory has been working in the area of tritium plant system design and research and development (R&D) since the early 1970s at the Tritium Systems Test Assembly (TSTA). This work has continued to the current date with projects associated with the ITER project and other Office of Science Fusion Energy Science (OS-FES) funded programs. ITER is currently the highest funding area for the DOE OS-FES. Although export control issues have been integrated into these projects in the past, a general guidance document has not been available for reference in this area. To address concerns with currently funded tritium plant programs and assist future projects for FES, this document will identify the key reference documents and the specific sections within them related to tritium research. Guidance as to the application of these sections will be discussed, with specific detail on publications and work with foreign nationals.
Process Improvement for Interinstitutional Research Contracting.
Varner, Michael; Logan, Jennifer; Bjorklund, Todd; Whitfield, Jesse; Reed, Peggy; Lesher, Laurie; Sikalis, Amy; Brown, Brent; Drollinger, Sandy; Larrabee, Kristine; Thompson, Kristie; Clark, Erin; Workman, Michael; Boi, Luca
2015-08-01
Sponsored research increasingly requires multiinstitutional collaboration. However, research contracting procedures have become more complicated and time consuming. The perinatal research units of two colocated healthcare systems sought to improve their research contracting processes. The Lean Process, a management practice that iteratively involves team members in root cause analyses and process improvement, was applied to the research contracting process, initially using Process Mapping and then developing Problem Solving Reports. Root cause analyses revealed that the longest delays were the individual contract legal negotiations. In addition, the "business entity" was the research support personnel of both healthcare systems whose "customers" were investigators attempting to conduct interinstitutional research. Development of mutually acceptable research contract templates and language, chain of custody templates, and process development and refinement formats decreased the Notice of Grant Award to Purchase Order time from a mean of 103.5 days in the year prior to Lean Process implementation to 45.8 days in the year after implementation (p = 0.004). The Lean Process can be applied to interinstitutional research contracting with significant improvement in contract implementation. © 2015 Wiley Periodicals, Inc.
Iterative categorization (IC): a systematic technique for analysing qualitative data.
Neale, Joanne
2016-06-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
AAL service development loom--from the idea to a marketable business model.
Kriegel, Johannes; Auinger, Klemens
2015-01-01
The Ambient Assisted Living (AAL) market is still in an early stage of development. Previous approaches to comprehensive AAL services are mostly supply-side driven and focused on hardware and software. Usually this type of AAL solution does not lead to sustainable success on the market. Research and development increasingly focus on demand and customer requirements in addition to the social and legal framework. The question is: How can a systematic performance measurement strategy along a service development process support the market-ready design of a concrete business model for an AAL service? Within the EU-funded research project DALIA (Assistant for Daily Life Activities at Home), an iterative service development process uses an adapted Osterwalder business model canvas. The application of a performance measurement index (PMI) to support the process has been developed and tested. The result is an iterative service development model using a supporting PMI. The PMI framework is developed throughout the engineering of a virtual assistant (AVATAR) as a modular interface to connect informal carers with necessary and useful services. Future research should seek to ensure that the PMI enables meaningful transparency regarding targeting (e.g. innovative AAL services), design (e.g. functional hybrid AAL services) and implementation (e.g. marketable AAL support services). To this end, further testing in practice is required. The aim must be to develop a weighted PMI in the context of further research, which supports both the service engineering and the subsequent service management process.
Wells, Kristen J; Quinn, Gwendolyn P; Meade, Cathy D; Fletcher, Michelle; Tyson, Dinorah Martinez; Jim, Heather; Jacobsen, Paul B
2012-08-01
To describe processes used to develop a multi-media psycho-educational intervention to prepare patients for a discussion about cancer clinical trials (CTs). Guided by a Steering Committee, formative research was conducted to develop an informative and engaging tool about cancer CTs. Twenty-three patients and caregivers participated in formative in-depth interviews to elicit information about perceptions of cancer CTs to inform production of a new media product. Formative research revealed participants had concerns about experimentation, held beliefs that cancer CTs were for patients who had no other treatment options, and wanted a balance of information about pros and cons of CT participation. The value of physicians as credible spokespersons and the use of patients as role-models were supported. Using iterative processes, the production team infused the results into creation of a multimedia psycho-educational intervention titled Clinical Trials: Are they Right for You? An intervention, developed through an iterative consumer-focused process involving multiple stakeholders and formative research, may result in an engaging informative product. If found to be efficacious, Clinical Trials: Are they Right for You? is a low-cost and easily disseminated multimedia psycho-educational intervention to assist cancer patients with making an informed decision about cancer CTs. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Gray, Christine E.; Contreras-Shannon, Veronica E.
2017-01-01
Analyzing, interpreting, and clearly presenting real data are skills we hope to develop in all students, majors and nonmajors alike. These process skills require lots of practice coupled with targeted feedback from instructors or mentors. Here we present a pedagogy implemented within a course-based research experience that is designed to help…
Alternatives for Developing User Documentation for Applications Software
1991-09-01
The preparation of software documentation is an iterative process that involves research, analysis, design, and testing. The writer must have a solid understanding of the technical aspects of the document being prepared, and should use a style designed to match adult reading behaviors, reader-based writing techniques, effective graphics, and reference aids.
New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses methods of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. In the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using the iterative NLLS estimator based on nonlinear studentized residuals, is also proposed. In addition, an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
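As a rough illustration of the machinery involved (not taken from the paper), the sketch below fits a toy exponential model by iterative NLLS (Gauss-Newton) and then computes a Wald statistic for a nonlinear restriction on the parameters; the model, the restriction g(beta) = b0*b1 - 1 = 0, and all names are assumptions made for this example.

```python
# Hypothetical example: iterative NLLS (Gauss-Newton) for y = b0*exp(b1*x),
# followed by a Wald test of the nonlinear restriction b0*b1 = 1.
import numpy as np

def gauss_newton(x, y, beta, tol=1e-8, max_iter=100):
    """Iteratively refine beta by linearizing the model at each step."""
    for _ in range(max_iter):
        f = beta[0] * np.exp(beta[1] * x)                  # model predictions
        J = np.column_stack([np.exp(beta[1] * x),          # df/db0
                             beta[0] * x * np.exp(beta[1] * x)])  # df/db1
        step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        beta = beta + step
        if np.linalg.norm(step) < tol:
            break
    resid = y - beta[0] * np.exp(beta[1] * x)
    sigma2 = resid @ resid / (y.size - beta.size)          # error variance
    cov = sigma2 * np.linalg.inv(J.T @ J)                  # asymptotic covariance
    return beta, cov

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * np.exp(0.5 * x) + rng.normal(0.0, 0.05, x.size)
beta, cov = gauss_newton(x, y, beta=np.array([1.0, 1.0]))

g = beta[0] * beta[1] - 1.0                # restriction g(beta) = 0 under H0
G = np.array([beta[1], beta[0]])           # gradient of g at the estimate
W = g**2 / (G @ cov @ G)                   # ~ chi-square(1) under H0
print(beta, W)                             # true b0*b1 = 1, so W should be small
```

With one restriction, H0 would be rejected at the 5% level when W exceeds the chi-square(1) critical value of about 3.84.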
Data Integration Tool: Permafrost Data Debugging
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.
2017-12-01
We developed a Data Integration Tool (DIT) to significantly speed up the time of manual processing needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage these data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets (https://github.com/PermaData/DIT). Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Taking ideas from visual programming found in the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, it was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
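The widget-pipeline idea described above can be pictured in a few lines of Python. This is a hypothetical toy, not DIT's actual API (the real tool lives at https://github.com/PermaData/DIT); every function name here is invented for illustration.

```python
# Toy widget pipeline: each widget is a small, single-purpose operation,
# and a workflow is just an ordered list of widgets applied to the data.
from typing import Callable, List

Widget = Callable[[list], list]

def drop_missing(sentinel: float = -999.0) -> Widget:
    """Widget that removes sentinel 'no data' values."""
    return lambda data: [v for v in data if v != sentinel]

def scale(factor: float) -> Widget:
    """Widget that multiplies every value by a constant."""
    return lambda data: [v * factor for v in data]

def run_pipeline(data: list, widgets: List[Widget]) -> list:
    """Apply widgets in order; reordering the list re-wires the workflow."""
    for widget in widgets:
        data = widget(data)
    return data

raw = [-999.0, 0.5, 1.2, -999.0, 2.0]       # e.g. raw borehole readings
print(run_pipeline(raw, [drop_missing(), scale(10.0), sorted]))
# [5.0, 12.0, 20.0]
```

Saving the widget list rather than the transformed data is what makes such a workflow reproducible and easy to iterate on.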
NASA Technical Reports Server (NTRS)
1980-01-01
The design, fabrication, and installation of an experimental process system development unit (EPSDU) were analyzed. Supporting research and development were performed to provide an information data base usable for the EPSDU and for technological design and economic analysis for potential scale-up of the process. Iterative economic analyses were conducted of the estimated product cost for the production of semiconductor-grade silicon in a facility capable of producing 1000 MT/yr.
Cross Sectional Study of Agile Software Development Methods and Project Performance
ERIC Educational Resources Information Center
Lambert, Tracy
2011-01-01
Agile software development methods, characterized by delivering customer value via incremental and iterative time-boxed development processes, have moved into the mainstream of the Information Technology (IT) industry. However, despite a growing body of research which suggests that a predictive manufacturing approach, with big up-front…
What Physicians Reason about during Admission Case Review
ERIC Educational Resources Information Center
Juma, Salina; Goldszmidt, Mark
2017-01-01
Research suggests that physicians perform multiple reasoning tasks beyond diagnosis during patient review. However, these remain largely theoretical. The purpose of this study was to explore reasoning tasks in clinical practice during patient admission review. The authors used a constant comparative approach--an iterative and inductive process of…
How an Exchange of Perspectives Led to Tentative Ethical Guidelines for Visual Ethnography
ERIC Educational Resources Information Center
Pope, Clive C.; De Luca, Rosemary; Tolich, Martin
2010-01-01
Qualitative research, especially visual ethnography, is an iterative not a linear process, replete with good intentions, false starts, mistaken assumptions, miscommunication and a continually revised statement of the problem. That the camera freezes everything and everyone in the frame only complicates ethical considerations. This work, jointly…
Visualizing Community: Understanding Narrative Inquiry as Action Research
ERIC Educational Resources Information Center
Caine, Vera
2010-01-01
Throughout the school year I invited children in a Grade Two/Three learning strategies classroom to participate in a visual narrative inquiry. The intention was to explore children's knowledge of community in artful ways; the children photographed and wrote in what was often an iterative process, where writing/talking and photographing…
Quantum-Inspired Multidirectional Associative Memory With a Self-Convergent Iterative Learning.
Masuyama, Naoki; Loo, Chu Kiong; Seera, Manjeevan; Kubota, Naoyuki
2018-04-01
Quantum-inspired computing is an emerging research area, which has significantly improved the capabilities of conventional algorithms. In general, quantum-inspired Hopfield associative memory (QHAM) has demonstrated quantum information processing in neural structures. This has resulted in an exponential increase in storage capacity while explaining the extensive memory, and it has the potential to illustrate the dynamics of neurons in the human brain when viewed from a quantum mechanics perspective, although the application of QHAM is limited to autoassociation. In this paper, we introduce a quantum-inspired multidirectional associative memory (QMAM) with a one-shot learning model, and a QMAM with a self-convergent iterative learning model (IQMAM), both based on QHAM. The self-convergent iterative learning enables the network to progressively develop a resonance state, from inputs to outputs. The simulation experiments demonstrate the advantages of QMAM and IQMAM, especially their stability and recall reliability.
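For readers unfamiliar with the underlying mechanics, the sketch below shows the classical bidirectional associative memory (BAM) recall loop that such multidirectional models generalize. It is a minimal illustration of the iterative x-to-y-to-x "resonance" search only, not the QMAM/IQMAM algorithm itself.

```python
# Classical BAM recall, shown only to illustrate the iterative resonance
# search that the quantum-inspired models above generalize.
import numpy as np

def bipolar(v):
    """Threshold to +/-1 (ties mapped to +1 to avoid zero states)."""
    return np.where(v >= 0, 1, -1)

def train_bam(X, Y):
    """Hebbian weight matrix linking bipolar pattern pairs."""
    return sum(np.outer(x, y) for x, y in zip(X, Y))

def recall(W, x, max_iter=20):
    """Iterate x -> y -> x until the pair stops changing (a resonance)."""
    for _ in range(max_iter):
        y = bipolar(W.T @ x)
        x_new = bipolar(W @ y)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

X = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
Y = [np.array([1, -1]), np.array([-1, 1])]
W = train_bam(X, Y)
print(recall(W, np.array([1, -1, 1, 1])))  # corrupted x still recalls the first y
```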
NASA Technical Reports Server (NTRS)
Crasner, Aaron I.; Scola, Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Thermal modeling software was used to run steady state thermal analyses, which were used to both validate the designs and recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.
Creating a Research Agenda and Setting Research Priorities for Clinical Nurse Specialists.
Foster, Jan; Bautista, Cynthia; Ellstrom, Kathleen; Kalowes, Peggy; Manning, Jennifer; Pasek, Tracy Ann
The purpose of this article is to describe the evolution and results of the process for establishing a research agenda and identification of research priorities for clinical nurse specialists, approved by the National Association of Clinical Nurse Specialists (NACNS) membership and sanctioned by the NACNS Board of Directors. Development of the research agenda and identification of the priorities were an iterative process and involved a review of the literature; input from multiple stakeholders, including individuals with expertise in conducting research serving as task force members, and NACNS members; and feedback from national board members. A research agenda, which is to provide an enduring research platform, was established and research priorities, which are to be applied in the immediate future, were identified as a result of this process. Development of a research agenda and identification of research priorities are a key method of fulfilling the mission and goals of NACNS. The process and outcomes are described in this article.
Mission of ITER and Challenges for the Young
NASA Astrophysics Data System (ADS)
Ikeda, Kaname
2009-02-01
It is recognized that the ongoing effort to provide sufficient energy for the wellbeing of the globe's population and to power the world economy is of the greatest importance. ITER is a joint international research and development project that aims to demonstrate the scientific and technical feasibility of fusion power. It represents the responsible actions of governments whose countries comprise over half the world's population, to create fusion power as a source of clean, economic, carbon dioxide-free energy. This is the most important science initiative of our time. The partners in the Project—the ITER Parties—are the European Union, Japan, the People's Republic of China, India, the Republic of Korea, the Russian Federation and the USA. ITER will be constructed in Europe, at Cadarache in the South of France. The talk will illustrate the genesis of the ITER Organization, the ongoing work at the Cadarache site and the planned schedule for construction. There will also be an explanation of the unique aspects of international collaboration that have been developed for ITER. Although the present focus of the project is construction activities, ITER is also a major scientific and technological research program, for which the best of the world's intellectual resources is needed. Challenges for the young, imperative for fulfillment of the objective of ITER will be identified. It is important that young students and researchers worldwide recognize the rapid development of the project, and the fundamental issues that must be overcome in ITER. The talk will also cover the exciting career and fellowship opportunities for young people at the ITER Organization.
NASA Astrophysics Data System (ADS)
Vadolia, Gautam R.; Premjit Singh, K.
2017-04-01
Electron Beam Welding (EBW) technology is an established and widely adopted technique in the nuclear research and development area. Electron beam welding was considered a candidate process for ITER Vacuum Vessel fabrication. The Dhruva reactor at BARC, Mumbai, and the niobium superconducting accelerator cavity at BARC have adopted the EB welding technique as a fabrication route. A study of process capability and limitations based on the available literature is consolidated in this short review paper.
Improved Real-Time Scan Matching Using Corner Features
NASA Astrophysics Data System (ADS)
Mohamed, H. A.; Moussa, A. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, Abu B.
2016-06-01
The automation of unmanned vehicle operation has gained a lot of research attention in the last few years because of its numerous applications. Vehicle localization is more challenging in indoor environments, where absolute positioning measurements (e.g. GPS) are typically unavailable. Laser range finders are among the most widely used sensors that help unmanned vehicles localize themselves in indoor environments. Typically, automatic real-time matching of successive scans is performed either explicitly or implicitly by any localization approach that utilizes laser range finders. Many established approaches, such as Iterative Closest Point (ICP), Iterative Matching Range Point (IMRP), Iterative Dual Correspondence (IDC), and Polar Scan Matching (PSM), handle the scan matching problem in an iterative fashion, which significantly affects the time consumption. Furthermore, solution convergence is not guaranteed, especially in cases of sharp maneuvers or fast movement. This paper proposes an automated real-time scan matching algorithm in which the matching process is initialized using the detected corners. This initialization step aims to increase the convergence probability and to limit the number of iterations needed to reach convergence. The corner detection is preceded by line extraction from the laser scans. To evaluate the probability of line availability in indoor environments, various data sets, offered by different research groups, have been tested; the mean numbers of extracted lines per scan for these data sets range from 4.10 to 8.86 lines of more than 7 points. The set of all intersections between extracted lines is detected as corners, regardless of the physical intersection of these line segments in the scan. To account for the uncertainties of the detected corners, the covariance of the corners is estimated using the extracted lines' variances. The detected corners are used to estimate the transformation parameters between successive scans using least squares. These estimated transformation parameters are used to calculate an adjusted initialization for the scan matching process. The presented method can be employed on its own to match successive scans, and can also be used to aid other established iterative methods to achieve more effective and faster convergence. The performance and time consumption of the proposed approach are compared with the ICP algorithm alone, without initialization, in different scenarios such as static periods, fast straight movement, and sharp maneuvers.
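The least-squares step at the heart of this initialization can be sketched as a standard 2D rigid-body fit between matched corner sets. The snippet below is an illustrative Procrustes/SVD solution under the assumption that corner correspondences are already known; line extraction, corner matching, and the corner covariance weighting from the paper are omitted.

```python
# Sketch of the least-squares step: estimate the 2D rigid transform (R, t)
# between successive scans from matched corner pairs via SVD.
import numpy as np

def rigid_transform_2d(A, B):
    """Least-squares R, t such that B ~ R @ A + t; A, B are 2xN corners."""
    ca = A.mean(axis=1, keepdims=True)
    cb = B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

theta = np.deg2rad(10.0)                        # synthetic 10 degree turn
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
A = np.array([[0.0, 2.0, 2.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])            # corners seen in scan k
B = R_true @ A + np.array([[0.3], [-0.1]])      # same corners in scan k+1
R, t = rigid_transform_2d(A, B)
print(np.allclose(R, R_true), t.ravel())        # True [ 0.3 -0.1]
```

The recovered (R, t) would then seed the iterative matcher, which only has to refine a nearly correct alignment.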
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
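As a concrete, deliberately tiny illustration of the iterative family, the sketch below applies a Kaczmarz/ART-style update to a discretized imaging model Ax = b, where each row of A is one ray sum and x is the unknown image. The geometry is invented for the example; nothing here is taken from the paper.

```python
# Toy Kaczmarz/ART iteration for Ax = b. Analytic methods instead invert
# the imaging equations in closed form (e.g. filtered back-projection).
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """Project the estimate onto one ray equation at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

# 2x2 pixel "image" probed by four rays: two row sums, one column sum,
# and one diagonal (pixels in row-major order).
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [1., 0., 0., 1.]])
x_true = np.array([1.0, 2.0, 3.0, 4.0])
print(art(A, A @ x_true))                 # converges toward x_true
```

The same update loop accommodates noise models, constraints, and unusual geometries far more easily than a closed-form inversion, which is the flexibility the paper attributes to iterative methods.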
The Role and Reprocessing of Attitudes in Fostering Employee Work Happiness: An Intervention Study.
Williams, Paige; Kern, Margaret L; Waters, Lea
2017-01-01
This intervention study examines the iterative reprocessing of explicit and implicit attitudes as the process underlying associations between positive employee attitudes (PsyCap), perception of positive organization culture (organizational virtuousness, OV), and work happiness. Using a quasi-experimental design, a group of school staff ( N = 69) completed surveys at three time points. After the first assessment, the treatment group ( n = 51) completed a positive psychology training intervention. Results suggest that employee PsyCap, OV, and work happiness are associated with one another through both implicit and explicit attitudes. Further, the Iterative-Reprocessing Model of attitudes (IRM) provides some insights into the processes underlying these associations. By examining the role and processes through which explicit and implicit attitudes relate to wellbeing at work, the study integrates theories on attitudes, positive organizational scholarship, positive organizational behavior and positive education. It is one of the first studies to apply the theory of the IRM to explain associations amongst PsyCap, OV and work happiness, and to test the IRM theory in a field-based setting. In applying attitude theory to wellbeing research, this study provides insights to mechanisms underlying workplace wellbeing that have not been previously examined and in doing so responds to calls for researchers to learn more about the mechanisms underlying wellbeing interventions. Further, it highlights the need to understand subconscious processes in future wellbeing research and to include implicit measures in positive psychology interventions measurement programs. Practically, this research calls attention to the importance of developing both the positive attitudes of employees and the organizational culture in developing employee work happiness.
Cultural Emergence: Theorizing Culture in and from the Margins of Science Education
ERIC Educational Resources Information Center
Wood, Nathan Brent; Erichsen, Elizabeth Anne; Anicha, Cali L.
2013-01-01
This special issue of the Journal of Research in Science Teaching seeks to explore conceptualizations of culture that address contemporary challenges in science education. Toward this end, we unite two theoretical perspectives to advance a conceptualization of culture as a complex system, emerging from iterative processes of cultural bricolage,…
Is This a Meaningful Learning Experience? Interactive Critical Self-Inquiry as Investigation
ERIC Educational Resources Information Center
Allard, Andrea C.; Gallant, Andrea
2012-01-01
What conditions enable educators to engage in meaningful learning experiences with peers and beginning practitioners? This article documents a self-study on our actions-in-practice in a peer mentoring project. The investigation involved an iterative process to improve our knowledge as teacher educators, reflective practitioners, and researchers.…
ERIC Educational Resources Information Center
Helding, Brandon Alan
2010-01-01
The purpose of this dissertation is to demonstrate one iterate of a process for developing a measurement instrument for student knowledge within educational interventions. Student mathematical knowledge is framed within Cognitively Guided Instruction (CGI) and its tenets. That is, the construct underlying the measurement instrument corresponded…
Negotiating Meaning in Cross-National Studies of Mathematics Teaching: Kissing Frogs to Find Princes
ERIC Educational Resources Information Center
Andrews, Paul
2007-01-01
This paper outlines the iterative processes by which a multinational team of researchers developed a low-inference framework for the analysis of video recordings of mathematics lessons drawn from Flemish Belgium, England, Finland, Hungary and Spain. Located within a theoretical framework concerning learning as the negotiation of meaning, we…
Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation
NASA Astrophysics Data System (ADS)
Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad
2017-12-01
Iterative processing solutions, including multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. Remaining error sources are the measurement uncertainty and the repeatability of the material-removal process including clamping errors. Due to the lack of processing forces, process fluids and wear, pulsed-laser ablation has proven high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This way efficient iterative processing is enabled, which is precise, applicable for all tool materials including diamond and eliminates clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within 2 μm diameter tolerance.
Designing an over-the-counter consumer decision-making tool for older adults.
Martin-Hammond, Aqueasha M; Abegaz, Tamirat; Gilbert, Juan E
2015-10-01
Older adults are at increased risk of adverse drug events due to medication. Older adults tend to take more medication and are at higher risk of chronic illness. Over-the-counter (OTC) medication does not require healthcare provider oversight, and understanding OTC information is heavily dependent on a consumer's ability to understand and use the medication appropriately. Coupling health technology with effective communication is one approach to address the challenge of communicating health information and improving health-related tasks. However, the success of many health technologies also depends on how well the technology is designed and how well it addresses users' needs. This is especially true for the older adult population. This paper describes (1) a formative study performed to understand how to design novel health technology to assist older adults with OTC medication information, and (2) how a user-centered design process helped to refine the initial assumptions about user needs and to conceptualize the technology. An iterative design process was used. The process included two brainstorming and review sessions with human-computer interaction researchers and design sessions with older adults in the form of semi-structured interviews. Methods and principles of user-centered research and design were used to inform the research design. Two researchers with expertise in human-computer interaction performed expert reviews of early system prototypes. After initial prototypes were developed, seven older adults were engaged in semi-structured interviews to understand usability concerns and the features and functionality older adults may find useful for selecting appropriate OTC medication. Eight usability concerns were discovered and addressed in the two rounds of expert review, and nine additional usability concerns were discovered in design sessions with older adults. Five themes emerged from the interview transcripts as recommendations for design. These recommendations represent opportunities for technology such as the one described in this paper to support older adults in the OTC decision-making process. This paper illustrates the use of an iterative user-centered process in the formative stages of design and its usefulness for understanding aspects of the technology design that are useful to older adults when making decisions about OTC medication. The technology support mechanisms included in the initial model were revised based on the results from the iterative design sessions and helped to refine and conceptualize the system being designed. Copyright © 2015 Elsevier Inc. All rights reserved.
Lessons from a Space Analog on Adaptation for Long-Duration Exploration Missions.
Anglin, Katlin M; Kring, Jason P
2016-04-01
Exploration missions to asteroids and Mars will bring new challenges associated with communication delays and more autonomy for crews. Mission safety and success will rely on how well the entire system, from technology to the human elements, is adaptable and resilient to disruptive, novel, or potentially catastrophic events. The recent NASA Extreme Environment Missions Operations (NEEMO) 20 mission highlighted this need and produced valuable "lessons learned" that will inform future research on team adaptation and resilience. A team of NASA, industry, and academic members used an iterative process to design a tripod shaped structure, called the CORAL Tower, for two astronauts to assemble underwater with minimal tools. The team also developed assembly procedures, administered training to the crew, and provided support during the mission. During the design, training, and assembly of the Tower, the team learned first-hand how adaptation in extreme environments depends on incremental testing, thorough procedures and contingency plans that predict possible failure scenarios, and effective team adaptation and resiliency for the crew and support personnel. Findings from NEEMO 20 provide direction on the design and testing process for future space systems and crews to maximize adaptation. This experience also underscored the need for more research on team adaptation, particularly how input and process factors affect adaption outcomes, the team adaptation iterative process, and new ways to measure the adaptation process.
Corwin, Lisa A; Runyon, Christopher R; Ghanem, Eman; Sandy, Moriah; Clark, Greg; Palmer, Gregory C; Reichler, Stuart; Rodenbusch, Stacia E; Dolan, Erin L
2018-06-01
Course-based undergraduate research experiences (CUREs) provide a promising avenue to attract a larger and more diverse group of students into research careers. CUREs are thought to be distinctive in offering students opportunities to make discoveries, collaborate, engage in iterative work, and develop a sense of ownership of their lab course work. Yet how these elements affect students' intentions to pursue research-related careers remains unexplored. To address this knowledge gap, we collected data on three design features thought to be distinctive of CUREs (discovery, iteration, collaboration) and on students' levels of ownership and career intentions from ∼800 undergraduates who had completed CURE or inquiry courses, including courses from the Freshman Research Initiative (FRI), which has a demonstrated positive effect on student retention in college and in science, technology, engineering, and mathematics. We used structural equation modeling to test relationships among the design features and student ownership and career intentions. We found that discovery, iteration, and collaboration had small but significant effects on students' intentions; these effects were fully mediated by student ownership. Students in FRI courses reported significantly higher levels of discovery, iteration, and ownership than students in other CUREs. FRI research courses alone had a significant effect on students' career intentions.
O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor
2012-08-01
Reliable and efficient data repositories are essential for the advancement of research in neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, the differing length and complexity of their hospital stays, and the substantial amount of desired information, can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a neuroscience intensive care unit. Following the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.
A new iterative triclass thresholding technique in image segmentation.
Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin
2014-03-01
We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two produced by the standard Otsu method. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
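A minimal sketch of the triclass iteration described above, assuming scikit-image's threshold_otsu for the per-region Otsu step; the tolerance, iteration cap, and function names are illustrative rather than the authors' implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu

def triclass_segment(image, eps=1e-3, max_iter=50):
    """Iteratively re-apply Otsu's method to a shrinking TBD region."""
    foreground = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)    # the whole image starts as TBD
    prev_t = None
    for _ in range(max_iter):
        values = image[tbd]
        if np.unique(values).size < 2:        # nothing left to split
            break
        t = threshold_otsu(values)
        mu_low = values[values <= t].mean()   # mean of the lower class
        mu_high = values[values > t].mean()   # mean of the upper class
        foreground |= tbd & (image > mu_high)           # confident foreground
        new_tbd = tbd & (image > mu_low) & (image <= mu_high)
        if prev_t is not None and abs(t - prev_t) < eps:
            foreground |= new_tbd & (image > t)         # settle remaining pixels
            break
        prev_t, tbd = t, new_tbd
    return foreground
```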
Modelling Feedback in Virtual Patients: An Iterative Approach.
Stathakarou, Natalia; Kononowicz, Andrzej A; Henningsohn, Lars; McGrath, Cormac
2018-01-01
Virtual Patients (VPs) offer learners the opportunity to practice clinical reasoning skills and have recently been integrated into Massive Open Online Courses (MOOCs). Feedback is a central part of a branched VP, allowing the learner to reflect on the consequences of their decisions and actions. However, there is insufficient guidance on how to design feedback models within VPs, especially in the context of their application in MOOCs. In this paper, we share our experiences from building a feedback model for a bladder cancer VP in a Urology MOOC, following an iterative process in three steps. Our results demonstrate how we can systematize the process of improving the quality of VP components by applying known literature frameworks and extending them with a feedback module. We illustrate the design and re-design process and exemplify with content from our VP. Our results can act as a starting point for discussions on modelling feedback in VPs and invite future research on the topic.
Schiller, Claire; Winters, Meghan; Hanson, Heather M; Ashe, Maureen C
2013-05-02
Stakeholders, as originally defined in theory, are groups or individuals who can affect or are affected by an issue. Stakeholders are an important source of information in health research, providing critical perspectives and new insights on the complex determinants of health. The intersection of built and social environments with older adult mobility is an area of research that is fundamentally interdisciplinary and would benefit from a better understanding of stakeholder perspectives. Although a rich body of literature surrounds stakeholder theory, a systematic process for identifying health stakeholders in practice does not exist. This paper presents a framework of stakeholders related to older adult mobility and the built environment, and further outlines a process for systematically identifying stakeholders that can be applied in other health contexts, with a particular emphasis on concept mapping research. Informed by gaps in the relevant literature, we developed a framework for identifying and categorizing health stakeholders. The framework was created through a novel iterative process of stakeholder identification and categorization. The development entailed a literature search to identify stakeholder categories, representation of identified stakeholders in a visual chart, and correspondence with expert informants to obtain practice-based insight. The three-step, iterative creation process progressed from identifying stakeholder categories, to identifying specific stakeholder groups and soliciting feedback from expert informants. The result was a stakeholder framework comprising seven categories with detailed sub-groups. The main categories of stakeholders were: (1) the Public, (2) Policy makers and governments, (3) Research community, (4) Practitioners and professionals, (5) Health and social service providers, (6) Civil society organizations, and (7) Private business. Stakeholders related to older adult mobility and the built environment span many disciplines and realms of practice. Researchers studying this issue may use the detailed stakeholder framework process we present to identify participants for future projects. Health researchers pursuing stakeholder-based projects in other contexts are encouraged to incorporate this process of stakeholder identification and categorization to ensure systematic consideration of relevant perspectives in their work.
Minimum Nuclear Deterrence Postures in South Asia: An Overview
2001-10-01
states in May 1998, India and Pakistan both espoused nuclear restraint. Their senior officials soon embraced the language of "minimum credible...Air Force and Army. India’s longer-range nuclear-capable missiles such as the Agni, however, are still in the research and development process under...explained in Appendix A, Pakistan continued between 1991 and 1998 to enrich uranium to low- enriched (LEU) levels. Since enrichment is an iterative process
NASA Astrophysics Data System (ADS)
Suzuki, S.; Enoeda, M.; Hatano, T.; Hirose, T.; Hayashi, K.; Tanigawa, H.; Ochiai, K.; Nishitani, T.; Tobita, K.; Akiba, M.
2006-02-01
This paper presents the significant progress made in the research and development (R&D) of key technologies on the water-cooled solid breeder blanket for the ITER test blanket modules in JAERI. Development of module fabrication technology, bonding technology of armours, measurement of thermo-mechanical properties of pebble beds, neutronics studies on a blanket module mockup and tritium release behaviour from a Li2TiO3 pebble bed under neutron-pulsed operation conditions are summarized. With the improvement of the heat treatment process for blanket module fabrication, a fine-grained microstructure of F82H can be obtained by homogenizing it at 1150 °C followed by normalizing it at 930 °C after the hot isostatic pressing process. Moreover, a promising bonding process for a tungsten armour and an F82H structural material was developed using a solid-state bonding method based on uniaxial hot compression without any artificial compliant layer. As a result of high heat flux tests of F82H first wall mockups, it has been confirmed that a fatigue lifetime correlation, which was developed for the ITER divertor, can be made applicable for the F82H first wall mockup. As for R&D on the breeder material, Li2TiO3, the effect of compression loads on effective thermal conductivity of pebble beds has been clarified for the Li2TiO3 pebble bed. The tritium breeding ratio of a simulated multi-layer blanket structure has successfully been measured using 14 MeV neutrons with an accuracy of 10%. The tritium release rate from the Li2TiO3 pebble has also been successfully measured with pulsed neutron irradiation, which simulates ITER operation.
Is it really theoretical? A review of sampling in grounded theory studies in nursing journals.
McCrae, Niall; Purssell, Edward
2016-10-01
Grounded theory is a distinct method of qualitative research, where core features are theoretical sampling and constant comparative analysis. However, inconsistent application of these activities has been observed in published studies. This review assessed the use of theoretical sampling in grounded theory studies in nursing journals. An adapted systematic review was conducted. Three leading nursing journals (2010-2014) were searched for studies stating grounded theory as the method. Sampling was assessed using a concise rating tool. A high proportion (86%) of the 134 articles described an iterative process of data collection and analysis. However, half of the studies did not demonstrate theoretical sampling, with many studies declaring or indicating a purposive sampling approach throughout. Specific reporting guidelines for grounded theory studies should be developed to ensure that study reports describe an iterative process of fieldwork and theoretical development. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Akiba, Masato; Jitsukawa, Shiroh; Muroga, Takeo
This paper describes the status of blanket technology and material development for fusion power demonstration plants and commercial fusion plants. In particular, the ITER Test Blanket Module, IFMIF, JAERI/DOE HFIR and JUPITER-II projects, which play an important role in developing these technologies, are highlighted. The ITER Test Blanket Module project has been conducted to demonstrate tritium breeding and power generation using test blanket modules, which will be installed in the ITER facility. For structural material development, the present research status of reduced-activation ferritic steels, vanadium alloys, and SiC/SiC composites is reviewed.
Iteration and Prototyping in Creating Technical Specifications.
ERIC Educational Resources Information Center
Flynt, John P.
1994-01-01
Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
Vaccarino, Anthony L; Evans, Kenneth R; Kalali, Amir H; Kennedy, Sidney H; Engelhardt, Nina; Frey, Benicio N; Greist, John H; Kobak, Kenneth A; Lam, Raymond W; MacQueen, Glenda; Milev, Roumen; Placenza, Franca M; Ravindran, Arun V; Sheehan, David V; Sills, Terrence; Williams, Janet B W
2016-01-01
The Depression Inventory Development project is an initiative of the International Society for CNS Drug Development whose goal is to develop a comprehensive and psychometrically sound measurement tool to be utilized as a primary endpoint in clinical trials for major depressive disorder. Using an iterative process between field testing and psychometric analysis and drawing upon expertise of international researchers in depression, the Depression Inventory Development team has established an empirically driven and collaborative protocol for the creation of items to assess symptoms in major depressive disorder. Depression-relevant symptom clusters were identified based on expert clinical and patient input. In addition, as an aid for symptom identification and item construction, the psychometric properties of existing clinical scales (assessing depression and related indications) were evaluated using blinded datasets from pharmaceutical antidepressant drug trials. A series of field tests in patients with major depressive disorder provided the team with data to inform the iterative process of scale development. We report here an overview of the Depression Inventory Development initiative, including results of the third iteration of items assessing symptoms related to anhedonia, cognition, fatigue, general malaise, motivation, anxiety, negative thinking, pain and appetite. The strategies adopted from the Depression Inventory Development program, as an empirically driven and collaborative process for scale development, have provided the foundation to develop and validate measurement tools in other therapeutic areas as well.
ERIC Educational Resources Information Center
Krall, Jodi Stotts; Lohse, Barbara
2010-01-01
Objective: Examine the validity of a self-report measure of eating competence with low-income women. Methods: Twenty-five females (18-49 years old) recruited from low-income venues in Pennsylvania completed cognitive testing through an iterative interview process. Respondents' oral responses were compared to researchers' intended meaning of…
Enhancement Process of Didactic Strategies in a Degree Course for Pre-Service Teachers
ERIC Educational Resources Information Center
Garcias, Adolfina Pérez; Marín, Victoria I.
2017-01-01
This paper presents a study on the enhancement of didactic strategies based on the idea of personal learning environments (PLE). It was conducted through three iterative cycles during three consecutive academic years according to the phases of design-based research applied to teaching in a university course for pre-service teachers in the…
State Share of Instruction Funding to Ohio Public Community Colleges: A Policy Analysis
ERIC Educational Resources Information Center
Johnson, Betsy
2012-01-01
This study investigated various state policies to determine their impact on the state share of instruction (SSI) funding to community colleges in the state of Ohio. To complete the policy analysis, the researcher utilized three policy analysis tools, defined by Gill and Saunders (2010) as iterative processes, intuition and judgment, and advice and…
Researchers Apply Lesson Study: A Cycle of Lesson Planning, Implementation, and Revision
ERIC Educational Resources Information Center
Regan, Kelley S.; Evmenova, Anya S.; Kurz, Leigh Ann; Hughes, Melissa D.; Sacco, Donna; Ahn, Soo Y.; MacVittie, Nichole; Good, Kevin; Boykin, Andrea; Schwartzer, Jessica; Chirinos, David S.
2016-01-01
Scripted lesson plans and/or professional development alone may not be sufficient to encourage teachers to reflect on the quality of their teaching and improve their teaching. One learning tool that teachers may use to improve their teaching is Lesson Study (LS). LS is a collaborative process involving educators, based on concepts of iteration and…
Design and Use of Interactive Social Stories for Children with Autism Spectrum Disorder (ASD)
ERIC Educational Resources Information Center
Sani-Bozkurt, Sunagul; Vuran, Sezgin; Akbulut, Yavuz
2017-01-01
The current study aimed to design technology-supported interactive social stories to teach social skills to children with autism spectrum disorder (ASD). A design-based research was implemented with children with ASD along with the participation of their mothers, teachers, peers and field experts. An iterative remediation process was followed…
ERIC Educational Resources Information Center
Nelson, Peter M.; Demers, Joseph A.; Christ, Theodore J.
2014-01-01
This study details the initial development of the Responsive Environmental Assessment for Classroom Teachers (REACT). REACT was developed as a questionnaire to evaluate student perceptions of the classroom teaching environment. Researchers engaged in an iterative process to develop, field test, and analyze student responses on 100 rating-scale…
ERIC Educational Resources Information Center
Barrett, M. J.; Harmin, Matthew; Maracle, Bryan; Patterson, Molly; Thomson, Christina; Flowers, Michelle; Bors, Kirk
2017-01-01
Using the iterative process of action research, we identify six portals of understanding, called threshold concepts, which can be used as curricular guideposts to disrupt the socially constituted separation, and hierarchy, between humans and the more-than-human. The threshold concepts identified in this study provide focal points for a curriculum…
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that, for a WSC problem involving a set of 100,000 individual Web services in which a valid composition requires the selection of 1,000 services from the available set, a solution can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
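The dynamic-programming core the authors validate is standard; below is a generic value-iteration sketch over a finite MDP. The toy transition and reward arrays are assumptions for illustration, not the paper's WSC formulation.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: (A, S, S) transition probabilities; R: (A, S) expected rewards."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)              # Q[a, s] = R[a, s] + gamma * E[V(s')]
        V_new = Q.max(axis=0)                # greedy backup over actions
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)   # optimal values and a greedy policy
        V = V_new

# toy problem: 2 actions, 4 states, random row-stochastic transitions
rng = np.random.default_rng(1)
P = rng.random((2, 4, 4)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((2, 4))
V, policy = value_iteration(P, R)
print(V, policy)
```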
Increasing High School Student Interest in Science: An Action Research Study
NASA Astrophysics Data System (ADS)
Vartuli, Cindy A.
An action research study was conducted to determine how to increase student interest in learning science and pursuing a STEM career. The study began by exploring 10th-grade student and teacher perceptions of student interest in science in order to design an instructional strategy for stimulating student interest in learning and pursuing science. Data for this study included responses from 270 students to an on-line science survey and interviews with 11 students and eight science teachers. The action research intervention included two iterations of the STEM Career Project. The first iteration introduced four chemistry classes to the intervention. The researcher used student reflections and a post-project survey to determine if the intervention had influence on the students' interest in pursuing science. The second iteration was completed by three science teachers who had implemented the intervention with their chemistry classes, using student reflections and post-project surveys, as a way to make further procedural refinements and improvements to the intervention and measures. Findings from the exploratory phase of the study suggested students generally had interest in learning science but increasing that interest required including personally relevant applications and laboratory experiences. The intervention included a student-directed learning module in which students investigated three STEM careers and presented information on one of their chosen careers. The STEM Career Project enabled students to explore career possibilities in order to increase their awareness of STEM careers. Findings from the first iteration of the intervention suggested a positive influence on student interest in learning and pursuing science. The second iteration included modifications to the intervention resulting in support for the findings of the first iteration. Results of the second iteration provided modifications that would allow the project to be used for different academic levels. Insights from conducting the action research study provided the researcher with effective ways to make positive changes in her own teaching praxis and the tools used to improve student awareness of STEM career options.
PREFACE: Progress in the ITER Physics Basis
NASA Astrophysics Data System (ADS)
Ikeda, K.
2007-06-01
I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. Their support and encouragement for the preparation of the PIPB were fundamental to its completion. I am pleased to witness the extensive collaborations, the excellent working relationships and the free exchange of views that have been developed among scientists working on magnetic fusion, and I would particularly like to acknowledge the importance which they assign to ITER in their research. This close collaboration and the spirit of free discussion will be essential to the success of ITER. Finally, the PIPB identifies issues which remain in the projection of burning plasma performance to the ITER scale and in the control of burning plasmas. Continued R&D is therefore called for to reduce the uncertainties associated with these issues and to ensure the efficient operation and exploitation of ITER. It is important that the international fusion community maintains a high level of collaboration in the future to address these issues and to prepare the physics basis for ITER operation. ITPA Coordination Committee R. Stambaugh (Chair of ITPA CC, General Atomics, USA) D.J. Campbell (Previous Chair of ITPA CC, European Fusion Development Agreement—Close Support Unit, ITER Organization) M. Shimada (Co-Chair of ITPA CC, ITER Organization) R. Aymar (ITER International Team, CERN) V. Chuyanov (ITER Organization) J.H. Han (Korea Basic Science Institute, Korea) Y. Huo (Zengzhou University, China) Y.S. Hwang (Seoul National University, Korea) N. Ivanov (Kurchatov Institute, Russia) Y. Kamada (Japan Atomic Energy Agency, Naka, Japan) P.K. Kaw (Institute for Plasma Research, India)
S. Konovalov (Kurchatov Institute, Russia) M. Kwon (National Fusion Research Center, Korea) J. Li (Academy of Science, Institute of Plasma Physics, China) S. Mirnov (TRINITI, Russia) Y. Nakamura (National Institute for Fusion Studies, Japan) H. Ninomiya (Japan Atomic Energy Agency, Naka, Japan) E. Oktay (Department of Energy, USA) J. Pamela (European Fusion Development Agreement—Close Support Unit) C. Pan (Southwestern Institute of Physics, China) F. Romanelli (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, Italy and European Fusion Development Agreement—Close Support Unit) N. Sauthoff (Princeton Plasma Physics Laboratory, USA and Oak Ridge National Laboratories, USA) Y. Saxena (Institute for Plasma Research, India) Y. Shimomura (ITER Organization) R. Singh (Institute for Plasma Research, India) S. Takamura (Nagoya University, Japan) K. Toi (National Institute for Fusion Studies, Japan) M. Wakatani (Kyoto University, Japan (deceased)) H. Zohm (Max-Planck-Institut für Plasmaphysik, Garching, Germany)
Plasma wall interaction, a key issue on the way to a steady state burning fusion device
NASA Astrophysics Data System (ADS)
Philipps, V.
2006-04-01
The International Tokamak Experimental Reactor (ITER), the first burning fusion plasma experiment based on the tokamak principle, is ready for construction. It is based on many years of fusion research resulting in a robust design in most areas. Present-day fusion research concentrates on the remaining critical issues which are, to a large extent, connected with processes of plasma wall interaction. This is mainly due to the extended duty cycle and the increase of the plasma stored energy in comparison with present-day machines. Critical topics are the lifetime of the plasma facing components (PFCs) and the long-term tritium retention. These processes are controlled mainly by material erosion, both during steady state operation and transient power losses (disruptions and edge localized modes (ELMs)), and by short- and long-range material migration and re-deposition. The extrapolation from present-day 'full carbon wall' devices suggests that the long-term tritium retention in a burning fusion device would be unacceptably high under these conditions, allowing only an unacceptably small number of pulses in a D-T mixture. As a consequence of this, research activities have been strengthened to understand in more detail the underlying processes of material erosion and re-deposition, to develop methods to remove retained tritium from the PFCs and remote areas of a fusion device and to explore these processes and the plasma performance in more detail with metallic PFCs, such as beryllium (Be) and tungsten (W), which are foreseen for the ITER experiment. This paper outlines the main physical mechanisms leading to material erosion, migration and re-deposition and the associated fuel retention. It addresses the experimental database in these areas and describes the further research strategies that will be needed to tackle critical issues.
United States Research and Development effort on ITER magnet tasks
Martovetsky, Nicolai N.; Reierson, Wayne T.
2011-01-22
This study presents the status of research and development (R&D) magnet tasks that are being performed in support of the U.S. ITER Project Office (USIPO) commitment to provide a central solenoid assembly and toroidal field conductor for the ITER machine to be constructed in Cadarache, France. The following development tasks are presented: winding development, inlets and outlets development, internal and bus joints development and testing, insulation development and qualification, vacuum-pressure impregnation, bus supports, and intermodule structure and materials characterization.
Integrated Collaborative Model in Research and Education with Emphasis on Small Satellite Technology
1996-01-01
feedback; the number of iterations in a complete iteration is referred to as loop depth or iteration depth, g (i). A data packet or packet is data...loop depth, g (i)) is either a finite (constant or variable) or an infinite value. 1) Finite loop depth, variable number of iterations Some problems...design time. The time needed for the first packet to leave and a new initial data to be introduced to the iteration is min(R * ( g (k) * (N+I) + k-1
ERIC Educational Resources Information Center
De Lisle, Jerome; Seunarinesingh, Krishna; Mohammed, Rhoda; Lee-Piggott, Rinnelle
2017-01-01
In this study, methodology and theory were linked to explicate the nature of education practice within schools facing exceptionally challenging circumstances (SFECC) in Trinidad and Tobago. The research design was an iterative quan>QUAL-quan>qual multi-method research programme, consisting of 3 independent projects linked together by overall…
Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings
NASA Astrophysics Data System (ADS)
Hussein Maibed, Zena
2018-05-01
In this paper, we introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive and general extended mappings. The existence of common fixed points is also studied for these processes in Hilbert spaces.
Finding the Optimal Guidance for Enhancing Anchored Instruction
ERIC Educational Resources Information Center
Zydney, Janet Mannheimer; Bathke, Arne; Hasselbring, Ted S.
2014-01-01
This study investigated the effect of different methods of guidance with anchored instruction on students' mathematical problem-solving performance. The purpose of this research was to iteratively design a learning environment to find the optimal level of guidance. Two iterations of the software were compared. The first iteration used explicit…
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
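As a point of reference for the EM family discussed above, here is a minimal MLEM (maximum-likelihood EM) update sketch. The dense system matrix is a toy assumption; real CT implementations use matrix-free projectors, ordered subsets and GPU kernels to accelerate convergence, as the paper describes.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """A: (n_rays, n_voxels) system matrix; y: measured projections (counts)."""
    x = np.ones(A.shape[1])                          # uniform initial image
    sens = A.T @ np.ones(A.shape[0])                 # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)           # data / forward projection
        x *= (A.T @ ratio) / np.maximum(sens, eps)   # multiplicative EM update
    return x

# toy 1D problem: 30 rays, 10 voxels, noiseless data
rng = np.random.default_rng(2)
A = rng.random((30, 10))
x_true = rng.random(10)
x_rec = mlem(A, A @ x_true, n_iter=200)
```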
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
NASA Astrophysics Data System (ADS)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.
2014-08-01
In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. Results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies are discussed, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.
Iterative Processes and Reciprocal Controlling Relationships in a Systemic Intervention
ERIC Educational Resources Information Center
Krapfl, Jon E.; Cooke, John; Sullivan, Timothy; Cogar, William
2009-01-01
This account describes the total reengineering of an organization with the attendant changes to the culture, the nature of the work, the rethinking of the organizational purpose, and the identification of a new customer base and new concepts of how the organization will reach it. The case is presented not as scientific research but as a case of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ives, Robert Lawrence; Marsden, David; Collins, George
Calabazas Creek Research, Inc. developed a 1.5 MW RF load for the ITER fusion research facility currently under construction in France. This program leveraged technology developed in two previous SBIR programs that successfully developed high power RF loads for fusion research applications. This program specifically focused on modifications required by revised technical performance, materials, and assembly specifications for ITER. This program implemented an innovative approach to actively distribute the RF power inside the load to avoid excessive heating or arcing associated with constructive interference. The new design implemented materials and assembly changes required to meet specifications. Critical components were built and successfully tested during the program.
2016-01-01
outputs, customers, and outcomes (see Figure 2.1). In the Taylor-Powell and Henert simple three-part example, the food would constitute an input, finding... [Figure residue; recoverable diagram labels: Activities, Intermediate Customers, Annual Goals, Strategic Goals, Management Objectives, Operations, Mission, External factors.] • Partners are the individuals or organizations that work with programs to conduct activities or enable outputs. • Customers (intermediate and final
Modern Workflow Full Waveform Inversion Applied to North America and the Northern Atlantic
NASA Astrophysics Data System (ADS)
Krischer, Lion; Fichtner, Andreas; Igel, Heiner
2015-04-01
We present the current state of a new seismic tomography model obtained using full waveform inversion of the crustal and upper mantle structure beneath North America and the Northern Atlantic, including the westernmost part of Europe. Parts of the eastern portion of the initial model consist of previous models by Fichtner et al. (2013) and Rickers et al. (2013). The final results of this study will contribute to the 'Comprehensive Earth Model' being developed by the Computational Seismology group at ETH Zurich. Significant challenges include the size of the domain, the uneven event and station coverage, and the strong east-west alignment of seismic ray paths across the North Atlantic. We use as much data as feasible, resulting in several thousand recordings per event depending on the receivers deployed at the earthquakes' origin times. To manage such projects in a reproducible and collaborative manner, we, as tomographers, should abandon ad-hoc scripts and one-time programs, and adopt sustainable and reusable solutions. Therefore we developed the LArge-scale Seismic Inversion Framework (LASIF - http://lasif.net), an open-source toolbox for managing seismic data in the context of non-linear iterative inversions that greatly reduces the time to research. Information on the applied processing, modelling, iterative model updating, what happened during each iteration, and so on is systematically archived. This results in a provenance record of the final model, which in the end significantly enhances the reproducibility of iterative inversions. Additionally, tools for automated data download across different data centers, window selection, misfit measurements, parallel data processing, and input file generation for various forward solvers are provided.
Programmable Iterative Optical Image And Data Processing
NASA Technical Reports Server (NTRS)
Jackson, Deborah J.
1995-01-01
Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative methods and propose two mixed iterative algorithms. None of our algorithms needs any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
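To make the two iteration patterns concrete, the toy sketch below contrasts cyclic and parallel (averaged) steps using interval projections as stand-in firmly nonexpansive operators; the operators, weights and iteration count are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

# projections onto intervals: firmly nonexpansive, common fixed set [1.0, 1.5]
ops = [lambda x: np.clip(x, 0.0, 2.0),
       lambda x: np.clip(x, 1.0, 3.0),
       lambda x: np.clip(x, 0.5, 1.5)]

def cyclic(x, n_iter=100):
    for n in range(n_iter):
        x = ops[n % len(ops)](x)               # apply operators one after another
    return x

def parallel(x, n_iter=100, w=None):
    w = w or [1.0 / len(ops)] * len(ops)       # convex weights summing to one
    for _ in range(n_iter):
        x = sum(wi * T(x) for wi, T in zip(w, ops))  # averaged (parallel) step
    return x

print(cyclic(5.0), parallel(5.0))   # both land in the common fixed set [1.0, 1.5]
```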
FENDL: International reference nuclear data library for fusion applications
NASA Astrophysics Data System (ADS)
Pashchenko, A. B.; Wienke, H.; Ganesan, S.
1996-10-01
The IAEA Nuclear Data Section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available Fusion Evaluated Nuclear Data Library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries covering the necessary nuclear input for all physics and engineering aspects of the material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides, FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format, FENDL/DS-1.0: neutron activation data for dosimetry by foil activation, FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n), He-3(d,p) extracted from ENDF/B-6 and processed, FENDL/E-1.0:data for coupled neutron—photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3, or BROND-2, and a photon—atom interaction data library for 34 elements. The benchmark validation of FENDL-1 as required by the customer, i.e. the ITER team, is considered to be a task of high priority in the coming months. The well tested and validated nuclear data libraries in processed form of the FENDL-2 are expected to be ready by mid 1996 for use by the ITER team in the final phase of ITER EDA after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be electronically transferred to users from the IAEA nuclear data section online system through INTERNET. A grand total of 54 (sub)directories with 845 files with total size of about 2 million blocks or about 1 Gigabyte (1 block = 512 bytes) of numerical data is currently available on-line.
WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Pitsianis, N
2015-06-15
Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub-kernels. Conclusion: Composable projection operators constitute a versatile research tool which can greatly accelerate iterative registration algorithms and may be conducive to the clinical applicability of LIVE. National Institutes of Health Grant No. R01-CA184173; GPU donation by NVIDIA Corporation.
Language Evolution by Iterated Learning with Bayesian Agents
ERIC Educational Resources Information Center
Griffiths, Thomas L.; Kalish, Michael L.
2007-01-01
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute…
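A toy chain of posterior-sampling Bayesian learners illustrates the kind of analysis described above. The two-language setup and all parameters are hypothetical, but the simulated chain's long-run distribution over languages approaches the prior, the signature result for iterated learning with Bayesian agents.

```python
import numpy as np

rng = np.random.default_rng(0)
prior = np.array([0.8, 0.2])   # prior probability of each candidate language
p = 0.9                        # chance an utterance matches the speaker's language
n_utt = 5                      # utterances passed to each new learner

def next_learner(lang):
    """Speaker produces utterances; the next learner samples from the posterior."""
    k = int((rng.random(n_utt) < (p if lang == 0 else 1 - p)).sum())  # type-A count
    like = np.array([p**k * (1 - p)**(n_utt - k),     # likelihood under language 0
                     (1 - p)**k * p**(n_utt - k)])    # likelihood under language 1
    post = prior * like
    return rng.choice(2, p=post / post.sum())

lang, counts = 0, np.zeros(2)
for _ in range(20000):           # generations of transmission
    lang = next_learner(lang)
    counts[lang] += 1
print(counts / counts.sum())     # long-run frequencies approach the prior [0.8, 0.2]
```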
Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Xu, Shuang; Wang, Pei; Lü, Jinhu
2017-01-01
Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines a priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network, and includes some traditional centralities as special cases, such as degree, semi-local centrality and LeaderRank. The Ing process converges in strongly connected networks, with speed depending on the two largest eigenvalues of the transformation matrix. Interestingly, eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists which best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
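A minimal sketch of the neighbour-gathering idea (illustrative, not the authors' exact Ing definition): scores are repeatedly propagated through a transformation matrix from a priori values. With the adjacency matrix as the transformation matrix and a large iteration time, this tends toward eigenvector centrality, the limit case noted in the abstract.

```python
import numpy as np

def ing_scores(W, prior, t=3):
    """W: (n, n) transformation matrix; prior: a priori node information."""
    s = np.asarray(prior, dtype=float)
    for _ in range(t):                # t is the iteration time
        s = W @ s                     # gather neighbours' information
        s /= np.linalg.norm(s)        # normalize for numerical stability
    return s

# toy 4-node path graph, adjacency matrix as the transformation matrix
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(ing_scores(W, prior=np.ones(4)))   # the two middle nodes rank highest
```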
Maintaining Web-based Bibliographies: A Case Study of Iter, the Bibliography of Renaissance Europe.
ERIC Educational Resources Information Center
Castell, Tracy
1997-01-01
Introduces Iter, a nonprofit research project developed for the World Wide Web and dedicated to increasing access to all published materials pertaining to the Renaissance and, eventually, the Middle Ages. Discusses information management issues related to building and maintaining Iter's first Web-based bibliography, focusing on printed secondary…
Development and Evaluation of an Intuitive Operations Planning Process
2006-03-01
designed to be iterative and also prescribes the way in which iterations should occur. On the other hand, participants' perceived level of trust and...
On the breakdown modes and parameter space of Ohmic Tokamak startup
NASA Astrophysics Data System (ADS)
Peng, Yanli; Jiang, Wei; Zhang, Ya; Hu, Xiwei; Zhuang, Ge; Innocenti, Maria; Lapenta, Giovanni
2017-10-01
Tokamak plasma has to be hot. The process of turning the initial dilute neutral hydrogen gas at room temperature into a fully ionized plasma is called tokamak startup. Even after more than 40 years of research, the parameter ranges for successful startup are still determined not by numerical simulations but by trial and error. In recent years, however, startup has drawn much attention due to one of the challenges faced by ITER: the maximum electric field available for startup cannot exceed 0.3 V/m, which makes the parameter range for successful startup narrower. Besides, the physical mechanism is far from being understood either theoretically or numerically. In this work, we have simulated the plasma breakdown phase driven by pure Ohmic heating using a particle-in-cell/Monte Carlo code, with the aim of giving a predictive parameter range for most tokamaks, even for ITER. We have found three situations during the discharge, as a function of the initial parameters: no breakdown, breakdown and runaway. Moreover, breakdown delay and volt-second consumption under different initial conditions are evaluated. In addition, we have simulated breakdown on ITER and confirmed that when the electric field is 0.3 V/m, the optimal pre-filling pressure is 0.001 Pa, which is in good agreement with ITER's design.
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
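A heavily simplified sketch of the tare-load iteration idea (not Galway's actual algorithm): alternate between fitting a linear calibration and re-estimating the tare loads from the zero-load gage readings until the estimate converges. The function, variable names and the purely linear model are all assumptions for illustration.

```python
import numpy as np

def estimate_tares(outputs, applied_loads, zero_outputs, tol=1e-10, max_iter=100):
    """outputs: (n, m) gage readings; applied_loads: (n, k); zero_outputs: (m,)."""
    tare = np.zeros(applied_loads.shape[1])
    for _ in range(max_iter):
        total = applied_loads + tare                         # include current tare guess
        C, *_ = np.linalg.lstsq(outputs, total, rcond=None)  # linear calibration fit
        new_tare = zero_outputs @ C                          # loads implied at zero readings
        if np.abs(new_tare - tare).max() < tol:
            break
        tare = new_tare
    return tare
```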
The development of Drink Less: an alcohol reduction smartphone app for excessive drinkers.
Garnett, Claire; Crane, David; West, Robert; Brown, Jamie; Michie, Susan
2018-05-04
Excessive alcohol consumption poses a serious problem for public health. Digital behavior change interventions have the potential to help users reduce their drinking. In accordance with Open Science principles, this paper describes the development of a smartphone app to help individuals who drink excessively to reduce their alcohol consumption. Following the UK Medical Research Council's guidance and the Multiphase Optimization Strategy, development consisted of two phases: (i) selection of intervention components and (ii) design and development work to implement the chosen components into modules to be evaluated further for inclusion in the app. Phase 1 involved a scoping literature review, expert consensus study and content analysis of existing alcohol apps. Findings were integrated within a broad model of behavior change (Capability, Opportunity, Motivation-Behavior). Phase 2 involved a highly iterative process and used the "Person-Based" approach to promote engagement. From Phase 1, five intervention components were selected: (i) Normative Feedback, (ii) Cognitive Bias Re-training, (iii) Self-monitoring and Feedback, (iv) Action Planning, and (v) Identity Change. Phase 2 indicated that each of these components presented different challenges for implementation as app modules; all required multiple iterations and design changes to arrive at versions that would be suitable for inclusion in a subsequent evaluation study. The development of the Drink Less app involved a thorough process of component identification with a scoping literature review, expert consensus, and review of other apps. Translation of the components into app modules required a highly iterative process involving user testing and design modification.
de Kroon, Marlou L A; Bulthuis, Jozien; Mulder, Wico; Schaafsma, Frederieke G; Anema, Johannes R
2016-12-01
Since the extent of sick leave and the problems of vocational school students are relatively large, we aimed to tailor a sick leave protocol at Dutch lower secondary education schools to the particular context of vocational schools. Four steps of the iterative process of Intervention Mapping (IM) were carried out to adapt this protocol: (1) performing a needs assessment and defining a program objective, (2) determining the performance and change objectives, (3) identifying theory-based methods and practical strategies and (4) developing a program plan. Interviews with students using structured questionnaires, in-depth interviews with relevant stakeholders, a literature review and, finally, a pilot implementation were carried out. A sick leave protocol was developed that was feasible and acceptable for all stakeholders. The main barriers to widespread implementation are time constraints on both monitoring and acting upon sick leave by the school and youth health care. The iterative process of IM has shown its merits in the adaptation of the manual 'A quick return to school is much better' to a sick leave protocol for vocational school students.
Career Cartography: From Stories to Science and Scholarship.
Wilson, Deleise S; Rosemberg, Marie-Anne S; Visovatti, Moira; Munro-Kramer, Michelle L; Feetham, Suzanne
2017-05-01
To present four case scenarios reflecting the process of research career development using career cartography. Career cartography is a novel approach that enables nurses, from all clinical and academic settings, to actively engage in a process that maximizes their clinical, teaching, research, and policy contributions that can improve patient outcomes and the health of the public. Four early-career nurse researchers applied the career cartography framework to describe their iterative process of research career development. They report the development process of each of the components of career cartography, including destination statement, career map, and policy statement. Despite diverse research interests and career mapping approaches, common experiences emerged from the four nurse researchers. Common lessons learned throughout the career cartography process include: (a) have a supportive mentorship team, (b) start early and reflect regularly, (c) be brief and to the point, (d) keep it simple and avoid jargon, (e) be open to change, (f) make time, and (g) focus on the overall career destination. These four case scenarios support the need for nurse researchers to develop their individual career cartography. Regardless of their background, career cartography can help nurse researchers articulate their meaningful contributions to science, policy, and health of the public. © 2017 Sigma Theta Tau International.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method
NASA Astrophysics Data System (ADS)
Mehl, S.
2012-12-01
Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Convergence and computer simulation times of conventional iteratively coupled, Picard-based formulations are compared with those of JFNK formulations to gain insight into the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
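The essence of the Jacobian-free approach (not the Modflow-specific implementation above) fits in a few lines: the Jacobian-vector product needed by the Krylov solver is replaced by a finite-difference Frechet derivative built from residual evaluations alone. A minimal sketch, with an illustrative perturbation heuristic:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, u):
    """One Newton step of a Jacobian-free Newton-Krylov solve of F(u) = 0.
    Only residual evaluations of F are needed, so an existing simulator's
    residual routine can be reused without ever forming a Jacobian."""
    r = F(u)
    base = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u))
    def jv(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        h = base / nv                       # perturbation size: accuracy hinges on this
        return (F(u + h * v) - r) / h       # finite-difference Frechet derivative
    J = LinearOperator((u.size, u.size), matvec=jv)
    du, _ = gmres(J, -r)                    # Krylov (GMRES) solve for the Newton update
    return u + du
```

The choice of `h` here is the standard square-root-of-machine-epsilon heuristic; as the abstract notes, picking appropriate perturbation sizes is exactly the non-trivial part in practice.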
Nested Krylov methods and preserving the orthogonality
NASA Technical Reports Server (NTRS)
De Sturler, Eric; Fokkema, Diederik R.
1993-01-01
Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski, and by Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may reenter the inner iteration process. We therefore propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We discuss some important properties, and we show by experiments that, in terms of matrix-vector products, this modification (almost) always leads to better convergence. However, because more orthogonalizations are performed, it does not always give improved performance in CPU time. Furthermore, we discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course, iteration schemes other than GMRES can also be used as the inner method; methods with short recurrences like BiCGSTAB are of interest.
NASA Astrophysics Data System (ADS)
Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.
2017-11-01
In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
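For illustration only, the iterative step described above can be sketched as a Gauss-Newton update with Tikhonov regularization, started from the neural-network state estimate. The function names, the plain identity regularizer, and the choice to penalize departure from the network guess are all assumptions of this sketch, not details of the RSP algorithm:

```python
import numpy as np

def phillips_tikhonov_retrieval(forward, jacobian, y, x_nn, gamma=1e-2, n_iter=10):
    """Gauss-Newton iteration with Tikhonov regularization.
    forward(x): simulated measurement for state x; jacobian(x): dF/dx
    y: measured vector; x_nn: first guess retrieved by the neural network."""
    x = x_nn.copy()
    for _ in range(n_iter):
        K = jacobian(x)                        # linearize about current state
        r = y - forward(x)                     # measurement residual
        lhs = K.T @ K + gamma * np.eye(x.size)
        rhs = K.T @ r - gamma * (x - x_nn)     # damp departures from the guess
        x = x + np.linalg.solve(lhs, rhs)
    return x
```

Starting near the answer is what buys the reported speed-up: a good first guess means fewer linearizations and fewer non-converging retrievals.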
Not so Complex: Iteration in the Complex Plane
ERIC Educational Resources Information Center
O'Dell, Robin S.
2014-01-01
The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
Developing Conceptual Understanding and Procedural Skill in Mathematics: An Iterative Process.
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Siegler, Robert S.; Alibali, Martha Wagner
2001-01-01
Proposes that conceptual and procedural knowledge develop in an iterative fashion and improved problem representation is one mechanism underlying the relations between them. Two experiments were conducted with 5th and 6th grade students learning about decimal fractions. Results indicate conceptual and procedural knowledge do develop, iteratively,…
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
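As a toy illustration of the relaxation half of this comparison (on a uniform Cartesian grid rather than the paper's cylindrical FVE stencils, with made-up parameters), a Gauss-Seidel sweep for one backward-Euler step of the heat equation looks like:

```python
import numpy as np

def gauss_seidel_heat_step(u, u_old, dt, h, kappa=1.0, sweeps=50):
    """Solve (I - dt*kappa*Laplacian) u = u_old by Gauss-Seidel sweeps
    on a uniform 2D grid with Dirichlet boundaries (boundary rows fixed).
    Updates are in place, so new neighbor values are used immediately."""
    a = dt * kappa / h**2
    for _ in range(sweeps):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                u[i, j] = (u_old[i, j] + a * (u[i + 1, j] + u[i - 1, j]
                                              + u[i, j + 1] + u[i, j - 1])) / (1 + 4 * a)
    return u
```

The slow, smooth-error convergence of exactly this kind of sweep is what motivates the multigrid V-cycles analyzed in the paper.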
McClain, Arianna D; Hekler, Eric B; Gardner, Christopher D
2013-01-01
Previous research from the fields of computer science and engineering highlights the importance of an iterative design process (IDP) for creating more creative and effective solutions. This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall-based intervention developed using IDP on college students' eating behavior and values. Participants were 458 students (52.6% female, age = 19.6 ± 1.5 years [M ± SD]). The intervention was developed via an IDP parallel process. A cluster-randomized controlled study compared differences in eating behavior among students in 4 university dining halls (2 intervention, 2 control). The final intervention was a multicomponent, point-of-selection marketing campaign. Students in the intervention dining halls consumed significantly less junk food and high-fat meat and increased their perceived importance of eating a healthful diet relative to the control group. IDP may be valuable for the development of behavior change interventions.
Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
An elastic-plastic algorithm based on Von Mises and associative flow criteria is implemented in MHOST, a mixed iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of elastic-plastic mixed-iterative analysis is appropriate.
Methods for design and evaluation of integrated hardware-software systems for concurrent computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) a sparse matrix iterative solver in PISCES Fortran; (5) an image processing application of PISCES; and (6) a formal model of concurrent computation being developed.
Anderson Acceleration for Fixed-Point Iterations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Homer F.
The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
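The record above is only a grant summary, but the underlying technique is standard. A minimal textbook-style sketch of Anderson acceleration applied to a fixed-point map g (sliding window of size m; everything here is generic, not specific to the grant's work):

```python
import numpy as np

def anderson(g, x0, m=5, n_iter=100, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x = g(x)."""
    x = np.asarray(x0, dtype=float).copy()
    xs, fs = [], []                              # histories of iterates/residuals
    for _ in range(n_iter):
        f = g(x) - x                             # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        xs.append(x.copy()); fs.append(f.copy())
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]    # keep a sliding window
        if len(fs) > 1:
            dX = np.column_stack([xs[i + 1] - xs[i] for i in range(len(xs) - 1)])
            dF = np.column_stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)  # mix recent residuals
            x = x + f - (dX + dF) @ gamma        # extrapolated (accelerated) update
        else:
            x = x + f                            # plain Picard step to start
    return x
```

For example, `anderson(np.cos, np.array([1.0]))` reaches the fixed point of cos(x) in far fewer residual evaluations than plain Picard iteration.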
The SOFIA Mission Control System Software
NASA Astrophysics Data System (ADS)
Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.
1999-05-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are:
* distributed computing over several UNIX and VxWorks computers
* fast throughput of time-critical data
* use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA)
* extensive configurability via stored, editable configuration files
* use of several computer languages so developers have "the right tool for the job": C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables.
This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.
Hoppe, Annekatrin; Heaney, Catherine A; Fujishiro, Kaori; Gong, Fang; Baron, Sherry
2015-01-01
Despite their rapid increase in number, workers in personal care and service occupations are underrepresented in research on psychosocial work characteristics and occupational health. Some of the research challenges stem from the high proportion of immigrants in these occupations. Language barriers, low literacy, and cultural differences as well as their nontraditional work setting (i.e., providing service for one person in his/her home) make generic questionnaire measures inadequate for capturing salient aspects of personal care and service work. This study presents strategies for (1) identifying psychosocial work characteristics of home care workers that may affect their occupational safety and health and (2) creating survey measures that overcome barriers posed by language, low literacy, and cultural differences. We pursued these aims in four phases: (Phase 1) Six focus groups to identify the psychosocial work characteristics affecting the home care workers' occupational safety and health; (Phase 2) Selection of questionnaire items (i.e., questions or statements to assess the target construct) and first round of cognitive interviews (n = 30) to refine the items in an iterative process; (Phase 3) Item revision and second round of cognitive interviews (n = 11); (Phase 4) Quantitative pilot test to ensure the scales' reliability and validity across three language groups (English, Spanish, and Chinese; total n = 404). Analysis of the data from each phase informed the nature of subsequent phases. This iterative process ensured that survey measures not only met the reliability and validity criteria across groups, but were also meaningful to home care workers. This complex process is necessary when conducting research with nontraditional and multilingual worker populations.
Status of the ITER Cryodistribution
NASA Astrophysics Data System (ADS)
Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.
2017-12-01
Since the conceptual design of the ITER Cryodistribution, many modifications have been applied, owing both to system optimization and to improved knowledge of the clients' requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; standardization of the component arrangement also made it possible to remove a cold box. Another cold box, planned for redundancy, has been removed following modification of the Tokamak in-Cryostat piping layout. In this proceeding we summarize the present design status and component configuration of the ITER Cryodistribution, with all implemented changes aiming at process optimization and simplification as well as operational reliability, stability and flexibility.
Smits, Rochelle; Bryant, Jamie; Sanson-Fisher, Rob; Tzelepis, Flora; Henskens, Frans; Paul, Christine; Stevenson, William
2014-03-14
Effective communication with cancer patients and their families about their disease, treatment options, and possible outcomes may improve psychosocial outcomes. However, traditional approaches to providing information to patients, including verbal information and written booklets, have a number of shortcomings centered on their limited ability to meet patient preferences and literacy levels. New-generation Web-based technologies offer an innovative and pragmatic solution for overcoming these limitations by providing a platform for interactive information seeking, information sharing, and user-centered tailoring. The primary goal of this paper is to discuss the advantages of comprehensive and iterative Web-based technologies for health information provision and propose a four-phase framework for the development of Web-based information tools. The proposed framework draws on our experience of constructing a Web-based information tool for hematological cancer patients and their families. The framework is based on principles for the development and evaluation of complex interventions and draws on the Agile methodology of software programming that emphasizes collaboration and iteration throughout the development process. The DoTTI framework provides a model for a comprehensive and iterative approach to the development of Web-based informational tools for patients. The process involves 4 phases of development: (1) Design and development, (2) Testing early iterations, (3) Testing for effectiveness, and (4) Integration and implementation. At each step, stakeholders (including researchers, clinicians, consumers, and programmers) are engaged in consultations to review progress, provide feedback on versions of the Web-based tool, and based on feedback, determine the appropriate next steps in development. This 4-phase framework is evidence-informed and consumer-centered and could be applied widely to develop Web-based programs for a diverse range of diseases.
Material nonlinear analysis via mixed-iterative finite element method
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1992-01-01
The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.
Exploiting parallel computing with limited program changes using a network of microcomputers
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.
1985-01-01
Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process, in which some subtasks are completed later than others because of an imbalance in computational requirements, is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.
Guthrie, Kate M; Rosen, Rochelle K; Vargas, Sara E; Guillen, Melissa; Steger, Arielle L; Getz, Melissa L; Smith, Kelley A; Ramirez, Jaime J; Kojic, Erna M
2017-10-01
The development of HIV-preventive topical vaginal microbicides has been challenged by a lack of sufficient adherence in later-stage clinical trials to confidently evaluate effectiveness. This dilemma has highlighted the need to integrate translational research earlier in the drug development process, essentially applying behavioral science to facilitate the advances of basic science with respect to the uptake and use of biomedical prevention technologies. In the last several years, there has been increasing recognition that the user experience, specifically the sensory experience, as well as the meaning-making elicited by those sensations, may play a more substantive role than previously thought. Importantly, the role of the user (their sensory perceptions, their judgements of those experiences, and their willingness to use a product) is critical in product uptake and consistent use post-marketing, ultimately realizing gains in global public health. Specifically, a successful prevention product requires an efficacious drug, an efficient drug delivery system, and an effective user. We present an integrated iterative drug development and user experience evaluation method to illustrate how user-centered formulation design can be iterated from the early stages of preclinical development to leverage the user experience. Integrating the user and their product experiences into the formulation design process may help optimize both the efficiency of drug delivery and the effectiveness of the user.
Implementing partnership-driven clinical federated electronic health record data sharing networks.
Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein
2016-09-01
Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and a consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustain meaningful partnerships, and deliver clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and to support knowledge sharing between the two network development teams, which collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment in which to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Small region of interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts while slightly decreasing gray levels. The OS-EM algorithm and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
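For orientation, the two update rules named above have a compact generic form. The sketch below is the textbook ML-EM multiplicative update with an optional ordered-subsets (OS-EM) block loop; the system matrix `A` and measured counts `y` are placeholders, not the study's CT geometry:

```python
import numpy as np

def mlem(A, y, n_iter=20, subsets=None):
    """ML-EM reconstruction; with `subsets`, each update uses one block of
    projection rows and cycles through all blocks (OS-EM), which is why
    OS-EM reaches a comparable image in far fewer full iterations."""
    x = np.ones(A.shape[1])                               # positive start image
    blocks = np.array_split(np.arange(A.shape[0]), subsets or 1)
    sens = [np.maximum(A[b].sum(axis=0), 1e-12) for b in blocks]  # sensitivities
    for _ in range(n_iter):
        for b, s in zip(blocks, sens):
            proj = np.maximum(A[b] @ x, 1e-12)            # forward projection
            x *= (A[b].T @ (y[b] / proj)) / s             # multiplicative EM update
    return x
```

With `subsets=1` this reduces to plain ML-EM; increasing the subset count trades a little stability for the speed-up the study exploits.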
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach, based on the classical conjugate gradient method and known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arises in unsteady Navier-Stokes solvers at each time step.
A new taxonomy for stakeholder engagement in patient-centered outcomes research.
Concannon, Thomas W; Meissner, Paul; Grunbaum, Jo Anne; McElwee, Newell; Guise, Jeanne-Marie; Santa, John; Conway, Patrick H; Daudelin, Denise; Morrato, Elaine H; Leslie, Laurel K
2012-08-01
Despite widespread agreement that stakeholder engagement is needed in patient-centered outcomes research (PCOR), no taxonomy exists to guide researchers and policy makers on how to address this need. We followed an iterative process, including several stages of stakeholder review, to address three questions: (1) Who are the stakeholders in PCOR? (2) What roles and responsibilities can stakeholders have in PCOR? (3) How can researchers start engaging stakeholders? We introduce a flexible taxonomy called the 7Ps of Stakeholder Engagement and Six Stages of Research for identifying stakeholders and developing engagement strategies across the full spectrum of research activities. The path toward engagement will not be uniform across every research program, but this taxonomy offers a common starting point and a flexible approach.
Bounding the Spacecraft Atmosphere Design Space for Future Exploration Missions
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Perka, Alan T.; Duffield, Bruce E.; Jeng, Frank F.
2005-01-01
The selection of spacecraft and space suit atmospheres for future human space exploration missions will play an important, if not critical, role in the ultimate safety, productivity, and cost of such missions. Internal atmosphere pressure and composition (particularly oxygen concentration) influence many aspects of spacecraft and space suit design, operation, and technology development. Optimal atmosphere solutions must be determined by an iterative process involving research, design, development, testing, and systems analysis. A necessary first step in this process is the establishment of working bounds on the atmosphere design space.
Human Engineering of Space Vehicle Displays and Controls
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Holden, Kritina L.; Boyer, Jennifer; Stephens, John-Paul; Ezer, Neta; Sandor, Aniko
2010-01-01
Proper attention to the integration of the human needs in the vehicle displays and controls design process creates a safe and productive environment for crew. Although this integration is critical for all phases of flight, for crew interfaces that are used during dynamic phases (e.g., ascent and entry), the integration is particularly important because of demanding environmental conditions. This panel addresses the process of how human engineering involvement ensures that human-system integration occurs early in the design and development process and continues throughout the lifecycle of a vehicle. This process includes the development of requirements and quantitative metrics to measure design success, research on fundamental design questions, human-in-the-loop evaluations, and iterative design. Processes and results from research on displays and controls; the creation and validation of usability, workload, and consistency metrics; and the design and evaluation of crew interfaces for NASA's Crew Exploration Vehicle are used as case studies.
NASA Astrophysics Data System (ADS)
Kalantari, Bahman
Polynomiography is the algorithmic visualization of iterative systems for computing roots of a complex polynomial. It is well known that iterations of a rational function in the complex plane result in chaotic behavior near its Julia set. In one scheme of computing polynomiography for a given polynomial p(z), we select an individual member from the Basic Family, an infinite fundamental family of rational iteration functions that in particular includes Newton's. Polynomiography is an excellent means for observing, understanding, and comparing chaotic behavior for a variety of iterative systems. Other iterative schemes in polynomiography are possible and result in chaotic behavior of different kinds. In another scheme, the Basic Family is collectively applied to p(z) and the iterates for any seed in the Voronoi cell of a root converge to that root. Polynomiography reveals chaotic behavior of another kind near the boundary of the Voronoi diagram of the roots. We also describe a novel Newton-Ellipsoid iterative system with its own chaos and exhibit images demonstrating polynomiographies of chaotic behavior of different kinds. Finally, we consider chaos for the more general case of polynomiography of complex analytic functions. On the one hand, polynomiography is a powerful medium capable of demonstrating chaos in different forms; it is educationally instructive to students and researchers, and it gives rise to numerous research problems. On the other hand, it is a medium resulting in images with enormous aesthetic appeal to general audiences.
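As a concrete toy example of the first scheme (Newton's method, the best-known member of the Basic Family, applied to p(z) = z^3 - 1), coloring each seed in a grid by the root its orbit reaches produces a basic polynomiograph. The resolution and bounds below are arbitrary choices:

```python
import numpy as np

def newton_polynomiograph(res=400, n_iter=30):
    """Color each seed by the cube root of unity its Newton orbit approaches."""
    t = np.linspace(-1.5, 1.5, res)
    z = t[None, :] + 1j * t[:, None]               # grid of starting seeds
    for _ in range(n_iter):
        z = np.where(np.abs(z) < 1e-12, 1e-12, z)  # guard against division by zero
        z = z - (z**3 - 1) / (3 * z**2)            # Newton step for z^3 - 1
    roots = np.exp(2j * np.pi * np.arange(3) / 3)  # the three cube roots of unity
    return np.argmin(np.abs(z[..., None] - roots), axis=-1)
```

The fractal basin boundaries in the resulting index array are exactly the Julia-set chaos the abstract describes.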
Morphing Aircraft Structures: Research in AFRL/RB
2008-09-01
[Fragmentary text recovered from report extraction.] The solver internally controls the step size for integration, independently of the iterative steps in the process. The design includes an actuation system consisting of multiple internal actuators, centrally computer controlled to implement any commanded morphing configuration. Cited references include "Coupling of Substructures for Dynamic Analyses," AIAA Journal, Vol. 6, No. 7, 1968, pp. 1313-1319, and "Using the State-Dependent Modal Force (MFORCE)".
ERIC Educational Resources Information Center
Dinkelman, Todd
2016-01-01
In "Reinventing the High School Government Course," the authors presented the latest iteration of an ambitious AP government course developed over a seven-year design-based implementation research project. Chiefly addressed to curriculum developers and civics teachers, the article elaborates key design principles, provides a description…
Simultaneous and iterative weighted regression analysis of toxicity tests using a microplate reader.
Galgani, F; Cadiou, Y; Gilbert, F
1992-04-01
A system is described for the determination of LC50 or IC50 by an iterative process based on data obtained from a plate reader, using a marine unicellular alga as the target species. The esterase activity of Tetraselmis suecica on fluorescein diacetate as a substrate was measured using a fluorescence titerplate reader. Simultaneous analysis of the results was performed using an iterative process adopting the sigmoid dose-response function Y = y / [1 + (dose of toxicant / IC50)^slope]. The IC50 (+/- SEM) was estimated (P less than 0.05). An application with phosalone as the toxicant is presented.
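The fitting step lends itself to a compact sketch. Below, the abstract's sigmoid is fit with scipy's least-squares optimizer; the dose-response numbers are invented for illustration, and the parameter names are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(dose, top, ic50, slope):
    # Y = y / [1 + (dose/IC50)^slope], with `top` the response at zero dose
    return top / (1.0 + (dose / ic50) ** slope)

# Hypothetical fluorescence readings from a microplate dilution series
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([98.0, 95.0, 88.0, 62.0, 30.0, 12.0, 4.0])

p0 = [resp.max(), np.median(dose), 1.0]              # initial guesses
popt, pcov = curve_fit(sigmoid, dose, resp, p0=p0)   # iterative least squares
ic50, ic50_sem = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"IC50 = {ic50:.2f} +/- {ic50_sem:.2f}")
```

The standard error reported here comes from the fit covariance, mirroring the IC50 (+/- SEM) estimate described in the abstract.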
Iterative-Transform Phase Retrieval Using Adaptive Diversity
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of recovering low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop. An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate resembles that of a reference flat.
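Stripped of the defocus diversity and the adaptive diversity update that are this article's contribution, the inner iterative-transform engine is essentially the classical Gerchberg-Saxton loop, sketched here for a single focal-plane image with an assumed known pupil amplitude:

```python
import numpy as np

def iterative_transform_phase(pupil_amp, focal_intensity, n_iter=100):
    """Minimal Gerchberg-Saxton-style phase retrieval for one image
    (no diversity, so dynamic range is limited to about one wavelength).
    pupil_amp: known pupil amplitude; focal_intensity: measured image."""
    focal_amp = np.sqrt(focal_intensity)
    phase = np.zeros_like(pupil_amp)
    for _ in range(n_iter):
        field = pupil_amp * np.exp(1j * phase)        # current pupil-plane field
        img = np.fft.fft2(field)                      # propagate to focal plane
        img = focal_amp * np.exp(1j * np.angle(img))  # impose measured amplitude
        back = np.fft.ifft2(img)                      # propagate back to pupil
        phase = np.angle(back)                        # keep phase, reuse amplitude
    return phase
```

The article's inner loop runs this kind of transform cycle once per defocused image and averages the resulting phase estimates; the outer loop then reassigns recovered defocus to the diversity function.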
Ehret, Phillip J; Monroe, Brian M; Read, Stephen J
2015-05-01
We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.
US NDC Modernization Iteration E1 Prototyping Report: Processing Control Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Ryan; Hamlet, Benjamin R.
2014-12-01
During the first iteration of the US NDC Modernization Elaboration phase (E1), the SNL US NDC modernization project team developed an initial survey of applicable COTS solutions, and established exploratory prototyping related to the processing control framework in support of system architecture definition. This report summarizes these activities and discusses planned follow-on work.
ERIC Educational Resources Information Center
Apter, Brian
2014-01-01
An organisational change-process in a UK local authority (LA) over two years is examined using transcribed excerpts from three meetings. The change-process is analysed using a Foucauldian analytical tool, Iterative Learning Conversations (ILCs). An Educational Psychology Service was changed from being primarily an education-focussed…
Archibald, Mandy M; Hartling, Lisa; Ali, Samina; Caine, Vera; Scott, Shannon D
2018-06-05
Although it is well established that family-centered education is critical to managing childhood asthma, the information needs of parents of children with asthma are not being met through current educational approaches. Patient-driven educational materials that leverage the power of storytelling and the arts show promise in communicating health information and assisting in illness self-management. However, such arts-based knowledge translation approaches are in their infancy, and little is known about how to develop such tools for parents. This paper reports on the development of "My Asthma Diary", an innovative knowledge translation tool based on rigorous research evidence and tailored to parents' asthma-related information needs. We used a multi-stage process to develop four eBook prototypes of "My Asthma Diary." We conducted formative research on parents' information needs and identified high-quality research evidence on childhood asthma, and used these data to inform the development of the asthma eBooks. We established interdisciplinary consulting teams with health researchers, practitioners, and artists to help iteratively create the knowledge translation tools. We describe the iterative, transdisciplinary process of developing asthma eBooks, which incorporates: (I) parents' preferences and information needs on childhood asthma, (II) quality evidence on childhood asthma and its management, and (III) the engaging and informative powers of storytelling and visual art as methods to communicate complex health information to parents. We identified four dominant methodological and procedural challenges encountered during this process: (I) working within an interdisciplinary team, (II) quantity and ordering of information, (III) creating a composite narrative, and (IV) balancing actual and ideal management scenarios. We describe a replicable and rigorous multi-staged approach to developing a patient-driven, creative knowledge translation tool, which can be adapted for use with different populations and contexts. We identified specific procedural and methodological challenges that others conducting comparable work should consider, particularly as creative, patient-driven knowledge translation strategies continue to emerge across health disciplines.
Implementation of a deidentified federated data network for population-based cohort discovery.
Anderson, Nicholas; Abend, Aaron; Mandel, Aaron; Geraghty, Estella; Gabriel, Davera; Wynden, Rob; Kamerick, Michael; Anderson, Kent; Rainwater, Julie; Tarczy-Hornoch, Peter
2012-06-01
The Cross-Institutional Clinical Translational Research project explored a federated query tool and looked at how this tool can facilitate clinical trial cohort discovery by managing access to aggregate patient data located within unaffiliated academic medical centers. The project adapted software from the Informatics for Integrating Biology and the Bedside (i2b2) program to connect three Clinical Translational Research Award sites: University of Washington, Seattle, University of California, Davis, and University of California, San Francisco. The project developed an iterative spiral software development model to support the implementation and coordination of this multisite data resource. By standardizing technical infrastructures, policies, and semantics, the project enabled federated querying of deidentified clinical datasets stored in separate institutional environments and identified barriers to engaging users for measuring utility. The authors discuss the iterative development and evaluation phases of the project and highlight the challenges identified and the lessons learned. The common system architecture and translational processes provide high-level (aggregate) deidentified access to a large patient population (>5 million patients), and represent a novel and extensible resource. Enhancing the network for more focused disease areas will require research-driven partnerships represented across all partner sites.
Low-dose CT image reconstruction using gain intervention-based dictionary learning
NASA Astrophysics Data System (ADS)
Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra
2018-05-01
Computed tomography (CT) is extensively utilized in clinical diagnosis. However, the X-ray dose absorbed by the human body may cause somatic damage such as cancer. Owing to this radiation risk, research has focused on the radiation exposure delivered to patients through CT investigations, and low-dose CT has become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques, but these techniques suffer from various issues such as over-smoothing, artifacts, and noise. Therefore, in this paper, we propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is defined adaptively, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing technique for removing artifacts from the reconstructed low-dose CT images. Experiments were performed with the proposed and other low-dose CT reconstruction techniques on well-known benchmark CT images. Extensive experiments show that the proposed technique outperforms the available approaches.
n-Iterative Exponential Forgetting Factor for EEG Signals Parameter Estimation
Palma Orozco, Rosaura
2018-01-01
Electroencephalogram (EEG) signals are of interest because of their relationship with physiological activities, allowing a description of motion, speaking, or thinking. Important research has been developed to take advantage of EEG using classification or prediction algorithms based on parameters that help to describe the signal behavior. Thus, great importance attaches to feature extraction, which is complicated for the Parameter Estimation (PE)-System Identification (SI) process: when based on an average approximation, nonstationary characteristics are present. For PE, a comparison of three forms of iterative-recursive use of the Exponential Forgetting Factor (EFF), combined with a linear function, to identify a synthetic stochastic signal is presented. The form with the best results, as judged by the functional error, is applied to approximate an EEG signal for a simple classification example, showing the effectiveness of our proposal. PMID:29568310
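One common recursive embodiment of an exponential forgetting factor is standard recursive least squares. The sketch below is that textbook form (regressor rows `phi_rows`, forgetting factor `lam`), offered as orientation rather than as any of the specific iterative-recursive variants compared in the paper:

```python
import numpy as np

def rls_eff(phi_rows, y, lam=0.98, delta=1e3):
    """Recursive least squares with exponential forgetting factor lam,
    estimating theta in y_t ~ phi_t . theta. Forgetting discounts old
    samples so the estimate can track a nonstationary signal like EEG."""
    n = phi_rows.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                    # large initial covariance
    for phi, yt in zip(phi_rows, y):
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)        # gain vector
        theta = theta + k * (yt - phi @ theta)   # correct with the innovation
        P = (P - np.outer(k, Pphi)) / lam    # covariance update with forgetting
    return theta
```

Smaller `lam` forgets faster and tracks nonstationarity better, at the price of noisier parameter estimates; that trade-off is what the paper's comparison of EFF variants probes.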
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculation required for full-FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). The rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using an object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using data acquired with PROPELLER MRI, the reconstructed images were saved in the digital imaging and communications in medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%; the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. The image sharpness index was improved by the regularized reconstruction. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions to shorten reconstruction duration.
Modifying Photovoice for community-based participatory Indigenous research.
Castleden, Heather; Garvin, Theresa
2008-03-01
Scientific research occurs within a set of socio-political conditions, and in Canada research involving Indigenous communities has a historical association with colonialism. Consequently, Indigenous peoples have been justifiably sceptical and reluctant to become the subjects of academic research. Community-Based Participatory Research (CBPR) is an attempt to develop culturally relevant research models that address issues of injustice, inequality, and exploitation. The work reported here evaluates Photovoice, a CBPR method that uses participant-employed photography and dialogue to create social change, as used in a research partnership with a First Nation in Western Canada. Content analysis of semi-structured interviews (n=45) evaluated participants' perspectives on the Photovoice process as part of a larger study on health and environment issues. The analysis revealed that Photovoice effectively balanced power, created a sense of ownership, fostered trust, built capacity, and responded to cultural preferences. The authors discuss the necessity of modifying Photovoice by building in an iterative process as key to the methodological success of the project.
Torres, Samantha; de la Riva, Erika E; Tom, Laura S; Clayman, Marla L; Taylor, Chirisse; Dong, Xinqi; Simon, Melissa A
2015-12-01
Despite increasing need to boost the recruitment of underrepresented populations into cancer trials and biobanking research, few tools exist for facilitating dialogue between researchers and potential research participants during the recruitment process. In this paper, we describe the initial processes of a user-centered design cycle to develop a standardized research communication tool prototype for enhancing research literacy among individuals from underrepresented populations considering enrollment in cancer research and biobanking studies. We present qualitative feedback and recommendations on the prototype's design and content from potential end users: five clinical trial recruiters and ten potential research participants recruited from an academic medical center. Participants were given the prototype (a set of laminated cards) and were asked to provide feedback about the tool's content, design elements, and word choices during semi-structured, in-person interviews. Results suggest that the prototype was well received by recruiters and patients alike. They favored the simplicity, lay language, and layout of the cards. They also noted areas for improvement, leading to card refinements that included the following: addressing additional topic areas, clarifying research processes, increasing the number of diverse images, and using alternative word choices. Our process for refining user interfaces and iterating content in early phases of design may inform future efforts to develop tools for use in clinical research or biobanking studies to increase research literacy.
Reducing Design Cycle Time and Cost Through Process Resequencing
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
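The core computation can be illustrated with a design structure matrix (DSM). A minimal sketch, assuming dsm[i, j] = 1 means process i needs output from process j; after resequencing, any dependency on a later process is a feedback coupling. DeMAID uses heuristics where this sketch brute-forces small matrices:

    import numpy as np
    from itertools import permutations

    def feedback_count(dsm, order):
        # After reordering, entries above the diagonal are dependencies
        # on processes that run later, i.e. feedback couplings.
        m = dsm[np.ix_(order, order)]
        return int(np.triu(m, 1).sum())

    def best_sequence(dsm):
        # Exhaustive search over process orderings; minimizing feedback
        # couplings minimizes the iterative subcycles.
        n = dsm.shape[0]
        return min((list(p) for p in permutations(range(n))),
                   key=lambda o: feedback_count(dsm, o))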
Simulation and Analysis of Launch Teams (SALT)
NASA Technical Reports Server (NTRS)
2008-01-01
A SALT effort was initiated in late 2005 with seed funding from the Office of Safety and Mission Assurance Human Factors organization. Its objectives included demonstrating human behavior and performance modeling and simulation technologies for launch team analysis, training, and evaluation. The goal of the research is to improve future NASA operations and training. The project employed an iterative approach, with the first iteration focusing on the last 70 minutes of a nominal-case Space Shuttle countdown, the second iteration focusing on aborts and launch commit criteria violations, the third iteration focusing on Ares I-X communications, and the fourth iteration focusing on Ares I-X Firing Room configurations. SALT applied new commercial off-the-shelf technologies from industry and the Department of Defense in the spaceport domain.
NASA Astrophysics Data System (ADS)
Greenfield, Charles M.
2017-10-01
The US Burning Plasma Organization is pleased to welcome Dr. Bernard Bigot, who will give an update on progress in the ITER Project. Dr. Bigot took over as Director General of the ITER Organization in early 2015 following a distinguished career that included serving as Chairman and CEO of the French Alternative Energies and Atomic Energy Commission and as High Commissioner for ITER in France. During his tenure at ITER the project has moved into high gear, with rapid progress evident on the construction site and preparation of a staged schedule and a research plan leading from where we are today all the way to full DT operation. In an unprecedented international effort, seven partners (China, the European Union, India, Japan, Korea, Russia and the United States) have pooled their financial and scientific resources to build the biggest fusion reactor in history. ITER will open the way to the next step: a demonstration fusion power plant. All DPP attendees are welcome to attend this ITER town meeting.
From Intent to Action: An Iterative Engineering Process
ERIC Educational Resources Information Center
Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain
2015-01-01
Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…
Excavating Culture: Disentangling Ethnic Differences from Contextual Influences in Parenting
Le, Huynh-Nhu; Ceballo, Rosario; Chao, Ruth; Hill, Nancy E.; Murry, Velma McBride; Pinderhughes, Ellen E.
2013-01-01
Historically, much of the research on parenting has not disentangled the influences of race/ethnicity, SES, and culture on family functioning and the development of children and adolescents. This special issue addresses this gap by disentangling ethnic differences in parenting behaviors from their contextual influences, thereby deepening understanding of parenting processes in diverse families. Six members of the Parenting section of the Study Group on Race, Culture and Ethnicity (SGRCE) introduce and implement a novel approach toward understanding this question. The goal of this project is to study culturally related processes and the degree to which they predict parenting. An iterative process was employed to delineate the main parenting constructs (warmth, psychological and behavioral control, monitoring, communication, and self-efficacy), cultural processes, and contextual influences, and to coordinate a data analytic plan utilizing individual datasets with diverse samples to answer the research questions. PMID:24043923
The Iterative Research Cycle: Process-Based Model Evaluation
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2014-12-01
The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.
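One concrete realization of calibrating against summary metrics of the data rather than the raw data is approximate Bayesian computation (ABC); a minimal rejection-sampler sketch under that reading (the talk's diagnostic methodology is richer than this):

    import numpy as np

    def abc_rejection(simulate, summarize, y_obs, prior_draw,
                      n=10000, eps=0.1):
        # Keep parameter draws whose simulated summary metrics fall
        # within eps of the observed summaries.
        s_obs = np.asarray(summarize(y_obs))
        kept = []
        for _ in range(n):
            theta = prior_draw()
            s = np.asarray(summarize(simulate(theta)))
            if np.linalg.norm(s - s_obs) < eps:
                kept.append(theta)
        return np.array(kept)

Choosing summaries that isolate individual process signatures (e.g., recession behavior in a hydrograph) is what lets misfit be traced back to specific model components.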
Exploiting Multi-Step Sample Trajectories for Approximate Value Iteration
2013-09-01
Air Force Research Laboratory/Information Directorate, Rome Research Site, Rome, NY; Binghamton University. Approximate value iteration methods for reinforcement learning (RL) generalize experience from limited samples across large state-action spaces. The function approximators…
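For context, fitted value iteration generalizes the classic dynamic-programming update through a function approximator trained on sampled transitions. A minimal one-step fitted Q-iteration sketch with a linear approximator (the report's contribution, exploiting multi-step trajectories, is not reproduced here):

    import numpy as np

    def fitted_q_iteration(samples, phi, n_actions, gamma=0.95, sweeps=50):
        # samples: list of (s, a, r, s2) transitions;
        # phi(s, a): feature vector for the linear approximator.
        X = np.array([phi(s, a) for s, a, r, s2 in samples])
        w = np.zeros(X.shape[1])
        for _ in range(sweeps):
            # regression targets from the current value estimate
            t = np.array([r + gamma * max(phi(s2, b) @ w
                                          for b in range(n_actions))
                          for s, a, r, s2 in samples])
            w, *_ = np.linalg.lstsq(X, t, rcond=None)
        return w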
Iterated learning and the evolution of language.
Kirby, Simon; Griffiths, Tom; Smith, Kenny
2014-10-01
Iterated learning describes the process whereby an individual learns their behaviour by exposure to the behaviour of another individual, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins. Copyright © 2014 Elsevier Ltd. All rights reserved.
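A minimal agent-based simulation of the iterated learning chain, with Bayesian "sampler" learners (each hypothesis is sampled from the posterior); a classic result in this framework is that such a chain converges to the learners' prior:

    import numpy as np

    rng = np.random.default_rng(0)

    def iterated_learning(prior, likelihood, n_generations=100, n_data=1):
        # prior: (n_h,) probabilities over hypotheses;
        # likelihood: (n_h, n_d) rows give P(datum | hypothesis).
        h = rng.choice(len(prior), p=prior)   # first teacher
        chain = [h]
        for _ in range(n_generations):
            data = rng.choice(likelihood.shape[1], size=n_data,
                              p=likelihood[h])
            post = prior * np.prod(likelihood[:, data], axis=1)
            post /= post.sum()
            h = rng.choice(len(prior), p=post)  # learner samples posterior
            chain.append(h)
        return chain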
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
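The contrast the paper draws can be sketched in a few lines: classical ISTA iterates a fixed gradient-plus-shrinkage map until convergence, while a learned, fixed-depth network (in the spirit of LISTA) applies a small number of layers of the same form with trained matrices and thresholds:

    import numpy as np

    def ista(A, y, lam, n_iter=100):
        # Baseline iterative solver for min 0.5*||Ax-y||^2 + lam*||x||_1.
        L = np.linalg.norm(A, 2) ** 2          # gradient Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x - (A.T @ (A @ x - y)) / L
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
        return x

    def learned_layer(W_e, S, theta, y, x):
        # One layer of a fixed-complexity learned encoder: W_e, S and
        # theta are trained offline rather than prescribed by A.
        c = W_e @ y + S @ x
        return np.sign(c) * np.maximum(np.abs(c) - theta, 0.0)

Because the layer count is fixed, inference cost is deterministic, which is the property the abstract highlights for real-time and large-scale settings.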
A horizon scan of global conservation issues for 2013.
Sutherland, William J; Bardsley, Sarah; Clout, Mick; Depledge, Michael H; Dicks, Lynn V; Fellman, Liz; Fleishman, Erica; Gibbons, David W; Keim, Brandon; Lickorish, Fiona; Margerison, Ceri; Monk, Kathryn A; Norris, Kenneth; Peck, Lloyd S; Prior, Stephanie V; Scharlemann, Jörn P W; Spalding, Mark D; Watkinson, Andrew R
2013-01-01
This paper presents the findings of our fourth annual horizon-scanning exercise, which aims to identify topics that increasingly may affect conservation of biological diversity. The 15 issues were identified via an iterative, transferable process by a team of professional horizon scanners, researchers, practitioners, and a journalist. The 15 topics include the commercial use of antimicrobial peptides, thorium-fuelled nuclear power, and undersea oil production. Copyright © 2012 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Roseman, Jo Ellen; Herrmann-Abell, Cari; Flanagan, Jean; Kruse, Rebecca; Howes, Elaine; Carlson, Janet; Roth, Kathy; Bourdelat-Parks, Brooke
2013-01-01
Researchers at AAAS and BSCS have developed a six-week unit that aims to help middle school students learn important chemistry ideas that can be used to explain growth and repair in animals and plants. By integrating core physical and life science ideas and engaging students in the science practices of modeling and constructing explanations, the…
A research agenda for gastrointestinal and endoscopic surgery.
Urbach, D R; Horvath, K D; Baxter, N N; Jobe, B A; Madan, A K; Pryor, A D; Khaitan, L; Torquati, A; Brower, S T; Trus, T L; Schwaitzberg, S
2007-09-01
Development of a research agenda may help to inform researchers and research-granting agencies about the key research gaps in an area of research and clinical care. The authors sought to develop a list of research questions for which further research was likely to have a major impact on clinical care in the area of gastrointestinal and endoscopic surgery. A formal group process was used to conduct an iterative, anonymous Web-based survey of an expert panel including the general membership of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES). In round 1, research questions were solicited, which were categorized, collapsed, and rewritten in a common format. In round 2, the expert panel rated all the questions using a priority scale ranging from 1 (lowest) to 5 (highest). In round 3, the panel re-rated the 40 questions with the highest mean priority score in round 2. A total of 241 respondents to round 1 submitted 382 questions, which were reduced by a review panel to 106 unique questions encompassing 33 topics in gastrointestinal and endoscopic surgery. In the two successive rounds, respectively, 397 and 385 respondents ranked the questions by priority, then re-ranked the 40 questions with the highest mean priority score. High-priority questions related to antireflux surgery, the oncologic and immune effects of minimally invasive surgery, and morbid obesity. The question with the highest mean priority ranking was: "What is the best treatment (antireflux surgery, endoluminal therapy, or medication) for GERD?" The second highest-ranked question was: "Does minimally invasive surgery improve oncologic outcomes as compared with open surgery?" Other questions covered a broad range of research areas including clinical research, basic science research, education and evaluation, outcomes measurement, and health technology assessment. An iterative, anonymous group survey process was used to develop a research agenda for gastrointestinal and endoscopic surgery consisting of the 40 most important research questions in the field. This research agenda can be used by researchers and research-granting agencies to focus research activity in the areas most likely to have an impact on clinical care, and to appraise the relevance of scientific contributions.
Bass, Kristin M.; Drits-Esser, Dina; Stark, Louisa A.
2016-01-01
The credibility of conclusions made about the effectiveness of educational interventions depends greatly on the quality of the assessments used to measure learning gains. This essay, intended for faculty involved in small-scale projects, courses, or educational research, provides a step-by-step guide to the process of developing, scoring, and validating high-quality content knowledge assessments. We illustrate our discussion with examples from our assessments of high school students’ understanding of concepts in cell biology and epigenetics. Throughout, we emphasize the iterative nature of the development process, the importance of creating instruments aligned to the learning goals of an intervention or curricula, and the importance of collaborating with other content and measurement specialists along the way. PMID:27055776
Lipstein, Ellen A; Britto, Maria T
2015-08-01
In the context of pediatric chronic conditions, patients and families are called upon repeatedly to make treatment decisions. However, little is known about how their decision making evolves over time. The objective was to understand parents' processes for treatment decision making in pediatric chronic conditions. We conducted a qualitative, prospective longitudinal study using recorded clinic visits and individual interviews. After consent was obtained from health care providers, parents, and patients, clinic visits during which treatment decisions were expected to be discussed were video-recorded. Parents then participated in sequential telephone interviews about their decision-making experience. Data were coded by 2 people and analyzed using framework analysis with sequential, time-ordered matrices. 21 families, including 29 parents, participated in video-recording and interviews. We found 3 dominant patterns of decision evolution. Each consisted of a series of decision events, including conversations, disease flares, and researching of treatment options. Within all 3 patterns there were both constant and evolving elements of decision making, such as role perceptions and treatment expectations, respectively. After parents made a treatment decision, they immediately turned to the next decision related to the chronic condition, creating an iterative cycle. In this study, decision making was an iterative process occurring in 3 distinct patterns. Understanding these patterns and the varying elements of parents' decision processes is an essential step toward developing interventions that are appropriate to the setting and that capitalize on the skills families may develop as they gain experience with a chronic condition. Future research should also consider the role of children and adolescents in this decision process. © The Author(s) 2015.
Twostep-by-twostep PIRK-type PC methods with continuous output formulas
NASA Astrophysics Data System (ADS)
Cong, Nguyen Huu; Xuan, Le Ngoc
2008-11-01
This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this way, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods), and the sequential explicit RK codes DOPRI5 and DOP853 available in the literature.
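The basic PIRK idea is a fixed-point (predictor-corrector) iteration of the implicit stage equations of a collocation RK corrector; all stage derivative evaluations within a sweep are independent and hence parallel. A minimal sketch of one step (the continuous output formulas and the twostep-by-twostep bookkeeping of the paper are omitted), using the 2-stage Gauss corrector as an example:

    import numpy as np

    def pirk_step(f, t, y, h, A, b, c, n_corr=3):
        # Predict stage values, then correct by fixed-point iteration of
        # Y_i = y + h * sum_j A[i, j] * f(t + c[j]*h, Y_j).
        Y = np.array([y + ci * h * f(t, y) for ci in c])  # Euler predictor
        for _ in range(n_corr):
            F = np.array([f(t + c[j] * h, Y[j]) for j in range(len(b))])
            Y = y + h * (A @ F)
        F = np.array([f(t + c[j] * h, Y[j]) for j in range(len(b))])
        return y + h * (b @ F)

    # 2-stage Gauss collocation corrector (order 4)
    r3 = np.sqrt(3.0)
    A = np.array([[0.25, 0.25 - r3 / 6.0], [0.25 + r3 / 6.0, 0.25]])
    b = np.array([0.5, 0.5])
    c = np.array([0.5 - r3 / 6.0, 0.5 + r3 / 6.0])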
Kitson, Nicole A; Price, Morgan; Lau, Francis Y; Showler, Grey
2013-10-17
Medication errors are a common type of preventable error in health care causing unnecessary patient harm, hospitalization, and even fatality. Improving communication between providers and between providers and patients is a key aspect of decreasing medication errors and improving patient safety. Medication management requires extensive collaboration and communication across roles and care settings, which can reduce (or contribute to) medication-related errors. Medication management involves key recurrent activities (determine need, prescribe, dispense, administer, and monitor/evaluate) with information communicated within and between each. Despite its importance, there is a lack of conceptual models that explore medication communication specifically across roles and settings. This research seeks to address that gap. The Circle of Care Modeling (CCM) approach was used to build a model of medication communication activities across the circle of care. CCM positions the patient in the centre of his or her own healthcare system; providers and other roles are then modeled around the patient as a web of relationships. Recurrent medication communication activities were mapped to the medication management framework. The research occurred in three iterations, to test and revise the model: Iteration 1 consisted of a literature review and internal team discussion, Iteration 2 consisted of interviews, observation, and a discussion group at a Community Health Centre, and Iteration 3 consisted of interviews and a discussion group in the larger community. Each iteration provided further detail to the Circle of Care medication communication model. Specific medication communication activities were mapped along each communication pathway between roles and to the medication management framework. We could not map all medication communication activities to the medication management framework; we added Coordinate as a separate and distinct recurrent activity. We saw many examples of coordination activities, for instance, Medical Office Assistants acting as a liaison between pharmacists and family physicians to clarify prescription details. Through the use of CCM we were able to unearth tacitly held knowledge to expand our understanding of medication communication. Drawing out the coordination activities could be a missing piece for us to better understand how to streamline and improve multi-step communication processes with a goal of improving patient safety. PMID:24134454
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics (AO) image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame AO images based on a Gaussian noise model. Combining the observation conditions and the AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; we then derive the iterative solution formulas of the AO image under the proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate the proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm has better restoration effects, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for AO image restoration.
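A much-simplified, single-frame version of the Gaussian-noise MAP idea (the paper jointly estimates the PSF and restores multiple frames, which this sketch does not attempt): gradient descent on a quadratically regularized data term with a known, normalized PSF. The name map_deconv is hypothetical.

    import numpy as np
    from scipy.signal import fftconvolve

    def map_deconv(y, psf, lam=1e-3, step=0.5, n_iter=200):
        # y, psf: 2-D arrays. Minimize ||psf * x - y||^2/2 + lam*||x||^2/2
        # by gradient descent; the adjoint of convolution is correlation,
        # i.e. convolution with the flipped kernel. Assumes psf.sum() == 1.
        x = y.copy()
        psf_flip = psf[::-1, ::-1]
        for _ in range(n_iter):
            resid = fftconvolve(x, psf, mode="same") - y
            grad = fftconvolve(resid, psf_flip, mode="same") + lam * x
            x = np.maximum(x - step * grad, 0.0)  # keep intensities >= 0
        return x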
Iterative CT reconstruction using coordinate descent with ordered subsets of data
NASA Astrophysics Data System (ADS)
Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.
2016-04-01
Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
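A minimal sketch of the coordinate-descent building block for a penalized weighted least-squares criterion (here with a simple quadratic penalty, not the paper's; the ordered-subsets acceleration would approximate the inner products below using only a subset of the projection rows per pass):

    import numpy as np

    def icd_wls(A, W, y, beta, n_iter=10):
        # Minimize (y - Ax)' W (y - Ax) + beta * ||x||^2 one pixel at a
        # time, keeping a running residual r = y - A x.
        x = np.zeros(A.shape[1])
        r = y.astype(float).copy()
        for _ in range(n_iter):
            for j in range(A.shape[1]):
                a = A[:, j]
                d = (a @ (W @ r) - beta * x[j]) / (a @ (W @ a) + beta)
                x[j] += d                       # exact 1-D minimization
                r -= d * a                      # residual stays consistent
        return x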
Multidisciplinary systems optimization by linear decomposition
NASA Technical Reports Server (NTRS)
Sobieski, J.
1984-01-01
In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.
Kushniruk, Andre W; Borycki, Elizabeth M
2015-01-01
The development of more usable and effective healthcare information systems has become a critical issue. In the software industry, methodologies such as agile and iterative development processes have emerged to produce more effective and usable systems. These approaches emphasize user needs and promote iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and of iterative processes for system design and re-design. However, how to effectively integrate usability testing methods into rapid and flexible agile design cycles has yet to be fully explored. In this paper we describe the application of an approach known as low-cost rapid usability testing within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.
Executing SPARQL Queries over the Web of Linked Data
NASA Astrophysics Data System (ADS)
Hartig, Olaf; Bizer, Christian; Freytag, Johann-Christoph
The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.
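The iterator pipeline can be sketched as chained generators: each stage consumes partial solutions from the stage below and extends them with matching triples. The toy below uses a static dataset; in the paper's setting the iterators additionally dereference URIs over HTTP during execution, continuously growing the queried dataset (and the proposed non-blocking extension handles request latency):

    def match(triple, pattern, binding):
        # Unify a triple with a pattern (variables start with "?") under
        # an existing binding; return the extended binding or None.
        b = dict(binding)
        for t, p in zip(triple, pattern):
            if isinstance(p, str) and p.startswith("?"):
                if p in b and b[p] != t:
                    return None
                b[p] = t
            elif p != t:
                return None
        return b

    def pattern_iterator(pattern, dataset, input_iter):
        # One pipeline stage per triple pattern of the SPARQL query.
        for binding in input_iter:
            for triple in dataset:
                b = match(triple, pattern, binding)
                if b is not None:
                    yield b

    dataset = [("alice", "knows", "bob"), ("bob", "knows", "carol")]
    plan = pattern_iterator(("?y", "knows", "?z"), dataset,
           pattern_iterator(("?x", "knows", "?y"), dataset, iter([{}])))
    print(list(plan))  # [{'?x': 'alice', '?y': 'bob', '?z': 'carol'}]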
How children perceive fractals: Hierarchical self-similarity and cognitive development
Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh
2014-01-01
The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: Recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted for by second graders' impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: While the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884
Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring
NASA Astrophysics Data System (ADS)
Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank
2018-04-01
Current research questions in the field of geomorphology focus on the impact of climate change on several processes that subsequently cause natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with terrestrial laser scanners, which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology is applied, termed the iterative closest proximity algorithm (ICProx), which solely uses point cloud data as input, similar to the iterative closest point (ICP) algorithm. The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets notably differ in terms of geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70% on average in comparison to reference data.
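The registration core is the classic ICP loop, which ICProx extends by restricting correspondences to automatically identified stable areas. A minimal point-to-point ICP sketch (nearest neighbours, then the best-fit rigid transform via SVD):

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, n_iter=50):
        # src, dst: (N, 3) and (M, 3) point clouds; returns R, t such
        # that R @ p + t maps src points onto dst.
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(dst)
        for _ in range(n_iter):
            cur = src @ R.T + t
            _, idx = tree.query(cur)           # nearest-neighbour pairs
            corr = dst[idx]
            mu_s, mu_d = cur.mean(0), corr.mean(0)
            H = (cur - mu_s).T @ (corr - mu_d)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T            # Kabsch rotation
            t_step = mu_d - R_step @ mu_s
            R, t = R_step @ R, R_step @ t + t_step
        return R, t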
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.
2016-01-01
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie
2017-06-01
This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
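The flavour of the scheme can be conveyed by a toy iterative learning controller in which the tracking error is uniformly quantized, as if encoded for transmission over the digital network, before it updates the next trial's input (the paper's encoder-decoder scheme and time-varying topologies are not modelled here):

    import numpy as np

    def quantize(v, step=0.05):
        # uniform quantizer standing in for the encoder/decoder pair
        return step * np.round(v / step)

    def quantized_ilc(ref, plant, gain=0.5, n_trials=30):
        # Trial-to-trial learning update from the quantized error.
        u = np.zeros_like(ref)
        for _ in range(n_trials):
            e = ref - plant(u)
            u = u + gain * quantize(e)
        return u

    ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))
    u = quantized_ilc(ref, lambda u: 0.8 * u)  # toy static-gain plant

The quantization step bounds the achievable tracking accuracy, which is why quantizer design matters for the consensus tracking result.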
A systematic review of patient safety in mental health: a protocol based on the inpatient setting.
D'Lima, Danielle; Archer, Stephanie; Thibaut, Bethan Ines; Ramtale, Sonny Christian; Dewa, Lindsay H; Darzi, Ara
2016-11-29
Despite the growing international interest in patient safety as a discipline, there has been a lack of exploration of its application to mental health. It cannot be assumed that findings based upon physical health in acute care hospitals can be applied to mental health patients, disorders and settings. To the authors' knowledge, there has only been one review of the literature that focuses on patient safety research in mental health settings, conducted in Canada in 2008. We have identified a need to update this review and develop the methodology in order to strengthen the findings and disseminate internationally for advancement in the field. This systematic review will explore the existing research base on patient safety in mental health within the inpatient setting. To conduct this systematic review, a thorough search across multiple databases will be undertaken, based upon four search facets ("mental health", "patient safety", "research" and "inpatient setting"). The search strategy has been developed based upon the Canadian review accompanied with input from the National Reporting and Learning System (NRLS) taxonomy of patient safety incidents and the Diagnostic and Statistical Manual of Mental Disorders (fifth edition). The screening process will involve perspectives from at least two researchers at all stages with a third researcher invited to review when discrepancies require resolution. Initial inclusion and exclusion criteria have been developed and will be refined iteratively throughout the process. Quality assessment and data extraction of included articles will be conducted by at least two researchers. A data extraction form will be developed, piloted and iterated as necessary in accordance with the research question. Extracted information will be analysed thematically. We believe that this systematic review will make a significant contribution to the advancement of patient safety in mental health inpatient settings. The findings will enable the development and implementation of interventions to improve the quality of care experienced by patients and support the identification of future research priorities. PROSPERO CRD42016034057.
Data and Workflow Management Challenges in Global Adjoint Tomography
NASA Astrophysics Data System (ADS)
Lei, W.; Ruan, Y.; Smith, J. A.; Modrak, R. T.; Orsvuran, R.; Krischer, L.; Chen, Y.; Balasubramanian, V.; Hill, J.; Turilli, M.; Bozdag, E.; Lefebvre, M. P.; Jha, S.; Tromp, J.
2017-12-01
It is crucial to take the complete physics of wave propagation into account in seismic tomography to further improve the resolution of tomographic images. The adjoint method is an efficient way of incorporating 3D wave simulations in seismic tomography. However, global adjoint tomography is computationally expensive, requiring thousands of wavefield simulations and massive data processing. Through our collaboration with the Oak Ridge National Laboratory (ORNL) computing group and an allocation on Titan, ORNL's GPU-accelerated supercomputer, we are now performing our global inversions by assimilating waveform data from over 1,000 earthquakes. The first challenge we encountered is dealing with the sheer amount of seismic data. Data processing based on conventional data formats and processing tools (such as SAC), which are not designed for parallel systems, became our major bottleneck. To facilitate the data processing procedures, we designed the Adaptive Seismic Data Format (ASDF) and developed a set of Python-based processing tools to replace legacy FORTRAN-based software. These tools greatly enhance reproducibility and accountability while taking full advantage of highly parallel systems, and show superior scaling on modern computational platforms. The second challenge is that the data processing workflow contains more than 10 sub-procedures, making it delicate to handle and prone to human mistakes. To reduce human intervention as much as possible, we are developing a framework specifically designed for seismic inversion based on state-of-the-art workflow management research, specifically the Ensemble Toolkit (EnTK), in collaboration with the RADICAL team from Rutgers University. Using the initial developments of the EnTK, we are able to utilize the full computing power of the data processing cluster RHEA at ORNL while keeping human interaction to a minimum and greatly reducing the data processing time. Thanks to all these improvements, we are now able to perform iterations fast enough on a dataset of more than 1,000 earthquakes. Starting from model GLAD-M15 (Bozdag et al., 2016), an elastic 3D model with a transversely isotropic upper mantle, we have successfully performed 5 iterations. Our goal is to finish 10 iterations, i.e., to generate GLAD-M25, by the end of this year.
The development of participatory health research among incarcerated women in a Canadian prison
Murphy, K.; Hanson, D.; Hemingway, C.; Ramsden, V.; Buxton, J.; Granger-Brown, A.; Condello, L-L.; Buchanan, M.; Espinoza-Magana, N.; Edworthy, G.; Hislop, T. G.
2009-01-01
This paper describes the development of a unique prison participatory research project, in which incarcerated women formed a research team, the research activities and the lessons learned. The participatory action research project was conducted in the main short sentence minimum/medium security women's prison located in a Western Canadian province. An ethnographic multi-method approach was used for data collection and analysis. Quantitative data was collected by surveys and analysed using descriptive statistics. Qualitative data was collected from orientation package entries, audio recordings, and written archives of research team discussions, forums and debriefings, and presentations. These data and ethnographic observations were transcribed and analysed using iterative and interpretative qualitative methods and NVivo 7 software. Up to 15 women worked each day as prison research team members; a total of 190 women participated at some time in the project between November 2005 and August 2007. Incarcerated women peer researchers developed the research processes including opportunities for them to develop leadership and technical skills. Through these processes, including data collection and analysis, nine health goals emerged. Lessons learned from the research processes were confirmed by the common themes that emerged from thematic analysis of the research activity data. Incarceration provides a unique opportunity for engagement of women as expert partners alongside academic researchers and primary care workers in participatory research processes to improve their health. PMID:25759141
Bringing values and deliberation to science communication.
Dietz, Thomas
2013-08-20
Decisions always involve both facts and values, whereas most science communication focuses only on facts. If science communication is intended to inform decisions, it must be competent with regard to both facts and values. Public participation inevitably involves both facts and values. Research on public participation suggests that linking scientific analysis to public deliberation in an iterative process can help decision making deal effectively with both facts and values. Thus, linked analysis and deliberation can be an effective tool for science communication. However, challenges remain in conducting such process at the national and global scales, in enhancing trust, and in reconciling diverse values.
ERIC Educational Resources Information Center
Siko, Jason Paul
2012-01-01
This design-based research study examined the effects of a game design project on student test performance, with refinements made to the implementation after each of the three iterations of the study. The changes to the implementation over the three iterations were based on the literature for the three justifications for the use of homemade…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maingi, Rajesh; Zinkle, Steven J.; Foster, Mark S.
2015-05-01
The realization of controlled thermonuclear fusion as an energy source would transform society, providing a nearly limitless energy source with renewable fuel. Under the auspices of the U.S. Department of Energy, the Fusion Energy Sciences (FES) program management recently launched a series of technical workshops to "seek community engagement and input for future program planning activities" in the targeted areas of (1) Integrated Simulation for Magnetic Fusion Energy Sciences, (2) Control of Transients, (3) Plasma Science Frontiers, and (4) Plasma-Materials Interactions aka Plasma-Materials Interface (PMI). Over the past decade, a number of strategic planning activities [1-6] have highlighted PMI and plasma-facing components as a major knowledge gap, which should be a priority for fusion research towards ITER and future demonstration fusion energy systems. There is a strong international consensus that new PMI solutions are required in order for fusion to advance beyond ITER. The goal of the 2015 PMI community workshop was to review recent innovations and improvements in understanding the challenging PMI issues, identify high-priority scientific challenges in PMI, and to discuss potential options to address those challenges. The community response to the PMI research assessment was enthusiastic, with over 80 participants involved in the open workshop held at Princeton Plasma Physics Laboratory on May 4-7, 2015. The workshop provided a useful forum for the scientific community to review progress in scientific understanding achieved during the past decade, and to openly discuss high-priority unresolved research questions. One of the key outcomes of the workshop was a focused set of community-initiated Priority Research Directions (PRDs) for PMI. Five PRDs were identified, labeled A-E, which represent community consensus on the most urgent near-term PMI scientific issues. For each PRD, an assessment was made of the scientific challenges, as well as a set of actions to address those challenges. No prioritization was attempted amongst these five PRDs. We note that ITER, an international collaborative project to substantially extend fusion science and technology, is implicitly a driver and beneficiary of the research described in these PRDs; specific ITER issues are discussed in the background and PRD chapters. For succinctness, we describe these PRDs directly below; a brief introduction to magnetic fusion and the workshop process/timeline is given in Chapter I, and panelists are listed in the Appendix.
NASA Astrophysics Data System (ADS)
Yuniarto, Budi; Kurniawan, Robert
2017-03-01
PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that it uses a variance- or component-based approach; PLS-PM is therefore also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a PLS regression method that can be used in PLS Path Modeling, where it is known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a Back Propagation Neural Network approach. The result is that the MBPLS-PM algorithm can be modified using the Back Propagation Neural Network approach to replace the iterative process in the backward and forward steps used to obtain the matrix t and the matrix u. With this modification, the model parameters obtained are not significantly different from those obtained by the original MBPLS-PM algorithm.
Experimental investigations of helium cryotrapping by argon frost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mack, A.; Perinic, D.; Murdoch, D.
1992-03-01
At the Karlsruhe Nuclear Research Centre (KfK) cryopumping techniques are being investigated by which the gaseous exhausts from the NET/ITER reactor can be pumped out during the burn- and dwell-times. Cryosorption and cryotrapping are techniques which are suitable for this task. The target of the investigations is to test the techniques under NET/ITER conditions and to determine optimum design data for a prototype. They involve measurement of the pumping speed as a function of the gas composition, gas flow and loading condition of the pump surfaces. The following parameters are subjected to variations: Ar/He ratio, specific helium volume flow rate, cryosurface temperature, process gas composition, impurities in argon trapping gas, three-stage operation and two-stage operation. This paper is a description of the experiments on argon trapping techniques started in 1990. Eleven tests as well as the results derived from them are described.
Teachers Supporting Teachers in Urban Schools: What Iterative Research Designs Can Teach Us.
Shernoff, Elisa S; Maríñez-Lora, Ane M; Frazier, Stacy L; Jakobsons, Lara J; Atkins, Marc S; Bonner, Deborah
2011-12-01
Despite alarming rates and negative consequences associated with urban teacher attrition, mentoring programs often fail to target the strongest predictors of attrition: effectiveness around classroom management and engaging learners; and connectedness to colleagues. Using a mixed-method iterative development framework, we highlight the process of developing and evaluating the feasibility of a multi-component professional development model for urban early career teachers. The model includes linking novices with peer-nominated key opinion leader teachers and an external coach who work together to (1) provide intensive support in evidence-based practices for classroom management and engaging learners, and (2) connect new teachers with their larger network of colleagues. Fidelity measures and focus group data illustrated varying attendance rates throughout the school year and that although seminars and professional learning communities were delivered as intended, adaptations to enhance the relevance, authenticity, level, and type of instrumental support were needed. Implications for science and practice are discussed.
Liang, Jennifer J; Tsou, Ching-Huei; Devarakonda, Murthy V
2017-01-01
Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the "ground truth" dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate the potential inaccuracies arising from these cognitive challenges, we present an iterative vetting approach to creating the ground truth for complex NLP tasks. In this paper, we present the methodology and report on its use for an automated problem list generation task, its effect on ground truth quality and system accuracy, and lessons learned from the effort.
Marques, Rita; Gregório, João; Pinheiro, Fernando; Póvoa, Pedro; da Silva, Miguel Mira; Lapão, Luís Velez
2017-01-31
Hospital-acquired infections are still amongst the major problems health systems are facing. Their occurrence can lead to higher morbidity and mortality rates, increased length of hospital stay, and higher costs for both hospital and patients. Performing hand hygiene is a simple and inexpensive prevention measure, but healthcare workers' compliance with it is often far from ideal. To raise awareness regarding hand hygiene compliance, individual behaviour change and performance optimization, we aimed to develop a gamification solution that collects data and provides real-time feedback accurately, in a fun and engaging way. A Design Science Research Methodology (DSRM) was used to conduct this work. DSRM is useful to study the link between research and professional practices by designing, implementing and evaluating artifacts that address a specific need. It follows a development cycle (or iteration) composed of six activities. Two work iterations were performed applying gamification components, each using a different indoor location technology. Preliminary experiments, simulations and field studies were performed in an Intensive Care Unit (ICU) of a Portuguese tertiary hospital. Nurses working in this ICU formed a focus group during the research, participating in several sessions across the implementation process. Nurses enjoyed the concept and considered that it provides a unique opportunity to receive feedback regarding their performance. Tests of the indoor location technology used in the first iteration showed an unacceptable lack of accuracy in distance estimation. Using a proximity-based technique, it was possible to identify the sequence of positions, but the beacons presented unstable behaviour. In the second work iteration, a different indoor location technology was explored, but it did not work properly, so there was no chance of testing the solution as a whole (gamification application included). Combining automated monitoring systems with gamification seems to be an innovative and promising approach, based on the results achieved so far. Involving nurses in the project since the beginning allowed us to align the solution with their needs. Despite strong evolution in recent years, indoor location technologies are still not ready to be applied in healthcare settings such as nursing wards.
Region of interest processing for iterative reconstruction in x-ray computed tomography
NASA Astrophysics Data System (ADS)
Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.
2015-03-01
The recent advancements in graphics card technology raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. Yet for some clinical procedures, like cardiac CT, only an ROI is needed for diagnostics. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort and results in a reconstruction slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. In particular, improvements to the equalization between regions inside and outside an ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.
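The flavor of ML-based iterative reconstruction can be illustrated with the classic ML-EM update; the sketch below uses a toy system matrix and is not the authors' ROI-aware algorithm, which additionally equalizes regions inside and outside the ROI.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Classic ML-EM iteration x <- x * A^T(y / Ax) / (A^T 1), the textbook
    maximum-likelihood update for count data (illustrative sketch only)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        x *= (A.T @ (y / np.maximum(proj, eps))) / np.maximum(sens, eps)
    return x

rng = np.random.default_rng(1)
A = rng.random((120, 40))                      # toy system matrix standing in for the CT geometry
x_true = rng.random(40)
y = rng.poisson(20.0 * (A @ x_true)) / 20.0    # noisy projection data
x_hat = mlem(A, y)
print(float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```

The multiplicative update preserves non-negativity and converges toward the maximum-likelihood image; an ROI variant would restrict the high-resolution grid to the region of interest while treating the exterior coarsely.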
NASA Astrophysics Data System (ADS)
Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.
2011-07-01
In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
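A minimal sketch of the trial-to-trial mechanism, assuming the simplest ILC update with a scalar learning gain standing in for the paper's LMI-designed law, and a toy first-order plant:

```python
import numpy as np

def run_trial(u, a=0.9, b=0.5):
    """One trial of a scalar discrete plant y[t+1] = a*y[t] + b*u[t]; returns the
    output samples that each input sample directly influences."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]

ref = np.sin(np.linspace(0.0, np.pi, 50))   # reference trajectory, same every trial
u = np.zeros(50)
L = 0.8                                      # scalar learning gain (stand-in for the LMI design)
for trial in range(30):
    e = ref - run_trial(u)                   # trial error
    u = u + L * e                            # ILC update: u_{k+1} = u_k + L * e_k
print(float(np.linalg.norm(e)))              # trial error after repeated trials
```

For this plant the lifted trial-to-trial error map is contractive whenever |1 - L*b| < 1, so the error shrinks from repetition to repetition; the 2D/repetitive-process framework in the article additionally constrains the along-the-trial dynamics.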
Overview of the JET results in support to ITER
NASA Astrophysics Data System (ADS)
Litaudon, X.; Abduallev, S.; Abhangi, M.; Abreu, P.; Afzal, M.; Aggarwal, K. M.; Ahlgren, T.; Ahn, J. H.; Aho-Mantila, L.; Aiba, N.; Airila, M.; Albanese, R.; Aldred, V.; Alegre, D.; Alessi, E.; Aleynikov, P.; Alfier, A.; Alkseev, A.; Allinson, M.; Alper, B.; Alves, E.; Ambrosino, G.; Ambrosino, R.; Amicucci, L.; Amosov, V.; Andersson Sundén, E.; Angelone, M.; Anghel, M.; Angioni, C.; Appel, L.; Appelbee, C.; Arena, P.; Ariola, M.; Arnichand, H.; Arshad, S.; Ash, A.; Ashikawa, N.; Aslanyan, V.; Asunta, O.; Auriemma, F.; Austin, Y.; Avotina, L.; Axton, M. D.; Ayres, C.; Bacharis, M.; Baciero, A.; Baião, D.; Bailey, S.; Baker, A.; Balboa, I.; Balden, M.; Balshaw, N.; Bament, R.; Banks, J. W.; Baranov, Y. F.; Barnard, M. A.; Barnes, D.; Barnes, M.; Barnsley, R.; Baron Wiechec, A.; Barrera Orte, L.; Baruzzo, M.; Basiuk, V.; Bassan, M.; Bastow, R.; Batista, A.; Batistoni, P.; Baughan, R.; Bauvir, B.; Baylor, L.; Bazylev, B.; Beal, J.; Beaumont, P. S.; Beckers, M.; Beckett, B.; Becoulet, A.; Bekris, N.; Beldishevski, M.; Bell, K.; Belli, F.; Bellinger, M.; Belonohy, É.; Ben Ayed, N.; Benterman, N. A.; Bergsåker, H.; Bernardo, J.; Bernert, M.; Berry, M.; Bertalot, L.; Besliu, C.; Beurskens, M.; Bieg, B.; Bielecki, J.; Biewer, T.; Bigi, M.; Bílková, P.; Binda, F.; Bisoffi, A.; Bizarro, J. P. S.; Björkas, C.; Blackburn, J.; Blackman, K.; Blackman, T. R.; Blanchard, P.; Blatchford, P.; Bobkov, V.; Boboc, A.; Bodnár, G.; Bogar, O.; Bolshakova, I.; Bolzonella, T.; Bonanomi, N.; Bonelli, F.; Boom, J.; Booth, J.; Borba, D.; Borodin, D.; Borodkina, I.; Botrugno, A.; Bottereau, C.; Boulting, P.; Bourdelle, C.; Bowden, M.; Bower, C.; Bowman, C.; Boyce, T.; Boyd, C.; Boyer, H. J.; Bradshaw, J. M. A.; Braic, V.; Bravanec, R.; Breizman, B.; Bremond, S.; Brennan, P. D.; Breton, S.; Brett, A.; Brezinsek, S.; Bright, M. D. J.; Brix, M.; Broeckx, W.; Brombin, M.; Brosławski, A.; Brown, D. P. D.; Brown, M.; Bruno, E.; Bucalossi, J.; Buch, J.; Buchanan, J.; Buckley, M. A.; Budny, R.; Bufferand, H.; Bulman, M.; Bulmer, N.; Bunting, P.; Buratti, P.; Burckhart, A.; Buscarino, A.; Busse, A.; Butler, N. K.; Bykov, I.; Byrne, J.; Cahyna, P.; Calabrò, G.; Calvo, I.; Camenen, Y.; Camp, P.; Campling, D. C.; Cane, J.; Cannas, B.; Capel, A. J.; Card, P. J.; Cardinali, A.; Carman, P.; Carr, M.; Carralero, D.; Carraro, L.; Carvalho, B. B.; Carvalho, I.; Carvalho, P.; Casson, F. J.; Castaldo, C.; Catarino, N.; Caumont, J.; Causa, F.; Cavazzana, R.; Cave-Ayland, K.; Cavinato, M.; Cecconello, M.; Ceccuzzi, S.; Cecil, E.; Cenedese, A.; Cesario, R.; Challis, C. D.; Chandler, M.; Chandra, D.; Chang, C. S.; Chankin, A.; Chapman, I. T.; Chapman, S. C.; Chernyshova, M.; Chitarin, G.; Ciraolo, G.; Ciric, D.; Citrin, J.; Clairet, F.; Clark, E.; Clark, M.; Clarkson, R.; Clatworthy, D.; Clements, C.; Cleverly, M.; Coad, J. P.; Coates, P. A.; Cobalt, A.; Coccorese, V.; Cocilovo, V.; Coda, S.; Coelho, R.; Coenen, J. W.; Coffey, I.; Colas, L.; Collins, S.; Conka, D.; Conroy, S.; Conway, N.; Coombs, D.; Cooper, D.; Cooper, S. R.; Corradino, C.; Corre, Y.; Corrigan, G.; Cortes, S.; Coster, D.; Couchman, A. S.; Cox, M. P.; Craciunescu, T.; Cramp, S.; Craven, R.; Crisanti, F.; Croci, G.; Croft, D.; Crombé, K.; Crowe, R.; Cruz, N.; Cseh, G.; Cufar, A.; Cullen, A.; Curuia, M.; Czarnecka, A.; Dabirikhah, H.; Dalgliesh, P.; Dalley, S.; Dankowski, J.; Darrow, D.; Davies, O.; Davis, W.; Day, C.; Day, I. E.; De Bock, M.; de Castro, A.; de la Cal, E.; de la Luna, E.; De Masi, G.; de Pablos, J. 
L.; De Temmerman, G.; De Tommasi, G.; de Vries, P.; Deakin, K.; Deane, J.; Degli Agostini, F.; Dejarnac, R.; Delabie, E.; den Harder, N.; Dendy, R. O.; Denis, J.; Denner, P.; Devaux, S.; Devynck, P.; Di Maio, F.; Di Siena, A.; Di Troia, C.; Dinca, P.; D'Inca, R.; Ding, B.; Dittmar, T.; Doerk, H.; Doerner, R. P.; Donné, T.; Dorling, S. E.; Dormido-Canto, S.; Doswon, S.; Douai, D.; Doyle, P. T.; Drenik, A.; Drewelow, P.; Drews, P.; Duckworth, Ph.; Dumont, R.; Dumortier, P.; Dunai, D.; Dunne, M.; Ďuran, I.; Durodié, F.; Dutta, P.; Duval, B. P.; Dux, R.; Dylst, K.; Dzysiuk, N.; Edappala, P. V.; Edmond, J.; Edwards, A. M.; Edwards, J.; Eich, Th.; Ekedahl, A.; El-Jorf, R.; Elsmore, C. G.; Enachescu, M.; Ericsson, G.; Eriksson, F.; Eriksson, J.; Eriksson, L. G.; Esposito, B.; Esquembri, S.; Esser, H. G.; Esteve, D.; Evans, B.; Evans, G. E.; Evison, G.; Ewart, G. D.; Fagan, D.; Faitsch, M.; Falie, D.; Fanni, A.; Fasoli, A.; Faustin, J. M.; Fawlk, N.; Fazendeiro, L.; Fedorczak, N.; Felton, R. C.; Fenton, K.; Fernades, A.; Fernandes, H.; Ferreira, J.; Fessey, J. A.; Février, O.; Ficker, O.; Field, A.; Fietz, S.; Figueiredo, A.; Figueiredo, J.; Fil, A.; Finburg, P.; Firdaouss, M.; Fischer, U.; Fittill, L.; Fitzgerald, M.; Flammini, D.; Flanagan, J.; Fleming, C.; Flinders, K.; Fonnesu, N.; Fontdecaba, J. M.; Formisano, A.; Forsythe, L.; Fortuna, L.; Fortuna-Zalesna, E.; Fortune, M.; Foster, S.; Franke, T.; Franklin, T.; Frasca, M.; Frassinetti, L.; Freisinger, M.; Fresa, R.; Frigione, D.; Fuchs, V.; Fuller, D.; Futatani, S.; Fyvie, J.; Gál, K.; Galassi, D.; Gałązka, K.; Galdon-Quiroga, J.; Gallagher, J.; Gallart, D.; Galvão, R.; Gao, X.; Gao, Y.; Garcia, J.; Garcia-Carrasco, A.; García-Muñoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaudio, P.; Gauthier, E.; Gear, D. F.; Gee, S. J.; Geiger, B.; Gelfusa, M.; Gerasimov, S.; Gervasini, G.; Gethins, M.; Ghani, Z.; Ghate, M.; Gherendi, M.; Giacalone, J. C.; Giacomelli, L.; Gibson, C. S.; Giegerich, T.; Gil, C.; Gil, L.; Gilligan, S.; Gin, D.; Giovannozzi, E.; Girardo, J. B.; Giroud, C.; Giruzzi, G.; Glöggler, S.; Godwin, J.; Goff, J.; Gohil, P.; Goloborod'ko, V.; Gomes, R.; Gonçalves, B.; Goniche, M.; Goodliffe, M.; Goodyear, A.; Gorini, G.; Gosk, M.; Goulding, R.; Goussarov, A.; Gowland, R.; Graham, B.; Graham, M. E.; Graves, J. P.; Grazier, N.; Grazier, P.; Green, N. R.; Greuner, H.; Grierson, B.; Griph, F. S.; Grisolia, C.; Grist, D.; Groth, M.; Grove, R.; Grundy, C. N.; Grzonka, J.; Guard, D.; Guérard, C.; Guillemaut, C.; Guirlet, R.; Gurl, C.; Utoh, H. H.; Hackett, L. J.; Hacquin, S.; Hagar, A.; Hager, R.; Hakola, A.; Halitovs, M.; Hall, S. J.; Hallworth Cook, S. P.; Hamlyn-Harris, C.; Hammond, K.; Harrington, C.; Harrison, J.; Harting, D.; Hasenbeck, F.; Hatano, Y.; Hatch, D. R.; Haupt, T. D. V.; Hawes, J.; Hawkes, N. C.; Hawkins, J.; Hawkins, P.; Haydon, P. W.; Hayter, N.; Hazel, S.; Heesterman, P. J. L.; Heinola, K.; Hellesen, C.; Hellsten, T.; Helou, W.; Hemming, O. N.; Hender, T. C.; Henderson, M.; Henderson, S. S.; Henriques, R.; Hepple, D.; Hermon, G.; Hertout, P.; Hidalgo, C.; Highcock, E. G.; Hill, M.; Hillairet, J.; Hillesheim, J.; Hillis, D.; Hizanidis, K.; Hjalmarsson, A.; Hobirk, J.; Hodille, E.; Hogben, C. H. A.; Hogeweij, G. M. D.; Hollingsworth, A.; Hollis, S.; Homfray, D. A.; Horáček, J.; Hornung, G.; Horton, A. R.; Horton, L. D.; Horvath, L.; Hotchin, S. P.; Hough, M. R.; Howarth, P. J.; Hubbard, A.; Huber, A.; Huber, V.; Huddleston, T. M.; Hughes, M.; Huijsmans, G. T. A.; Hunter, C. L.; Huynh, P.; Hynes, A. 
M.; Iglesias, D.; Imazawa, N.; Imbeaux, F.; Imríšek, M.; Incelli, M.; Innocente, P.; Irishkin, M.; Ivanova-Stanik, I.; Jachmich, S.; Jacobsen, A. S.; Jacquet, P.; Jansons, J.; Jardin, A.; Järvinen, A.; Jaulmes, F.; Jednoróg, S.; Jenkins, I.; Jeong, C.; Jepu, I.; Joffrin, E.; Johnson, R.; Johnson, T.; Johnston, Jane; Joita, L.; Jones, G.; Jones, T. T. C.; Hoshino, K. K.; Kallenbach, A.; Kamiya, K.; Kaniewski, J.; Kantor, A.; Kappatou, A.; Karhunen, J.; Karkinsky, D.; Karnowska, I.; Kaufman, M.; Kaveney, G.; Kazakov, Y.; Kazantzidis, V.; Keeling, D. L.; Keenan, T.; Keep, J.; Kempenaars, M.; Kennedy, C.; Kenny, D.; Kent, J.; Kent, O. N.; Khilkevich, E.; Kim, H. T.; Kim, H. S.; Kinch, A.; king, C.; King, D.; King, R. F.; Kinna, D. J.; Kiptily, V.; Kirk, A.; Kirov, K.; Kirschner, A.; Kizane, G.; Klepper, C.; Klix, A.; Knight, P.; Knipe, S. J.; Knott, S.; Kobuchi, T.; Köchl, F.; Kocsis, G.; Kodeli, I.; Kogan, L.; Kogut, D.; Koivuranta, S.; Kominis, Y.; Köppen, M.; Kos, B.; Koskela, T.; Koslowski, H. R.; Koubiti, M.; Kovari, M.; Kowalska-Strzęciwilk, E.; Krasilnikov, A.; Krasilnikov, V.; Krawczyk, N.; Kresina, M.; Krieger, K.; Krivska, A.; Kruezi, U.; Książek, I.; Kukushkin, A.; Kundu, A.; Kurki-Suonio, T.; Kwak, S.; Kwiatkowski, R.; Kwon, O. J.; Laguardia, L.; Lahtinen, A.; Laing, A.; Lam, N.; Lambertz, H. T.; Lane, C.; Lang, P. T.; Lanthaler, S.; Lapins, J.; Lasa, A.; Last, J. R.; Łaszyńska, E.; Lawless, R.; Lawson, A.; Lawson, K. D.; Lazaros, A.; Lazzaro, E.; Leddy, J.; Lee, S.; Lefebvre, X.; Leggate, H. J.; Lehmann, J.; Lehnen, M.; Leichtle, D.; Leichuer, P.; Leipold, F.; Lengar, I.; Lennholm, M.; Lerche, E.; Lescinskis, A.; Lesnoj, S.; Letellier, E.; Leyland, M.; Leysen, W.; Li, L.; Liang, Y.; Likonen, J.; Linke, J.; Linsmeier, Ch.; Lipschultz, B.; Liu, G.; Liu, Y.; Lo Schiavo, V. P.; Loarer, T.; Loarte, A.; Lobel, R. C.; Lomanowski, B.; Lomas, P. J.; Lönnroth, J.; López, J. M.; López-Razola, J.; Lorenzini, R.; Losada, U.; Lovell, J. J.; Loving, A. B.; Lowry, C.; Luce, T.; Lucock, R. M. A.; Lukin, A.; Luna, C.; Lungaroni, M.; Lungu, C. P.; Lungu, M.; Lunniss, A.; Lupelli, I.; Lyssoivan, A.; Macdonald, N.; Macheta, P.; Maczewa, K.; Magesh, B.; Maget, P.; Maggi, C.; Maier, H.; Mailloux, J.; Makkonen, T.; Makwana, R.; Malaquias, A.; Malizia, A.; Manas, P.; Manning, A.; Manso, M. E.; Mantica, P.; Mantsinen, M.; Manzanares, A.; Maquet, Ph.; Marandet, Y.; Marcenko, N.; Marchetto, C.; Marchuk, O.; Marinelli, M.; Marinucci, M.; Markovič, T.; Marocco, D.; Marot, L.; Marren, C. A.; Marshal, R.; Martin, A.; Martin, Y.; Martín de Aguilera, A.; Martínez, F. J.; Martín-Solís, J. R.; Martynova, Y.; Maruyama, S.; Masiello, A.; Maslov, M.; Matejcik, S.; Mattei, M.; Matthews, G. F.; Maviglia, F.; Mayer, M.; Mayoral, M. L.; May-Smith, T.; Mazon, D.; Mazzotta, C.; McAdams, R.; McCarthy, P. J.; McClements, K. G.; McCormack, O.; McCullen, P. A.; McDonald, D.; McIntosh, S.; McKean, R.; McKehon, J.; Meadows, R. C.; Meakins, A.; Medina, F.; Medland, M.; Medley, S.; Meigh, S.; Meigs, A. G.; Meisl, G.; Meitner, S.; Meneses, L.; Menmuir, S.; Mergia, K.; Merrigan, I. R.; Mertens, Ph.; Meshchaninov, S.; Messiaen, A.; Meyer, H.; Mianowski, S.; Michling, R.; Middleton-Gear, D.; Miettunen, J.; Militello, F.; Militello-Asp, E.; Miloshevsky, G.; Mink, F.; Minucci, S.; Miyoshi, Y.; Mlynář, J.; Molina, D.; Monakhov, I.; Moneti, M.; Mooney, R.; Moradi, S.; Mordijck, S.; Moreira, L.; Moreno, R.; Moro, F.; Morris, A. W.; Morris, J.; Moser, L.; Mosher, S.; Moulton, D.; Murari, A.; Muraro, A.; Murphy, S.; Asakura, N. N.; Na, Y. 
S.; Nabais, F.; Naish, R.; Nakano, T.; Nardon, E.; Naulin, V.; Nave, M. F. F.; Nedzelski, I.; Nemtsev, G.; Nespoli, F.; Neto, A.; Neu, R.; Neverov, V. S.; Newman, M.; Nicholls, K. J.; Nicolas, T.; Nielsen, A. H.; Nielsen, P.; Nilsson, E.; Nishijima, D.; Noble, C.; Nocente, M.; Nodwell, D.; Nordlund, K.; Nordman, H.; Nouailletas, R.; Nunes, I.; Oberkofler, M.; Odupitan, T.; Ogawa, M. T.; O'Gorman, T.; Okabayashi, M.; Olney, R.; Omolayo, O.; O'Mullane, M.; Ongena, J.; Orsitto, F.; Orszagh, J.; Oswuigwe, B. I.; Otin, R.; Owen, A.; Paccagnella, R.; Pace, N.; Pacella, D.; Packer, L. W.; Page, A.; Pajuste, E.; Palazzo, S.; Pamela, S.; Panja, S.; Papp, P.; Paprok, R.; Parail, V.; Park, M.; Parra Diaz, F.; Parsons, M.; Pasqualotto, R.; Patel, A.; Pathak, S.; Paton, D.; Patten, H.; Pau, A.; Pawelec, E.; Soldan, C. Paz; Peackoc, A.; Pearson, I. J.; Pehkonen, S.-P.; Peluso, E.; Penot, C.; Pereira, A.; Pereira, R.; Pereira Puglia, P. P.; Perez von Thun, C.; Peruzzo, S.; Peschanyi, S.; Peterka, M.; Petersson, P.; Petravich, G.; Petre, A.; Petrella, N.; Petržilka, V.; Peysson, Y.; Pfefferlé, D.; Philipps, V.; Pillon, M.; Pintsuk, G.; Piovesan, P.; Pires dos Reis, A.; Piron, L.; Pironti, A.; Pisano, F.; Pitts, R.; Pizzo, F.; Plyusnin, V.; Pomaro, N.; Pompilian, O. G.; Pool, P. J.; Popovichev, S.; Porfiri, M. T.; Porosnicu, C.; Porton, M.; Possnert, G.; Potzel, S.; Powell, T.; Pozzi, J.; Prajapati, V.; Prakash, R.; Prestopino, G.; Price, D.; Price, M.; Price, R.; Prior, P.; Proudfoot, R.; Pucella, G.; Puglia, P.; Puiatti, M. E.; Pulley, D.; Purahoo, K.; Pütterich, Th.; Rachlew, E.; Rack, M.; Ragona, R.; Rainford, M. S. J.; Rakha, A.; Ramogida, G.; Ranjan, S.; Rapson, C. J.; Rasmussen, J. J.; Rathod, K.; Rattá, G.; Ratynskaia, S.; Ravera, G.; Rayner, C.; Rebai, M.; Reece, D.; Reed, A.; Réfy, D.; Regan, B.; Regaña, J.; Reich, M.; Reid, N.; Reimold, F.; Reinhart, M.; Reinke, M.; Reiser, D.; Rendell, D.; Reux, C.; Reyes Cortes, S. D. A.; Reynolds, S.; Riccardo, V.; Richardson, N.; Riddle, K.; Rigamonti, D.; Rimini, F. G.; Risner, J.; Riva, M.; Roach, C.; Robins, R. J.; Robinson, S. A.; Robinson, T.; Robson, D. W.; Roccella, R.; Rodionov, R.; Rodrigues, P.; Rodriguez, J.; Rohde, V.; Romanelli, F.; Romanelli, M.; Romanelli, S.; Romazanov, J.; Rowe, S.; Rubel, M.; Rubinacci, G.; Rubino, G.; Ruchko, L.; Ruiz, M.; Ruset, C.; Rzadkiewicz, J.; Saarelma, S.; Sabot, R.; Safi, E.; Sagar, P.; Saibene, G.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Salmon, R.; Salzedas, F.; Samaddar, D.; Samm, U.; Sandiford, D.; Santa, P.; Santala, M. I. K.; Santos, B.; Santucci, A.; Sartori, F.; Sartori, R.; Sauter, O.; Scannell, R.; Schlummer, T.; Schmid, K.; Schmidt, V.; Schmuck, S.; Schneider, M.; Schöpf, K.; Schwörer, D.; Scott, S. D.; Sergienko, G.; Sertoli, M.; Shabbir, A.; Sharapov, S. E.; Shaw, A.; Shaw, R.; Sheikh, H.; Shepherd, A.; Shevelev, A.; Shumack, A.; Sias, G.; Sibbald, M.; Sieglin, B.; Silburn, S.; Silva, A.; Silva, C.; Simmons, P. A.; Simpson, J.; Simpson-Hutchinson, J.; Sinha, A.; Sipilä, S. K.; Sips, A. C. C.; Sirén, P.; Sirinelli, A.; Sjöstrand, H.; Skiba, M.; Skilton, R.; Slabkowska, K.; Slade, B.; Smith, N.; Smith, P. G.; Smith, R.; Smith, T. J.; Smithies, M.; Snoj, L.; Soare, S.; Solano, E. R.; Somers, A.; Sommariva, C.; Sonato, P.; Sopplesa, A.; Sousa, J.; Sozzi, C.; Spagnolo, S.; Spelzini, T.; Spineanu, F.; Stables, G.; Stamatelatos, I.; Stamp, M. F.; Staniec, P.; Stankūnas, G.; Stan-Sion, C.; Stead, M. J.; Stefanikova, E.; Stepanov, I.; Stephen, A. V.; Stephen, M.; Stevens, A.; Stevens, B. 
D.; Strachan, J.; Strand, P.; Strauss, H. R.; Ström, P.; Stubbs, G.; Studholme, W.; Subba, F.; Summers, H. P.; Svensson, J.; Świderski, Ł.; Szabolics, T.; Szawlowski, M.; Szepesi, G.; Suzuki, T. T.; Tál, B.; Tala, T.; Talbot, A. R.; Talebzadeh, S.; Taliercio, C.; Tamain, P.; Tame, C.; Tang, W.; Tardocchi, M.; Taroni, L.; Taylor, D.; Taylor, K. A.; Tegnered, D.; Telesca, G.; Teplova, N.; Terranova, D.; Testa, D.; Tholerus, E.; Thomas, J.; Thomas, J. D.; Thomas, P.; Thompson, A.; Thompson, C.-A.; Thompson, V. K.; Thorne, L.; Thornton, A.; Thrysøe, A. S.; Tigwell, P. A.; Tipton, N.; Tiseanu, I.; Tojo, H.; Tokitani, M.; Tolias, P.; Tomeš, M.; Tonner, P.; Towndrow, M.; Trimble, P.; Tripsky, M.; Tsalas, M.; Tsavalas, P.; Tskhakaya jun, D.; Turner, I.; Turner, M. M.; Turnyanskiy, M.; Tvalashvili, G.; Tyrrell, S. G. J.; Uccello, A.; Ul-Abidin, Z.; Uljanovs, J.; Ulyatt, D.; Urano, H.; Uytdenhouwen, I.; Vadgama, A. P.; Valcarcel, D.; Valentinuzzi, M.; Valisa, M.; Vallejos Olivares, P.; Valovic, M.; Van De Mortel, M.; Van Eester, D.; Van Renterghem, W.; van Rooij, G. J.; Varje, J.; Varoutis, S.; Vartanian, S.; Vasava, K.; Vasilopoulou, T.; Vega, J.; Verdoolaege, G.; Verhoeven, R.; Verona, C.; Verona Rinati, G.; Veshchev, E.; Vianello, N.; Vicente, J.; Viezzer, E.; Villari, S.; Villone, F.; Vincenzi, P.; Vinyar, I.; Viola, B.; Vitins, A.; Vizvary, Z.; Vlad, M.; Voitsekhovitch, I.; Vondráček, P.; Vora, N.; Vu, T.; Pires de Sa, W. W.; Wakeling, B.; Waldon, C. W. F.; Walkden, N.; Walker, M.; Walker, R.; Walsh, M.; Wang, E.; Wang, N.; Warder, S.; Warren, R. J.; Waterhouse, J.; Watkins, N. W.; Watts, C.; Wauters, T.; Weckmann, A.; Weiland, J.; Weisen, H.; Weiszflog, M.; Wellstood, C.; West, A. T.; Wheatley, M. R.; Whetham, S.; Whitehead, A. M.; Whitehead, B. D.; Widdowson, A. M.; Wiesen, S.; Wilkinson, J.; Williams, J.; Williams, M.; Wilson, A. R.; Wilson, D. J.; Wilson, H. R.; Wilson, J.; Wischmeier, M.; Withenshaw, G.; Withycombe, A.; Witts, D. M.; Wood, D.; Wood, R.; Woodley, C.; Wray, S.; Wright, J.; Wright, J. C.; Wu, J.; Wukitch, S.; Wynn, A.; Xu, T.; Yadikin, D.; Yanling, W.; Yao, L.; Yavorskij, V.; Yoo, M. G.; Young, C.; Young, D.; Young, I. D.; Young, R.; Zacks, J.; Zagorski, R.; Zaitsev, F. S.; Zanino, R.; Zarins, A.; Zastrow, K. D.; Zerbini, M.; Zhang, W.; Zhou, Y.; Zilli, E.; Zoita, V.; Zoletnik, S.; Zychor, I.; JET Contributors
2017-10-01
The 2014-2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principle modelling. ITER relevant disruption experiments and first principle modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L-H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration, together with recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for fusion performance. H-mode plasmas at ITER triangularity (H = 1 at β_N ~ 1.8 and n/n_GW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D-T campaign and the 14 MeV neutron calibration strategy are reviewed.
2016-01-01
Family Policy’s SECO program, which reviewed existing SECO metrics and data sources, as well as analytic methods of previous research, to determine ...process that requires an iterative cycle of assessment of collected data (typically, but not solely, quantitative data) to determine whether SECO...RAND suggests five steps to develop and implement the SECO internal monitoring system: Step 1. Describe the logic or theory of how activities are
2016-08-05
technique which used unobserved ”intermediate” variables to break a high-dimensional estimation problem such as least-squares (LS) optimization of a large...Least Squares (GEM-LS). The estimator is iterative and the work in this time period focused on characterizing the convergence properties of this...approach by relaxing the statistical assumptions, which is termed the Relaxed Approximate Graph-Structured Recursive Least Squares (RAGS-RLS). This
1988-01-01
Deblurring: This long-standing research area was wrapped up this year with the preparation of a major tutorial paper. This paper summarizes all of the work...that we have done. The iterative procedures were shown to perform significantly better at the deblurring task than Kalman filtering, Wiener filtering...suited to the resolution of multiple impulsive sources on a uniform background. Such applications occur in radio astronomy and in a number of
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
Formulation; 3.2.2 Methods of Accelerating Convergence; 3.2.3 Application to Image Deblurring; 3.2.4 Extensions; 3.3 Convergence of Iterative Signal... noise-driven linear filters permit development of the joint probability density function or likelihood function for the image. With an expression...spatial linear filter driven by white noise (see Fig. 1). If the probability density function for the white noise is known... [Fig. 1: Model for image]
A Technique for Transient Thermal Testing of Thick Structures
NASA Technical Reports Server (NTRS)
Horn, Thomas J.; Richards, W. Lance; Gong, Leslie
1997-01-01
A new open-loop heat flux control technique has been developed to conduct transient thermal testing of thick, thermally conductive aerospace structures. This technique uses calibration of the radiant heater system power level as a function of heat flux, predicted aerodynamic heat flux, and the properties of an instrumented test article. An iterative process was used to generate open-loop heater power profiles prior to each transient thermal test. Differences between the measured and predicted surface temperatures were used to refine the heater power level command profiles through the iteration process. This iteration process reduced the effects of environmental and test system design factors, which are normally compensated for by closed-loop temperature control, to acceptable levels. The final revised heater power profiles resulted in measured temperature time histories that deviated less than 25 °F from the predicted surface temperatures.
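A minimal sketch of this open-loop refinement idea, using a toy lumped thermal model and a proportional correction gain; both the plant model and all constants are assumptions, not the flight-test hardware or its calibration:

```python
import numpy as np

def surface_temp(power, c=0.05, h=0.01, T0=70.0):
    """Toy lumped thermal model of an instrumented test article: temperature
    rise driven by heater power with a simple loss term (pure assumption)."""
    T = np.full(len(power) + 1, T0)
    for t in range(len(power)):
        T[t + 1] = T[t] + c * power[t] - h * (T[t] - T0)
    return T[1:]

T_pred = 70.0 + 400.0 * np.linspace(0.0, 1.0, 200) ** 0.5  # predicted aero-heating temperatures
power = np.zeros(200)
gain = 5.0                                   # proportional profile-correction gain (assumed)
for it in range(20):
    err = T_pred - surface_temp(power)       # predicted minus "measured" temperature
    power = np.maximum(power + gain * err, 0.0)  # refined open-loop command profile
print(float(np.max(np.abs(err))))            # profile error after the final iteration
```

Each pass corrects the commanded power wherever the simulated response lags or overshoots the prediction, mirroring the pre-test iteration the abstract describes.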
Scientific and technical challenges on the road towards fusion electricity
NASA Astrophysics Data System (ADS)
Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.
2017-10-01
The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of DT operation in ITER, and reaching the full performance at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.
NASA Astrophysics Data System (ADS)
Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.
2016-06-01
Measurement and control of the plasma in real time are critical for advanced tokamak operation. This requires high-speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At the J-TEXT tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time system and FPGA-based FlexRIO devices. With FlexRIO devices, data can be processed by the FPGA in real time before they are passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuration, which makes the framework conform to standard ITER FPSC technology. With this framework, any kind of data acquisition and processing FlexRIO FPGA program can be configured with an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application is able to extract phase-shift information from the intermediate frequency signal produced by the polarimeter-interferometer diagnostic system and to calculate the plasma density profile in real time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.
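One standard way to extract a phase shift from an intermediate-frequency signal is quadrature (IQ) demodulation followed by an arctangent; the sketch below shows the idea in plain Python with illustrative frequencies, whereas the J-TEXT implementation runs on the FlexRIO FPGA and may differ in detail.

```python
import numpy as np

fs, f_if = 1.0e6, 50.0e3                  # sample rate and intermediate frequency (illustrative)
t = np.arange(8192) / fs
phi_true = 0.8                             # phase shift to be recovered [rad]
sig = np.cos(2 * np.pi * f_if * t + phi_true)

# Quadrature demodulation: mix with cosine/sine at the IF and low-pass by averaging.
i_comp = np.mean(sig * np.cos(2 * np.pi * f_if * t))    # ~0.5*cos(phi)
q_comp = np.mean(sig * -np.sin(2 * np.pi * f_if * t))   # ~0.5*sin(phi)
phi = np.arctan2(q_comp, i_comp)           # recovered phase shift
print(phi)                                 # close to 0.8
```

On an FPGA the mixing and averaging map naturally onto multipliers and accumulators, which is why this class of algorithm suits real-time density profile computation.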
Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R
2018-01-01
This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided.
Langlois, Etienne V; Becerril Montekio, Victor; Young, Taryn; Song, Kayla; Alcalde-Rabanal, Jacqueline; Tran, Nhan
2016-03-17
There is an increasing interest worldwide to ensure evidence-informed health policymaking as a means to improve health systems performance. There is a need to engage policymakers in collaborative approaches to generate and use knowledge in real world settings. To address this gap, we implemented two interventions based on iterative exchanges between researchers and policymakers/implementers. This article aims to reflect on the implementation and impact of these multi-site evidence-to-policy approaches implemented in low-resource settings. The first approach was implemented in Mexico and Nicaragua and focused on implementation research facilitated by communities of practice (CoP) among maternal health stakeholders. We conducted a process evaluation of the CoPs and assessed the professionals' abilities to acquire, analyse, adapt and apply research. The second approach, called the Policy BUilding Demand for evidence in Decision making through Interaction and Enhancing Skills (Policy BUDDIES), was implemented in South Africa and Cameroon. The intervention put forth a 'buddying' process to enhance demand and use of systematic reviews by sub-national policymakers. The Policy BUDDIES initiative was assessed using a mixed-methods realist evaluation design. In Mexico, the implementation research supported by CoPs triggered monitoring by local health organizations of the quality of maternal healthcare programs. Health programme personnel involved in CoPs in Mexico and Nicaragua reported improved capacities to identify and use evidence in solving implementation problems. In South Africa, Policy BUDDIES informed a policy framework for medication adherence for chronic diseases, including both HIV and non-communicable diseases. Policymakers engaged in the buddying process reported an enhanced recognition of the value of research, and greater demand for policy-relevant knowledge. The collaborative evidence-to-policy approaches underline the importance of iterations and continuity in the engagement of researchers and policymakers/programme managers, in order to account for swift evolutions in health policy planning and implementation. In developing and supporting evidence-to-policy interventions, due consideration should be given to fit-for-purpose approaches, as different needs in policymaking cycles require adapted processes and knowledge. Greater consideration should be provided to approaches embedding the use of research in real-world policymaking, better suited to the complex adaptive nature of health systems.
ERIC Educational Resources Information Center
Mozelius, Peter; Hettiarachchi, Enosha
2012-01-01
This paper describes the iterative development process of a Learning Object Repository (LOR), named eNOSHA. Discussions on a project for a LOR started at the e-Learning Centre (eLC) at The University of Colombo, School of Computing (UCSC) in 2007. The eLC has during the last decade been developing learning content for a nationwide e-learning…
NASA Astrophysics Data System (ADS)
Al-Chalabi, Rifat M. Khalil
1997-09-01
Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal-based neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized neutron nodal balance coefficients (i.e. the restriction operator), an efficient underlying smoothing algorithm (the power method of NESTLE), and the finer mesh reconstruction algorithm (i.e. the prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined: the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels; hence all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor-physics consistent homogenization technique. The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM nonlinear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other nonlinear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies were performed, examining the effects of the number of MG levels, the homogenized coupling coefficients correction (i.e. the restriction operator), and the fine-mesh reconstruction algorithm (i.e. the prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)
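The interplay of smoothing, restriction, and prolongation can be illustrated with a two-grid cycle on a 1D diffusion-like stencil; this is a generic sketch, with weighted Jacobi standing in for NESTLE's power-method relaxation and algebraic interpolation standing in for the homogenization-based restriction and pin-power-based prolongation described above.

```python
import numpy as np

def jacobi(A, x, b, sweeps=3, w=0.8):
    """Weighted-Jacobi smoothing sweeps (generic stand-in for the relaxation method)."""
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + w * (b - A @ x) / D
    return x

def two_grid(A, b, R, P, x):
    """One two-grid cycle: pre-smooth, restrict the residual, solve the coarse
    problem, prolong the correction back, then post-smooth."""
    x = jacobi(A, x, b)
    A_c = R @ A @ P                          # Galerkin coarse-grid operator
    x = x + P @ np.linalg.solve(A_c, R @ (b - A @ x))
    return jacobi(A, x, b)

n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D diffusion stencil
P = np.zeros((n, n // 2))                    # linear-interpolation prolongation
for j in range(n // 2):
    P[2 * j, j] = 1.0
    P[2 * j + 1, j] = 0.5
    if j + 1 < n // 2:
        P[2 * j + 1, j + 1] = 0.5
R = 0.5 * P.T                                # restriction as scaled transpose
b = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, R, P, x)
print(np.linalg.norm(b - A @ x))             # residual shrinks with each cycle
```

The coarse-grid correction removes the smooth error components that stall simple relaxation, which is the same stalling the multilevel NEM/CMFD scheme is designed to avoid.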
Michelson, Kelly N; Frader, Joel; Sorce, Lauren; Clayman, Marla L; Persell, Stephen D; Fragen, Patricia; Ciolino, Jody D; Campbell, Laura C; Arenson, Melanie; Aniciete, Danica Y; Brown, Melanie L; Ali, Farah N; White, Douglas
2016-12-01
Stakeholder-developed interventions are needed to support pediatric intensive care unit (PICU) communication and decision-making. Few publications delineate methods and outcomes of stakeholder engagement in research. We describe the process and impact of stakeholder engagement on developing a PICU communication and decision-making support intervention. We also describe the resultant intervention. Stakeholders included parents of PICU patients, healthcare team members (HTMs), and research experts. Through a year-long iterative process, we involved 96 stakeholders in 25 meetings and 26 focus groups or interviews. Stakeholders adapted an adult navigator model by identifying core intervention elements and then determining how to operationalize those core elements in pediatrics. The stakeholder input led to PICU-specific refinements, such as supporting transitions after PICU discharge and including ancillary tools. The resultant intervention includes navigator involvement with parents and HTMs and navigator-guided use of ancillary tools. Subsequent research will test the feasibility and efficacy of our intervention.
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
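A hedged sketch of the core idea: a permutation-encoded genetic algorithm that orders processes so that as few couplings as possible point backward (backward couplings force iteration). The coupling data and the fitness function are simplified stand-ins for DeMAID's cost, time, and iteration criteria.

```python
import random

# (i, j) means process i needs output from process j (illustrative data).
coupling = {(0, 3), (1, 0), (2, 1), (3, 2), (4, 1), (2, 4)}

def feedbacks(order):
    """Count couplings that point backward in the ordering; each forces iteration."""
    pos = {p: k for k, p in enumerate(order)}
    return sum(1 for (i, j) in coupling if pos[j] > pos[i])

def crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest in b's order."""
    lo, hi = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[lo:hi] = a[lo:hi]
    rest = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

random.seed(0)
pop = [random.sample(range(5), 5) for _ in range(30)]
for gen in range(40):
    pop.sort(key=feedbacks)
    elite = pop[:10]
    children = [crossover(*random.sample(elite, 2)) for _ in range(20)]
    for ind in children:                      # swap mutation
        if random.random() < 0.3:
            i, j = random.sample(range(5), 2)
            ind[i], ind[j] = ind[j], ind[i]
    pop = elite + children
pop.sort(key=feedbacks)
print(pop[0], feedbacks(pop[0]))              # best ordering and its feedback count
```

Permutation encoding with order crossover keeps every candidate a valid ordering, which is why GAs suit this sequencing problem better than naive bit-string encodings.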
Advances in Global Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Tromp, J.; Bozdag, E.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Modrak, R. T.; Orsvuran, R.; Smith, J. A.; Komatitsch, D.; Peter, D. B.
2017-12-01
Information about Earth's interior comes from seismograms recorded at its surface. Seismic imaging based on spectral-element and adjoint methods has enabled assimilation of this information for the construction of 3D (an)elastic Earth models. These methods account for the physics of wave excitation and propagation by numerically solving the equations of motion, and require the execution of complex computational procedures that challenge the most advanced high-performance computing systems. Current research is petascale; future research will require exascale capabilities. The inverse problem consists of reconstructing the characteristics of the medium from (often noisy) observations. A nonlinear functional is minimized, which involves both the misfit to the measurements and a Tikhonov-type regularization term to tackle inherent ill-posedness. Achieving scalability for the inversion process on tens of thousands of multicore processors is a task that offers many research challenges. We initiated global "adjoint tomography" using 253 earthquakes and produced the first-generation model named GLAD-M15, with a transversely isotropic model parameterization. We are currently running iterations for a second-generation anisotropic model based on the same 253 events. In parallel, we continue iterations for a transversely isotropic model with a larger dataset of 1,040 events to determine higher-resolution plume and slab images. A significant part of our research has focused on eliminating I/O bottlenecks in the adjoint tomography workflow. This has led to the development of a new Adaptable Seismic Data Format based on HDF5, and post-processing tools based on the ADIOS library developed by Oak Ridge National Laboratory. We use the Ensemble Toolkit for workflow stabilization and management to automate the workflow with minimal human interaction.
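In generic notation, the nonlinear functional mentioned above combines a waveform misfit with a Tikhonov-type penalty; the symbols below are standard choices for illustration, not the specific GLAD-M15 measurement and regularization definitions:

```latex
\chi(\mathbf{m}) \;=\; \frac{1}{2}\sum_{s,r}\int_0^T
  \left\| \mathbf{d}_{s,r}(t) - \mathbf{u}_{s,r}(t;\mathbf{m}) \right\|^2 \mathrm{d}t
  \;+\; \frac{\lambda}{2}\,\left\| \mathsf{R}\,(\mathbf{m}-\mathbf{m}_0) \right\|^2
```

Here d are observed seismograms for source s and receiver r, u are spectral-element synthetics for model m, R is a regularization (e.g., smoothing) operator, m_0 is a reference model, and λ sets the trade-off; the gradient of the first term with respect to m is what the adjoint simulations compute.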
Evolutionary Software Development (Developpement Evolutionnaire de Logiciels)
2008-08-01
development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO/IEC... 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as
NASA Astrophysics Data System (ADS)
Antonuk, Larry E.; Zhao, Qihua; Su, Zhong; Yamamoto, Jin; El-Mohri, Youcef; Li, Yixin; Wang, Yi; Sawant, Amit R.
2004-05-01
The development of fluoroscopic imagers exhibiting performance that is primarily limited by the noise of the incident x-ray quanta, even at very low exposures, remains a highly desirable objective for active matrix flat-panel technology. Previous theoretical and empirical studies have indicated that promising strategies for achieving this goal include the development of array designs incorporating improved optical collection fill factors, pixel-level amplifiers, or very high-gain photoconductors. Our group is pursuing all three strategies, and this paper describes progress toward the systematic development of array designs involving the last approach. The research involved the iterative fabrication and evaluation of a series of prototype imagers incorporating a promising high-gain photoconductive material, mercuric iodide (HgI2). Over many cycles of photoconductor deposition and array evaluation, improvements in a variety of properties have been observed and the remaining fundamental challenges have become apparent. For example, process compatibility between the deposited HgI2 and the arrays has been greatly improved, while preserving efficient, prompt signal extraction. As a result, x-ray sensitivities within a factor of two of the nominal limit associated with the single-crystal form of HgI2 have been observed at relatively low electric fields (~0.1 to 0.6 V/μm) for some iterations. In addition, for a number of iterations, performance targets for dark current stability and range of linearity have been met or exceeded. However, spotting of the array, due to localized chemical reactions, is still a concern. Moreover, the dark current, uniformity of pixel response, and degree of charge trapping, though markedly improved for some iterations, require further optimization. Furthermore, achieving the desired performance for all properties simultaneously remains an important goal. In this paper, a broad overview of the progress of the research is presented, the remaining challenges in the development of this photoconductive material are outlined, and prospects for further improvement are discussed.
NASA Astrophysics Data System (ADS)
Swastika, Windra
2017-03-01
A system for recognizing money's nominal value has been developed using an Artificial Neural Network (ANN). ANN with Back Propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the numbers of iterations, weights, and samples are large. One way to speed up the learning process is the Quickprop method. The Quickprop method is based on Newton's method and is able to speed up the learning process by assuming that the error (E) is a parabolic function of each weight; the goal is to drive the error gradient (E') to zero. In our system, we use 5 denominations, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each denomination was scanned and digitally processed, yielding 40 patterns to be used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy of predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the Back Propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the Back Propagation method required 2000 iterations. The prediction accuracy of both methods is higher than 90%.
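A minimal sketch of the Quickprop step on a toy quadratic error surface; the bound that mixes the previous step with a small gradient term is one common safeguard variant, and all constants are illustrative rather than taken from the paper:

```python
import numpy as np

def quickprop_step(grad, grad_prev, dw_prev, lr=0.1, mu=1.75):
    """One Quickprop update: fit a parabola through the previous and current
    gradients and jump toward its minimum; fall back to a gradient step when
    there is no previous step, and cap growth (maximum growth factor mu,
    combined here with a small gradient term as a safeguard variant)."""
    parabola = grad / (grad_prev - grad + 1e-12) * dw_prev
    dw = np.where(dw_prev != 0.0, parabola, -lr * grad)
    bound = mu * np.abs(dw_prev) + lr * np.abs(grad)
    return np.clip(dw, -bound, bound)

# Toy demonstration on the quadratic error surface E(w) = 0.5 * w^T A w.
A = np.diag([1.0, 10.0])
w = np.array([3.0, -2.0])
dw_prev = np.zeros(2)
grad_prev = np.zeros(2)
for it in range(30):
    grad = A @ w                           # error gradient E'(w)
    dw = quickprop_step(grad, grad_prev, dw_prev)
    w, dw_prev, grad_prev = w + dw, dw, grad
print(w)                                   # approaches the minimum at the origin
```

Because the parabola fit uses only two successive gradients per weight, the step costs little more than plain back propagation while taking much larger, curvature-informed jumps.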
P80 SRM low torque flex-seal development - thermal and chemical modeling of molding process
NASA Astrophysics Data System (ADS)
Descamps, C.; Gautronneau, E.; Rousseau, G.; Daurat, M.
2009-09-01
The development of the flex-seal component of the P80 nozzle gave the opportunity to set up new design and manufacturing process methods. Due to the short development lead time required by the VEGA program, the usual iterative manufacturing test workflow, which is time consuming, had to be enhanced in favour of a more predictive approach. A newly refined rubber vulcanization description was built up and identified on laboratory samples. This chemical model was implemented in a thermal analysis code, and the complete model successfully supports the manufacturing processes. These activities were conducted with the support of ESA/CNES Research & Technologies and DGA (General Delegation for Armament).
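As an illustration of the kind of chemical model that can be embedded in a thermal analysis code, the sketch below integrates an nth-order Arrhenius cure law along a molding temperature history; the rate law and every parameter are generic assumptions, not the identified P80 vulcanization model.

```python
import numpy as np

# nth-order Arrhenius cure kinetics: dalpha/dt = A*exp(-Ea/(R*T))*(1 - alpha)^n.
# Generic stand-in for a vulcanization description; every parameter is illustrative.
A_PRE, EA, R_GAS, N_ORD = 1.0e5, 6.0e4, 8.314, 1.5

def cure_profile(times, temps):
    """Explicit-Euler integration of the degree of cure along a temperature history."""
    alpha = np.zeros(len(times))
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        rate = A_PRE * np.exp(-EA / (R_GAS * temps[k - 1])) * (1.0 - alpha[k - 1]) ** N_ORD
        alpha[k] = min(alpha[k - 1] + dt * rate, 1.0)
    return alpha

t = np.linspace(0.0, 7200.0, 2000)              # 2 h molding cycle [s]
T = 300.0 + 80.0 * (1.0 - np.exp(-t / 1200.0))  # heat-up toward ~380 K
alpha = cure_profile(t, T)
print(alpha[-1])                                 # final degree of cure
```

Coupling such a kinetics law to the computed temperature field is what lets a predictive simulation replace part of the build-and-test iteration loop.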
Mixed Material Plasma-Surface Interactions in ITER: Recent Results from the PISCES Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tynan, George R.; Baldwin, Matthew; Doerner, Russell
This paper summarizes recent PISCES studies focused on the effects associated with mixed species plasmas that are similar in composition to what one might expect in ITER. Formation of nanometer scale whiskerlike features occurs in W surfaces exposed to pure He and mixed D/He plasmas and appears to be associated with the formation of He nanometer-scaled bubbles in the W surface. Studies of Be-W alloy formation in Be-seeded D plasmas suggest that this process may be important in ITER all metal wall operational scenarios. Studies also suggest that BeD formation via chemical sputtering of Be walls may be an important first wall erosion mechanism. D retention in ITER mixed materials has also been studied. The D release behavior from beryllium co-deposits does not appear to be a diffusion dominated process, but instead is consistent with thermal release from a number of variable trapping energy sites. As a result, the amount of tritium remaining in codeposits in ITER after baking will be determined by the maximum temperature achieved, rather than by the duration of the baking cycle.
Six sigma: process of understanding the control and capability of ranitidine hydrochloride tablet.
Chabukswar, Ar; Jagdale, Sc; Kuchekar, Bs; Joshi, Vd; Deshmukh, Gr; Kothawade, Hs; Kuckekar, Ab; Lokhande, Pd
2011-01-01
The process of understanding the control and capability (PUCC) is an iterative closed-loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of process understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of the six sigma study of Ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process and finding ways to improve and modify the process so that it yields tablets that are defect-free and give more customer satisfaction. The application of six sigma led to an improved process capability, due to the improved sigma level of the process from 1.5 to 4; a higher yield, due to reduced variation and reduction of thick tablets; reduction in packing line stoppages; reduction in re-work by 50%; a more standardized process, with smooth flow and a change in coating suspension reconstitution level (8% w/w); a large cost reduction of approximately Rs. 90 to 95 lakhs per annum; an improvement in overall efficiency of approximately 30%; and improved overall quality of the product.
Paz-Soldan, Valerie A; Yukich, Josh; Soonthorndhada, Amara; Giron, Maziel; Apperson, Charles S; Ponnusamy, Loganathan; Schal, Coby; Morrison, Amy C; Keating, Joseph; Wesson, Dawn M
2016-01-01
Dengue virus (and Chikungunya and Zika viruses) is transmitted by Aedes aegypti and Aedes albopictus mosquitoes and causes considerable human morbidity and mortality. As there is currently no vaccine or chemoprophylaxis to protect people from dengue virus infection, vector control is the only viable option for disease prevention. The purpose of this paper is to illustrate the design and placement process for an attractive lethal ovitrap to reduce vector populations and to describe lessons learned in the development of the trap. This study was conducted in 2010 in Iquitos, Peru and Lopburi Province, Thailand and used an iterative community-based participatory approach to adjust design specifications of the trap, based on community members' perceptions and feedback, entomological findings in the lab, and design and research team observations. Multiple focus group discussions (FGD) were held over a 6 month period, stratified by age, sex and motherhood status, to inform the design process. Trap testing transitioned from the lab to within households. Through an iterative process of working with specifications from the research team, findings from the laboratory testing, and feedback from FGD, the design team narrowed trap design options from 22 to 6. Comments from the FGD centered on safety for children and pets interacting with traps, durability, maintenance issues, and aesthetics. Testing in the laboratory involved releasing groups of 50 gravid Ae. aegypti in walk-in rooms and assessing what percentage were caught in traps of different colors, with different trap cover sizes, and placed under lighter or darker locations. Two final trap models were mocked up and tested in homes for a week; one model was the top choice in both Iquitos and Lopburi. The community-based participatory process was essential for the development of novel traps that provided effective vector control, but also met the needs and concerns of community members.
The Union RAP: Industry-Wide Research-Action Projects to Win Health and Safety Improvements
McQuiston, Thomas H.; Lippin, Tobi Mae; Anderson, Leeann G.; Beach, M. Josie; Frederick, James; Seymour, Thomas A.
2009-01-01
Unions are ripe to engage in community-based participatory research (CBPR). We briefly profile 3 United Steelworker CBPR projects aimed at uncovering often-undocumented, industry-wide health and safety conditions in which US industrial workers toil. The results are to be used to advocate improvements at workplace, industry, and national policy levels. We offer details of our CBPR approach (Research-Action Project [RAP]) that engages workers and others in all research stages. Elements of RAPs include strategically constructed teams with knowledge of the industry and health and safety and with skills in research, participatory facilitation, and training; reciprocal training on these knowledge and skill areas; iterative processes of large and small group work; use of technology; and facilitator-developed tools and intermediate products. PMID:19890145
Iteration in Early-Elementary Engineering Design
NASA Astrophysics Data System (ADS)
McFarland Kendall, Amber Leigh
K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect of engineering design, because research at the college and professional level suggests iteration improves the designer's understanding of problems and the quality of design solutions. My research presents qualitative case studies of students in kindergarten and third grade as they engage in classroom engineering design challenges which integrate with traditional curricula standards in mathematics, science, and literature. I discuss my results through the lens of activity theory, emphasizing practices, goals, and mediating resources. Through three chapters, I provide insight into how early-elementary students iterate upon their designs by characterizing the ways in which lesson design impacts testing and revision, by analyzing the plan-driven and experimentation-driven approaches that student groups use when solving engineering design challenges, and by investigating how students attend to constraints within the challenge. I connect these findings to teacher practices and curriculum design in order to suggest methods of promoting iteration within open-ended, classroom-based engineering design challenges. This dissertation contributes to the field of engineering education by providing evidence of productive engineering practices in young students and support for the value of engineering design challenges in developing students' participation and agency in these practices.
LC Data QUEST: A Technical Architecture for Community Federated Clinical Data Sharing
Stephens, Kari A.; Lin, Ching-Ping; Baldwin, Laura-Mae; Echo-Hawk, Abigail; Keppel, Gina A.; Buchwald, Dedra; Whitener, Ron J.; Korngiebel, Diane M.; Berg, Alfred O.; Black, Robert A.; Tarczy-Hornoch, Peter
2012-01-01
The University of Washington Institute of Translational Health Sciences is engaged in a project, LC Data QUEST, building data sharing capacity in primary care practices serving rural and tribal populations in the Washington, Wyoming, Alaska, Montana, Idaho region to build research infrastructure. We report on the iterative process of developing the technical architecture for semantically aligning electronic health data in primary care settings across our pilot sites and tools that will facilitate linkages between the research and practice communities. Our architecture emphasizes sustainable technical solutions for addressing data extraction, alignment, quality, and metadata management. The architecture provides immediate benefits to participating partners via a clinical decision support tool and data querying functionality to support local quality improvement efforts. The FInDiT tool catalogues type, quantity, and quality of the data that are available across the LC Data QUEST data sharing architecture. These tools facilitate the bi-directional process of translational research. PMID:22779052
Schindler, Holly S.; Fisher, Philip A.; Shonkoff, Jack P.
2017-01-01
This paper presents a description of how an interdisciplinary network of academic researchers, community-based programs, parents, and state agencies have joined together to design, test, and scale a suite of innovative intervention strategies rooted in new knowledge about the biology of adversity. Through a process of co-creation, collective pilot-testing, and the support of a measurement and evaluation hub, the Washington State Innovation Cluster is using rapid cycle, iterative learning to elucidate differential impacts of interventions designed to build child and caregiver capacities and address the developmental consequences of socioeconomic disadvantage. Key characteristics of the Innovation Cluster model are described and an example is presented of a video-coaching intervention that has been implemented, adapted, and evaluated through this distinctive, collaborative process. PMID:28777436
NASA Astrophysics Data System (ADS)
Wilson, J. R.; Bonoli, P. T.
2015-02-01
Ion cyclotron range of frequency (ICRF) heating is foreseen as an integral component of the initial ITER operation. The status of ICRF preparations for ITER and supporting research were updated in the 2007 [Gormezano et al., Nucl. Fusion 47, S285 (2007)] report on the ITER physics basis. In this report, we summarize progress made toward the successful application of ICRF power on ITER since that time. Significant advances have been made in support of the technical design by development of new techniques for arc protection, new algorithms for tuning and matching, experimental tests of more ITER-like antennas, and demonstration on mockups that the design assumptions are correct. In addition, new applications of the ICRF system, beyond just bulk heating, have been proposed and explored.
NASA Astrophysics Data System (ADS)
Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams
2001-03-01
ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.
The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors
NASA Astrophysics Data System (ADS)
Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.
2017-08-01
The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio frequency negative-ions source. The NBTF realization and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, fundamental to the achievement of thermonuclear-relevant plasma parameters in ITER. This paper reports on design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the huge progress made in just a few years, from the signature of the agreement for the NBTF realization in 2011, up to now—when the buildings and relevant infrastructures have been completed, SPIDER is entering the integrated commissioning phase and the procurements of several MITICA components are at a well advanced stage.
APRN Usability Testing of a Tailored Computer-Mediated Health Communication Program
Lin, Carolyn A.; Neafsey, Patricia J.; Anderson, Elizabeth
2010-01-01
This study tested the usability of a touch-screen enabled “Personal Education Program” (PEP) with Advanced Practice Registered Nurses (APRN). The PEP is designed to enhance medication adherence and reduce adverse self-medication behaviors in older adults with hypertension. An iterative research process was employed, which involved the use of: (1) pre-trial focus groups to guide the design of system information architecture, (2) two different cycles of think-aloud trials to test the software interface, and (3) post-trial focus groups to gather feedback on the think-aloud studies. Results from this iterative usability testing process were utilized to systematically modify and improve the three PEP prototype versions—the pilot, Prototype-1 and Prototype-2. Findings contrasting the two separate think-aloud trials showed that APRN users rated the PEP system usability, system information and system-use satisfaction at a moderately high level between trials. In addition, errors using the interface were reduced by 76 percent and the interface time was reduced by 18.5 percent between the two trials. The usability testing processes employed in this study ensured an interface design adapted to APRNs' needs and preferences to allow them to effectively utilize the computer-mediated health-communication technology in a clinical setting. PMID:19940619
Re-typograph phase I: a proof-of-concept for typeface parameter extraction from historical documents
NASA Astrophysics Data System (ADS)
Lamiroy, Bart; Bouville, Thomas; Blégean, Julien; Cao, Hongliu; Ghamizi, Salah; Houpin, Romain; Lloyd, Matthias
2015-01-01
This paper reports on the first phase of an attempt to create a full retro-engineering pipeline that aims to construct a complete set of coherent typographic parameters defining the typefaces used in a printed homogeneous text. It should be stressed that this process cannot reasonably be expected to be fully automatic and that it is designed to include human interaction. Although font design is governed by a set of quite robust and formal geometric rulesets, it still relies heavily on subjective human interpretation. Furthermore, different parameters applied to the generic rulesets may actually result in quite similar and visually difficult to distinguish typefaces, making the retro-engineering an inverse problem that is ill conditioned once shape distortions (related to the printing and/or scanning process) come into play. This work is the first phase of a long iterative process, in which we will progressively study and assess the techniques from the state of the art that are most suited to our problem and investigate new directions when they prove not quite adequate. As a first step, this is more of a feasibility proof of concept that will allow us to clearly pinpoint the items that will require more in-depth research over the next iterations.
NASA Astrophysics Data System (ADS)
Saha, Gouranga Chandra
Very often a number of factors, especially time, space and money, deter many science educators from using inquiry-based, hands-on, laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed so that they are doable by the students for whom they are written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed by an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than in planning and reasoning, their performances at the item level were very poor for some items. Possible reasons for the poor performances are discussed and suggestions on how to remediate the deficiencies are made. Empirical evidence for the validity and reliability of the instrument is presented from both the classical and the modern validity criteria points of view. Limitations of the study are identified. Finally, implications of the study and directions for further research are discussed.
A non-iterative extension of the multivariate random effects meta-analysis.
Makambi, Kepher H; Seung, Hyunuk
2015-01-01
Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues in some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed, including the multivariate DerSimonian and Laird method by Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method by Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.
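For orientation, here is a minimal sketch of the univariate DerSimonian-Laird moment estimator that these multivariate methods generalize. This is an illustration, not the authors' code, and the toy data are invented:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Univariate DerSimonian-Laird random-effects estimate.

    y : per-study effect estimates
    v : per-study within-study variances
    Returns (pooled_effect, tau2), where tau2 is the
    method-of-moments between-study variance.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                            # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)      # fixed-effect pooled mean
    q = np.sum(w * (y - mu_fe) ** 2)       # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / denom)  # truncated at zero (non-iterative)
    w_re = 1.0 / (v + tau2)                # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return mu_re, tau2

# Example: three hypothetical studies (effects, within-study variances)
print(dersimonian_laird([0.3, 0.5, 0.1], [0.04, 0.09, 0.05]))
```

The estimator is closed-form, which is exactly the appeal the abstract highlights over iterative likelihood-based alternatives.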
Improvements in surface singularity analysis and design methods. [applicable to airfoils
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
Iteration in Early-Elementary Engineering Design
ERIC Educational Resources Information Center
McFarland Kendall, Amber Leigh
2017-01-01
K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect…
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-02-25
There are many voices calling for a future of abundant clean energy. The choices are difficult and the challenges daunting. How will we get there? The National Renewable Energy Laboratory integrates the entire spectrum of innovation including fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. The innovation process at NREL is interdependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.
Overview of the JET results in support to ITER
Litaudon, X.; Abduallev, S.; Abhangi, M.; ...
2017-06-15
Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for the active and non-active operation. More than 60 h of plasma operation with ITER first wall materials successfully took place since its installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principle modelling. ITER relevant disruption experiments and first principle modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in Deuterium and Hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material on the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA during 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litaudon, X; Bernard, J. M.; Colas, L.
2013-01-01
To support the design of an ITER ion-cyclotron range of frequency heating (ICRH) system and to mitigate risks of operation in ITER, CEA has initiated an ambitious Research & Development program accompanied by experiments on Tore Supra or test-bed facilities together with a significant modelling effort. The paper summarizes the recent results in the following areas: Comprehensive characterization (experiments and modelling) of a new Faraday screen concept tested on the Tore Supra antenna. A new model is developed for calculating the ICRH sheath rectification at the antenna vicinity; the model is applied to calculate the local heat flux on Tore Supra and ITER ICRH antennas. Full-wave modelling of ITER ICRH heating and current drive scenarios with the EVE code: with 20 MW of power, a current of 400 kA could be driven on axis in the DT scenario, and a comparison between the DT and DT(3He) scenarios is given for heating and current drive efficiencies. First operation of the CW test-bed facility TITAN, which is designed for ITER ICRH components testing and could host up to a quarter of an ITER antenna. R&D of high permittivity materials to improve the load of test facilities to better simulate ITER plasma antenna loading conditions.
ERIC Educational Resources Information Center
Camp, Dane R.
1991-01-01
After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
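The referenced PASCAL program is not reproduced in the record; as a stand-in, here is a minimal recursive construction of the two-dimensional Koch curve (my sketch, with all names illustrative):

```python
import numpy as np

def koch(p0, p1, depth):
    """Recursively replace segment p0->p1 with the four-segment
    Koch generator; returns one point per final segment
    (the segment start), excluding the overall endpoint."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    if depth == 0:
        return [p0]
    d = (p1 - p0) / 3.0
    a, b = p0 + d, p0 + 2 * d
    # Apex of the bump: rotate the one-third vector d by +60 degrees about a
    rot = np.array([[0.5, -np.sqrt(3) / 2],
                    [np.sqrt(3) / 2, 0.5]])
    apex = a + rot @ d
    pts = []
    for q0, q1 in [(p0, a), (a, apex), (apex, b), (b, p1)]:
        pts.extend(koch(q0, q1, depth - 1))
    return pts

curve = koch([0, 0], [1, 0], 3) + [np.array([1.0, 0.0])]
print(len(curve))  # 4**3 segments -> 65 points after 3 iterations
```

Each iteration multiplies the segment count by four, which is the simple recursion the article builds on before extending it to the tetrahedron in three dimensions.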
The application of contraction theory to an iterative formulation of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Brand, J. C.; Kauffman, J. F.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
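The underlying guarantee (a contraction mapping has a unique fixed point that plain iteration converges to) can be illustrated with a scalar relaxation sketch. This is a generic stand-in for the idea, not the paper's k-space formulation; the relaxation factor plays a role loosely analogous to the contraction corrector:

```python
import numpy as np

def relaxed_fixed_point(f, x0, alpha=0.5, tol=1e-10, max_iter=200):
    """Fixed-point iteration x <- (1-alpha)*x + alpha*f(x).

    If f itself is not a contraction, a suitable relaxation
    factor alpha can shrink the effective Lipschitz constant
    below 1 and restore convergence.
    """
    x = x0
    for _ in range(max_iter):
        x_new = (1 - alpha) * x + alpha * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# x = cos(x) has a unique fixed point near 0.739
print(relaxed_fixed_point(np.cos, 1.0))
```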
Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures
NASA Astrophysics Data System (ADS)
Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan
2016-10-01
We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by iterative interaction pictures (multiple Schrödinger dynamics). As an application example, we use the deduced iteration-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative picture method are physically feasible and that the shortcut scheme performs much better than one using conventional adiabatic passage techniques. Also, the influences of various decoherence processes are discussed by numerical simulation, and the results prove that the scheme is fast and robust against decoherence and operational imperfection.
Applying the scientific method to small catchment studies: Areview of the Panola Mountain experience
Hooper, R.P.
2001-01-01
A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons, Ltd.
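As context for what such a mixing model looks like, a standard two-component (end-member) formulation, not necessarily the exact model used at Panola, attributes streamflow chemistry to a flow-weighted blend of sources:

```latex
Q_s C_s = Q_1 C_1 + Q_2 C_2, \qquad Q_s = Q_1 + Q_2
\quad\Longrightarrow\quad
\frac{Q_1}{Q_s} = \frac{C_s - C_2}{C_1 - C_2}
```

Here $Q$ denotes discharge, $C$ a conservative tracer concentration, and subscripts 1 and 2 the two end members. Checking whether observed $C_s$ stays within the end-member bounds is one plausible example of the "objective measures of applicability" the review calls for.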
Turbulence Enhancement by Fractal Square Grids: Effects of the Number of Fractal Scales
NASA Astrophysics Data System (ADS)
Omilion, Alexis; Ibrahim, Mounir; Zhang, Wei
2017-11-01
Fractal square grids offer a unique solution for passive flow control as they can produce wakes with a distinct turbulence intensity peak and a prolonged turbulence decay region at the expense of only minimal pressure drop. While previous studies have solidified this characteristic of fractal square grids, how the number of scales (or fractal iterations N) affects turbulence production and decay in the induced wake is still not well understood. The focus of this research is to determine the relationship between the fractal iteration N and the turbulence produced in the wake flow using well-controlled water-tunnel experiments. Particle Image Velocimetry (PIV) is used to measure the instantaneous velocity fields downstream of four different fractal grids with an increasing number of scales (N = 1, 2, 3, and 4) and a conventional single-scale grid. By comparing the turbulent scales and statistics of the wake, we are able to determine how each iteration affects the peak turbulence intensity and the production/decay of turbulence from the grid. In light of the ability of these fractal grids to increase turbulence intensity with low pressure drop, this work can potentially benefit a wide variety of applications where energy-efficient mixing or convective heat transfer is a key process.
NASA Astrophysics Data System (ADS)
Klein, Eran; Ojemann, Jeffrey
2016-08-01
Objective. Implantable brain-computer interface (BCI) research promises improvements in human health and enhancements in quality of life. Informed consent of subjects is a central tenet of this research. Rapid advances in neuroscience, and the intimate connection between functioning of the brain and conceptions of the self, make informed consent particularly challenging in BCI research. Identification of safety and research-related risks associated with BCI devices is an important step in ensuring meaningful informed consent. Approach. This paper highlights a number of BCI research risks, including safety concerns, cognitive and communicative impairments, inappropriate subject expectations, group vulnerabilities, privacy and security, and disruptions of identity. Main results. Based on identified BCI research risks, best practices are needed for understanding and incorporating BCI-related risks into informed consent protocols. Significance. Development of best practices should be guided by processes that are: multidisciplinary, systematic and transparent, iterative, relational and exploratory.
A Holistic Approach to Systems Development
NASA Technical Reports Server (NTRS)
Wong, Douglas T.
2008-01-01
Introduces a holistic and iterative design process: a continuous process that can be loosely divided into four stages, with more effort spent early in the design. The approach is human-centered and multidisciplinary, emphasizes life-cycle cost, and makes extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important; many disciplines should be involved in the design: subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life-cycle cost, application engineering, etc.
Bringing values and deliberation to science communication
Dietz, Thomas
2013-01-01
Decisions always involve both facts and values, whereas most science communication focuses only on facts. If science communication is intended to inform decisions, it must be competent with regard to both facts and values. Public participation inevitably involves both facts and values. Research on public participation suggests that linking scientific analysis to public deliberation in an iterative process can help decision making deal effectively with both facts and values. Thus, linked analysis and deliberation can be an effective tool for science communication. However, challenges remain in conducting such processes at the national and global scales, in enhancing trust, and in reconciling diverse values. PMID:23940350
McCullagh, Marjorie C; Sanon, Marie-Anne; Cohen, Michael A
2014-11-01
Challenges associated with recruiting and retaining community-based populations in research studies have been recognized yet remain of major concern for researchers. There is a need for exchange of recruitment and retention techniques that inform recruitment and retention strategies. Here, the authors discuss a variety of methods that were successful in exceeding target recruitment and retention goals in a randomized clinical trial of hearing protector use among farm operators. Recruitment and retention strategies were 1) based on a philosophy of mutually beneficial engagement in the research process, 2) culturally appropriate, 3) tailored to the unique needs of partnering agencies, and 4) developed and refined in a cyclical and iterative process. Sponsoring organizations are interested in cost-effective recruitment and retention strategies, particularly relating to culturally and ethnically diverse groups. These approaches may result in enhanced subject recruitment and retention, concomitant containment of study costs, and timely accomplishment of study aims.
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
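To make the baseline concrete, here is a minimal FOCUSS-style re-weighted minimum-norm sketch (my illustration, not the authors' code; CMOSS additionally couples each point's weight to its spatial neighbors, which is omitted here). The toy matrix and sparse source are invented:

```python
import numpy as np

def focuss(A, b, p=1.0, n_iter=20, eps=1e-8):
    """FOCUSS-style iteratively re-weighted minimum-norm solver
    for an underdetermined system A x = b.

    Each iteration solves a weighted minimum-norm problem whose
    weights derive from the previous solution, so energy
    progressively concentrates on a sparse support.
    """
    m, n = A.shape
    x = np.ones(n)
    for _ in range(n_iter):
        w = np.abs(x) ** p + eps     # re-weighting from the last solution
        Aw = A * w                   # column scaling: A @ diag(w)
        # Minimum-norm solution of (A diag(w)) q = b, then x = w * q
        q = Aw.T @ np.linalg.solve(Aw @ Aw.T + eps * np.eye(m), b)
        x = w * q
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 40))          # 10 sensors, 40 candidate sources
x_true = np.zeros(40); x_true[[3, 17]] = [1.0, -2.0]
x_hat = focuss(A, A @ x_true)
print(np.round(x_hat, 2))                  # energy concentrates near the true support
```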
Iterative nonlinear joint transform correlation for the detection of objects in cluttered scenes
NASA Astrophysics Data System (ADS)
Haist, Tobias; Tiziani, Hans J.
1999-03-01
An iterative correlation technique with digital image processing in the feedback loop for the detection of small objects in cluttered scenes is proposed. A scanning aperture is combined with the method in order to improve the immunity against noise and clutter. Multiple reference objects or different views of one object are processed in parallel. We demonstrate the method by detecting a noisy and distorted face in a crowd with a nonlinear joint transform correlator.
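For readers unfamiliar with joint transform correlation, here is a digital emulation of the single-pass, linear JTC step that the paper iterates and augments with a nonlinearity and feedback processing (my sketch; names and the toy scene are illustrative):

```python
import numpy as np

def jtc_correlation(reference, scene):
    """Classical joint transform correlator, emulated digitally.

    Reference and scene are placed side by side in one input
    plane; the inverse transform of the joint power spectrum
    contains their cross-correlation as off-axis terms.
    """
    joint = np.concatenate([reference, scene], axis=1)  # side-by-side input plane
    jps = np.abs(np.fft.fft2(joint)) ** 2               # joint power spectrum
    corr = np.fft.fftshift(np.fft.ifft2(jps))           # correlation plane
    return np.abs(corr)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
scene = np.roll(ref, (5, 9), axis=(0, 1))               # shifted copy of the reference
plane = jtc_correlation(ref, scene)
# The on-axis (DC) autocorrelation dominates; cross-correlation
# peaks appear displaced by the reference-scene separation.
print(np.unravel_index(plane.argmax(), plane.shape))
```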
Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm
NASA Astrophysics Data System (ADS)
Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang
2017-10-01
A phase-only spatial light modulator (SLM) is used to provide and compensate for the spatial periodic modulation (SPM) of the near-field beam in the near infrared at 1053 nm wavelength with an improved iterative weight-based method. The transmission characteristics of the incident beam are changed by the SLM to shape the spatial intensity of the output beam. The propagation and reverse propagation of the light in free space are the two key processes in the iteration. The underlying theory is the beam angular spectrum transmit formula (ASTF) and the principle of the iterative weight-based method. We have made two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output beam contrast degree, and we use the MATLAB built-in angle function to acquire the corresponding phase of the light wave function. The phase required to compensate for the intensity distribution of the incident SPM beam is iterated by this algorithm, which can decrease the magnitude of the SPM of the intensity on the observation plane. The experimental results show that compensation of the phase-type SPM of the near-field beam is subject to certain restrictions. We have also analyzed some factors that make the results imperfect. The experimental results verify the possible applicability of this iterative weight-based method to compensate for the SPM of the near-field beam.
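The forward and reverse free-space steps the iteration alternates between can be written as a plain angular spectrum propagator. A minimal sketch follows (not the authors' MATLAB code; the sampling parameters and function name are illustrative):

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 a distance z in free
    space with the angular spectrum method; negative z reverses
    the propagation."""
    n = u0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx)
    kz2 = k**2 - (2 * np.pi * fxx) ** 2 - (2 * np.pi * fyy) ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    h = np.where(kz2 > 0, np.exp(1j * kz * z), 0)   # evanescent components removed
    return np.fft.ifft2(np.fft.fft2(u0) * h)

# 1053 nm beam, 10 um sampling: propagate 0.1 m forward, then back
u0 = np.ones((256, 256), complex)
u1 = angular_spectrum_propagate(u0, 1053e-9, 10e-6, 0.1)
u_back = angular_spectrum_propagate(u1, 1053e-9, 10e-6, -0.1)
print(np.allclose(u0, u_back))   # True: the pair is invertible for propagating waves
```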
Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan
2014-08-20
In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore the computation time of the experimentally measured raw Raman spectrum processing from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.
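A simplified sketch of the idea follows: iterative Savitzky-Golay smoothing keeps the lower envelope of the spectrum (so sharp Raman peaks are excluded from the baseline estimate), with a relaxation factor applied to each update. The paper's Gauss-Seidel in-place update is not reproduced, and all parameter values here are illustrative:

```python
import numpy as np
from scipy.signal import savgol_filter

def iterative_sg_baseline(spectrum, window=101, order=3,
                          n_iter=100, relax=1.0):
    """Iterative Savitzky-Golay baseline estimate.

    relax = 1.0 gives the plain clipping iteration; values in
    (1, 2) over-relax the update, in the SOR-like spirit of the
    SG-SR acceleration described in the paper.
    """
    base = spectrum.astype(float).copy()
    for _ in range(n_iter):
        smooth = savgol_filter(base, window, order)
        clipped = np.minimum(base, smooth)     # keep the lower envelope
        base = base + relax * (clipped - base)  # relaxed update
    return base

x = np.linspace(0, 1000, 2000)
raman = 50 * np.exp(-(x - 400) ** 2 / 18)      # narrow Raman peak
fluor = 200 * np.exp(-x / 700)                 # broad fluorescence background
baseline = iterative_sg_baseline(raman + fluor)
corrected = raman + fluor - baseline           # background-subtracted spectrum
```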
Development of an evidence-based review with recommendations using an online iterative process.
Rudmik, Luke; Smith, Timothy L
2011-01-01
The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics which would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology.
Performing Systematic Literature Reviews with Novices: An Iterative Approach
ERIC Educational Resources Information Center
Lavallée, Mathieu; Robillard, Pierre-N.; Mirsalari, Reza
2014-01-01
Reviewers performing systematic literature reviews require understanding of the review process and of the knowledge domain. This paper presents an iterative approach for conducting systematic literature reviews that addresses the problems faced by reviewers who are novices in one or both levels of understanding. This approach is derived from…
Stakeholder assessment of comparative effectiveness research needs for Medicaid populations.
Fischer, Michael A; Allen-Coleman, Cora; Farrell, Stephen F; Schneeweiss, Sebastian
2015-09-01
Patients, providers and policy-makers rely heavily on comparative effectiveness research (CER) when making complex, real-world medical decisions. In particular, Medicaid providers and policy-makers face unique challenges in decision-making because their program cares for traditionally underserved populations, especially children, pregnant women and people with mental illness. Because these patient populations have generally been underrepresented in research discussions, CER questions for these groups may be understudied. To address this problem, the Agency for Healthcare Research and Quality commissioned our team to work with Medicaid Medical Directors and other stakeholders to identify relevant CER questions. Through an iterative process of topic identification and refinement, we developed relevant, feasible and actionable questions based on issues affecting Medicaid programs nationwide. We describe challenges and limitations and provide recommendations for future stakeholder engagement.
Teaching and learning recursive programming: a review of the research literature
NASA Astrophysics Data System (ADS)
McCauley, Renée; Grissom, Scott; Fitzgerald, Sue; Murphy, Laurie
2015-01-01
Hundreds of articles have been published on the topics of teaching and learning recursion, yet fewer than 50 of them have published research results. This article surveys the computing education research literature and presents findings on challenges students encounter in learning recursion, mental models students develop as they learn recursion, and best practices in introducing recursion. Effective strategies for introducing the topic include using different contexts such as recurrence relations, programming examples, fractal images, and a description of how recursive methods are processed using a call stack. Several studies compared the efficacy of introducing iteration before recursion and vice versa. The paper concludes with suggestions for future research into how students learn and understand recursion, including a look at the possible impact of instructor attitude and newer pedagogies.
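As a concrete classroom illustration of the call-stack framing mentioned above (my example, not drawn from the surveyed studies), the same recurrence written recursively and iteratively:

```python
def factorial_recursive(n, depth=0):
    """Recursive factorial, printing the call stack as it grows,
    in the spirit of the call-stack traces used to introduce
    recursion to students."""
    print("  " * depth + f"factorial({n})")
    if n <= 1:                       # base case stops the recursion
        return 1
    return n * factorial_recursive(n - 1, depth + 1)

def factorial_iterative(n):
    """Equivalent loop: same recurrence, explicit accumulator."""
    acc = 1
    for i in range(2, n + 1):
        acc *= i
    return acc

print(factorial_recursive(4))        # prints four nested calls, then 24
print(factorial_iterative(4))        # 24, with no stack growth
```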
Utilizing research in practice and generating evidence from practice.
Learmonth, A M
2000-12-01
This paper gives an overview of evidence-based practice in health promotion, with reference mainly to the National Health Service (NHS) context within the UK, but with wider international relevance. It starts by looking at the tensions raised at the interface of the two activities of research and health promotion. It goes on to explore two aspects of evidence-based practice: incorporating research evidence into health promotion activity and developing robustly evaluated practice in such a way as to feed the developing research agenda. Each of these two aspects is explored using a specific example, from within the UK. Finally, the paper goes on to make eight recommendations that taken together would help create an iterative process contributing to the development of health promotion theory and practice.
Development of the Learning Health System Researcher Core Competencies.
Forrest, Christopher B; Chesley, Francis D; Tregear, Michelle L; Mistry, Kamila B
2017-08-04
To develop core competencies for learning health system (LHS) researchers to guide the development of training programs. Data were obtained from literature review, expert interviews, a modified Delphi process, and consensus development meetings. The competencies were developed from August to December 2016 using qualitative methods. The literature review formed the basis for the initial draft of a competency domain framework. Key informant semi-structured interviews, a modified Delphi survey, and three expert panel (n = 19 members) consensus development meetings produced the final set of competencies. The iterative development process yielded seven competency domains: (1) systems science; (2) research questions and standards of scientific evidence; (3) research methods; (4) informatics; (5) ethics of research and implementation in health systems; (6) improvement and implementation science; and (7) engagement, leadership, and research management. A total of 33 core competencies were prioritized across these seven domains. The real-world milieu of LHS research, the embeddedness of the researcher within the health system, and engagement of stakeholders are distinguishing characteristics of this emerging field. The LHS researcher core competencies can be used to guide the development of learning objectives, evaluation methods, and curricula for training programs.
Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana
2013-04-01
Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints, in the child care quality measures used in these benchmarking efforts that differentiate between different levels of children's cognitive functioning. To date, research has provided little guidance to policymakers as to where these thresholds should be set. Using the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B) data set, this study explores the use of generalized additive modeling (GAM) as a method of identifying thresholds on the Infant/Toddler Environment Rating Scale (ITERS) in relation to toddlers' performance on the Mental Development subscale of the Bayley Scales of Infant Development (the Bayley Mental Development Scale Short Form-Research Edition, or BMDSF-R). The present findings suggest that simple linear models do not always correctly depict the relationships between ITERS scores and BMDSF-R scores and that GAM-derived thresholds were more effective at differentiating among children's performance levels on the BMDSF-R. Additionally, the present findings suggest that there is a minimum threshold on the ITERS that must be exceeded before significant improvements in children's cognitive development can be expected. There may also be a ceiling threshold on the ITERS, such that beyond a certain level, only marginal increases in children's BMDSF-R scores are observed.
NASA Astrophysics Data System (ADS)
Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.
2013-02-01
Twin Source, an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. Twin Source [1] (TS) also provides an intermediate platform between the operational ROBIN [2][5] and the eight-RF-driver-based Indian test facility INTF [3]. The Twin Source experiment requires a central system to provide control, data acquisition and communication interfaces, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been chosen for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for TS-CODAC so as to develop the expertise needed to build and operate a control system based on the ITER guidelines, since a similar configuration needs to be implemented for the INTF. This cost-effective approach will provide an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC, the required control loop time is in the range of 5-10 ms; therefore a PLC (Siemens S7-400) has been chosen for the control system, as suggested in the ITER slow controller catalog. For data acquisition, the maximum required sampling interval is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected, as suggested in the ITER fast controller catalog. This paper will present the conceptual design of the TS-CODAC system based on the ITER CODAC Core System software and applicable plant system integration processes.
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and deteriorate the piecewise-constant property assumed of reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method, referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts, while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm, accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Unlike existing algorithms, it incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.
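A paraphrase of the update structure, not the authors' code (they use a FISTA-type GPU solver, and real CBCT forward/back projectors replace the placeholders here); the key twist is that the TV prior acts on x + c rather than on x:

```python
import numpy as np

def sc_assisted_recon(A, At, b, tv_denoise, n_iter=50, step=0.1,
                      update_compensation=None):
    """Skeleton of shading-correction-assisted iterative recon.

    A, At               : forward / back projectors (problem-specific)
    b                   : measured projections
    tv_denoise          : any TV proximal/denoising routine
    update_compensation : builds the shading compensation image c
                          (segmentation + low-pass filtering, per the paper)
    """
    x = np.zeros_like(At(b))
    c = np.zeros_like(x)                 # shading compensation image
    for _ in range(n_iter):
        x = x - step * At(A(x) - b)      # gradient step on the data fidelity term
        x = tv_denoise(x + c) - c        # TV prior applied to x + c, not x
        if update_compensation is not None:
            c = update_compensation(x)   # refresh c whenever necessary
    return x

# Toy smoke test with identity "projectors" (real CBCT operators differ)
b = np.ones((8, 8))
ident = lambda z: z
x = sc_assisted_recon(ident, ident, b, tv_denoise=lambda z: z)
print(np.round(x.mean(), 3))   # ~0.995, approaching the "measured" values
```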
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process further scales down, the industry encounters many lithography-related issues. At the 14 nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, and design closure issues therefore continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient, in terms of performance, and designer friendly.
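Decomposability here amounts to 3-colorability of the layout conflict graph: each feature must be assigned to one of three masks so that conflicting neighbors differ. A minimal backtracking checker illustrates the underlying problem (this is my sketch of the core combinatorial question, not the paper's incremental algorithm):

```python
def three_color(conflicts, n):
    """Backtracking 3-coloring of a conflict graph with n nodes.

    Nodes are layout features; edges connect features closer than
    the same-mask spacing limit. Returns a mask assignment, or
    None when the layout is not triple-patterning decomposable
    (the situation reported back to designers).
    """
    adj = [[] for _ in range(n)]
    for u, v in conflicts:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n

    def place(i):
        if i == n:
            return True
        for mask in range(3):
            if all(color[j] != mask for j in adj[i]):
                color[i] = mask
                if place(i + 1):
                    return True
        color[i] = -1
        return False

    return color if place(0) else None

# K4 needs four colors -> not decomposable; an odd cycle is fine
print(three_color([(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)], 4))  # None
print(three_color([(0,1),(1,2),(2,3),(3,4),(4,0)], 5))        # a valid assignment
```

Backtracking is exponential in the worst case, which is why the paper avoids re-solving the full chip and instead recolors only the modified portion of the layout.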
Holtkamp, Norbert
2018-01-09
ITER (Latin for "the way") is designed to demonstrate the scientific and technological feasibility of fusion energy. Fusion is the process by which two light atomic nuclei combine to form a heavier one and thus release energy. In the fusion process two isotopes of hydrogen, deuterium and tritium, fuse together to form a helium atom and a neutron. Thus fusion could provide large scale energy production without greenhouse effects; essentially limitless fuel would be available all over the world. The principal goals of ITER are to generate 500 megawatts of fusion power for periods of 300 to 500 seconds with a fusion power multiplication factor, Q, of at least 10 (Q ≥ 10: input power 50 MW, output power 500 MW). The ITER Organization was officially established in Cadarache, France, on 24 October 2007. The seven members engaged in the project (China, the European Union, India, Japan, Korea, Russia and the United States) represent more than half the world's population. The costs for ITER are shared by the seven members. The cost for the construction will be approximately 5.5 billion Euros; a similar amount is foreseen for the twenty-year phase of operation and the subsequent decommissioning.
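The stated gain target is simply the ratio of fusion output to heating input; with the quoted figures:

```latex
Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}}
  = \frac{500\,\text{MW}}{50\,\text{MW}} = 10
```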
Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R.
2017-01-01
This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided. PMID:29230152
NASA Technical Reports Server (NTRS)
Brand, J. C.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented, with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single-variable examples, including an extension to vector spaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined using contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.
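The underlying idea can be stated in a few lines: a fixed-point iteration x = f(x) converges when f is a contraction, and a damping (relaxation) factor can restore contraction when plain iteration diverges. The sketch below illustrates this general principle only; the contraction corrector method itself is developed in the report.

```python
import numpy as np

def contractive_solve(f, x0, beta=0.3, tol=1e-10, max_iter=1000):
    """Relaxed fixed-point iteration x <- (1 - beta) x + beta f(x)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = (1 - beta) * x + beta * f(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("no convergence; try a smaller beta")

# x = cos(2x) has |f'| > 1 at the fixed point, so plain iteration
# diverges, but the damped iteration converges.
x_star, iters = contractive_solve(lambda x: np.cos(2 * x), 0.5)
```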
On the safety of ITER accelerators
Li, Ge
2013-01-01
Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate −1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267
Single-agent parallel window search
NASA Technical Reports Server (NTRS)
Powley, Curt; Korf, Richard E.
1991-01-01
Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA* by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
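For reference, each IDA* iteration is a depth-first search bounded by an f = g + h cost threshold, with the threshold raised to the smallest exceeded value between iterations; parallel window search simply runs several such thresholds concurrently. A minimal serial sketch, with caller-supplied neighbors and an admissible heuristic h (the toy graph at the end is hypothetical):

```python
def ida_star(start, goal, neighbors, h):
    """Iterative-Deepening-A*: repeated cost-bounded depth-first search."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None            # threshold exceeded: report next bound
        if node == goal:
            return f, path
        nxt = float("inf")
        for m, cost in neighbors(node):
            if m not in path:         # avoid cycles along the current path
                t, found = dfs(m, g + cost, bound, path + [m])
                if found is not None:
                    return t, found
                nxt = min(nxt, t)
        return nxt, None

    bound = h(start)
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found              # optimal for admissible h
        if bound == float("inf"):
            return None               # goal unreachable

adj = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
path = ida_star("A", "C", lambda n: adj[n], lambda n: 0)   # -> A, B, C
```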
A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.
Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing
2007-01-01
Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered one of the most promising approaches to noninvasive molecular-based imaging. Many reconstruction approaches use iterative methods for data inversion. However, they are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, by pushing the iteration process offline. In the preiteration process, a second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and that reconstruction speed is remarkably increased.
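One classical 'second-order iterative format' is the hyperpower iteration for the generalized inverse, which converges quadratically and can run entirely offline; online reconstruction then reduces to a single matrix-vector product. The sketch below illustrates the offline/online split under that assumption (the weight matrix and data are random stand-ins; the paper's exact iteration may differ):

```python
import numpy as np

def preiterate_pinv(A, n_iter=30):
    """Offline preiteration: X <- X (2I - A X) converges quadratically
    to the generalized inverse of A for X0 = a A^T, a < 2 / sigma_max^2."""
    a = 1.0 / np.linalg.norm(A, 2) ** 2
    X = a * A.T
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2 * I - A @ X)
    return X

A = np.random.rand(60, 40)     # stand-in for the forward (weight) matrix
X = preiterate_pinv(A)         # done once, offline
b = np.random.rand(60)         # measured fluorescence data
yield_estimate = X @ b         # online: one matrix-vector multiplication
```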
Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?
NASA Astrophysics Data System (ADS)
Swartjes, Ivo; Theune, Mariët
We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.
NASA Astrophysics Data System (ADS)
Wu, Zhixiong; Huang, Rongjin; Huang, ChuanJun; Yang, Yanfang; Huang, Xiongyi; Li, Laifeng
2017-12-01
Glass-fiber reinforced plastic (GFRP) fabricated by the vacuum bag process was selected as the high-voltage electrical insulation and mechanical support for the superconducting joints and the current leads of the ITER feeder system. To evaluate the cryogenic mechanical properties of the GFRP, mechanical properties such as the short beam strength (SBS), the tensile strength, and the fatigue fracture strength after 30,000 cycles were measured at 77 K in this study. The results demonstrated that the GFRP met the design requirements of ITER.
Rescheduling with iterative repair
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael
1992-01-01
This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
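A generic skeleton of constraint-based iterative repair is shown below: start from a flawed schedule and repeatedly repair one violated constraint, preferring the candidate repair that leaves the fewest conflicts (a min-conflicts heuristic). The schedule representation, violation detector, and repair-move generator are caller-supplied placeholders, not the Space Shuttle ground processing domain model.

```python
import random

def iterative_repair(schedule, violations, repair_moves, max_steps=10000):
    """violations(s) -> list of violated constraints;
    repair_moves(s, v) -> candidate schedules with v repaired."""
    for _ in range(max_steps):
        conflicts = violations(schedule)
        if not conflicts:
            return schedule                      # conflict-free schedule
        v = random.choice(conflicts)
        candidates = repair_moves(schedule, v)
        if candidates:                           # greedy min-conflicts choice
            schedule = min(candidates, key=lambda s: len(violations(s)))
    return schedule                              # best effort in time bound
```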
NASA Astrophysics Data System (ADS)
Kobayashi, K.; Isobe, K.; Iwai, Y.; Hayashi, T.; Shu, W.; Nakamura, H.; Kawamura, Y.; Yamada, M.; Suzuki, T.; Miura, H.; Uzawa, M.; Nishikawa, M.; Yamanishi, T.
2007-12-01
Confinement and removal of tritium are key subjects for the safety of ITER. The ITER buildings are confinement barriers for tritium. In a hot cell, tritium is often released as vapour and is in contact with the inner walls. The inner walls of the ITER tritium plant building will also be exposed to tritium in an accident. The tritium released in the buildings is removed by the atmosphere detritiation systems (ADS), in which the tritium is oxidized by catalysts and removed as water. SF6 gas is used in ITER and is expected to be released in an accident such as a fire. Although SF6 is a potential catalyst poison, the performance of the ADS in the presence of SF6 has not yet been confirmed. Tritiated water is produced in the regeneration process of the ADS and is subsequently processed by the ITER water detritiation system (WDS). One of the key components of the WDS is an electrolysis cell. To address the issues in global tritium confinement, a series of experimental studies has been carried out as an ITER R&D task: (1) tritium behaviour in concrete; (2) the effect of SF6 on the performance of the ADS; and (3) tritium durability of the electrolysis cell of the ITER-WDS. (1) The tritiated water vapour penetrated up to 50 mm into the concrete from the surface over six months' exposure. The penetration of tritium into the concrete was thus appreciable; the isotope exchange capacity of the cement paste plays an important role in tritium trapping and penetration into concrete materials when concrete is exposed to tritiated water vapour. The effect of coating on the penetration rate needs to be evaluated quantitatively in actual tritium tests. (2) SF6 gas decreased the detritiation factor of the ADS. Since the effect of SF6 depends closely on its concentration, the amount of SF6 released into the tritium handling area in an accident should be reduced through careful arrangement of components in the buildings. (3) The electrolysis cell of the ITER-WDS is expected to endure three years' operation under the ITER design conditions. Measuring the concentration of fluorine ions could be a promising technique for monitoring damage to the electrolysis cell.
Improving Drive Files for Vehicle Road Simulations
NASA Astrophysics Data System (ADS)
Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil
2001-09-01
Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustic performance. An example is development testing of car seats. However, it is difficult to reproduce measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm, which is generally part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains as well as the joint time-frequency domain, and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat. Two road data test sets were used in the study.
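The reported weighting idea can be sketched as a single iteration of drive-file correction: transform the road-minus-response error to the frequency domain, weight it by the normalised seat frequency response function, and add the correction to the drive signal. All names and the gain are illustrative; frf_seat is assumed to be sampled at the rfft bins of the signals.

```python
import numpy as np

def update_drive(drive, road, response, frf_seat, gain=0.5):
    """One error-minimising iteration with seat-FRF weighting."""
    n = len(road)
    err_spec = np.fft.rfft(road - response, n)
    weight = np.abs(frf_seat) / np.abs(frf_seat).max()  # normalised FRF
    return drive + gain * np.fft.irfft(weight * err_spec, n)
```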
NASA Astrophysics Data System (ADS)
1990-09-01
The main purpose of the International Thermonuclear Experimental Reactor (ITER) is to develop an experimental fusion reactor through the united efforts of many technologically advanced countries. The ITER terms of reference, issued jointly by the European Community, Japan, the USSR, and the United States, call for an integrated international design activity and constitute the basis of current activities. Joint work on ITER is carried out under the auspices of the International Atomic Energy Agency (IAEA), according to the terms of a quadripartite agreement reached between the European Community, Japan, the USSR, and the United States. The site for joint technical work sessions is the Max Planck Institute of Plasma Physics in Garching, Federal Republic of Germany. The ITER activities have two phases: a definition phase performed in 1988 and the present design phase (1989 to 1990). During the definition phase, a set of ITER technical characteristics and supporting research and development (R and D) activities were developed and reported. The present conceptual design phase of ITER lasts until the end of 1990. The objectives of this phase are to develop the design of ITER, perform a safety and environmental analysis, develop site requirements, define future R and D needs, and estimate cost, manpower, and schedule for construction and operation. A final report will be submitted at the end of 1990. This paper summarizes progress in the ITER program during the 1989 design phase.
Iterated reaction graphs: simulating complex Maillard reaction pathways.
Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W
2001-01-01
This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
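The soup-and-reaction-base loop can be made concrete in a few lines. Reaction probabilities stand in for rate kinetics, as the abstract describes; the two 'Maillard-like' entries at the end are purely hypothetical illustrations.

```python
import random

def iterate_reactions(soup, reaction_base, n_iter=1000, seed=0):
    """soup: molecule -> count; reaction_base: (reactants, products, p).
    Returns the final soup and the reaction graph as a list of arcs."""
    rng = random.Random(seed)
    graph = []                                  # arcs: reactants -> products
    for _ in range(n_iter):
        reactants, products, p = rng.choice(reaction_base)
        if all(soup.get(r, 0) > 0 for r in reactants) and rng.random() < p:
            for r in reactants:
                soup[r] -= 1                    # take reactants from the soup
            for q in products:
                soup[q] = soup.get(q, 0) + 1    # feed products back
            graph.append((reactants, products))
    return soup, graph

base = [(("glucose", "glycine"), ("amadori",), 0.4),
        (("amadori",), ("pyrazine", "water"), 0.2)]
soup, graph = iterate_reactions({"glucose": 50, "glycine": 50}, base)
```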
Ritter, Alison; Lancaster, Kari
2013-01-01
Assessing the extent to which drug research influences and impacts upon policy decision-making needs to go beyond bibliometric analysis of academic citations. Policy makers do not necessarily access the academic literature, and policy processes are largely iterative and rely on interactions and relationships. Furthermore, media representation of research contributes to public opinion and can influence policy uptake. In this context, assessing research influence involves examining the extent to which a research project is taken up in policy documents, used within policy processes, and disseminated via the media. This three component approach is demonstrated using a case example of two ongoing illicit drug monitoring systems: the Illicit Drug Reporting System (IDRS) and the Ecstasy and related Drugs Reporting System (EDRS). Systematic searches for reference to the IDRS and/or EDRS within policy documents, across multiple policy processes (such as parliamentary inquiries) and in the media, in conjunction with analysis of the types of mentions in these three sources, enables an analysis of policy influence. The context for the research is also described as the foundation for the approach. The application of the three component approach to the case study demonstrates a practical and systematic retrospective approach to measure drug research influence. For example, the ways in which the IDRS and EDRS were mentioned in policy documents demonstrated research utilisation. Policy processes were inclusive of IDRS and EDRS findings, while the media analysis revealed only a small contribution in the context of wider media reporting. Consistent with theories of policy processes, assessing the extent of research influence requires a systematic analysis of policy documents and processes. Development of such analyses and associated methods will better equip researchers to evaluate the impact of research. Copyright © 2012 Elsevier B.V. All rights reserved.
A fast method to emulate an iterative POCS image reconstruction algorithm.
Zeng, Gengsheng L
2017-10-01
Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancing denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
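The structure of the emulation (one windowed FBP backprojection, then filtering segments in image space, with no forward projection) can be sketched as follows. Perona-Malik diffusion stands in here for the paper's edge-enhancing noise-smoothing filter, and windowed_fbp is an assumed caller-supplied reconstruction routine.

```python
import numpy as np

def perona_malik(img, n_steps=20, kappa=0.1, lam=0.2):
    """Edge-preserving smoothing (a stand-in nonlinear filter)."""
    u = img.astype(float).copy()
    for _ in range(n_steps):
        flux = np.zeros_like(u)
        for axis, shift in ((0, -1), (0, 1), (1, -1), (1, 1)):
            d = np.roll(u, shift, axis) - u
            flux += np.exp(-(d / kappa) ** 2) * d   # conduction * gradient
        u += lam / 4 * flux
    return u

def fast_emulated_recon(windowed_fbp, sinogram, n_segments=3):
    """One backprojection enforces data fidelity; each segment of the
    iterative procedure is pure image-space nonlinear filtering."""
    recon = windowed_fbp(sinogram)       # the only backprojection
    for _ in range(n_segments):
        recon = perona_malik(recon)      # no forward projection needed
    return recon
```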
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method, integrated with the related switching conditions, to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. A compound 2D controller with robust performance is thus obtained, including a robust extended feedback control that ensures the steady-state tracking error converges rapidly. Application to an injection molding process displays the effectiveness and superiority of the proposed strategy.
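The two time scales of such a scheme (real-time feedback within a batch, learning between batches) can be seen in the deliberately simplified single-phase toy below; it is a didactic reduction, not the paper's LMI-based 2D-FM switched design. The first-order plant and the gains are arbitrary choices satisfying the usual ILC convergence condition.

```python
import numpy as np

def ilc_with_feedback(plant, ref, n_trials=20, L=0.8, K=0.5):
    """u_{k+1} = u_k + L e_k between trials; u_fb = K e(t) in real time."""
    T = len(ref)
    u_ff = np.zeros(T)                       # learned feedforward input
    for _ in range(n_trials):
        e = np.zeros(T)
        x = 0.0
        for t in range(T):
            u = u_ff[t] + K * (ref[t] - x)   # feedback part, within batch
            x = plant(x, u)
            e[t] = ref[t] - x                # tracking error of this trial
        u_ff = u_ff + L * e                  # learning part, batch to batch
    return u_ff

# Toy first-order batch plant x+ = 0.9 x + 0.5 u tracking a step profile
u_ff = ilc_with_feedback(lambda x, u: 0.9 * x + 0.5 * u, np.ones(50))
```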
NASA Astrophysics Data System (ADS)
Gamage, K. R.
2016-02-01
An effective approach to introducing 2YC students to ocean science research is propagating inquiry-based experiences into existing geosciences courses using a series of research activities. The proposed activity is based on scientific ocean drilling: students begin their research experience (pre-field activity) by reading articles from scientific journals and analyzing and interpreting core and log data on a specific research topic. At the end of the pre-field activity, students visit the Gulf Coast Repository to examine actual cores, smear slides, thin sections, etc. After the visit, students integrate findings from their pre-field and field activities to produce a term paper. These simple activities allow students to experience the iterative process of scientific research, illuminate how scientists approach ocean science, and can be the hook that gets students interested in pursuing ocean science as a career.
Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold
2014-12-01
In order to deal with the sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximation policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions and the learning control performance can be improved for a variety of parameter settings.
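The basis-construction step can be illustrated compactly: subsample the sampled states with K-means, connect cluster centers with a k-nearest-neighbour graph, and use the smoothest eigenvectors of the graph Laplacian as basis functions for value function approximation (a state is then featurized via its nearest centers). A NumPy sketch with arbitrary parameters:

```python
import numpy as np

def laplacian_basis(states, n_clusters=50, k=5, n_basis=10, seed=0):
    """states: (N, dim) samples collected from MDP trajectories."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(states), n_clusters, replace=False)
    centers = states[idx].astype(float)
    for _ in range(10):                              # plain K-means
        d = np.linalg.norm(states[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(n_clusters):
            if np.any(labels == j):
                centers[j] = states[labels == j].mean(axis=0)
    D = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    W = np.zeros((n_clusters, n_clusters))           # kNN adjacency
    for i in range(n_clusters):
        for j in np.argsort(D[i])[1:k + 1]:
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W                   # graph Laplacian
    _, vecs = np.linalg.eigh(L)
    return centers, vecs[:, :n_basis]                # smoothest eigenvectors

centers, basis = laplacian_basis(np.random.rand(2000, 2))
```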
Teachers Supporting Teachers in Urban Schools: What Iterative Research Designs Can Teach Us
Shernoff, Elisa S.; Maríñez-Lora, Ane M.; Frazier, Stacy L.; Jakobsons, Lara J.; Atkins, Marc S.; Bonner, Deborah
2012-01-01
Despite alarming rates and negative consequences associated with urban teacher attrition, mentoring programs often fail to target the strongest predictors of attrition: effectiveness around classroom management and engaging learners; and connectedness to colleagues. Using a mixed-method iterative development framework, we highlight the process of developing and evaluating the feasibility of a multi-component professional development model for urban early career teachers. The model includes linking novices with peer-nominated key opinion leader teachers and an external coach who work together to (1) provide intensive support in evidence-based practices for classroom management and engaging learners, and (2) connect new teachers with their larger network of colleagues. Fidelity measures and focus group data illustrated varying attendance rates throughout the school year and that although seminars and professional learning communities were delivered as intended, adaptations to enhance the relevance, authenticity, level, and type of instrumental support were needed. Implications for science and practice are discussed. PMID:23275682
Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro
2012-10-15
There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.
Three dimensional shape measurement of wear particle by iterative volume intersection
NASA Astrophysics Data System (ADS)
Wu, Hongkun; Li, Ruowei; Liu, Shilong; Rahman, Md Arifur; Liu, Sanchi; Kwok, Ngaiming; Peng, Zhongxiao
2018-04-01
The morphology of wear particles is a fundamental indicator from which wear-oriented machine health can be assessed. Previous research has shown that thorough measurement of particle shape allows more reliable explanation of the wear mechanism that occurred. However, most current particle measurement techniques focus on extracting the two-dimensional (2-D) morphology, while other critical particle features, including volume and thickness, are not available. As a result, a three-dimensional (3-D) shape measurement method is developed to enable a more comprehensive description of particle features. The developed method is implemented in three steps: (1) particle profiles in multiple views are captured via a camera mounted above a micro fluid channel; (2) a preliminary reconstruction is accomplished by the shape-from-silhouette approach with the collected particle contours; (3) an iterative re-projection process follows to obtain the final 3-D measurement by minimizing the difference between the original and the re-projected contours. Results from real data are presented, demonstrating the feasibility of the proposed method.
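Step (2) amounts to keeping the voxels whose projections fall inside every captured silhouette (the visual hull); step (3) then perturbs this hull to minimize the contour re-projection error. Below is a sketch of the carving step only, under the simplifying assumptions of orthographic 2x3 projection matrices and boolean silhouette images:

```python
import numpy as np

def visual_hull(silhouettes, projections, grid_pts):
    """Keep a voxel only if every view projects it inside its silhouette.
    silhouettes: list of boolean images; projections: list of 2x3 arrays;
    grid_pts: (M, 3) candidate voxel centers."""
    keep = np.ones(len(grid_pts), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        uv = np.rint(grid_pts @ P.T).astype(int)          # project voxels
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < sil.shape[0]) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < sil.shape[1]))
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = sil[uv[inside, 0], uv[inside, 1]]
        keep &= hit                                       # carve away misses
    return grid_pts[keep]
```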
NASA Astrophysics Data System (ADS)
Li, Jing; Singh, Chandralekha
2017-09-01
We discuss an investigation of the difficulties that students in a university introductory physics course have with the electric field and superposition principle, and how that research was used as a guide in the development and evaluation of a research-validated tutorial on these topics to help students learn these concepts better. The tutorial uses a guided enquiry-based approach to learning and was developed and evaluated through an iterative process. During its development, we obtained feedback both from physics instructors who regularly teach introductory physics courses in which these concepts are taught and from students for whom the tutorial is intended. The iterative process continued, with the feedback incorporated into later versions of the tutorial, until the researchers were satisfied with the performance of a diverse group of introductory physics students on the post-test after they worked on the tutorial in an individual one-on-one interview situation. The final version of the tutorial was then administered in several sections of the university physics course after traditional instruction in the relevant concepts. We discuss the performance of students in individual interviews, on the pre-test administered before the tutorial (but after traditional lecture-based instruction), and on the post-test administered after the tutorial. We also compare student performance in sections of the class in which students worked on the tutorial with other similar sections in which students learned the material only via traditional instruction. We find that students performed significantly better in the sections of the class in which the tutorial was used than when the material was taught via lecture-based instruction alone.
Infant/Toddler Environment Rating Scale (ITERS-3). Third Edition
ERIC Educational Resources Information Center
Harms, Thelma; Cryer, Debby; Clifford, Richard M.; Yazejian, Noreen
2017-01-01
Building on extensive feedback from the field as well as vigorous new research on how best to support infant and toddler development and learning, the authors have revised and updated the widely used "Infant/Toddler Environment Rating Scale." ITERS-3 is the next-generation assessment tool for use in center-based child care programs for…
Low Quality of Basic Caregiving Environments in Child Care: Actual Reality or Artifact of Scoring?
ERIC Educational Resources Information Center
Norris, Deborah J.; Guss, Shannon
2016-01-01
Quality Rating Improvement Systems (QRIS) frequently include the Infant-Toddler Environment Rating Scale-Revised (ITERS-R) as part of rating and improving child care quality. However, studies utilizing the ITERS-R consistently report low quality, especially for basic caregiving items. This research examined whether the low scores reflected the…
Gorman, Susanna M
2011-09-01
Australian Human Research Ethics Committees (HRECs) have to contend with ever-increasing workloads and responsibilities which go well beyond questions of mere ethics. In this article, I shall examine how the roles of HRECs have changed, and show how this is reflected in the iterations of the National Statement on Ethical Conduct in Human Research 2007 (NS). In particular I suggest that the focus of the National Statement has shifted to concentrate on matters of research governance at the expense of research ethics, compounded by its linkage to the Australian Code for the Responsible Conduct of Research (2007) in its most recent iteration. I shall explore some of the challenges this poses for HRECs and institutions and the risks it poses to ensuring that Australian researchers receive clear ethical guidance and review.
The ITER Neutral Beam Test Facility towards SPIDER operation
NASA Astrophysics Data System (ADS)
Toigo, V.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Gambetta, G.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Piovan, R.; Recchia, M.; Rizzolo, A.; Sartori, E.; Siragusa, M.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Fröschle, M.; Heinemann, B.; Kraus, W.; Nocentini, R.; Riedl, R.; Schiesko, L.; Wimmer, C.; Wünderlich, D.; Cavenago, M.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Hemsworth, R.
2017-08-01
SPIDER is one of two projects of the ITER Neutral Beam Test Facility under construction in Padova, Italy, at the Consorzio RFX premises. It will have a 100 keV beam source with a full-size prototype of the radiofrequency ion source for the ITER neutral beam injector (NBI) and also, similar to the ITER diagnostic neutral beam, it is designed to operate with a pulse length of up to 3600 s, featuring an ITER-like magnetic filter field configuration (for high extraction of negative ions) and caesium oven (for high production of negative ions) layout as well as a wide set of diagnostics. These features will allow a reproduction of the ion source operation in ITER, which cannot be done in any other existing test facility. SPIDER realization is well advanced and the first operation is expected at the beginning of 2018, with the mission of achieving the ITER heating and diagnostic NBI ion source requirements and of improving its performance in terms of reliability and availability. This paper mainly focuses on the preparation of the first SPIDER operations—integration and testing of SPIDER components, completion and implementation of diagnostics and control and formulation of operation and research plan, based on a staged strategy.
McDonald, Paige L; Harwood, Kenneth J; Butler, Joan T; Schlumpf, Karen S; Eschmann, Carson W; Drago, Daniela
2018-12-01
Intensive courses (ICs), or accelerated courses, are gaining popularity in medical and health professions education, particularly as programs adopt e-learning models to negotiate challenges of flexibility, space, cost, and time. In 2014, the Department of Clinical Research and Leadership (CRL) at the George Washington University School of Medicine and Health Sciences began the process of transitioning two online 15-week graduate programs to an IC model. Within a year, a third program also transitioned to this model. A literature review yielded little guidance on the process of transitioning from 15-week, traditional models of delivery to IC models, particularly in online learning environments. Correspondingly, this paper describes the process by which CRL transitioned three online graduate programs to an IC model and details best practices for course design and facilitation resulting from our iterative redesign process. Finally, we present lessons-learned for the benefit of other medical and health professions' programs contemplating similar transitions. CRL: Department of Clinical Research and Leadership; HSCI: Health Sciences; IC: Intensive course; PD: Program director; QM: Quality Matters.
Baumann, Ana A.; Domenech Rodríguez, Melanie M.; Amador, Nancy G.; Forgatch, Marion S.; Parra-Cardona, J. Rubén
2015-01-01
This article describes the process of cultural adaptation at the start of the implementation of the Parent Management Training intervention-Oregon model (PMTO) in Mexico City. The implementation process was guided by the model, and the cultural adaptation of PMTO was theoretically guided by the cultural adaptation process (CAP) model. During the process of the adaptation, we uncovered the potential for the CAP to be embedded in the implementation process, taking into account broader training and economic challenges and opportunities. We discuss how cultural adaptation and implementation processes are inextricably linked and iterative and how maintaining a collaborative relationship with the treatment developer has guided our work and has helped expand our research efforts, and how building human capital to implement PMTO in Mexico supported the implementation efforts of PMTO in other places in the United States. PMID:26052184
Dementia Grief: A Theoretical Model of a Unique Grief Experience
Blandin, Kesstan; Pepin, Renee
2016-01-01
Previous literature reveals a high prevalence of grief in dementia caregivers before physical death of the person with dementia that is associated with stress, burden, and depression. To date, theoretical models and therapeutic interventions with grief in caregivers have not adequately considered the grief process, but instead have focused on grief as a symptom that manifests within the process of caregiving. The Dementia Grief Model explicates the unique process of pre-death grief in dementia caregivers. In this paper we introduce the Dementia Grief Model, describe the unique characteristics dementia grief, and present the psychological states associated with the process of dementia grief. The model explicates an iterative grief process involving three states – separation, liminality, and re-emergence – each with a dynamic mechanism that facilitates or hinders movement through the dementia grief process. Finally, we offer potential applied research questions informed by the model. PMID:25883036
Using the CER Hub to ensure data quality in a multi-institution smoking cessation study.
Walker, Kari L; Kirillova, Olga; Gillespie, Suzanne E; Hsiao, David; Pishchalenko, Valentyna; Pai, Akshatha Kalsanka; Puro, Jon E; Plumley, Robert; Kudyakov, Rustam; Hu, Weiming; Allisany, Art; McBurnie, MaryAnn; Kurtz, Stephen E; Hazlehurst, Brian L
2014-01-01
Comparative effectiveness research (CER) studies involving multiple institutions with diverse electronic health records (EHRs) depend on high quality data. To ensure uniformity of data derived from different EHR systems and implementations, the CER Hub informatics platform developed a quality assurance (QA) process using tools and data formats available through the CER Hub. The QA process, implemented here in a study of smoking cessation services in primary care, used the 'emrAdapter' tool, programmed with a set of quality checks, to query large samples of primary care encounter records extracted in accord with the CER Hub common data framework. The tool, deployed to each study site, generated error reports indicating data problems to be fixed locally and aggregate data sharable with the central site for quality review. Across the CER Hub network of six health systems, data completeness and correctness issues were prevalent in the first iteration and were considerably reduced after three iterations of the QA process. A common issue encountered was incomplete mapping of local EHR data values to those defined by the common data framework. A highly automated and distributed QA process helped to ensure the correctness and completeness of patient care data extracted from EHRs for a multi-institution CER study in smoking cessation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
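The per-record flavour of such quality checks is easy to convey. The required fields and valid codes below are hypothetical illustrations, not the CER Hub common data framework's actual definitions or the emrAdapter's real check set:

```python
REQUIRED = ("patient_id", "encounter_date", "smoking_status")
VALID_STATUS = {"current", "former", "never"}   # hypothetical framework codes

def qa_check(records):
    """One QA iteration: flag incomplete records and local EHR values
    that fail to map to the common data framework."""
    errors = []
    for i, rec in enumerate(records):
        for field in REQUIRED:
            if not rec.get(field):
                errors.append((i, field, "missing"))
        status = rec.get("smoking_status")
        if status and status not in VALID_STATUS:
            errors.append((i, "smoking_status", "unmapped: " + status))
    return errors   # fixed locally; aggregate counts shared with the center
```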
Ethics of Health Research in Communities: Perspectives From the Southwestern United States
Williams, Robert L.; Willging, Cathleen E.; Quintero, Gilbert; Kalishman, Summers; Sussman, Andrew L.; Freeman, William L.
2010-01-01
PURPOSE The increasing attention paid to community-based research highlights the question of whether human research protections focused on the individual are adequate to safeguard communities. We conducted a study to explore how community members perceive low-risk health research, the adequacy of human research protection processes, and the ethical conduct of community-based research. METHODS Eighteen focus groups were conducted among rural and urban Hispanic and Native American communities in New Mexico using a semistructured guide. Group transcriptions were analyzed using iterative readings and coding, with review of the analytic summary by group members. RESULTS Although participants recognized the value of health research, many also identified several adverse effects of research in their communities, including social (community and individual labeling, stigmatization, and discrimination) and economic (community job losses, increased insurance rates, and loss of community income). A lack of community beneficence was emphasized by participants who spoke of researchers who fail to communicate results adequately or assist with follow-through. Many group members did not believe current human research and data privacy processes were adequate to protect or assist communities. CONCLUSIONS Ethical review of community-based health research should apply the Belmont principles to communities. Researchers should adopt additional approaches to community-based research by engaging communities as active partners throughout the research process, focusing on community priorities, and taking extra precautions to assure individual and community privacy. Plans for meaningful dissemination of results to communities should be part of the research design. PMID:20843885
Hatala, Rose; Sawatsky, Adam P; Dudek, Nancy; Ginsburg, Shiphra; Cook, David A
2017-06-01
In-training evaluation reports (ITERs) constitute an integral component of medical student and postgraduate physician trainee (resident) assessment. ITER narrative comments have received less attention than the numeric scores. The authors sought both to determine what validity evidence informs the use of narrative comments from ITERs for assessing medical students and residents and to identify evidence gaps. Reviewers searched for relevant English-language studies in MEDLINE, EMBASE, Scopus, and ERIC (last search June 5, 2015), and in reference lists and author files. They included all original studies that evaluated ITERs for qualitative assessment of medical students and residents. Working in duplicate, they selected articles for inclusion, evaluated quality, and abstracted information on validity evidence using Kane's framework (inferences of scoring, generalization, extrapolation, and implications). Of 777 potential articles, 22 met inclusion criteria. The scoring inference is supported by studies showing that rich narratives are possible, that changing the prompt can stimulate more robust narratives, and that comments vary by context. Generalization is supported by studies showing that narratives reach thematic saturation and that analysts make consistent judgments. Extrapolation is supported by favorable relationships between ITER narratives and numeric scores from ITERs and non-ITER performance measures, and by studies confirming that narratives reflect constructs deemed important in clinical work. Evidence supporting implications is scant. The use of ITER narratives for trainee assessment is generally supported, except that evidence is lacking for implications and decisions. Future research should seek to confirm implicit assumptions and evaluate the impact of decisions.
The ITER project construction status
NASA Astrophysics Data System (ADS)
Motojima, O.
2015-10-01
The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of tokamak complex structures and components, construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in the prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage its procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed, and those of the EU are well under way. Significant progress has been made in addressing several outstanding physics issues, including disruption load characterization, prediction, avoidance, and mitigation; first wall and divertor shaping; edge pedestal and SOL plasma stability; fuelling and plasma behaviour during confinement transients; and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for first plasma and subsequent phases of ITER operation, the major plasma commissioning activities, and the needs of the R&D program accompanying ITER construction by the ITER parties.
Novel aspects of plasma control in ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, D.; Jackson, G.; Walker, M.
2015-02-15
ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER, including various crucial integration issues, are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.
The tug-of-war: fidelity versus adaptation throughout the health promotion program life cycle.
Bopp, Melissa; Saunders, Ruth P; Lattimore, Diana
2013-06-01
Researchers across multiple fields have described the iterative and nonlinear phases of the translational research process from program development to dissemination. This process can be conceptualized within a "program life cycle" framework that includes overlapping and nonlinear phases: development, adoption, implementation, maintenance, sustainability or termination, and dissemination or diffusion, characterized by tensions between fidelity to the original plan and adaptation for the setting and population. In this article, we describe the life cycle (phases) for research-based health promotion programs, the key influences at each phase, and the issues related to the tug-of-war between fidelity and adaptation throughout the process using a fictionalized case study based on our previous research. This article suggests the importance of reconceptualizing intervention design, involving stakeholders, and monitoring fidelity and adaptation throughout all phases to maintain implementation fidelity and completeness. Intervention fidelity should be based on causal mechanisms to ensure effectiveness, while allowing for appropriate adaption to ensure maximum implementation and sustainability. Recommendations for future interventions include considering the determinants of implementation including contextual factors at each phase, the roles of stakeholders, and the importance of developing a rigorous, adaptive, and flexible definition of implementation fidelity and completeness.
Assay Development Process | Office of Cancer Clinical Proteomics Research
Typical steps involved in the development of a mass spectrometry-based targeted assay include: (1) selection of surrogate or signature peptides corresponding to the targeted protein or modification of interest; (2) iterative optimization of instrument and method parameters for optimal detection of the selected peptide; (3) method development for protein extraction from biological matrices such as tissue, whole cell lysates, or blood plasma/serum and proteolytic digestion of proteins (usually with trypsin); (4) evaluation of the assay in the intended biological matrix to determine if e
A horizon scan of global conservation issues for 2014
Sutherland, William J.; Aveling, Rosalind; Brooks, Thomas M.; Clout, Mick; Dicks, Lynn V.; Fellman, Liz; Fleishman, Erica; Gibbons, David W.; Keim, Brandon; Lickorish, Fiona; Monk, Kathryn A.; Mortimer, Diana; Peck, Lloyd S.; Pretty, Jules; Rockström, Johan; Rodríguez, Jon Paul; Smith, Rebecca K.; Spalding, Mark D.; Tonneijck, Femke H.; Watkinson, Andrew R.
2014-01-01
This paper presents the output of our fifth annual horizon-scanning exercise, which aims to identify topics that increasingly may affect conservation of biological diversity, but have yet to be widely considered. A team of professional horizon scanners, researchers, practitioners, and a journalist identified 15 topics which were identified via an iterative, Delphi-like process. The 15 topics include a carbon market induced financial crash, rapid geographic expansion of macroalgal cultivation, genetic control of invasive species, probiotic therapy for amphibians, and an emerging snake fungal disease. PMID:24332318
A horizon scan of global conservation issues for 2015.
Sutherland, William J; Clout, Mick; Depledge, Michael; Dicks, Lynn V; Dinsdale, Jason; Entwistle, Abigail C; Fleishman, Erica; Gibbons, David W; Keim, Brandon; Lickorish, Fiona A; Monk, Kathryn A; Ockendon, Nancy; Peck, Lloyd S; Pretty, Jules; Rockström, Johan; Spalding, Mark D; Tonneijck, Femke H; Wintle, Bonnie C
2015-01-01
This paper presents the results of our sixth annual horizon scan, which aims to identify phenomena that may have substantial effects on the global environment, but are not widely known or well understood. A group of professional horizon scanners, researchers, practitioners, and a journalist identified 15 topics via an iterative, Delphi-like process. The topics include a novel class of insecticide compounds, legalisation of recreational drugs, and the emergence of a new ecosystem associated with ice retreat in the Antarctic. Copyright © 2014 Elsevier Ltd. All rights reserved.
Application of a simple cerebellar model to geologic surface mapping
Hagens, A.; Doveton, J.H.
1991-01-01
Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. © 1991.
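A minimal CMAC of the kind described, with several offset tilings indexing a sparse weight table, a prediction that sums one weight per tiling, and iterative error-correction feedback, can be sketched as follows. The tiling scheme, parameters, and sample surface are illustrative only.

```python
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, tile=10.0, alpha=0.3):
        self.n, self.tile, self.alpha = n_tilings, tile, alpha
        self.w = {}                           # sparse storage: visited cells only

    def _cells(self, x, y):
        for t in range(self.n):               # one active cell per offset tiling
            off = t * self.tile / self.n
            yield (t, int((x + off) // self.tile), int((y + off) // self.tile))

    def predict(self, x, y):
        return sum(self.w.get(c, 0.0) for c in self._cells(x, y))

    def train(self, x, y, z):
        err = z - self.predict(x, y)          # feedback from observed elevation
        for c in self._cells(x, y):
            self.w[c] = self.w.get(c, 0.0) + self.alpha * err / self.n

# Iteratively 'sense' a surface from 200 scattered elevation observations
cmac = CMAC()
pts = np.random.rand(200, 2) * 100.0
for _ in range(50):                           # iterative learning passes
    for x, y in pts:
        cmac.train(x, y, np.sin(x / 20.0) * 30.0 + y)   # hypothetical surface
```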
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. Perturbations such as the non-spherical gravity of Earth and third-body perturbations due to the Sun and Moon are included in the analytical design process. In the design process, the design is first obtained using the iterative patched conic technique without the perturbations and then modified to include them. The modification is based on (i) backward analytical propagation, including the perturbations, of the state vector obtained from the iterative patched conic technique at the sphere of influence, and (ii) quantification of the deviations in the orbital elements at the periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend on numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. Design analysis using the proposed technique provides a realistic insight into the mission aspects. The proposed design is also an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
Hohenstein, Jess; O'Dell, Dakota; Murnane, Elizabeth L; Lu, Zhengda; Erickson, David; Gay, Geri
2017-11-21
In today's health care environment, increasing costs and inadequate medical resources have created a worldwide need for more affordable diagnostic tools that are also portable, fast, and easy to use. To address this issue, numerous research and commercial efforts have focused on developing rapid diagnostic technologies; however, the efficacy of existing systems has been hindered by usability problems or high production costs, making them infeasible for deployment in at-home, point-of-care (POC), or resource-limited settings. The aim of this study was to create a low-cost optical reader system that integrates with any smart device and accepts any type of rapid diagnostic test strip to provide fast and accurate data collection, sample analysis, and diagnostic result reporting. An iterative design methodology was employed by a multidisciplinary research team to engineer three versions of a portable diagnostic testing device that were evaluated for usability and overall user receptivity. Repeated design critiques and usability studies identified a number of system requirements and considerations (eg, software compatibility, biomatter contamination, and physical footprint) that we worked to incrementally incorporate into successive system variants. Our final design phase culminated in the development of Tidbit, a reader that is compatible with any Wi-Fi-enabled device and test strip format. The Tidbit includes various features that support intuitive operation, including a straightforward test strip insertion point, external indicator lights, concealed electronic components, and an asymmetric shape, which inherently signals correct device orientation. Usability testing of the Tidbit indicates high usability for potential user communities. This study presents the design process, specification, and user reception of the Tidbit, an inexpensive, easy-to-use, portable optical reader for fast, accurate quantification of rapid diagnostic test results. Usability testing suggests that the reader is usable among and can benefit a wide group of potential users, including in POC contexts. Generally, the methodology of this study demonstrates the importance of testing these types of systems with potential users and exemplifies how iterative design processes can be employed by multidisciplinary research teams to produce compelling technological solutions.
Cultural adaptation in translational research: field experiences.
Dévieux, Jessy G; Malow, Robert M; Rosenberg, Rhonda; Jean-Gilles, Michèle; Samuels, Deanne; Ergon-Pérez, Emma; Jacobs, Robin
2005-06-01
The increase in the incidence of HIV/AIDS among minorities in the United States and in certain developing nations has prompted new intervention priorities, stressing the adaptation of efficacious interventions for diverse and marginalized groups. The experiences of Florida International University's AIDS Prevention Program in translating HIV primary and secondary prevention interventions among these multicultural populations provide insight into the process of cultural adaptations and address the new scientific emphasis on ecological validity. An iterative process involving forward and backward translation, a cultural linguistic committee, focus group discussions, documentation of project procedures, and consultations with other researchers in the field was used to modify interventions. This article presents strategies used to ensure fidelity in implementing the efficacious core components of evidence-based interventions for reducing HIV transmission and drug use behaviors, and the challenges posed by making cultural adaptations for participants with low literacy. This experience demonstrates the importance of integrating culturally relevant material in the translation process with intense focus on language and nuance. The process must ensure that the level of intervention is appropriate for the educational level of participants. Furthermore, the rights of participants must be protected during consenting procedures by instituting policies that recognize the socioeconomic, educational, and systemic pressures to participate in research.
ERIC Educational Resources Information Center
Mavrikis, Manolis; Gutierrez-Santos, Sergio
2010-01-01
This paper presents a methodology for the design of intelligent learning environments. We recognise that in the educational technology field, theory development and system design should be integrated and rely on an iterative process that addresses: (a) the difficulty of eliciting precise, concise, and operationalized knowledge from "experts" and (b)…
Item Purification Does Not Always Improve DIF Detection: A Counterexample with Angoff's Delta Plot
ERIC Educational Resources Information Center
Magis, David; Facon, Bruno
2013-01-01
Item purification is an iterative process that is often advocated as improving the identification of items affected by differential item functioning (DIF). With test-score-based DIF detection methods, item purification iteratively removes the items currently flagged as DIF from the test scores to get purified sets of items, unaffected by DIF. The…
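The purification loop itself is easy to sketch independently of the particular DIF statistic. Below is a hedged Python illustration that uses a crude matched-group pass-rate difference as a stand-in flagging rule (not Angoff's Delta plot); the data, threshold, and stopping rule are invented for the example.

```python
import numpy as np

def flag_dif(responses, group, score, threshold=0.1):
    """Toy DIF flag: per item, average the between-group difference in pass
    rate among examinees matched on the current test score. A stand-in for
    a real DIF statistic such as Angoff's Delta plot."""
    flags = []
    for item in range(responses.shape[1]):
        diffs = []
        for s in np.unique(score):
            m = score == s
            g0 = responses[m & (group == 0), item]
            g1 = responses[m & (group == 1), item]
            if len(g0) and len(g1):
                diffs.append(g0.mean() - g1.mean())
        flags.append(bool(diffs) and abs(np.mean(diffs)) > threshold)
    return np.array(flags)

def purify(responses, group, max_iter=10):
    """Iterative purification: recompute the matching score without the
    currently flagged items until the flagged set stabilizes."""
    keep = np.ones(responses.shape[1], dtype=bool)
    for _ in range(max_iter):
        score = responses[:, keep].sum(axis=1)   # purified matching score
        flags = flag_dif(responses, group, score)
        if (~flags == keep).all():               # flag set stable: stop iterating
            break
        keep = ~flags
    return flags

rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, n)
responses = (rng.random((n, 8)) < 0.6).astype(int)
responses[:, 0] = (rng.random(n) < 0.35 + 0.3 * group).astype(int)  # one DIF item
print(purify(responses, group))                  # item 0 should be flagged
```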
Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data
NASA Astrophysics Data System (ADS)
Jazayeri, S.; Kruse, S.
2017-12-01
We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D-to-2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data, yielding the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.
An improved 2D MoF method by using high order derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2017-11-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. To solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative has been deduced and applied in previous research. In this paper, the higher-order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and that the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval where the optimal solution is located can be determined. Since the higher-order derivatives of the objective function are continuous within the target interval, iteration schemes based on higher-order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be updated directly without explicitly solving the volume conservation equation. The direct update further improves efficiency, especially as the number of polygon edges increases. Halley's method, which is based on the first three derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on a quadrilateral cell and about one sixth on a decagon cell.
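Halley's method, the iteration scheme the authors adopt once the target interval is known, is a standard root-finder. A minimal sketch, assuming only a scalar function with two derivatives; the example function is illustrative, not the MoF objective:

```python
import math

def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: x <- x - 2 f f' / (2 f'^2 - f f'').
    Cubically convergent near a simple root, which is the payoff for
    carrying second-derivative information into the iteration."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2.0 * fx * dfx / (2.0 * dfx * dfx - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative use on f(x) = cos(x) - x; in the MoF setting, f would be the
# first derivative of the objective restricted to the target interval.
root = halley(lambda x: math.cos(x) - x,
              lambda x: -math.sin(x) - 1.0,
              lambda x: -math.cos(x),
              x0=1.0)
print(root)  # ≈ 0.7390851332
```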
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
How to suppress multiples is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics, and it cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. To solve this problem, we combine the feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, so that the predicted multiples match the real multiples in amplitude and phase, we design an expanded pseudo multi-channel matching filtering method to get a more accurate matching result. Finally, we apply an improved fast ICA algorithm, based on the maximum non-Gaussianity criterion of the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that it requires no prior information for the prediction of the multiples and yields a better separation result. The method has been applied to several synthetic data sets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain good multiple predictions. Using our matching method and fast ICA adaptive multiple subtraction, we can not only effectively preserve the primary energy in the seismic records but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
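The final separation step can be illustrated with scikit-learn's FastICA on a two-channel toy record. Everything here (the stand-in primary, multiple, and matched prediction) is an invented example, not the authors' wave-equation prediction or matching filter:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
primary = np.sin(40 * t) * np.exp(-3 * t)        # stand-in primary reflections
multiple = 0.7 * np.sign(np.sin(25 * t))         # stand-in free-surface multiples
recorded = primary + multiple                    # the shot record
predicted = 0.9 * multiple + 0.05 * rng.standard_normal(t.size)  # matched prediction

# Two mixtures of two sources: FastICA separates them by maximizing the
# non-Gaussianity of the outputs (a higher-order-statistics criterion), so
# primaries and multiples need not be orthogonal.
X = np.c_[recorded, predicted]
sources = FastICA(n_components=2, random_state=0).fit_transform(X)
best = max(sources.T, key=lambda s: abs(np.corrcoef(s, primary)[0, 1]))
print(abs(np.corrcoef(best, primary)[0, 1]))     # should be close to 1
```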
A Review on Medical Image Registration as an Optimization Problem
Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin
2017-01-01
Objective: In the course of clinical treatment, several medical imaging modalities are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors; this review aims to provide a comprehensive reference source for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is to establish the spatial association between two or more different images and to obtain the transformation describing their spatial relationship. For medical image registration, the process is not fixed; its core purpose is finding the conversion relationship between different images. Results: The major steps of image registration include geometric transformation, image combination, image similarity measurement, iterative optimization, and interpolation. Conclusion: The contribution of this review is to sort related image registration research methods and to provide a brief reference for researchers on image registration.
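The review's framing of registration as iterative optimization of a similarity measure can be made concrete with a few lines of SciPy; the translation-only transform and mean-squared-error metric below are deliberately minimal assumptions:

```python
import numpy as np
from scipy import ndimage, optimize

# Fixed image and a moving image that is a shifted copy of it.
rng = np.random.default_rng(0)
fixed = ndimage.gaussian_filter(rng.random((64, 64)), 3)
moving = ndimage.shift(fixed, (2.5, -1.5))

def cost(p):
    # Similarity measure (mean squared error) after transforming the moving
    # image; interpolation happens inside ndimage.shift.
    return np.mean((ndimage.shift(moving, p) - fixed) ** 2)

# Iterative optimization of the spatial transform (here, pure translation).
res = optimize.minimize(cost, x0=[0.0, 0.0], method="Powell")
print(res.x)   # ≈ (-2.5, 1.5), undoing the applied shift
```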
NASA's Platform for Cross-Disciplinary Microchannel Research
NASA Technical Reports Server (NTRS)
Son, Sang Young; Spearing, Scott; Allen, Jeffrey; Monaco, Lisa A.
2003-01-01
A team from the Structural Biology group located at the NASA Marshall Space Flight Center in Huntsville, Alabama is developing a platform suitable for cross-disciplinary microchannel research. The original objective of this engineering development effort was to deliver a multi-user flight-certified facility for iterative investigations of protein crystal growth; that is, Iterative Biological Crystallization (IBC). However, the unique capabilities of this facility are not limited to the low-gravity structural biology research community. Microchannel-based research in a number of other areas may be greatly accelerated through use of this facility. In particular, the potential for gas-liquid flow investigations and cellular biological research utilizing the exceptional pressure control and simplified coupling to macroscale diagnostics inherent in the IBC facility will be discussed. In conclusion, the opportunities for research-specific modifications to the microchannel configuration, control, and diagnostics will be discussed.
The role of simulation in the design of a neural network chip
NASA Technical Reports Server (NTRS)
Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.
1993-01-01
An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
Chen, Tinggui; Xiao, Renbin
2014-01-01
Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to address. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's rationality and effectiveness.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
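A minimal SDC sweep is sketched below, with the paper's stopping idea approximated by a sweep-to-sweep residual test. The node choice, sweep count, and test equation are illustrative assumptions, not the authors' combustion-code setup:

```python
import numpy as np

def lagrange_int_matrix(t):
    """S[m, j] = integral of the j-th Lagrange basis polynomial over
    [t_m, t_{m+1}] — the spectral quadrature that closes each sweep."""
    M = len(t)
    S = np.zeros((M - 1, M))
    for j in range(M):
        lj = np.poly1d([1.0])
        for k in range(M):
            if k != j:
                lj = lj * np.poly1d([1.0, -t[k]]) * (1.0 / (t[j] - t[k]))
        L = lj.integ()
        for m in range(M - 1):
            S[m, j] = L(t[m + 1]) - L(t[m])
    return S

def sdc_step(f, y0, t0, dt, n_nodes=4, rel_tol=1e-10, max_sweeps=30):
    # Chebyshev-Lobatto collocation nodes on [t0, t0 + dt]
    t = t0 + dt * 0.5 * (1.0 - np.cos(np.pi * np.arange(n_nodes) / (n_nodes - 1)))
    S = lagrange_int_matrix(t)
    y = np.full(n_nodes, float(y0))
    for m in range(n_nodes - 1):                 # provisional forward-Euler pass
        y[m + 1] = y[m] + (t[m + 1] - t[m]) * f(t[m], y[m])
    r_first = None
    for sweep in range(max_sweeps):
        F = np.array([f(tm, ym) for tm, ym in zip(t, y)])
        ynew = y.copy()
        for m in range(n_nodes - 1):             # low-order correction sweep
            ynew[m + 1] = (ynew[m]
                           + (t[m + 1] - t[m]) * (f(t[m], ynew[m]) - F[m])
                           + S[m] @ F)
        # Sweep-to-sweep change as a residual proxy: keep iterating until it
        # is small relative to the first sweep — the same flavor of criterion
        # the paper uses to detect and ride out soft faults.
        res = np.max(np.abs(ynew - y))
        y = ynew
        r_first = res if r_first is None else r_first
        if res <= rel_tol * max(r_first, np.finfo(float).tiny):
            break
    return y[-1]

print(sdc_step(lambda t, y: -y, 1.0, 0.0, 0.5), np.exp(-0.5))  # should agree closely
```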
Fusion Power measurement at ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertalot, L.; Barnsley, R.; Krasilnikov, V.
2015-07-01
Nuclear fusion research aims to provide energy for the future in a sustainable way, and the scope of the ITER project is to demonstrate the feasibility of nuclear fusion energy. ITER is an experimental nuclear reactor based on a large-scale fusion plasma (tokamak type) device generating Deuterium-Tritium (DT) fusion reactions with emission of 14 MeV neutrons, producing up to 700 MW of fusion power. The measurement of fusion power, i.e. total neutron emissivity, will play an important role in achieving ITER goals, in particular the fusion gain factor Q related to reactor performance. Particular attention is also given to the development of the neutron calibration strategy, whose main scope is to achieve the required accuracy of 10% for the measurement of fusion power. Neutron Flux Monitors located in diagnostic ports and inside the vacuum vessel will measure ITER's total neutron emissivity, expected to range from 10^14 n/s in Deuterium-Deuterium (DD) plasmas up to almost 10^21 n/s in DT plasmas. The neutron detection systems, as well as all other ITER diagnostics, have to withstand high nuclear radiation and electromagnetic fields as well as ultrahigh vacuum and thermal loads.
Usability Evaluation of a Clinical Decision Support System for Geriatric ED Pain Treatment.
Genes, Nicholas; Kim, Min Soon; Thum, Frederick L; Rivera, Laura; Beato, Rosemary; Song, Carolyn; Soriano, Jared; Kannry, Joseph; Baumlin, Kevin; Hwang, Ula
2016-01-01
Older adults are at risk for inadequate emergency department (ED) pain care. Unrelieved acute pain is associated with poor outcomes. Clinical decision support systems (CDSS) hold promise to improve patient care, but CDSS quality varies widely, particularly when usability evaluation is not employed. Our objective was to conduct an iterative usability and redesign process of a novel geriatric abdominal pain care CDSS. We hypothesized this process would result in the creation of more usable and favorable pain care interventions. Thirteen emergency physicians familiar with the Electronic Health Record (EHR) in use at the study site were recruited. Over a 10-week period, 17 one-hour usability test sessions were conducted across 3 rounds of testing. Participants were given 3 patient scenarios and provided simulated clinical care using the EHR while interacting with the CDSS interventions. Quantitative System Usability Scale (SUS) scores, favorability scores, and qualitative narrative feedback were collected for each session. Using a multi-step review process by an interdisciplinary team, positive and negative usability issues in effectiveness, efficiency, and satisfaction were considered, prioritized, and incorporated in the iterative redesign process of the CDSS. Video analysis was used to determine the appropriateness of the CDS appearances during simulated clinical care. Over the 3 rounds of usability evaluations and subsequent redesign processes, mean SUS progressively improved from 74.8 to 81.2 to 88.9; mean favorability scores improved from 3.23 to 4.29 (1 = worst, 5 = best). Video analysis revealed that, in the course of the iterative redesign processes, rates of physicians' acknowledgment of CDS interventions increased; however, most rates of desired actions by physicians (such as more frequent pain score updates) decreased. The iterative usability redesign process was instrumental in improving the usability of the CDSS; if implemented in practice, it could improve geriatric pain care. The usability evaluation process led to improved acknowledgement and favorability. Incorporating usability testing when designing CDSS interventions for studies may be effective to enhance clinician use.
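The SUS values quoted above come from the standard System Usability Scale scoring rule, which is simple to compute; the ratings in this sketch are hypothetical:

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5; odd items contribute
    (rating - 1), even items contribute (5 - rating); the sum times 2.5
    yields a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i = 0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

# One participant's (hypothetical) ratings:
print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # -> 87.5
```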
NASA Astrophysics Data System (ADS)
Baránek, M.; Běhal, J.; Bouchal, Z.
2018-01-01
In phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for its simplicity of implementation. This iterative process can advantageously be deployed in combination with a spatial light modulator (SLM), enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of aberration correction using the GS algorithm can be significantly enhanced by a vortex image spot used as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
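The basic GS loop being optimized here is short enough to show in full. A minimal sketch with uniform illumination and a Gaussian target spot; the vortex target and SLM aberration terms of the paper are omitted:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=200):
    """Classic GS loop: propagate between source and target planes with FFTs,
    keeping the computed phase but enforcing the known amplitude in each plane."""
    rng = np.random.default_rng(0)
    field = source_amp * np.exp(1j * 2 * np.pi * rng.random(source_amp.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # enforce target amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # enforce source amplitude
    return np.angle(field)    # phase mask to display, e.g., on an SLM

# Toy example: uniform illumination shaped into a Gaussian far-field spot.
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
source = np.ones((n, n))
target = np.fft.ifftshift(np.exp(-(x ** 2 + y ** 2) / 50.0))
phase = gerchberg_saxton(source, target)
```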
Stakeholder prioritization of zoonoses in Japan with analytic hierarchy process method.
Kadohira, M; Hill, G; Yoshizaki, R; Ota, S; Yoshikawa, Y
2015-05-01
There exists an urgent need to develop iterative risk assessment strategies for zoonotic diseases. The aim of this study is to develop a method of prioritizing 98 zoonoses derived from animal pathogens in Japan, involving four major groups of stakeholders: researchers, physicians, public health officials, and citizens. We used a combination of risk profiling and the analytic hierarchy process (AHP). Risk profiling was accomplished with semi-quantitative analysis of existing public health data. AHP data collection was performed by administering questionnaires to the four stakeholder groups. Results showed that researchers and public health officials regarded case fatality as the most important factor, while physicians and citizens placed more weight on diagnosis and prevention, respectively. Most of the six top-ranked diseases were similar among all stakeholders. Transmissible spongiform encephalopathy, severe acute respiratory syndrome, and Ebola fever were ranked first, second, and third, respectively.
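The core AHP computation, deriving priority weights from a pairwise-comparison matrix and checking its consistency, is a short eigenvector calculation. The comparison matrix below is invented for illustration, not taken from the study's questionnaires:

```python
import numpy as np

def ahp_weights(pairwise, random_index=0.90):
    """Priority weights = principal eigenvector of the positive reciprocal
    pairwise-comparison matrix, normalized to sum to 1. Also returns Saaty's
    consistency ratio (random_index 0.90 corresponds to n = 4 criteria)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals.real[k] - n) / (n - 1)            # consistency index
    return w, ci / random_index                  # weights, consistency ratio

# Hypothetical pairwise comparisons of 4 criteria (e.g., case fatality,
# diagnosis, prevention, economic impact) on Saaty's 1-9 scale:
A = np.array([[1,   3,   5,   2],
              [1/3, 1,   2,   1/2],
              [1/5, 1/2, 1,   1/3],
              [1/2, 2,   3,   1]], dtype=float)
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))  # weights; CR < 0.1 is conventionally acceptable
```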
Ellis, Heidi J C; Nowling, Ronald J; Vyas, Jay; Martyn, Timothy O; Gryk, Michael R
2011-04-11
The CONNecticut Joint University Research (CONNJUR) team is a group of biochemical and software engineering researchers at multiple institutions. The vision of the team is to develop a comprehensive application that integrates a variety of existing analysis tools with workflow and data management to support the process of protein structure determination using Nuclear Magnetic Resonance (NMR). The use of multiple disparate tools and lack of data management, currently the norm in NMR data processing, provides strong motivation for such an integrated environment. This manuscript briefly describes the domain of NMR as used for protein structure determination and explains the formation of the CONNJUR team and its operation in developing the CONNJUR application. The manuscript also describes the evolution of the CONNJUR application through four prototypes and describes the challenges faced while developing the CONNJUR application and how those challenges were met.
Issues and challenges of involving users in medical device development.
Bridgelal Ram, Mala; Grocott, Patricia R; Weir, Heather C M
2008-03-01
User engagement has become a central tenet of health-care policy. This paper reports on a case study in progress that highlights user engagement in the research process in relation to medical device development. The aims were to work with a specific group of medical device users to uncover unmet needs, translating these into design concepts, novel technologies and products, and to validate a knowledge transfer model that may be replicated for a range of medical device applications and user groups. An in-depth qualitative case study was used to elicit and analyse user needs. The focus is on identifying design concepts for medical device applications from unmet needs, and validating these in an iterative feedback loop with the users. The case study has highlighted three interrelated challenges: ensuring unmet needs drive new design concepts and technology development, managing user expectations, and managing the research process. Despite the challenges, active participation of users is crucial to developing usable and clinically effective devices.
NASA Astrophysics Data System (ADS)
Evans, T. E.
2013-07-01
Large edge-localized mode (ELM) control techniques must be developed to help ensure the success of burning and ignited fusion plasma devices such as tokamaks and stellarators. In full performance ITER tokamak discharges, with Q_DT = 10, the energy released by a single ELM could reach ~30 MJ, which is expected to result in an energy density of 10-15 MJ/m² on the divertor targets. This will exceed the estimated divertor ablation limit by a factor of 20-30. A worldwide research program is underway to develop various types of ELM control techniques in preparation for ITER H-mode plasma operations. An overview of the ELM control techniques currently being developed is discussed along with the requirements for applying these techniques to plasmas in ITER. Particular emphasis is given to the primary approaches, pellet pacing and resonant magnetic perturbation fields, currently being considered for ITER.
QUAGOL: a guide for qualitative data analysis.
Dierckx de Casterlé, Bernadette; Gastmans, Chris; Bryon, Els; Denier, Yvonne
2012-03-01
Data analysis is a complex and contested part of the qualitative research process, which has received limited theoretical attention. Researchers are often in need of useful instructions or guidelines on how to analyze the mass of qualitative data, but face a lack of clear guidance for using particular analytic methods. The aim of this paper is to propose and discuss the Qualitative Analysis Guide of Leuven (QUAGOL), a guide developed in order to truly capture the rich insights of qualitative interview data. The article describes six major problems researchers often struggle with during the process of qualitative data analysis. Consequently, the QUAGOL is proposed as a guide to facilitate the process of analysis. Challenges encountered and lessons learned from our own extensive experience with qualitative data analysis within the Grounded Theory Approach, as well as from the experiences of other researchers (as described in the literature), are discussed and recommendations presented. Strengths and pitfalls of the proposed method are discussed in detail. The QUAGOL offers a comprehensive method to guide the process of qualitative data analysis. The process consists of two parts, each consisting of five stages. The method is systematic but not rigid. It is characterized by iterative processes of digging deeper, constantly moving between the various stages of the process. As such, it aims to stimulate the researcher's intuition and creativity as much as possible. The QUAGOL is a theory- and practice-based guide that supports and facilitates the process of analysis of qualitative interview data. Although the method can facilitate the process of analysis, it cannot guarantee automatic quality. The skills of the researcher and the quality of the research team remain the most crucial components of a successful process of analysis. Additionally, the importance of constantly moving between the various stages throughout the research process cannot be overstated.
Paudel, M; MacKenzie, M; Fallone, B; Rathee, S
2012-06-01
To evaluate the performance of a model-based image reconstruction in reducing metal artifacts in MVCT systems, and to compare it with the filtered back-projection (FBP) technique. The iterative maximum likelihood polychromatic algorithm for CT (IMPACT) is used with the pair/triplet production process and the energy-dependent response of the detectors. The beam spectra for the in-house bench-top and Tomotherapy™ MVCTs are modelled for use in IMPACT. The energy-dependent gain of the detectors is calculated using a constrained optimization technique and the measured attenuation produced by 0-24 cm thick solid water slabs. A cylindrical (19 cm diameter) plexiglass phantom containing various central cylindrical inserts (relative electron density of 0.28-1.69) between two steel rods (2 cm diameter) is scanned in the bench-top [bremsstrahlung radiation from a 6 MeV electron beam passed through 4 cm of solid water on the Varian Clinac 2300C] and Tomotherapy™ MVCTs. The FBP reconstructs images from the raw signal normalised to an air scan and corrected for beam hardening using a uniform plexiglass cylinder (20 cm diameter). IMPACT starts with the FBP-reconstructed seed image and reconstructs the final image at 1.25 MeV in 150 iterations. FBP produces a visible dark shading in the image between the two steel rods that becomes darker with higher-density central inserts, causing a 5-8% underestimation of electron density compared to the case without the steel rods. In the IMPACT image, the dark shading connecting the steel rods is nearly removed and the uniform background restored. The average attenuation coefficients of the inserts and the background are very close to the corresponding theoretical values at 1.25 MeV. The dark shading metal artifact due to beam hardening can thus be removed in MVCT using an iterative reconstruction algorithm such as IMPACT. However, accurate modelling of the detectors' energy-dependent response and the physical processes is crucial for successful implementation.
Improving marine disease surveillance through sea temperature monitoring, outlooks and projections.
Maynard, Jeffrey; van Hooidonk, Ruben; Harvell, C Drew; Eakin, C Mark; Liu, Gang; Willis, Bette L; Williams, Gareth J; Groner, Maya L; Dobson, Andrew; Heron, Scott F; Glenn, Robert; Reardon, Kathleen; Shields, Jeffrey D
2016-03-05
To forecast marine disease outbreaks as oceans warm requires new environmental surveillance tools. We describe an iterative process for developing these tools that combines research, development and deployment for suitable systems. The first step is to identify candidate host-pathogen systems. The 24 candidate systems we identified include sponges, corals, oysters, crustaceans, sea stars, fishes and sea grasses (among others). To illustrate the other steps, we present a case study of epizootic shell disease (ESD) in the American lobster. Increasing prevalence of ESD is a contributing factor to lobster fishery collapse in southern New England (SNE), raising concerns that disease prevalence will increase in the northern Gulf of Maine under climate change. The lowest maximum bottom temperature associated with ESD prevalence in SNE is 12 °C. Our seasonal outlook for 2015 and long-term projections show bottom temperatures greater than or equal to 12 °C may occur in this and coming years in the coastal bays of Maine. The tools presented will allow managers to target efforts to monitor the effects of ESD on fishery sustainability and will be iteratively refined. The approach and case example highlight that temperature-based surveillance tools can inform research, monitoring and management of emerging and continuing marine disease threats.
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
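The zoom-in idea can be approximated in a few lines: take the coarse fFT peak, then iteratively re-evaluate the DFT sum on an ever-narrower local frequency grid. This is a sketch in the spirit of the ilFT, not the published algorithm:

```python
import numpy as np

def local_dft(x, fs, freqs):
    """Direct DFT of signal x evaluated at an arbitrary set of frequencies."""
    t = np.arange(x.size) / fs
    return np.exp(-2j * np.pi * np.outer(freqs, t)) @ x

def zoom_peak(x, fs, n_zoom=10, grid=32):
    """Coarse FFT peak, then iterative local refinement of the frequency grid."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(np.abs(spec))]
    width = fs / x.size                      # start with one FFT bin
    for _ in range(n_zoom):                  # each pass multiplies the resolution
        grid_f = np.linspace(f - width, f + width, grid)
        f = grid_f[np.argmax(np.abs(local_dft(x, fs, grid_f)))]
        width /= grid / 4                    # shrink the local window
    return f

fs, f_true = 1024.0, 123.4567
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * f_true * t)
print(zoom_peak(x, fs))                      # ≈ 123.4567, far finer than the 0.5 Hz bin
```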
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of Taylor series approximation in an iterative process. For the reduced basis a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process, can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
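The proposed modification, a Taylor approximation used as the initial estimate of an iterative process, can be sketched for a linear system K u = f; the random matrices below stand in for a real stiffness matrix and design change:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
K0 = np.diag(np.full(n, 4.0)) + 0.1 * rng.standard_normal((n, n))
K0 = 0.5 * (K0 + K0.T)                      # stiffness of the analyzed design
M = 0.05 * rng.standard_normal((n, n))
dK = 0.5 * (M + M.T)                        # small design modification
f = rng.standard_normal(n)

u0 = np.linalg.solve(K0, f)                 # full analysis of the original design
# First-order Taylor estimate of the modified response: K0 du = -dK u0
u = u0 + np.linalg.solve(K0, -dK @ u0)

# Use the Taylor estimate as the initial guess of an iterative refinement;
# every cycle reuses K0 (already factorized in practice), never the modified
# matrix, so each cycle is far cheaper than a full reanalysis.
for cycle in range(5):
    r = f - (K0 + dK) @ u                   # residual of the modified system
    u += np.linalg.solve(K0, r)
    print(cycle, np.linalg.norm(r))         # residual shrinks every cycle
```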
Kassam, Aliya; Donnon, Tyrone; Rigby, Ian
2014-03-01
There is a question of whether a single assessment tool can assess the key competencies of residents as mandated by the Royal College of Physicians and Surgeons of Canada CanMEDS roles framework. The objective of the present study was to investigate the reliability and validity of an emergency medicine (EM) in-training evaluation report (ITER). ITER data from 2009 to 2011 were combined for residents across the 5 years of the EM residency training program. An exploratory factor analysis with varimax rotation was used to explore the construct validity of the ITER. A total of 172 ITERs were completed on residents across their first to fifth years of training. The combined, 24-item ITER yielded a five-factor solution measuring the CanMEDS roles via Medical Expert/Scholar, Communicator/Collaborator, Professional, Health Advocate and Manager subscales. The factor solution accounted for 79% of the variance, and reliability coefficients (Cronbach alpha) ranged from α = 0.90 to 0.95 for each subscale and α = 0.97 overall. The combined, 24-item ITER used to assess residents' competencies in the EM residency program showed strong reliability and evidence of construct validity for assessment of the CanMEDS roles. Further research is needed to develop and test ITER items that will differentiate each CanMEDS role exclusively.
Collier, Aileen; Wyer, Mary
2016-06-01
Patient safety research has to date offered few opportunities for patients and families to be actively involved in the research process. This article describes our collaboration with patients and families in two separate studies, involving end-of-life care and infection control in acute care. We used the collaborative methodology of video-reflexive ethnography, which has been primarily used with clinicians, to involve patients and families as active participants and collaborators in our research. The purpose of this article is to share our experiences and findings that iterative researcher reflexivity in the field was critical to the progress and success of each study. We present and analyze the complexities of reflexivity-in-the-field through a framework of multilayered reflexivity. We share our lessons here for other researchers seeking to actively involve patients and families in patient safety research using collaborative visual methods.
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
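The decoupled scheme can be sketched with a toy 1D system: one tomographic MLEM update acting on the blurred image, followed by several cheap image-space EM (Richardson-Lucy) iterations. The matrices, sizes, and iteration counts below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 96
A = rng.random((m, n))                        # toy tomographic system matrix
idx = np.arange(n)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.5) ** 2)
H /= H.sum(axis=1, keepdims=True)             # image-space resolution kernel

x_true = np.zeros(n)
x_true[20] = 1.0
x_true[40:44] = 0.5
y = A @ (H @ x_true)                          # noiseless projection data

x = np.ones(n)                                # deconvolved image estimate
for outer in range(200):
    w = H @ x                                 # blurred image seen by the scanner
    # one tomographic EM (MLEM) update of the blurred image
    w *= (A.T @ (y / (A @ w))) / A.sum(axis=0)
    # several cheap image-space EM (Richardson-Lucy) iterations that solve the
    # deconvolution problem w ~ H x — the nested, decoupled step
    for inner in range(5):
        x *= (H.T @ (w / (H @ x))) / H.sum(axis=0)
print(np.round(x[18:23], 2))                  # sharpened estimate concentrates near index 20
```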
ERIC Educational Resources Information Center
Rodriguez, Gabriel R.
2017-01-01
A growing number of schools are implementing PLCs to address school improvement; staff engage with data to identify student needs and determine instructional interventions. This is a starting point for engaging in the iterative process of learning for the teacher in order to increase student learning (Hord & Sommers, 2008). The iterative process…
Building an experience factory for maintenance
NASA Technical Reports Server (NTRS)
Valett, Jon D.; Condon, Steven E.; Briand, Lionel; Kim, Yong-Mi; Basili, Victor R.
1994-01-01
This paper reports the preliminary results of a study of the software maintenance process in the Flight Dynamics Division (FDD) of the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). This study is being conducted by the Software Engineering Laboratory (SEL), a research organization sponsored by the Software Engineering Branch of the FDD, which investigates the effectiveness of software engineering technologies when applied to the development of applications software. This software maintenance study began in October 1993 and is being conducted using the Quality Improvement Paradigm (QIP), a process improvement strategy based on three iterative steps: understanding, assessing, and packaging. The preliminary results represent the outcome of the understanding phase, during which SEL researchers characterized the maintenance environment, product, and process. Findings indicate that a combination of quantitative and qualitative analysis is effective for studying the software maintenance process, that additional measures should be collected for maintenance (as opposed to new development), and that characteristics such as effort, error rate, and productivity are best considered on a 'release' basis rather than on a project basis. The research thus far has documented some basic differences between new development and software maintenance. It lays the foundation for further application of the QIP to investigate means of improving the maintenance process and product in the FDD.
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
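A drastically simplified GPIES-style loop, with scikit-learn's GP as the surrogate and a plain ensemble-smoother update, might look as follows; the forward model, ensemble sizes, and base-point refinement rule are stand-ins for the paper's subsurface setting:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def forward(theta):
    """Expensive forward model (here a cheap toy with two observables)."""
    return np.array([theta ** 2, np.sin(theta)])

obs = forward(1.3) + 0.01 * np.random.default_rng(0).standard_normal(2)
R = 0.01 ** 2 * np.eye(2)                    # observation error covariance

rng = np.random.default_rng(1)
ens = rng.normal(0.0, 1.0, 500)              # large parameter ensemble
base = np.linspace(-2, 2, 5)                 # the few original-model runs

for it in range(4):
    gp = GaussianProcessRegressor().fit(base[:, None],
                                        np.array([forward(b) for b in base]))
    sim = gp.predict(ens[:, None])           # surrogate predictions: nearly free
    # Kalman-type ensemble update using surrogate-derived covariances
    dth = ens - ens.mean()
    dy = sim - sim.mean(axis=0)
    C_ty = dth @ dy / len(ens)               # parameter-observable cross-covariance
    C_yy = dy.T @ dy / len(ens) + R
    gain = C_ty @ np.linalg.inv(C_yy)
    noise = rng.multivariate_normal(np.zeros(2), R, len(ens))
    ens = ens + (obs - sim - noise) @ gain
    # adaptively refine the surrogate: add one new base point near the update
    base = np.append(base, ens.mean())
print(ens.mean())                            # ensemble mean should move toward 1.3
```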
Iteration of ultrasound aberration correction methods
NASA Astrophysics Data System (ADS)
Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond
2004-05-01
Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimating the TDA filter, and performing correction on transmit and receive, has proven difficult, and it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive-energy-maximizing criterion. Simulations of iterated aberration correction with a TDA filter were investigated to study its convergence properties. Aberration was generated by weak and strong human-body wall models, both emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
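The first estimation method, correlating each element signal with a reference, reduces to peak-picking a cross-correlation. A toy sketch with an invented pulse and integer-sample aberrator delays; in adaptive imaging this estimate-correct cycle is simply repeated:

```python
import numpy as np

rng = np.random.default_rng(0)
n_el, n_t = 32, 400
true_delay = rng.integers(-5, 6, n_el)        # per-element aberrator delays (samples)
t = np.arange(n_t)
pulse = np.sin(2 * np.pi * 0.1 * t) * np.exp(-((t - 200.0) / 40.0) ** 2)
signals = np.array([np.roll(pulse, d) for d in true_delay])
signals += 0.05 * rng.standard_normal(signals.shape)   # receive noise

ref = signals.mean(axis=0)                    # beamsum as the reference signal
est = np.array([np.argmax(np.correlate(s, ref, mode="full")) - (n_t - 1)
                for s in signals])
corrected = np.array([np.roll(s, -d) for s, d in zip(signals, est)])
# The estimates track the aberration profile up to a common offset:
print(np.abs((est - true_delay) - round(np.mean(est - true_delay))).max())
```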
NASA Astrophysics Data System (ADS)
Furuichi, Mikito; Nishiura, Daisuke
2017-10-01
We developed dynamic load-balancing algorithms for particle simulation methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit (MPS) method, and the Discrete Element Method (DEM). These are needed to handle billions of particles modeled on large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in a column to differ for each row. The imbalances in execution time between parallel logical processes are treated as a nonlinear residual. Load balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
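In one dimension, the idea reduces to nudging sub-domain boundaries against the measured imbalance until the residual is small. The relaxation factor and particle distribution below are invented; the paper's method additionally works on 2D orthogonal decompositions with a multigrid smoother:

```python
import numpy as np

# Particle x-positions with a strongly non-uniform distribution.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.3, 0.05, 80000), rng.random(20000)])

n_dom = 8
bounds = np.linspace(0, 1, n_dom + 1)         # initial equal-width sub-domains

for it in range(50):
    counts = np.histogram(x, bounds)[0].astype(float)  # proxy for execution time
    target = counts.sum() / n_dom
    residual = counts - target                # per-process imbalance (the "residual")
    if np.abs(residual).max() < 0.01 * target:
        break
    # smoother: shift each interior boundary against the cumulative imbalance
    # to its left, scaled by the local particle density (nonlinear relaxation)
    for i in range(1, n_dom):
        density = counts[i - 1] / (bounds[i] - bounds[i - 1])
        bounds[i] -= 0.5 * residual[:i].sum() / max(density, 1e-9)
    bounds = np.clip(bounds, 0, 1)
    bounds.sort()
print(np.histogram(x, bounds)[0])             # near-equal workloads per process
```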
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
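The multiresolution idea can be illustrated with a two-level Landweber iteration on a toy 1D deblurring problem: iterate cheaply on a restricted problem, then prolongate the result to initialize a short full-resolution run. The operators and iteration counts are illustrative assumptions, not the authors' GPU reconstruction:

```python
import numpy as np

def blur_matrix(n, sigma=2.0):
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)   # row-normalized Gaussian blur

def landweber(A, b, x0, n_iter, tau=1.0):
    x = x0.copy()
    for _ in range(n_iter):
        x += tau * A.T @ (b - A @ x)          # basic iterative update
    return x

n = 256
x_true = (np.abs(np.arange(n) - 128) < 30).astype(float)
A = blur_matrix(n)
b = A @ x_true

# Coarse level: restrict the data and run cheap iterations on half the voxels.
bc = b.reshape(-1, 2).mean(axis=1)
Ac = blur_matrix(n // 2, sigma=1.0)           # same blur in coarse-grid units
xc = landweber(Ac, bc, np.zeros(n // 2), 200)

# Prolongate and polish: the full-size volume is only needed briefly,
# which is where the memory saving of the multiresolution scheme comes from.
x = landweber(A, b, np.repeat(xc, 2), 50)
print(np.abs(x - x_true).mean())              # small residual error after the polish
```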
Stålberg, Anna; Sandberg, Anette; Söderbäck, Maja; Larsson, Thomas
2016-06-01
During the last decade, interactive technology has entered mainstream society. Its many users also include children, even the youngest ones, who use the technology in different situations for both fun and learning. When designing technology for children, it is crucial to involve children in the process in order to arrive at an age-appropriate end product. In this study we describe the specific iterative process by which an interactive application was developed. This application is intended to facilitate young children's (three to five years old) participation in healthcare situations. We also describe the specific contributions of the children, who tested the prototypes in a preschool, a primary health care clinic and an outpatient unit at a hospital during the development process. The iterative phases enabled the children to be involved at different stages of the process and to evaluate modifications and improvements made after each prior iteration. The children contributed their own perspectives (the child's perspective) on the usability, content and graphic design of the application, substantially improving the software and resulting in an age-appropriate product.
Iterative dip-steering median filter
NASA Astrophysics Data System (ADS)
Huo, Shoudong; Zhu, Weihong; Shi, Taikun
2017-09-01
Seismic data are always contaminated with high noise components, which present processing challenges, especially for signal preservation and true-amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. The standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of the random noise to be attenuated and the amount of signal to be preserved. It then applies a median filter along the dominant dip and retains the signals. Iterations process the residual signals along the remaining dominant dips in descending sequence, until all signals have been retained. The method is tested on both synthetic and field data gathers and compared with the commonly used f-k least-squares denoising and f-x deconvolution.
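A single pass of the dip-steer-then-median idea is easy to sketch: scan trial dips for maximum stacked energy, flatten the gather along the winner, run a trace-wise median filter, and unshift; iterating on the residual handles further dips. The slant-stack scan below stands in for the paper's Fourier-radial transform:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
nt, nx, dip_true = 256, 48, 2                 # samples, traces, samples/trace dip
wavelet = np.exp(-0.5 * ((np.arange(nt) - 60) / 4.0) ** 2)
data = np.array([np.roll(wavelet, dip_true * ix) for ix in range(nx)]).T
data += 0.5 * rng.standard_normal(data.shape)          # heavy random noise

def dominant_dip(d, dips=range(-5, 6)):
    """Pick the trial dip that maximizes stacked energy (a simple stand-in
    for the f-k domain Fourier-radial scan)."""
    energy = [np.sum(np.mean([np.roll(d[:, i], -p * i)
                              for i in range(d.shape[1])], axis=0) ** 2)
              for p in dips]
    return list(dips)[int(np.argmax(energy))]

p = dominant_dip(data)                        # recovers dip_true = 2
flat = np.array([np.roll(data[:, i], -p * i) for i in range(nx)]).T   # flatten event
signal = median_filter(flat, size=(1, 9))     # median across traces along the dip
retained = np.array([np.roll(signal[:, i], p * i) for i in range(nx)]).T
residual = data - retained                    # iterate on this for remaining dips
```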
Wilbur, Kerry; Mousa Bacha, Rasha; Abdelaziz, Somaia
2017-03-17
To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Document analysis found all programs' ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar. Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences.
Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen
2014-06-21
Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.
Model for Simulating a Spiral Software-Development Process
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Curley, Charles; Nayak, Umanath
2010-01-01
A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
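As a rough illustration of how such a simulation contrasts process choices, here is a toy spiral-iteration loop in Python. The productivity, defect and rework rates are illustrative assumptions, not values from the PATT-based model.

```python
# Toy spiral-process cost model: code an increment per iteration, inject
# defects, detect and rework a fraction of them each cycle. All rates are
# made up for illustration.
def simulate_spiral(total_loc, loc_per_hour=5.0, defects_per_kloc=10.0,
                    detect_rate=0.7, n_iterations=4):
    effort_h, latent_defects, per_iter = 0.0, 0.0, total_loc / n_iterations
    for _ in range(n_iterations):
        effort_h += per_iter / loc_per_hour           # code the increment
        latent_defects += per_iter / 1000.0 * defects_per_kloc
        found = detect_rate * latent_defects          # test this iteration
        effort_h += found * 0.5                       # rework: 0.5 h/defect
        latent_defects -= found
    return effort_h, latent_defects

effort, escaped = simulate_spiral(50_000)
print(f"effort {effort:.0f} h, escaped defects {escaped:.1f}")
```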
Varying face occlusion detection and iterative recovery for face recognition
NASA Astrophysics Data System (ADS)
Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei
2017-05-01
In most sparse representation methods for face recognition (FR), occlusion problems are usually addressed by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, a combination of image processing and intersection-based clustering is used for occlusion FR; (2) according to the accurate occlusion map, new integrated facial images are recovered iteratively and fed into the recognition process; and (3) the effectiveness of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong
2013-11-01
An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with reliability judgment, which greatly reduces the number of variable nodes involved in subsequent iterations and accelerates the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the HRBP decoding algorithm can greatly reduce the computational load while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable at a threshold value of 15, but in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is increased further, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. The HRBP decoding algorithm is therefore well suited to optical communication systems.
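A toy illustration of the reliability-gating idea, assuming a tiny parity-check matrix and made-up channel log-likelihood ratios (LLRs): variable nodes whose posterior |LLR| exceeds a threshold are frozen, so later iterations update fewer messages. This is a schematic reading of HRBP, not the authors' algorithm.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])                     # toy parity-check matrix
llr_ch = np.array([-2.1, 1.4, -0.3, 3.2, -1.8, 0.9])   # illustrative channel LLRs

def hrbp(H, llr_ch, threshold=4.0, n_iter=30):
    m, n = H.shape
    msg_vc = H * llr_ch                       # variable-to-check messages
    active = np.ones(n, dtype=bool)           # nodes still being updated
    for _ in range(n_iter):
        # Check-to-variable update (tanh rule over the other neighbours).
        t = np.where(H == 1, np.tanh(msg_vc / 2.0), 1.0)
        prod = np.prod(t, axis=1, keepdims=True)
        safe_t = np.where(np.abs(t) < 1e-12, 1e-12, t)
        msg_cv = H * 2.0 * np.arctanh(np.clip(prod / safe_t, -0.999999, 0.999999))
        post = llr_ch + msg_cv.sum(axis=0)    # posterior LLRs
        hard = (post < 0).astype(int)
        if not np.any(H @ hard % 2):          # all parity checks satisfied
            return hard, post
        # Freeze reliable nodes; only active ones refresh their messages.
        active &= np.abs(post) < threshold
        upd = H * (post - msg_cv)             # extrinsic variable update
        msg_vc = np.where(H == 1, np.where(active, upd, msg_vc), 0.0)
    return hard, post

bits, llrs = hrbp(H, llr_ch)
print(bits, np.round(llrs, 2))
```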
Rescheduling with iterative repair
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael
1992-01-01
This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
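A minimal sketch of the repair loop's shape, assuming unit-duration tasks competing for a single capacity-limited resource and a min-conflicts move; the real system handles far richer constraints and optimization criteria.

```python
import random
from collections import Counter

def conflicts(schedule, capacity):
    """Tasks sitting in a time slot whose load exceeds the resource capacity."""
    load = Counter(schedule.values())
    return [t for t, slot in schedule.items() if load[slot] > capacity]

def iterative_repair(tasks, n_slots, capacity, max_steps=1000, seed=0):
    rng = random.Random(seed)
    schedule = {t: rng.randrange(n_slots) for t in tasks}   # flawed initial schedule
    for _ in range(max_steps):
        bad = conflicts(schedule, capacity)
        if not bad:
            return schedule                  # conflict-free
        task = rng.choice(bad)               # repair one violation at a time
        # Min-conflicts move: relocate the task to the least loaded slot.
        load = Counter(s for t, s in schedule.items() if t != task)
        schedule[task] = min(range(n_slots), key=lambda s: load[s])
    return schedule                          # anytime: best effort so far

print(iterative_repair(tasks=list(range(12)), n_slots=4, capacity=3))
```

Returning the current schedule when the step budget runs out is what gives the loop its anytime character: the answer improves monotonically in conflict count but is usable at any cutoff.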
Scheduling and rescheduling with iterative repair
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael
1992-01-01
This paper describes the GERRY scheduling and rescheduling system being applied to coordinate Space Shuttle Ground Processing. The system uses constraint-based iterative repair, a technique that starts with a complete but possibly flawed schedule and iteratively improves it by using constraint knowledge within repair heuristics. In this paper we explore the tradeoff between the informedness and the computational cost of several repair heuristics. We show empirically that some knowledge can greatly improve the convergence speed of a repair-based system, but that too much knowledge, such as the knowledge embodied within the MIN-CONFLICTS lookahead heuristic, can overwhelm a system and result in degraded performance.
Service-Learning in the Environmental Sciences for Teaching Sustainability Science
NASA Astrophysics Data System (ADS)
Truebe, S.; Strong, A. L.
2016-12-01
Understanding and developing effective strategies for the use of community-engaged learning (service-learning) approaches in the environmental geosciences is an important research need in curricular and pedagogical innovation for sustainability. In 2015, we designed and implemented a new community-engaged learning practicum course through the Earth Systems Program in the School of Earth, Energy and Environmental Sciences at Stanford University focused on regional open space management and land stewardship. Undergraduate and graduate students partnered with three different regional land trust and environmental stewardship organizations to conduct quarter-long research projects ranging from remote sensing studies of historical land use, to fire ecology, to ranchland management, to volunteer retention strategies. Throughout the course, students reflected on the decision-making processes and stewardship actions of the organizations. Two iterations of the course were run in Winter and Fall 2015. Using coded and analyzed pre- and post-course student surveys from the two course iterations, we evaluate undergraduate and graduate student learning outcomes and changes in perceptions and understanding of sustainability science. We find that engagement with community partners to conduct research projects on a wide variety of aspects of open space management, land management, and environmental stewardship (1) increased students' understanding of the trade-offs inherent in sustainability and resource management and (2) altered student perceptions of the role of scientific information and research in environmental management and decision-making. Furthermore, whereas students initially conceived of open space as purely ecological/biophysical, by the end of the course (3) they understood open space as a coupled human/ecological system. This shift is crucial for student development as sustainability scientists.
Hochstenbach, Laura M J; Courtens, Annemie M; Zwakhalen, Sandra M G; Vermeulen, Joan; van Kleef, Maarten; de Witte, Luc P
2017-08-01
Co-creative methods, having an iterative character and including different perspectives, allow for the development of complex nursing interventions. Information about the development process is essential in providing justification for the ultimate intervention and crucial in interpreting the outcomes of subsequent evaluations. This paper describes a co-creative method directed towards the development of an eHealth intervention delivered by registered nurses to support self-management in outpatients with cancer pain. Intervention development was divided into three consecutive phases (exploration of context, specification of content, organisation of care). In each phase, researchers and technicians addressed five iterative steps: research, ideas, prototyping, evaluation, and documentation. Health professionals and patients were consulted during the research and evaluation steps. Collaboration of researchers, health professionals, patients and technicians was positive and valuable in optimising outcomes. The intervention includes a mobile application for patients and a web application for nurses. Patients are requested to monitor pain, adverse effects and medication intake, while being provided with graphical feedback, education and contact possibilities. Nurses monitor data, advise patients, and collaborate with the treating physician. Integration of patient self-management and professional care by means of eHealth addresses well-known barriers and seems promising for improving cancer pain follow-up. Nurses are able to make substantial contributions because of their expertise, their focus on daily living, and their bridging function between patients and health professionals in different care settings. Insights from the intervention development, as well as the intervention content, suggest applications to other patient groups and care settings. Copyright © 2017 Elsevier Inc. All rights reserved.
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-01-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, with their lower sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968
Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.
Using Sandelowski and Barroso's Meta-Synthesis Method in Advancing Qualitative Evidence.
Ludvigsen, Mette S; Hall, Elisabeth O C; Meyer, Gabriele; Fegran, Liv; Aagaard, Hanne; Uhrenfeldt, Lisbeth
2016-02-01
The purpose of this article was to iteratively account for and discuss the handling of methodological challenges in two qualitative research syntheses concerning patients' experiences of hospital transition. We applied Sandelowski and Barroso's guidelines for synthesizing qualitative research, and to our knowledge, this is the first time researchers discuss their methodological steps. In the process, we identified a need for prolonged discussions to determine mutual understandings of the methodology. We discussed how to identify the appropriate qualitative research literature and how to best conduct exhaustive literature searches on our target phenomena. Another finding concerned our status as third-order interpreters of participants' experiences and what this meant for synthesizing the primary findings. Finally, we discussed whether our studies could be classified as metasummaries or metasyntheses. Although we have some concerns regarding the applicability of the methodology, we conclude that following Sandelowski and Barroso's guidelines contributed to valid syntheses of our studies. © The Author(s) 2015.
de Wit, Maarten; Kirwan, John R; Tugwell, Peter; Beaton, Dorcas; Boers, Maarten; Brooks, Peter; Collins, Sarah; Conaghan, Philip G; D'Agostino, Maria-Antonietta; Hofstetter, Cathie; Hughes, Rod; Leong, Amye; Lyddiatt, Ann; March, Lyn; May, James; Montie, Pamela; Richards, Pamela; Simon, Lee S; Singh, Jasvinder A; Strand, Vibeke; Voshaar, Marieke; Bingham, Clifton O; Gossec, Laure
2017-04-01
There is increasing interest in making patient participation an integral component of medical research. However, practical guidance on optimizing this engagement in healthcare is scarce. Since 2002, patient involvement has been one of the key features of the Outcome Measures in Rheumatology (OMERACT) international consensus effort. Based on a review of cumulative data from qualitative studies and internal surveys among OMERACT participants, we explored the potential benefits and challenges of involving patient research partners in conferences and working group activities. We supplemented our review with personal experiences and reflections regarding patient participation in the OMERACT process. We found that between 2002 and 2016, 67 patients have attended OMERACT conferences, of whom 28 had sustained involvement; many other patients contributed to OMERACT working groups. Their participation provided face validity to the OMERACT process and expanded the research agenda. Essential facilitators have been the financial commitment to guarantee sustainable involvement of patients at these conferences, procedures for recruitment, selection and support, and dedicated time allocated in the program for patient issues. Current challenges include the representativeness of the patient panel, risk of pseudo-professionalization, and disparity in patients' and researchers' perception of involvement. In conclusion, OMERACT has embedded long-term patient involvement in the consensus-building process on the measurement of core health outcomes. This integrative process continues to evolve iteratively. We believe that the practical points raised here can improve participatory research implementation.
Is there a need for a specific educational scholarship for using e-learning in medical education?
Sandars, John; Goh, Poh Sun
2016-10-01
We propose the need for a specific educational scholarship when using e-learning in medical education. Effective e-learning has additional factors that require specific critical attention, including the design and delivery of e-learning. An important aspect is the recognition that e-learning is a complex intervention, with several interconnecting components that have to be aligned. This alignment requires an essential iterative development process with usability testing. Effectiveness of e-learning in one context may not be fully realized in another context unless there is further consideration of applicability and scalability. We recommend a participatory approach for an educational scholarship for using e-learning in medical education, such as by action research or design-based research.
2010-02-24
electronic Schrodinger equation. In previous grant cycles, we implemented the NEO approach at the Hartree-Fock (NEO-HF), configuration interaction... electronic and nuclear molecular orbitals. The resulting electronic and nuclear Hartree-Fock-Roothaan equations are solved iteratively until self... directly into the standard Hartree-Fock-Roothaan equations, which are solved iteratively to self-consistency.
Electronic patient-reported data capture as a foundation of rapid learning cancer care.
Abernethy, Amy P; Ahmad, Asif; Zafar, S Yousuf; Wheeler, Jane L; Reese, Jennifer Barsky; Lyerly, H Kim
2010-06-01
"Rapid learning healthcare" presents a new infrastructure to support comparative effectiveness research. By leveraging heterogeneous datasets (eg, clinical, administrative, genomic, registry, and research), health information technology, and sophisticated iterative analyses, rapid learning healthcare provides a real-time framework in which clinical studies can evaluate the relative impact of therapeutic approaches on a diverse array of measures. This article describes an effort, at 1 academic medical center, to demonstrate what rapid learning healthcare might look like in operation. The article describes the process of developing and testing the components of this new model of integrated clinical/research function, with the pilot site being an academic oncology clinic and with electronic patient-reported outcomes (ePROs) being the foundational dataset. Steps included: feasibility study of the ePRO system; validation study of ePRO collection across 3 cancers; linking ePRO and other datasets; implementation; stakeholder alignment and buy in, and; demonstration through use cases. Two use cases are presented; participants were metastatic breast cancer (n = 65) and gastrointestinal cancer (n = 113) patients at 2 academic medical centers. (1) Patient-reported symptom data were collected with tablet computers; patients with breast and gastrointestinal cancer indicated high levels of sexual distress, which prompted multidisciplinary response, design of an intervention, and successful application for funding to study the intervention's impact. (2) The system evaluated the longitudinal impact of a psychosocial care program provided to patients with breast cancer. Participants used tablet computers to complete PRO surveys; data indicated significant impact on psychosocial outcomes, notably distress and despair, despite advanced disease. Results return to the clinic, allowing iterative update and evaluation. An ePRO-based rapid learning cancer clinic is feasible, providing real-time research-quality data to support comparative effectiveness research.
Waldron, Nicholas; Johnson, Claire E; Saul, Peter; Waldron, Heidi; Chong, Jeffrey C; Hill, Anne-Marie; Hayes, Barbara
2016-10-06
Advance cardiopulmonary resuscitation (CPR) decision-making and escalation of care discussions are variable in routine clinical practice. We aimed to explore physician barriers to advance CPR decision-making in an inpatient hospital setting and develop a pragmatic intervention to support clinicians to undertake and document routine advance care planning discussions. Two focus groups, which involved eight consultants and ten junior doctors, were conducted following a review of the current literature. A subsequent iterative consensus process developed two intervention elements: (i) an updated 'Goals of Patient Care' (GOPC) form and process; (ii) an education video and resources for teaching advance CPR decision-making and communication. A multidisciplinary group of health professionals and policy-makers with experience in systems development, education and research provided critical feedback. Three key themes emerged from the focus groups and the literature, which identified a structure for the intervention: (i) knowing what to say; (ii) knowing how to say it; (iii) wanting to say it. The themes informed the development of a video to provide education about advance CPR decision-making framework, improving communication and contextualising relevant clinical issues. Critical feedback assisted in refining the video and further guided development and evolution of a medical GOPC approach to discussing and recording medical treatment and advance care plans. Through an iterative process of consultation and review, video-based education and an expanded GOPC form and approach were developed to address physician and systemic barriers to advance CPR decision-making and documentation. Implementation and evaluation across hospital settings is required to examine utility and determine effect on quality of care.
NASA Astrophysics Data System (ADS)
Boski, Marcin; Paszke, Wojciech
2015-11-01
This paper deals with the problem of designing an iterative learning control (ILC) algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to limited frequency range design specifications. The new design procedure is introduced in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
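For readers unfamiliar with the trial-to-trial domain, the sketch below runs the classic first-order ILC update u_{k+1}(t) = u_k(t) + L e_k(t+1) on a scalar discrete plant; the plant and the learning gain are hand-picked assumptions, not the output of the paper's LMI design.

```python
import numpy as np

A, B, C = 0.9, 0.5, 1.0   # scalar plant: x+ = A x + B u, y = C x
N = 50                    # samples per trial
ref = np.ones(N)          # same reference repeated on every trial
L = 0.8                   # learning gain; |1 - L*C*B| < 1 gives trial convergence

u = np.zeros(N)
for trial in range(20):
    x, y = 0.0, np.zeros(N)
    for t in range(N):    # run one trial over the finite horizon
        y[t] = C * x
        x = A * x + B * u[t]
    e = ref - y
    u[:-1] = u[:-1] + L * e[1:]   # error at t+1 corrects the input at t
    print(f"trial {trial}: ||e|| = {np.linalg.norm(e):.4f}")
```

The printed error norm shrinks monotonically from trial to trial, which is exactly the monotonic trial-to-trial convergence that the LMI-based design certifies for the general case.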
Analysis of one dimension migration law from rainfall runoff on urban roof
NASA Astrophysics Data System (ADS)
Weiwei, Chen
2017-08-01
Research was conducted on the hydrology and water quality processes of roof runoff under natural rainfall conditions, and water samples were collected and analyzed. The pollutants included SS, COD and TN. Based on the mass balance principle, a one-dimensional migration model was built for rainfall runoff pollution on the surface. The governing equation was discretized with the finite difference method, and the resulting difference equation was solved by Newton iteration. The simulated pollutant concentration process was consistent with the measured values, with a Nash-Sutcliffe coefficient higher than 0.80. The model is practical and provides support for the effective utilization of urban rainfall resources, management of non-point source pollution, sponge city construction, and related applications.
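The Newton step used to advance an implicit finite-difference scheme can be sketched as follows; the toy wash-off law dC/dt = -k C^m and all constants are illustrative assumptions, since the paper's full migration equation is not reproduced in the abstract.

```python
def newton_step(c_old, dt, k=0.3, m=1.5, tol=1e-10, max_iter=50):
    """One implicit time step of dC/dt = -k C^m, solved by Newton iteration."""
    c = c_old                                    # initial guess: previous value
    for _ in range(max_iter):
        f = c - c_old + dt * k * c ** m          # residual of the implicit scheme
        df = 1.0 + dt * k * m * c ** (m - 1.0)   # scalar Jacobian df/dc
        c_new = c - f / df
        if abs(c_new - c) < tol:
            return c_new
        c = c_new
    return c

c = 12.0                                          # assumed initial concentration, mg/L
for step in range(5):
    c = newton_step(c, dt=0.5)
    print(f"t = {(step + 1) * 0.5:.1f} h, C = {c:.4f} mg/L")
```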
Compositional Verification of a Communication Protocol for a Remotely Operated Vehicle
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar A.
2009-01-01
This paper presents the specification and verification in the Prototype Verification System (PVS) of a protocol intended to facilitate communication in an experimental remotely operated vehicle used by NASA researchers. The protocol is defined as a stack-layered composition of simpler protocols. It can be seen as the vertical composition of protocol layers, where each layer performs input and output message processing, and the horizontal composition of different processes concurrently inhabiting the same layer, where each process satisfies a distinct requirement. It is formally proven that the protocol components satisfy certain delivery guarantees. Compositional techniques are used to prove that these guarantees also hold in the composed system. Although the protocol itself is not novel, the methodology employed in its verification extends existing techniques by automating the tedious and usually cumbersome part of the proof, thereby making the iterative design process of protocols feasible.
Process control strategy for ITER central solenoid operation
NASA Astrophysics Data System (ADS)
Maekawa, R.; Takami, S.; Iwamoto, A.; Chang, H.-S.; Forgeas, A.; Chalifour, M.
2016-12-01
ITER Central Solenoid (CS) pulse operation induces significant flow disturbances in the forced-flow supercritical helium (SHe) cooling circuit, which could primarily impact the operation of the cold circulator (SHe centrifugal pump) in the Auxiliary Cold Box (ACB). Numerical studies using Venecia®, SUPERMAGNET and 4C have identified reverse flow at the CS module inlet due to the substantial thermal energy deposition at the inner-most winding. To assess the reliable operation of ACB-CS (the dedicated ACB for CS), process analyses have been conducted with a dynamic process simulation model developed with the Cryogenic Process REal-time SimulaTor (C-PREST). To implement process control of the hydrodynamic instability, several strategies have been applied and their feasibility evaluated. The paper discusses control strategies to protect the centrifugal cold circulator/compressor operation and their impact on CS cooling.
Improving performances of suboptimal greedy iterative biclustering heuristics via localization.
Erten, Cesim; Sözdinler, Melih
2010-10-15
Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that REAL, the random extraction method based on localization, performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
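As a rough interpretation of the localization idea, the sketch below reorders rows and columns by a spectral (Fiedler-vector) ordering of their similarity graphs so that correlated entries cluster in local neighborhoods. This is one plausible graph-theoretic reading, not the authors' exact algorithm.

```python
import numpy as np

def spectral_order(sim):
    """Order items by the Fiedler vector of the similarity graph Laplacian."""
    lap = np.diag(sim.sum(axis=1)) - sim
    _, vecs = np.linalg.eigh(lap)
    return np.argsort(vecs[:, 1])

def localize(X):
    rows = spectral_order(np.corrcoef(X) + 1.0)     # shift to non-negative weights
    cols = spectral_order(np.corrcoef(X.T) + 1.0)
    return X[np.ix_(rows, cols)]

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 20))
X[5:12, 3:9] += 3.0          # planted correlated submatrix, scattered after shuffling
X_local = localize(X)        # correlated entries now sit in a local block
```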
USDA-ARS's Scientific Manuscript database
Under the traditional “loading-dock” model of research, stakeholders are involved in determining priorities prior to research activities and then recieve one-way communication about findings after research is completed. This approach lacks iterative engagement of stakeholders during the research pro...
Burning plasma regime for the Fusion-Fission Research Facility
NASA Astrophysics Data System (ADS)
Zakharov, Leonid E.
2010-11-01
The basic aspects of the burning plasma regimes of the Fusion-Fission Research Facility (FFRF, R/a=4/1 m/m, Ipl=5 MA, Btor=4-6 T, P^DT=50-100 MW, P^fission=80-4000 MW, 1 m thick blanket), which is suggested as the next-step device for the Chinese fusion program, are presented. The mission of FFRF is to advance magnetic fusion to the level of a stationary neutron source and to create a technical, scientific, and technological basis for the utilization of high-energy fusion neutrons for the needs of nuclear energy and technology. FFRF will rely as much as possible on the ITER design. Thus, the magnetic system, especially the TFC, will take advantage of ITER experience. The TFC will use the same superconductor as ITER. The plasma regimes will represent an extension of the stationary plasma regimes on the HT-7 and EAST tokamaks at ASIPP. Both inductive discharges and stationary non-inductive Lower Hybrid Current Drive (LHCD) will be possible. FFRF strongly relies on new Lithium Wall Fusion (LiWF) plasma regimes, the development of which will be done on NSTX, HT-7 and EAST in parallel with the design work. This regime will eliminate a number of uncertainties still remaining unresolved in the ITER project. Well-controlled, hours-long inductive current drive operation at P^DT=50-100 MW is predicted.
NASA Astrophysics Data System (ADS)
Bittner-Rohrhofer, K.; Humer, K.; Weber, H. W.
The windings of the superconducting magnet coils for the ITER-FEAT fusion device are affected by high mechanical stresses at cryogenic temperatures and by a radiation environment, which impose certain constraints especially on the insulating materials. A glass fiber reinforced plastic (GFRP) laminate, which consists of Kapton/R-glass-fiber reinforcement tapes, vacuum-impregnated in a DGEBA epoxy system, was used for the European toroidal field model coil turn insulation of ITER. In order to assess its mechanical properties under the actual operating conditions of ITER-FEAT, cryogenic (77 K) static tensile tests and tension-tension fatigue measurements were done before and after irradiation to a fast neutron fluence of 1×10^22 m^-2 (E > 0.1 MeV), i.e. the ITER-FEAT design fluence level. We find that the mechanical strength and the fracture behavior of this GFRP are strongly influenced by the winding direction of the tape and by the radiation induced delamination process. In addition, the composite swells by 3%, forming bubbles inside the laminate, and loses weight (1.4%) at the design fluence.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement in the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images of similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
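A compact MLEM loop with a plug-in stopping check, assuming a known system matrix and Poisson data; the relative-change-of-log-likelihood test below is a generic surrogate marking where the paper's statistical criterion would go, not the published rule itself.

```python
import numpy as np

def mlem(A, y, n_max=200, tol=1e-6):
    """MLEM iterations with a heuristic stopping test on the log-likelihood."""
    x = np.ones(A.shape[1])                    # flat initial image
    sens = A.sum(axis=0)                       # sensitivity image A^T 1
    prev_ll = -np.inf
    for it in range(n_max):
        proj = A @ x                           # forward projection
        x = x * (A.T @ (y / np.maximum(proj, 1e-12))) / sens
        ll = np.sum(y * np.log(np.maximum(A @ x, 1e-12)) - A @ x)
        if ll - prev_ll < tol * abs(prev_ll):  # stand-in for the stopping rule
            return x, it
        prev_ll = ll
    return x, n_max

rng = np.random.default_rng(0)
A = rng.random((64, 16))                       # toy system matrix
truth = rng.random(16) * 10
y = rng.poisson(A @ truth).astype(float)
img, stopped_at = mlem(A, y)
print("stopped after", stopped_at, "iterations")
```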
2014-01-01
Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to address. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find optimal decoupling schemes. First, the tearing approach and inner iteration method are analyzed for solving coupled task sets. Second, a hybrid iteration model combining these two techniques is set up. Third, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's feasibility and effectiveness. PMID:25431584
Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks
Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng
2017-01-01
High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is used to find potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Moreover, its BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
Self-consistent hybrid functionals for solids: a fully-automated implementation
NASA Astrophysics Data System (ADS)
Erba, A.
2017-08-01
A fully-automated algorithm for the determination of the system-specific optimal fraction of exact exchange in self-consistent hybrid functionals of density-functional theory is illustrated, as implemented in the public Crystal program. The exchange fraction of this new class of functionals is self-consistently updated in proportion to the inverse of the dielectric response of the system within an iterative procedure (Skone et al 2014 Phys. Rev. B 89, 195112). Each iteration of the present scheme, in turn, implies convergence of a self-consistent-field (SCF) and a coupled-perturbed-Hartree-Fock/Kohn-Sham (CPHF/KS) procedure. The present implementation, besides improving the user-friendliness of self-consistent hybrids, exploits the unperturbed and electric-field-perturbed density matrices from previous iterations as guesses for subsequent SCF and CPHF/KS iterations, which is documented to reduce the overall computational cost of the whole process by a factor of 2.
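Stripped of the expensive SCF and CPHF/KS machinery, the outer loop is a fixed-point iteration on the exchange fraction; the dielectric model below is a made-up stand-in for the ab initio dielectric response, so only the shape of the loop is meaningful.

```python
def dielectric(alpha):
    """Assumed monotone toy model: more exact exchange -> weaker screening."""
    return 2.0 + 3.0 / (1.0 + 2.0 * alpha)

alpha = 0.25                         # common starting guess (PBE0-like fraction)
for it in range(50):
    # In the real scheme this step hides a full SCF + CPHF/KS calculation.
    alpha_new = 1.0 / dielectric(alpha)
    if abs(alpha_new - alpha) < 1e-8:
        break
    alpha = alpha_new
print(f"converged alpha = {alpha:.6f} after {it} iterations")
```

Reusing the previous iteration's density matrices, as the Crystal implementation does, does not change this outer loop; it only makes each inner evaluation cheaper.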
Choosing order of operations to accelerate strip structure analysis in parameter range
NASA Astrophysics Data System (ADS)
Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.
2018-05-01
The paper considers the use of iterative methods for solving the sequence of linear algebraic systems obtained in the quasistatic analysis of strip structures with the method of moments. Through the analysis of 4 strip structures, the authors show that additional acceleration (up to 2.21 times) of the iterative process can be obtained when solving linear systems repeatedly by choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate the computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is quite simple and universal, and could be used not only for strip structure analysis but also for a wide range of computational problems.
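The preconditioner-reuse part of this idea can be sketched with SciPy: factor an incomplete LU once for the first system of a parameter sweep and reuse it to precondition GMRES on the perturbed systems. The matrix, the perturbation and all sizes are illustrative, not the paper's strip-structure matrices.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
base = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
ilu = spla.spilu(base)                        # factor once for the whole sweep
M = spla.LinearOperator((n, n), ilu.solve)    # reuse as the preconditioner

for k, eps in enumerate(np.linspace(0.0, 0.2, 5)):
    A = (base + eps * sp.eye(n, format="csc")).tocsc()   # perturbed system
    b = np.ones(n)
    x, info = spla.gmres(A, b, M=M)           # same M at every sweep point
    print(f"sweep point {k}: info={info}, "
          f"residual={np.linalg.norm(b - A @ x):.2e}")
```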
Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.
Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo
2017-03-03
Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens owing to the availability of the water-window region. In particular, projection-type microscopy has the advantages of a wide viewing area, an easy zooming function and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, the correction was found not to be effective for all images, especially images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. In the study, images of a model specimen with known morphology were used as a substitute for the chromosome images, one of the targets of our microscope. Under the condition that artificial noise was distributed randomly over the images, we introduced two different parameters to evaluate noise effects according to each situation in which the iteration procedure was unsuccessful, and proposed an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that applying the new simulation and noise evaluation method is useful for image processing where background noise cannot be ignored relative to the specimen images.
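A schematic of the correction idea, assuming angular-spectrum Fresnel propagation and a Gerchberg-Saxton-style positivity constraint; wavelength, distance and grid parameters are placeholders, and the paper's actual procedure differs in detail.

```python
import numpy as np

def fresnel(field, dz, wavelength, dx):
    """Angular-spectrum Fresnel propagation over distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def correct_blur(blurred, dz, wavelength, dx, n_iter=20):
    """Iterate Fresnel / inverse-Fresnel transforms with a positivity constraint."""
    est = blurred.astype(complex)
    for _ in range(n_iter):
        obj = fresnel(est, -dz, wavelength, dx)             # back-propagate
        obj = np.clip(obj.real, 0.0, None)                  # object positivity
        est = fresnel(obj, +dz, wavelength, dx)             # forward again
        est = np.abs(blurred) * np.exp(1j * np.angle(est))  # keep measured magnitude
    return np.clip(fresnel(est, -dz, wavelength, dx).real, 0.0, None)
```

It is exactly this loop that background noise destabilizes: the positivity and magnitude constraints start fitting noise instead of the specimen, which is why the study's upper noise limit matters.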
Kannry, Joseph; Mukani, Sonia; Myers, Kristin
2006-01-01
The experience of Mount Sinai Hospital is representative of the challenges and problems facing large academic medical centers in selecting an ambulatory EMR. The facility successfully revived a stalled process in a challenging financial climate, using a framework of science and rigorous investigation. The process incorporated several innovations: 1) There was a thorough review of medical informatics literature to develop a mission statement, determine practical objectives and guide the demonstration process; 2) The process involved rigorous investigation of vendor statements, industry statements and other institution's views of vendors; 3) The initiative focused on user-centric selection, and the survey instrument was scientifically and specifically designed to assess user feedback; 4) There was scientific analysis of validated findings and survey results at all steering meetings; 5) The process included an assessment of vendors' ability to support research by identifying funded and published research; 6) Selection involved meticulous total cost of ownership analysis to assess and compare real costs of implementing a vendor solution; and finally, 7) There were iterative meetings with stakeholders, executives and users to understand needs, address concerns and communicate the vision.
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Bennett, R. M.
1982-01-01
Since the aerodynamic theory is nonlinear, the method requires the coupling of two iterative processes - an aerodynamic analysis and a structural analysis. A full potential analysis code, FLO22, is combined with a linear structural analysis to yield aerodynamic load distributions on and deflections of elastic wings. This method was used to analyze an aeroelastically-scaled wind tunnel model of a proposed executive-jet transport wing and an aeroelastic research wing. The results are compared with the corresponding rigid-wing analyses, and some effects of elasticity on the aerodynamic loading are noted.
A transatlantic perspective on 20 emerging issues in biological engineering.
Wintle, Bonnie C; Boehm, Christian R; Rhodes, Catherine; Molloy, Jennifer C; Millett, Piers; Adam, Laura; Breitling, Rainer; Carlson, Rob; Casagrande, Rocco; Dando, Malcolm; Doubleday, Robert; Drexler, Eric; Edwards, Brett; Ellis, Tom; Evans, Nicholas G; Hammond, Richard; Haseloff, Jim; Kahl, Linda; Kuiken, Todd; Lichman, Benjamin R; Matthewman, Colette A; Napier, Johnathan A; ÓhÉigeartaigh, Seán S; Patron, Nicola J; Perello, Edward; Shapira, Philip; Tait, Joyce; Takano, Eriko; Sutherland, William J
2017-11-14
Advances in biological engineering are likely to have substantial impacts on global society. To explore these potential impacts we ran a horizon scanning exercise to capture a range of perspectives on the opportunities and risks presented by biological engineering. We first identified 70 potential issues, and then used an iterative process to prioritise 20 issues that we considered to be emerging, to have potential global impact, and to be relatively unknown outside the field of biological engineering. The issues identified may be of interest to researchers, businesses and policy makers in sectors such as health, energy, agriculture and the environment.
Designing an intuitive web application for drug discovery scientists.
Karamanis, Nikiforos; Pignatelli, Miguel; Carvalho-Silva, Denise; Rowland, Francis; Cham, Jennifer A; Dunham, Ian
2018-06-01
We discuss how we designed the Open Targets Platform (www.targetvalidation.org), an intuitive application for bench scientists working in early drug discovery. To meet the needs of our users, we applied lean user experience (UX) design methods: we started engaging with users very early and carried out research, design and evaluation activities within an iterative development process. We also emphasize the collaborative nature of applying lean UX design, which we believe is a foundation for success in this and many other scientific projects. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Steele Gray, Carolyn; Khan, Anum Irfan; Kuluski, Kerry; McKillop, Ian; Sharpe, Sarah; Bierman, Arlene S; Lyons, Renee F; Cott, Cheryl
2016-02-18
Many mHealth technologies do not meet the needs of patients with complex chronic disease and disabilities (CCDDs) who are among the highest users of health systems worldwide. Furthermore, many of the development methodologies used in the creation of mHealth and eHealth technologies lack the ability to embrace users with CCDD in the specification process. This paper describes how we adopted and modified development techniques to create the electronic Patient-Reported Outcomes (ePRO) tool, a patient-centered mHealth solution to help improve primary health care for patients experiencing CCDD. This paper describes the design and development approach, specifically the process of incorporating qualitative research methods into user-centered design approaches to create the ePRO tool. Key lessons learned are offered as a guide for other eHealth and mHealth research and technology developers working with complex patient populations and their primary health care providers. Guided by user-centered design principles, interpretive descriptive qualitative research methods were adopted to capture user experiences through interviews and working groups. Consistent with interpretive descriptive methods, an iterative analysis technique was used to generate findings, which were then organized in relation to the tool design and function to help systematically inform modifications to the tool. User feedback captured and analyzed through this method was used to challenge the design and inform the iterative development of the tool. Interviews with primary health care providers (n=7) and content experts (n=6), and four focus groups with patients and carers (n=14), along with a PICK analysis (Possible, Implementable, (to be) Challenged, (to be) Killed), guided development of the first prototype. The initial prototype was presented in three design working groups with patients/carers (n=5), providers (n=6), and experts (n=5). Working group findings were broken down into categories of what works and what does not work to inform modifications to the prototype. This latter phase led to a major shift in the purpose and design of the prototype, validating the importance of using iterative codesign processes. Interpretive descriptive methods allow for an understanding of the user experiences of patients with CCDD, their carers, and primary care providers. Qualitative methods help to capture and interpret user needs, and identify contextual barriers and enablers to tool adoption, informing a redesign to better suit the needs of this diverse user group. This study illustrates the value of adopting interpretive descriptive methods into user-centered mHealth tool design and can also serve to inform the design of other eHealth technologies. Our approach is particularly useful in requirements determination when developing for a complex user group and their health care providers.
Cashin, Andrew; Gallagher, Hilary; Newman, Claire; Hughes, Mark
2012-08-01
The next iteration of the Diagnostic and Statistical Manual of Mental Disorders is due for release in May 2013. The current diagnostic criteria for autism are based on a behavioral triad of impairment, which has been helpful for diagnosis and identifying the need for intervention, but is not useful with regard to developing interventions. Revised diagnostic criteria are needed to better inform research and therapeutic intervention. This article examines the research underpinning the behavioral triad of impairment to consider alternative explanations and a more useful framing for diagnosis and intervention. Contemporary research and literature on autism were used in this study. It is proposed that the cognitive processing triad of impaired abstraction, impaired theory of mind, and impaired linguistic processing become the triad of impairment for autism in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders. These are investigable at the diagnostic level and can usefully inform intervention. Further, in addressing the debate on whether restrictive and repetitive behavior should remain central to diagnosis or be replaced by a deficit in imagination, the authors argue that both behavioral manifestations are underpinned by impaired abstraction. © 2012 Wiley Periodicals, Inc.
Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2017-05-01
Building on recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for the extraction of repetitive transients, revealing fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate the posterior parameters of a wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, the continuous wavelet transform, the discrete wavelet transform, wavelet packets, multiwavelets, the empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given the artificial observations, the prior wavelet parameter distribution can be updated a posteriori over iterations using dynamic Bayesian inference. More importantly, the proposed methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.
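To make the iterative scheme concrete, a minimal sketch follows (Python; a grid of candidate band-pass centre frequencies stands in for the wavelet parameters, and envelope kurtosis serves as the artificial-observation metric; all names and values are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_kurtosis(x, fs, fc, bw):
    """Kurtosis of the envelope of x band-passed around fc -- one possible
    'artificial observation' quantifying repetitive transients."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (f > fc - bw / 2) & (f < fc + bw / 2)
    env = np.abs(hilbert(np.fft.irfft(X * band, n=len(x))))
    env = env - env.mean()
    return np.mean(env**4) / (np.mean(env**2) ** 2 + 1e-12)

def dynamic_bayes_wavelet(x, fs, n_iter=5):
    """Iteratively sharpen a distribution over candidate centre frequencies
    using kurtosis-based artificial observations (simplified illustration)."""
    fcs = np.linspace(fs / 20, 0.45 * fs, 50)  # candidate centre frequencies
    bw = fs / 20                               # fixed bandwidth for simplicity
    post = np.ones(len(fcs)) / len(fcs)        # flat prior; a kurtogram could seed it
    for _ in range(n_iter):
        k = np.array([envelope_kurtosis(x, fs, fc, bw) for fc in fcs])
        post *= np.exp(k - k.max())            # Bayes update with a soft likelihood
        post /= post.sum()
    return fcs[np.argmax(post)], post
```

In this toy version the posterior simply concentrates on the band whose envelope is most kurtotic; the paper's method additionally evolves the parameters dynamically across iterations.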
Clay, Zanna; Pople, Sally; Hood, Bruce; Kita, Sotaro
2014-08-01
Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system. © The Author(s) 2014.
A Burning Plasma Experiment: the role of international collaboration
NASA Astrophysics Data System (ADS)
Prager, Stewart
2003-04-01
The world effort to develop fusion energy is at the threshold of a new stage in its research: the investigation of burning plasmas. A burning plasma is self-heated. The 100 million degree temperature of the plasma is maintained by the heat generated by the fusion reactions themselves, as occurs in burning stars. The fusion-generated alpha particles produce new physical phenomena that are strongly coupled together as a nonlinear complex system, posing a major plasma physics challenge. Two attractive options are being considered by the US fusion community as burning plasma facilities: the international ITER experiment and the US-based FIRE experiment. ITER (the International Thermonuclear Experimental Reactor) is a large, power-plant scale facility. It was conceived and designed by a partnership of the European Union, Japan, the Soviet Union, and the United States. At the completion of the first engineering design in 1998, the US discontinued its participation. FIRE (the Fusion Ignition Research Experiment) is a smaller, domestic facility that is at an advanced pre-conceptual design stage. Each facility has different scientific, programmatic and political implications. Selecting the optimal path for burning plasma science is itself a challenge. Recently, the Fusion Energy Sciences Advisory Committee recommended a dual path strategy in which the US seek to rejoin ITER, but be prepared to move forward with FIRE if the ITER negotiations do not reach fruition by July, 2004. Either the ITER or FIRE experiment would reveal the behavior of burning plasmas, generate large amounts of fusion power, and be a huge step in establishing the potential of fusion energy to contribute to the world's energy security.
Plasma-surface interaction in the context of ITER.
Kleyn, A W; Lopes Cardozo, N J; Samm, U
2006-04-21
The decreasing availability of energy and concern about climate change necessitate the development of novel sustainable energy sources. Fusion energy is such a source. Although it will take several decades to develop it into routinely operated power sources, the ultimate potential of fusion energy is very high and badly needed. A major step forward in the development of fusion energy is the decision to construct the experimental test reactor ITER. ITER will stimulate research in many areas of science. This article serves as an introduction to some of those areas. In particular, we discuss research opportunities in the context of plasma-surface interactions. The fusion plasma, with a typical temperature of 10 keV, has to be brought into contact with a physical wall in order to remove the helium produced and drain the excess energy in the fusion plasma. The fusion plasma is far too hot to be brought into direct contact with a physical wall. It would degrade the wall and the debris from the wall would extinguish the plasma. Therefore, schemes are developed to cool down the plasma locally before it impacts on a physical surface. The resulting plasma-surface interaction in ITER is facing several challenges including surface erosion, material redeposition and tritium retention. In this article we introduce how the plasma-surface interaction relevant for ITER can be studied in small scale experiments. The various requirements for such experiments are introduced and examples of present and future experiments will be given. The emphasis in this article will be on the experimental studies of plasma-surface interactions.
Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.
Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh
2017-07-03
Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predict other unexplored connections.
Mitigation of crosstalk based on CSO-ICA in free space orbital angular momentum multiplexing systems
NASA Astrophysics Data System (ADS)
Xing, Dengke; Liu, Jianfei; Zeng, Xiangye; Lu, Jia; Yi, Ziyao
2018-09-01
Orbital angular momentum (OAM) multiplexing has attracted considerable attention and research in recent years because of its great spectral efficiency, and many OAM systems over free-space channels have been demonstrated. However, due to atmospheric turbulence, the power of OAM beams diffuses into beams with neighboring topological charges, and inter-mode crosstalk emerges in these systems, rendering the system unusable in severe cases. In this paper, we introduce independent component analysis (ICA), a popular method of signal separation, to mitigate inter-mode crosstalk effects; furthermore, to address the fixed iteration speed of the traditional ICA algorithm, we propose a joint algorithm, CSO-ICA, which improves the process of solving for the separation matrix by exploiting the fast convergence rate and high convergence precision of chicken swarm optimization (CSO). In CSO-ICA, the optimal separation matrix is obtained by adjusting the step size according to the previous iteration. Simulation results indicate that the proposed algorithm performs well in inter-mode crosstalk mitigation: the optical signal-to-noise ratio (OSNR) requirement of the received signals (OAM+2, OAM+4, OAM+6, OAM+8) is reduced by about 3.2 dB at a bit error ratio (BER) of 3.8 × 10⁻³. Meanwhile, convergence is much faster than with the traditional ICA algorithm, reducing the number of iterations by about an order of magnitude.
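For context, the baseline ICA demixing step that CSO-ICA accelerates can be sketched as follows (Python; the four-mode signals and crosstalk mixing matrix are synthetic stand-ins, and scikit-learn's FastICA is used in place of the paper's CSO-tuned iteration):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_sym = 4000
S = np.sign(rng.standard_normal((n_sym, 4)))       # BPSK-like symbols on 4 OAM modes
A = np.eye(4) + 0.3 * rng.standard_normal((4, 4))  # turbulence-induced crosstalk mixing
X = S @ A.T                                        # received, crosstalk-corrupted signals

ica = FastICA(n_components=4, random_state=0)      # fixed-step baseline the paper improves
S_hat = ica.fit_transform(X)                       # separated channels (up to order/sign)
```

ICA recovers the sources only up to permutation and sign, so a practical receiver would resolve that ambiguity with pilot or training symbols.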
Using the Tritium Plasma Experiment to evaluate ITER PFC safety
NASA Astrophysics Data System (ADS)
Longhurst, Glen R.; Anderl, Robert A.; Bartlit, John R.; Causey, Rion A.; Haines, John R.
The Tritium Plasma Experiment was assembled at Sandia National Laboratories, Livermore to investigate interactions between dense plasmas at low energies and plasma-facing component materials. This apparatus has the unique capability of replicating plasma conditions in a tokamak divertor with particle flux densities of 2 × 10¹⁹ ions/(cm²·s) and a plasma temperature of about 15 eV using a plasma that includes tritium. With the closure of the Tritium Research Laboratory at Livermore, the experiment was moved to the Tritium Systems Test Assembly facility at Los Alamos National Laboratory. An experimental program has been initiated there using the Tritium Plasma Experiment to examine safety issues related to tritium in plasma-facing components, particularly the ITER divertor. Those issues include tritium retention and release characteristics, tritium permeation rates and transit times to coolant streams, surface modification and erosion by the plasma, the effects of thermal loads and cycling, and particulate production. A considerable lack of data exists in these areas for many of the materials, especially beryllium, being considered for use in ITER. Not only will basic material behavior with respect to safety issues in the divertor environment be examined, but innovative techniques for optimizing performance with respect to tritium safety by material modification and process control will be investigated. Supplementary experiments will be carried out at the Idaho National Engineering Laboratory and Sandia National Laboratory to expand and clarify results obtained on the Tritium Plasma Experiment.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
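The preconditioned conjugate gradient iteration at the core of such solvers can be sketched as follows (Python; a diagonal Jacobi preconditioner is assumed purely for illustration, not the paper's preconditioner or its three-step matrix-vector reordering):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for SPD A, with the preconditioner
    given as the elementwise inverse of diag(A) (Jacobi preconditioning)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * b_norm:  # relative residual stopping rule
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# usage: a small SPD system standing in for mixed model equations
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, 1.0 / np.diag(A))
```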
NASA Astrophysics Data System (ADS)
Maheshwari, A.; Pathak, H. A.; Mehta, B. K.; Phull, G. S.; Laad, R.; Shaikh, M. S.; George, S.; Joshi, K.; Khan, Z.
2017-04-01
ITER Vacuum Vessel is a torus-shaped, double wall structure. The space between the double walls of the VV is filled with In-Wall Shielding Blocks (IWS) and water. The main purpose of IWS is to provide neutron shielding during ITER plasma operation and to reduce ripple of the Toroidal Magnetic Field (TF). Although the In-Wall Shield Blocks (IWS) will be submerged in water between the walls of the ITER Vacuum Vessel (VV), the Outgassing Rate (OGR) of IWS materials plays a significant role in leak detection of the Vacuum Vessel of ITER. The thermal outgassing rate of a material critically depends on the surface roughness of the material. During the leak detection process using an RGA-equipped leak detector and the tracer gas helium, there will be a spill-over of mass 3 and mass 2 to mass 4, which creates a background reading. The helium background will have a contribution from hydrogen too, so it is necessary to ensure a low OGR of hydrogen. To achieve an effective leak test it is required to obtain a background below 1 × 10⁻⁸ mbar·l·s⁻¹, and hence the maximum outgassing rate of IWS materials should comply with the maximum outgassing rate required for hydrogen, i.e. 1 × 10⁻¹⁰ mbar·l·s⁻¹·cm⁻² at room temperature. As IWS materials are special materials developed for the ITER project, it is necessary to ensure the compliance of the outgassing rate with the requirement. Gases may diffuse into the material at the time of production. So, to validate the production process of the materials as well as the manufacturing of the final product from this material, three coupons of each IWS material have been manufactured with the same technique which is being used in the manufacturing of IWS blocks. Manufacturing records of these coupons have been approved by the ITER-IO (International Organization). Outgassing rates of these coupons have been measured at room temperature and found within the acceptable limit to obtain the required helium background. On the basis of these measurements, test reports have been generated and approved by the IO. This paper describes the preparation, characteristics and cleaning procedure of the samples, a description of the measurement system, and the outgassing rate measurements of these samples to ensure accurate leak detection.
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six-case registration accuracy study, iterative intensity matching Demons reduced the mean target registration error (TRE) to (2.5 ± 2.8) mm compared to (3.5 ± 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
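A simplified sketch of alternating an intensity correction with Demons updates is given below (Python with SimpleITK; histogram matching over the current overlap is used as a stand-in for the paper's per-iteration, tissue-specific intensity estimate, so this approximates the idea rather than reproducing the authors' algorithm):

```python
import SimpleITK as sitk

def iterative_intensity_demons(fixed, moving, outer_iters=5):
    """Alternate a CBCT-to-CT intensity re-estimate (here: histogram matching
    of the currently warped moving image) with short Demons runs, so the
    intensity map and the deformation field are refined together."""
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(20)              # short inner Demons runs
    field = None
    warped = moving
    for _ in range(outer_iters):
        # re-estimate the intensity relationship from currently overlapping voxels
        matched = sitk.HistogramMatching(warped, fixed,
                                         numberOfHistogramLevels=1024,
                                         numberOfMatchPoints=7)
        field = (demons.Execute(fixed, matched) if field is None
                 else demons.Execute(fixed, matched, field))
        warped = sitk.Warp(moving, field)         # apply the running deformation
    return field
```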
ECOMAT INC. BIOLOGICAL DENITRIFICATION PROCESS, ITER
EcoMat, Inc. of Hayward, California (EcoMat) has developed an ex situ anoxic biofilter biodenitrification (BDN) process. The process uses specific biocarriers and bacteria to treat nitrate-contaminated water and employs a patented reactor that retains biocarrier within the syste...
NASA Astrophysics Data System (ADS)
Loarte, A.; Huijsmans, G.; Futatani, S.; Baylor, L. R.; Evans, T. E.; Orlov, D. M.; Schmitz, O.; Becoulet, M.; Cahyna, P.; Gribov, Y.; Kavin, A.; Sashala Naik, A.; Campbell, D. J.; Casper, T.; Daly, E.; Frerichs, H.; Kischner, A.; Laengner, R.; Lisgo, S.; Pitts, R. A.; Saibene, G.; Wingen, A.
2014-03-01
Progress in the definition of the requirements for edge localized mode (ELM) control and the application of ELM control methods both for high fusion performance DT operation and non-active low-current operation in ITER is described. Evaluation of the power fluxes for low plasma current H-modes in ITER shows that uncontrolled ELMs will not lead to damage to the tungsten (W) divertor target, unlike for high-current H-modes in which divertor damage by uncontrolled ELMs is expected. Despite the lack of divertor damage at lower currents, ELM control is found to be required in ITER under these conditions to prevent an excessive contamination of the plasma by W, which could eventually lead to an increased disruptivity. Modelling with the non-linear MHD code JOREK of the physics processes determining the flow of energy from the confined plasma onto the plasma-facing components during ELMs at the ITER scale shows that the relative contribution of conductive and convective losses is intrinsically linked to the magnitude of the ELM energy loss. Modelling of the triggering of ELMs by pellet injection for DIII-D and ITER has identified the minimum pellet size required to trigger ELMs and, from this, the required fuel throughput for the application of this technique to ITER is evaluated and shown to be compatible with the installed fuelling and tritium re-processing capabilities in ITER. The evaluation of the capabilities of the ELM control coil system in ITER for ELM suppression is carried out (in the vacuum approximation) and found to have a factor of ~2 margin in terms of coil current to achieve its design criterion, although such a margin could be substantially reduced when plasma shielding effects are taken into account. The consequences for the spatial distribution of the power fluxes at the divertor of ELM control by three-dimensional (3D) fields are evaluated and found to lead to substantial toroidal asymmetries in zones of the divertor target away from the separatrix. Therefore, specifications for the rotation of the 3D perturbation applied for ELM control in order to avoid excessive localized erosion of the ITER divertor target are derived. It is shown that a rotation frequency in excess of 1 Hz for the whole toroidally asymmetric divertor power flux pattern is required (corresponding to n Hz frequency in the variation of currents in the coils, where n is the toroidal symmetry of the perturbation applied) in order to avoid unacceptable thermal cycling of the divertor target for the highest power fluxes and worst toroidal power flux asymmetries expected. The possible use of the in-vessel vertical stability coils for ELM control as a back-up to the main ELM control systems in ITER is described and the feasibility of its application to control ELMs in low plasma current H-modes, foreseen for initial ITER operation, is evaluated and found to be viable for plasma currents up to 5-10 MA depending on modelling assumptions.
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software are now available and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
Cold Test and Performance Evaluation of Prototype Cryoline-X
NASA Astrophysics Data System (ADS)
Shah, N.; Choukekar, K.; Kapoor, H.; Muralidhara, S.; Garg, A.; Kumar, U.; Jadon, M.; Dash, B.; Bhattachrya, R.; Badgujar, S.; Billot, V.; Bravais, P.; Cadeau, P.
2017-12-01
The multi-process pipe vacuum jacketed cryolines for the ITER project are probably the world's most complex cryolines in terms of layout, load cases, quality, safety and regulatory requirements. As a risk mitigation plan, design, manufacturing and testing of a prototype cryoline (PTCL) was planned before the approval of the final design of the ITER cryolines. The 29-meter-long PTCL consists of 6 process pipes encased by a thermal shield inside a DN 600 outer vacuum jacket and carries cold helium at 4.5 K and 80 K. The global heat load limit was defined as 1.2 W/m at 4.5 K and 4.5 W/m at 80 K. The PTCL-X (PTCL for Group-X cryolines) was specified in detail by ITER-India and designed as well as manufactured by Air Liquide. PTCL-X was installed and tested at cryogenic temperature at the ITER-India Cryogenic Laboratory in 2016. The heat load at 4.5 K and 80 K, estimated using the enthalpy difference method, was found to be approximately 0.8 W/m at 4.5 K and 4.2 W/m at 80 K, well within the defined limits. The thermal shield temperature profile was also found to be satisfactory. This paper summarizes the cold test results of PTCL-X.
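The enthalpy difference method reduces to a short calculation once flow and state points are measured; a sketch follows (Python; the mass flow and enthalpy values are invented to reproduce the reported order of magnitude, only the 29 m length comes from the paper):

```python
# heat load per unit length from the enthalpy difference method
m_dot = 2.0e-3                 # kg/s helium mass flow (hypothetical)
h_in, h_out = 11.0e3, 23.0e3   # J/kg enthalpy at line inlet/outlet (hypothetical)
length = 29.0                  # m, PTCL length from the paper

q_per_m = m_dot * (h_out - h_in) / length
print(f"{q_per_m:.2f} W/m")    # ~0.83 W/m, near the reported 0.8 W/m at 4.5 K
```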
Sharing Research Models: Using Software Engineering Practices for Facilitation
Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.
2011-01-01
Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems’ behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations—such as nonintuitive user interface features and data input specifications—may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices— the iterative software development process, object-oriented methodology, and Unified Modeling Language—and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780
How good are the Garvey-Kelson predictions of nuclear masses?
NASA Astrophysics Data System (ADS)
Morales, Irving O.; López Vieyra, J. C.; Hirsch, J. G.; Frank, A.
2009-09-01
The Garvey-Kelson relations are used in an iterative process to predict nuclear masses in the neighborhood of nuclei with measured masses. Average errors in the predicted masses for the first three iteration shells are smaller than those obtained with the best nuclear mass models. Their quality is comparable with the Audi-Wapstra extrapolations, offering a simple and reproducible procedure for short-range mass predictions. A systematic study of the way the error grows as a function of the iteration and of the distance to the region of known masses shows that a correlation exists between the error and the residual neutron-proton interaction, produced mainly by the implicit assumption that this interaction varies smoothly along the nuclear landscape.
Review of the ITER diagnostics suite for erosion, deposition, dust and tritium measurements
NASA Astrophysics Data System (ADS)
Reichle, R.; Andrew, P.; Bates, P.; Bede, O.; Casal, N.; Choi, C. H.; Barnsley, R.; Damiani, C.; Bertalot, L.; Dubus, G.; Ferreol, J.; Jagannathan, G.; Kocan, M.; Leipold, F.; Lisgo, S. W.; Martin, V.; Palmer, J.; Pearce, R.; Philipps, V.; Pitts, R. A.; Pampin, R.; Passedat, G.; Puiu, A.; Suarez, A.; Shigin, P.; Shu, W.; Vayakis, G.; Veshchev, E.; Walsh, M.
2015-08-01
Dust and tritium inventories in the vacuum vessel have upper limits in ITER that are set by nuclear safety requirements. Erosion, migration and re-deposition of wall material, together with fuel co-deposition, will be largely responsible for these inventories. The diagnostic suite required to monitor these processes, along with the set of corresponding measurement requirements, is currently under review given the recent decision by the ITER Organization to eliminate the first carbon/tungsten (C/W) divertor and begin operations with a full-W variant (Pitts et al. [1]). This paper presents the result of this review as well as the status of the chosen diagnostics.
NASA Astrophysics Data System (ADS)
Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.
2016-06-01
The multiple solution of linear algebraic systems with dense matrices by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori condition for the recomputing, based on the change of the arithmetic mean of the current solution time during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, an acceleration of up to 1.6 times compared to the approach without recomputing is obtained.
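A sketch of the recomputing strategy might look as follows (Python/SciPy; sparse matrices and an ILU preconditioner are assumed purely for illustration, and the trigger based on the running mean of solve times is a plausible reading of the paper's a priori condition, not its exact rule):

```python
import time
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

def solve_sequence(matrices, rhs_list, growth=1.5):
    """Solve A_i x = b_i in sequence with BiCGStab, recomputing the ILU
    preconditioner whenever the latest solve time exceeds the running mean
    of previous solve times by a chosen factor."""
    ilu = spilu(matrices[0].tocsc())
    M = LinearOperator(matrices[0].shape, ilu.solve)
    times, solutions = [], []
    for A, b in zip(matrices, rhs_list):
        t0 = time.perf_counter()
        x, info = bicgstab(A, b, M=M, atol=1e-10)
        dt = time.perf_counter() - t0
        if times and dt > growth * np.mean(times):  # recompute condition
            ilu = spilu(A.tocsc())
            M = LinearOperator(A.shape, ilu.solve)
        times.append(dt)
        solutions.append(x)
    return solutions
```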
NASA Astrophysics Data System (ADS)
Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo
2016-07-01
Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
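The idea of iterating on all right-hand sides at once, while skipping vectors that have already converged, can be illustrated with a simple block Jacobi sketch (Python; the method and convergence control are illustrative, not the authors' solver or their Sparc VIIIfx implementation):

```python
import numpy as np

def jacobi_multi_rhs(A, B, tol=1e-10, max_iter=10000):
    """Jacobi iteration applied to all right-hand sides at once (B has one
    column per linear system), so each sweep is a single matrix product.
    Converged columns are masked out of the update. Assumes A is
    diagonally dominant so that Jacobi converges."""
    D = np.diag(A)
    R = A - np.diag(D)
    X = np.zeros_like(B)
    b_norms = np.linalg.norm(B, axis=0)
    active = np.ones(B.shape[1], dtype=bool)
    for _ in range(max_iter):
        # update only the not-yet-converged columns
        X[:, active] = (B[:, active] - R @ X[:, active]) / D[:, None]
        res = np.linalg.norm(B - A @ X, axis=0)
        active = res > tol * b_norms
        if not active.any():
            break
    return X
```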
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
NASA Astrophysics Data System (ADS)
Clayton, N.; Crouchen, M.; Devred, A.; Evans, D.; Gung, C.-Y.; Lathwell, I.
2017-04-01
It is planned that the high voltage electrical insulation on the ITER feeder busbars will consist of interleaved layers of epoxy resin pre-impregnated glass tapes ('pre-preg') and polyimide. In addition to its electrical insulation function, the busbar insulation must have adequate mechanical properties to sustain the loads imposed on it during ITER magnet operation. This paper reports an investigation into suitable materials for manufacturing the high voltage insulation for the ITER superconducting busbars and pipework. An R&D programme was undertaken in order to identify suitable pre-preg and polyimide materials from a range of suppliers. Pre-preg materials were obtained from three suppliers and used with Kapton HN to make mouldings using the desired insulation architecture. Two main processing routes for pre-pregs have been investigated, namely vacuum bag processing (out-of-autoclave processing) and processing using a material with a high coefficient of thermal expansion (silicone rubber) to apply the compaction pressure to the insulation. The insulation should have adequate mechanical properties to cope with the stresses induced by the operating environment, and a low void content is necessary in a high voltage application. The quality of the mouldings was assessed by mechanical testing at 77 K and by measurement of the void content.
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
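A minimal sketch of such an observer-guided stopping rule is shown below (Python/PyTorch; the small network, its assumed training on ground-truth images, and the reconstruction `step` function are hypothetical placeholders, not the authors' architecture):

```python
import torch
import torch.nn as nn

class QualityObserver(nn.Module):
    """Tiny CNN scoring an intermediate reconstruction; stands in for the
    paper's numerical observer trained on ground-truth images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x)  # one scalar quality score per image

def reconstruct_with_observer(step, x0, observer, patience=3, max_iter=100):
    """Iterative reconstruction that stops once the observer's score stops
    improving, instead of a fixed error threshold or iteration cap.
    `step` is a placeholder for one update of e.g. SART or OSEM."""
    x, best, stall = x0, float('-inf'), 0
    with torch.no_grad():
        for _ in range(max_iter):
            x = step(x)
            score = observer(x.unsqueeze(0).unsqueeze(0)).item()
            if score > best:
                best, stall = score, 0
            else:
                stall += 1
                if stall >= patience:
                    break
    return x
```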
2007-02-28
Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and..., 1767-1782, 2006; Z. Mu, R. Plemmons, and P. Santago. [...] rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies.
NASA Astrophysics Data System (ADS)
Walton, Karl; Blunt, Liam; Fleming, Leigh
2015-09-01
Mass finishing is amongst the most widely used finishing processes in modern manufacturing, in applications from deburring to edge radiusing and polishing. Processing objectives are varied, ranging from the cosmetic to the functionally critical. One such critical application is the hydraulically smooth polishing of aero engine component gas-washed surfaces. In this and many other applications, the drive to improve process control and finish tolerance is ever present. Considering its widespread use, mass finishing has seen limited research activity, particularly with respect to surface characterization. The objectives of the current paper are to: characterise the mass finished stratified surface and its development process using areal surface parameters; provide guidance on the optimal parameters and sampling method to characterise this surface type for a given application; and detail the spatial variation in surface topography due to coupon edge shadowing. Blasted and peened square plate coupons in titanium alloy are wet (vibro) mass finished iteratively with increasing duration. Measurement fields are precisely relocated between iterations by fixturing and an image superimposition alignment technique. Surface topography development is detailed with 'log of process duration' plots of the 'areal parameters for scale-limited stratified functional surfaces' (the Sk family). Characteristic features of the Smr2 plot are seen to map out the processing of peak, core and dale regions in turn. These surface process regions also become apparent in the 'log of process duration' plot for Sq, where lower core and dale regions are well modelled by logarithmic functions. Surface finish (Ra or Sa) with mass finishing duration is currently predicted with an exponential model; this model is shown to be limited for the current surface type at a critical range of surface finishes. Statistical analysis provides a group of areal parameters, including Vvc, Sq, and Sdq, showing optimal discrimination for a specific range of surface finish outcomes. As a consequence of edge shadowing, surface segregation is suggested for characterization purposes.
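The logarithmic-model claim for Sq versus process duration can be illustrated with a small fitting sketch (Python; the duration and Sq values below are hypothetical, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical Sq roughness (um) at increasing mass-finishing durations t (min)
t = np.array([15.0, 30, 60, 120, 240, 480])
sq = np.array([1.80, 1.52, 1.21, 0.93, 0.66, 0.41])

def log_model(t, a, b):
    return a * np.log(t) + b

(a, b), _ = curve_fit(log_model, t, sq)
print(f"Sq = {a:.3f} ln(t) + {b:.3f}")  # a < 0: roughness falls with log duration
```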
Coordination of the health policy dialogue process in Guinea: pre- and post-Ebola.
Ade, Nadege; Réne, Adzodo; Khalifa, Mara; Babila, Kevin Ousman; Monono, Martin Ekeke; Tarcisse, Elongo; Nabyonga-Orem, Juliet
2016-07-18
Policy dialogue can be defined as an iterative process that involves a broad range of stakeholders discussing a particular issue with a concrete purpose in mind. Policy dialogue in health is increasingly being recognised by health stakeholders in developing countries as an important process or mechanism for improving collaboration and harmonization in health and for developing comprehensive and evidence-based health sector strategies and plans. It is with this perspective in mind that Guinea, in 2013, started a policy dialogue process, engaging a plethora of actors to revise the country's national health policy and develop a new national health development plan (2015-2024). This study examines the coordination of the policy dialogue process in developing these key strategic governance documents of the Guinean health sector from the actors' perspective. A qualitative case study approach was undertaken, comprising interviews with key stakeholders who participated in the policy dialogue process. A review of the literature informed the development of a conceptual framework and the data collection survey questionnaire. The results were analysed both inductively and deductively. A total of 22 out of 32 individuals were interviewed. The results suggest areas of both strength and weakness in the coordination of the policy dialogue process in Guinea. The aspects of good coordination observed were the iterative nature of the dialogue and the availability of neutral and well-experienced facilitators. Weak coordination was perceived through the unavailability of supporting documentation and the time and financial constraints experienced during the dialogue process. The onset of the Ebola epidemic in Guinea affected coordination dynamics by slowing the dialogue's activities and then bringing them to a virtual halt. The findings herein highlight the need for policy dialogue coordination structures to have the necessary administrative and institutional support to facilitate their effective functioning. The findings also point to the need for further research on the practical and operational aspects of national dialogue coordination structures to determine how best to strengthen their capacities.
Iterative reactions of transient boronic acids enable sequential C-C bond formation
NASA Astrophysics Data System (ADS)
Battilocchio, Claudio; Feist, Florian; Hafner, Andreas; Simon, Meike; Tran, Duc N.; Allwood, Daniel M.; Blakemore, David C.; Ley, Steven V.
2016-04-01
The ability to form multiple carbon-carbon bonds in a controlled sequence, and thus rapidly build molecular complexity in an iterative fashion, is an important goal in modern chemical synthesis. In recent times, transition-metal-catalysed coupling reactions have dominated the development of C-C bond forming processes. A desire to reduce the reliance on precious metals and a need to obtain products with very low levels of metal impurities have brought a renewed focus on metal-free coupling processes. Here, we report the in situ preparation of reactive allylic and benzylic boronic acids, obtained by reacting flow-generated diazo compounds with boronic acids, and describe their application in controlled iterative C-C bond forming reactions. Thus far we have shown the formation of up to three C-C bonds in a sequence, including the final trapping of a reactive boronic acid species with an aldehyde to generate a range of new chemical structures.
Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)
NASA Astrophysics Data System (ADS)
Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.
2018-01-01
The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in the ARIMA(p, d, q) model. The detection and correction of the data use an iterative procedure popularized by Box, Jenkins, and Reinsel (1994). Using this method, an ARIMA model is obtained that fits the data containing AO; this model adds to the original ARIMA model coefficients obtained from the iterative process using regression methods. For the simulated data, the initial model for the data containing AO is ARIMA(2,0,0) with MSE = 36,780; after detection and correction of the data, the iteration yields an ARIMA(2,0,0) model with coefficients obtained from the regression Z_t = 0.106 + 0.204 Z_{t-1} + 0.401 Z_{t-2} - 329 X_1(t) + 115 X_2(t) + 35.9 X_3(t) and MSE = 19,365. This shows an improvement in the forecasting error rate.
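A simplified version of such an iterative AO detection-and-correction loop is sketched below (Python/statsmodels; the 3-sigma residual threshold and pulse-dummy regressors are common choices in the Box-Jenkins-Reinsel tradition, not necessarily the exact procedure used in the study):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_with_ao(y, order=(2, 0, 0), thresh=3.0, max_rounds=5):
    """Iteratively detect additive outliers from standardized residuals and
    refit the ARIMA model with one pulse-dummy regressor per detected AO."""
    exog = None
    flagged = set()
    for _ in range(max_rounds):
        res = ARIMA(y, order=order, exog=exog).fit()
        z = res.resid / res.resid.std()
        new = {int(i) for i in np.where(np.abs(z) > thresh)[0]} - flagged
        if not new:                        # no further outliers: done
            return res, sorted(flagged)
        flagged |= new
        exog = np.zeros((len(y), len(flagged)))
        for j, t in enumerate(sorted(flagged)):
            exog[t, j] = 1.0               # pulse dummy X_j(t) for each AO
    return res, sorted(flagged)
```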
NASA Astrophysics Data System (ADS)
Choong, Zhengyang
2017-08-01
Student research projects are increasingly common at the K-12 level. However, students often face difficulties in the course of their school research projects, such as setting realistic timelines and expectations, handling problems stemming from a lack of self-confidence, and being sufficiently disciplined for sustained communication and experimentation. In this work, we explore manifestations of these problems in the context of a photonics project, characterising the spectrum of the breakdown flash from Silicon Avalanche Photodiodes. We report on the process of planning and building the setup, data collection, analysis and troubleshooting, as well as the technical and human problems at each step. Approaches that were found to be helpful in managing the aforementioned problems are discussed, including an attention to detail during experimental work and communicating in a forthcoming manner. The former allowed for clearer planning and the setting of quantifiable proximal goals; the latter helped in motivating discipline, and also helped in the understanding of research as an iterative learning process without a clear definition of success or failure.
Fujishiro, Kaori; Gong, Fang; Baron, Sherry; Jacobson, C Jeffery; DeLaney, Sheli; Flynn, Michael; Eggerth, Donald E
2010-02-01
The increasing ethnic diversity of the US workforce has created a need for research tools that can be used with multi-lingual worker populations. Developing multi-language questionnaire items is a complex process; however, very little has been documented in the literature. Commonly used English items from the Job Content Questionnaire and Quality of Work Life Questionnaire were translated by two interdisciplinary bilingual teams and cognitively tested in interviews with English-, Spanish-, and Chinese-speaking workers. Common problems across languages mainly concerned response format. Language-specific problems required more conceptual than literal translations. Some items were better understood by non-English speakers than by English speakers. De-centering (i.e., modifying the English original to correspond with translation) produced better understanding for one item. Translating questionnaire items and achieving equivalence across languages require various kinds of expertise. Backward translation itself is not sufficient. More research efforts should be concentrated on qualitative approaches to developing useful research tools. Published 2009 Wiley-Liss, Inc.
Salvador-Carulla, Luis; Cloninger, C Robert; Thornicroft, Amalia; Mezzich, Juan E.
2015-01-01
Declarations are relevant tools to frame new areas in health care, to raise awareness and to facilitate knowledge-to-action. The International College on Person Centered Medicine (ICPCM) is seeking to extend the impact of the ICPCM Conference Series by producing a declaration on every main topic. The aim of this paper is to describe the development of the 2013 Geneva Declaration on Person-centered Health Research and to provide additional information on the research priority areas identified during this iterative process. There is a need for more PCM research and for the incorporation of the PCM approach into general health research. Main areas of research focus include: conceptual, terminological, and ontological issues; research to enhance the empirical evidence for the main components of PCM, such as PCM-informed clinical communication; PCM-based diagnostic models; person-centered care and interventions; people-centered care; and research on training and curriculum development. Dissemination and implementation of the PCM knowledge base are integral to person-centered health research and shall engage currently available scientific and translational dissemination tools such as journals, events and eHealth. PMID:26146541
Preliminary Thermal-Mechanical Sizing of Metallic TPS: Process Development and Sensitivity Studies
NASA Technical Reports Server (NTRS)
Poteet, Carl C.; Abu-Khajeel, Hasan; Hsu, Su-Yuen
2002-01-01
The purpose of this research was to perform sensitivity studies and develop a process to perform thermal and structural analysis and sizing of the latest Metallic Thermal Protection System (TPS) developed at NASA LaRC (Langley Research Center). Metallic TPS is a key technology for reducing the cost of reusable launch vehicles (RLV), offering the combination of increased durability and competitive weights when compared to other systems. Accurate sizing of metallic TPS requires combined thermal and structural analysis. Initial sensitivity studies were conducted using transient one-dimensional finite element thermal analysis to determine the influence of various TPS and analysis parameters on TPS weight. The thermal analysis model was then used in combination with static deflection and failure mode analysis of the sandwich panel outer surface of the TPS to obtain minimum weight TPS configurations at three vehicle stations on the windward centerline of a representative RLV. The coupled nature of the analysis requires an iterative analysis process, which will be described herein. Findings from the sensitivity analysis are reported, along with TPS designs at the three RLV vehicle stations considered.
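The thermal side of such an iterative sizing loop can be sketched as follows (Python; the one-dimensional explicit conduction model, the boundary conditions, and all material and temperature values are illustrative placeholders, not the LaRC analysis or actual metallic TPS properties):

```python
import numpy as np

def max_backface_temp(thickness, k=0.05, rho=100.0, cp=800.0,
                      t_surface=1100.0, t_init=300.0, duration=1800.0, n=50):
    """Explicit 1-D transient conduction through an insulation layer with a
    prescribed hot-surface temperature (all values illustrative; K, SI units).
    Returns the peak back-face temperature over the heating pulse."""
    dx = thickness / n
    alpha = k / (rho * cp)
    dt = 0.4 * dx * dx / alpha                 # stable explicit time step
    T = np.full(n + 1, t_init)
    peak = t_init
    for _ in range(int(duration / dt)):
        T[0] = t_surface                       # hot outer surface
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                          # adiabatic back face (worst case)
        peak = max(peak, T[-1])
    return peak

# size the insulation iteratively: grow thickness until the back-face limit holds
thickness, limit = 0.02, 450.0
while max_backface_temp(thickness) > limit:
    thickness *= 1.1
print(f"required thickness = {thickness * 100:.1f} cm")
```

In the paper's process this thermal iteration is coupled to structural deflection and failure-mode checks of the sandwich panel, so the full loop iterates over both disciplines rather than thickness alone.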
Afifi, Rema A; Makhoul, Jihad; El Hajj, Taghreed; Nakkash, Rima T
2011-01-01
Although logic models are now touted as an important component of health promotion planning, implementation and evaluation, there are few published manuscripts that describe the process of logic model development, and fewer which do so with community involvement, despite the increasing emphasis on participatory research. This paper describes a process leading to the development of a logic model for a youth mental health promotion intervention using a participatory approach in a Palestinian refugee camp in Beirut, Lebanon. First, a needs assessment, including quantitative and qualitative data collection was carried out with children, parents and teachers. The second phase was identification of a priority health issue and analysis of determinants. The final phase in the construction of the logic model involved development of an intervention. The process was iterative and resulted in a more grounded depiction of the pathways of influence informed by evidence. Constructing a logic model with community input ensured that the intervention was more relevant to community needs, feasible for implementation and more likely to be sustainable. PMID:21278370
Connor, Carol McDonald; Phillips, Beth M.; Kaschak, Michael; Apel, Kenn; Kim, Young-Suk; Al Otaiba, Stephanie; Crowe, Elizabeth C.; Thomas-Tate, Shurita; Johnson, Lakeisha Cooper; Lonigan, Christopher J.
2015-01-01
This paper describes the theoretical framework, as well as the development and testing of the intervention, Comprehension Tools for Teachers (CTT), which is composed of eight component interventions targeting malleable language and reading comprehension skills that emerging research indicates contribute to proficient reading for understanding for prekindergarteners through fourth graders. Component interventions target processes considered largely automatic as well as more reflective processes, with interacting and reciprocal effects. Specifically, we present component interventions targeting cognitive, linguistic, and text-specific processes, including morphological awareness, syntax, mental-state verbs, comprehension monitoring, narrative and expository text structure, enacted comprehension, academic knowledge, and reading to learn from informational text. Our aim was to develop a tool set composed of intensive meaningful individualized small group interventions. We improved feasibility in regular classrooms through the use of design-based iterative research methods including careful lesson planning, targeted scripting, pre- and postintervention proximal assessments, and technology. In addition to the overall framework, we discuss seven of the component interventions and general results of design and efficacy studies. PMID:26500420
Camden, Chantal; Shikako-Thomas, Keiko; Nguyen, Tram; Graham, Emma; Thomas, Aliki; Sprung, Jennifer; Morris, Christopher; Russell, Dianne J
2015-01-01
To describe how stakeholder engagement has been undertaken and evaluated in rehabilitation research. A scoping review of the scientific literature using five search strategies. Quantitative and qualitative analyses using extracted data. Interpretation of results was iteratively discussed within the team, which included a parent stakeholder. Searches identified 101 candidate papers; 28 were read in full to assess eligibility and 19 were included in the review. People with disabilities and their families were more frequently involved compared to other stakeholders. Stakeholders were often involved in planning and evaluating service delivery. A key issue was identifying stakeholders; strategies used to support their involvement included creating committees, organizing meetings, clarifying roles and offering training. Communication, power sharing and resources influenced how stakeholders could be engaged in the research. Perceived outcomes of stakeholder engagement included the creation of partnerships, facilitating the research process and the application of the results, and empowering stakeholders. Stakeholder engagement outcomes were rarely formally evaluated. There is a great interest in rehabilitation in engaging stakeholders in the research process. However, further evidence is needed to identify effective strategies for meaningful stakeholder engagement that leads to more useful research that positively impacts practice. Implications for Rehabilitation: Using several strategies to engage various stakeholders throughout the research process is thought to increase the quality of the research and the rehabilitation process by developing proposals and programs that respond better to their needs. Engagement strategies need to be better reported and evaluated in the literature. Engagement facilitates uptake of research findings by increasing stakeholders' awareness of the evidence, the resources available and their own ability to act upon a situation. Factors influencing opportunities for stakeholder engagement need to be better understood.
Rankin, Gabrielle; Rushton, Alison; Olver, Pat; Moore, Ann
2012-09-01
To define research priorities to strategically inform the evidence base for physiotherapy practice. A modified Delphi method using SurveyMonkey software identified priorities for physiotherapy research through national consensus. An iterative process of three rounds provided feedback. Round 1 requested five priorities using pre-defined prioritisation criteria. Content analysis identified research themes and topics. Round 2 requested rating of the importance of the research topics using a 1-5 Likert scale. Round 3 requested a further process of rating. Quantitative and qualitative data informed decision-making. Level of consensus was established as mean rating ≥ 3.5, coefficient of variation ≤ 30%, and ≥ 55% agreement. Consensus across participants was evaluated using Kendall's W. Four expert panels (n=40-61) encompassing a range of stakeholders and reflecting four core areas of physiotherapy practice were established by steering groups (n=204 participants overall). Response rates of 53-78% across three rounds were good. The identification of 24/185 topics for musculoskeletal, 43/174 for neurology, 30/120 for cardiorespiratory and medical rehabilitation, and 30/113 for mental and physical health and wellbeing as priorities demonstrated discrimination of the process. Consensus between participants was good for most topics. Measurement validity of the research topics was good. The involvement of multiple stakeholders as participants ensured the current context of the intended use of the priorities. From a process of national consensus involving key stakeholders, including service users, physiotherapy research topics have been identified and prioritised. Setting priorities provides a vision of how research can contribute to the developing research base in physiotherapy to maximise focus. Copyright © 2012 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
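The stated consensus rule is simple enough to express directly; a sketch follows (Python; the ratings array is hypothetical, and treating "agreement" as the share of ratings of 4 or 5 on the 1-5 Likert scale is an assumption, since the abstract does not define it):

```python
import numpy as np

def reaches_consensus(ratings, agree_threshold=4):
    """Apply the stated rule: mean rating >= 3.5, coefficient of variation
    <= 30%, and >= 55% agreement (here assumed to mean ratings at or above
    `agree_threshold` on the 1-5 Likert scale)."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    cv = ratings.std(ddof=1) / mean * 100          # coefficient of variation, %
    agreement = np.mean(ratings >= agree_threshold) * 100
    return mean >= 3.5 and cv <= 30 and agreement >= 55

print(reaches_consensus([4, 5, 4, 3, 5, 4, 4]))    # True for this sample panel
```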
Wood, Richard M; Rilling, James K; Sanfey, Alan G; Bhagwagar, Zubin; Rogers, Robert D
2006-05-01
Adaptive social behavior often necessitates choosing to cooperate with others for long-term gains at the expense of noncooperative behaviors giving larger immediate gains. Although little is known about the neural substrates that support cooperative over noncooperative behaviors, recent research has shown that mutually cooperative behavior in the context of a mixed-motive game, the Prisoner's Dilemma (PD), is associated with increased neural activity within reinforcement circuitry. Other research attests to a role for serotonin in the modulation of social behavior and in reward processing. In this study, we used a within-subject, crossover, double-blind design to investigate performance of an iterated, sequential PD game for monetary reward by healthy human adult participants following ingestion of an amino-acid drink that either did (T+) or did not (T-) contain l-tryptophan. Tryptophan depletion produced significant reductions in the level of cooperation shown by participants when playing the game on the first, but not the second, study day. This effect was accompanied by a significantly diminished probability of cooperative responding given previous mutually cooperative behavior. These data suggest that serotonin plays a significant role in the acquisition of socially cooperative behavior in human adult participants, and suggest novel hypotheses concerning the serotonergic modulation of reward information in socially cooperative behavior in both health and psychiatric illness.
The presence of a perseverative iterative style in poor vs. good sleepers.
Barclay, N L; Gregory, A M
2010-03-01
Catastrophizing is present in worriers and poor sleepers. This study investigates whether poor sleepers possess a 'perseverative iterative style' which predisposes them to catastrophize any topic, regardless of content or affective valence, a style previously found to occur more commonly in worriers than in others. Poor (n=23) and good sleepers (n=37) were distinguished using the Pittsburgh Sleep Quality Index (PSQI), from a sample of adults in the general population. Participants were required to catastrophize two topics, worries about sleep and a current personal worry, and to iterate the positive aspects of a hypothetical topic. Poor sleepers catastrophized/iterated more steps than good sleepers across these three interviews (F(1, 58)=7.35, p<.05). However, after controlling for anxiety and worry, this effect was reduced to non-significance for the 'sleep' and 'worry' topics, suggesting that anxiety may mediate some of the association between catastrophizing and sleep. However, there was still a tendency for poor sleepers to iterate more steps on the 'hypothetical' topic after controlling for anxiety and worry, which suggests that poor sleepers possess a cognitive style that may predispose them to continue iterating consecutive steps in open-ended tasks regardless of anxiety and worry. Future research should examine whether the presence of this cognitive style is significant in leading to or maintaining insomnia.
Conceptual design of ACB-CP for ITER cryogenic system
NASA Astrophysics Data System (ADS)
Jiang, Yongcheng; Xiong, Lianyou; Peng, Nan; Tang, Jiancheng; Liu, Liqiang; Zhang, Liang
2012-06-01
ACB-CP (Auxiliary Cold Box for Cryopumps) supplies the cryopump system with the necessary cryogens in the ITER (International Thermonuclear Experimental Reactor) cryogenic distribution system. The conceptual design of the ACB-CP comprises thermo-hydraulic analysis, 3D structural design and strength checking. Through the thermo-hydraulic analysis, the main specifications of the process valves, pressure safety valves, pipes and heat exchangers can be determined. During the 3D structural design, the vacuum, adiabatic, assembly and maintenance requirements have been considered in arranging the pipes, valves and other components. Strength checking has been performed to verify that the 3D design meets the strength requirements for the ACB-CP.
Munoz-Plaza, Corrine E; Parry, Carla; Hahn, Erin E; Tang, Tania; Nguyen, Huong Q; Gould, Michael K; Kanter, Michael H; Sharp, Adam L
2016-08-15
Despite reports advocating for integration of research into healthcare delivery, scant literature exists describing how this can be accomplished. Examples highlighting application of qualitative research methods embedded into a healthcare system are particularly needed. This article describes the process and value of embedding qualitative research as the second phase of an explanatory, sequential, mixed methods study to improve antibiotic stewardship for acute sinusitis. Purposive sampling of providers for in-depth interviews improved understanding of unwarranted antibiotic prescribing and elicited stakeholder recommendations for improvement. Qualitative data collection, transcription and constant comparative analyses occurred iteratively. Emerging themes and sub-themes identified primary drivers of unwarranted antibiotic prescribing patterns and recommendations for improving practice. These findings informed the design of a health system intervention to improve antibiotic stewardship for acute sinusitis. Core components of the intervention are also described. Qualitative research can be effectively applied in learning healthcare systems to elucidate quantitative results and inform improvement efforts.
NASA Astrophysics Data System (ADS)
Knoth, Kenneth Charles
Course-based undergraduate research experiences (CUREs) provide authentic research benefits to an entire laboratory course population. CURE experiences are proposed to enhance research skills, critical thinking, productivity, and retention in science. CURE curriculum developers face numerous obstacles, such as the logistics and time commitment involved in bringing a CURE to larger student populations. In addition, an ideal CURE topic requires affordable resources, lab techniques that can be quickly mastered, time for multiple iterations within one semester, and the opportunity to generate new data. This study identifies some of the CURE activities that lead to proposed participant outcomes. Introductory Biology I CURE lab students at Southern Illinois University Edwardsville completed research related to the process of converting storage lipids in microalgae into biodiesel. Data collected from CURE and traditional lab student participants indicate increased CURE student reports of project ownership, scientific self-efficacy, identification as a scientist, and sense of belonging to a science community. Study limitations and unanticipated benefits are discussed.
NASA Astrophysics Data System (ADS)
Akhlaghi, H.; Roohi, E.; Myong, R. S.
2012-11-01
Micro/nano geometries with specified wall heat flux are widely encountered in electronic cooling and micro-/nano-fluidic sensors. We introduce a new technique to impose a desired (positive or negative) wall heat flux boundary condition in DSMC simulations. The technique iterates on the wall temperature magnitude. The proposed iterative technique shows good numerical performance and accurately imposes both positive and negative wall heat flux values. Using the present technique, rarefied gas flow through micro-/nanochannels under specified wall heat flux conditions is simulated, and unique behaviors are observed in the case of channels with cooling walls. For example, contrary to the heating process, cooling of micro-/nanochannel walls results in only small variations in the density field. Upstream thermal creep effects in the cooling process decrease the velocity slip despite the increase of the Knudsen number along the channel. Similarly, the cooling process decreases the curvature of the pressure distribution below the linear incompressible distribution. Our results indicate that flow cooling increases the mass flow rate through the channel, and heating decreases it.
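One plausible reading of the wall-temperature iteration, sketched as a secant update around a black-box flux evaluation; simulate_heat_flux is a hypothetical stand-in for one DSMC sampling pass, not code from the paper.

```python
def wall_temp_for_heat_flux(q_target, simulate_heat_flux,
                            T0=300.0, T1=400.0, tol=1e-6, max_iter=50):
    """Iterate on the wall temperature until the sampled wall heat flux
    matches q_target (positive = heating the gas, negative = cooling)."""
    q0, q1 = simulate_heat_flux(T0), simulate_heat_flux(T1)
    for _ in range(max_iter):
        if abs(q1 - q_target) < tol or q1 == q0:
            break
        T0, T1 = T1, T1 + (q_target - q1) * (T1 - T0) / (q1 - q0)  # secant step
        q0, q1 = q1, simulate_heat_flux(T1)
    return T1

# Demo with a mock linear flux law; a real DSMC kernel would replace it.
print(wall_temp_for_heat_flux(-5.0, lambda T: 0.02 * (T - 350.0)))  # -> 100.0
```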
Linkage between Researchers and Practitioners: A Qualitative Study.
ERIC Educational Resources Information Center
Huberman, Michael
1990-01-01
A multiple-case, "tracer" study was undertaken involving 11 research projects of the "Education et Vie Active" (Education and the Active Life)--a national vocational education program in Switzerland--to assess the importance of contacts between researchers and practitioners. Iterative data from interviews, observations, and…
Communicating Glacier Change and Associated Impacts to Communities and Decision-makers
NASA Astrophysics Data System (ADS)
Timm, K.; Hood, E. W.; O'Neel, S.; Wolken, G. J.
2017-12-01
A critical, but often overlooked, part of making cryosphere science relevant to decision makers is ensuring that the communication and translation of scientific information is deliberate, dialogic, and the product of careful planning. This presentation offers several lessons learned from a team of scientists and a communication professional who have collaboratively produced several award-winning and repeatedly used communication products. Consisting of illustrations (for presentations, publications, and other uses), posters, and fact sheets, the products communicate how Alaska's glaciers are changing, how changing glaciers influence nearby ecosystems, and the natural hazards that emerge as glaciers recede and thin to a range of audiences, including community members, business owners, resource managers, and other decision makers. The success of these communication products can be attributed in part to six broad characteristics of the development process, which are based on the literature from science communication research and reflections from the team: connect, design, respect, iterate, share, and reflect. For example, connecting with other people is important because effective science communication is usually the product of a team of researchers and communication professionals. Connecting with the audience or stakeholders is also important for developing an understanding of their information needs. In addition, respect is essential, as this process relies on the diverse skills, experience, and knowledge that everyone brings to the endeavor. Developing a shared language and executing a scientifically accurate design also takes synthesis and iteration, which must be accounted for in the project timeline. Taken together, these factors and others that will be described in the presentation can help improve the communication of cryosphere science and expand its utility for important societal decisions.
Yassi, Annalee; O’Hara, Lyndsay Michelle; Engelbrecht, Michelle C.; Uebel, Kerry; Nophale, Letshego Elizabeth; Bryce, Elizabeth Ann; Buxton, Jane A; Siegel, Jacob; Spiegel, Jerry Malcolm
2014-01-01
Background Community-based cluster-randomized controlled trials (RCTs) are increasingly being conducted to address pressing global health concerns. Preparations for clinical trials are well-described, as are the steps for multi-component health service trials. However, guidance is lacking for addressing the ethical and logistic challenges in (cluster) RCTs of population health interventions in low- and middle-income countries. Objective We aimed to identify the factors that population health researchers must explicitly consider when planning RCTs within North–South partnerships. Design We reviewed our experiences and identified key ethical and logistic issues encountered during the pre-trial phase of a recently implemented RCT. This trial aimed to improve tuberculosis (TB) and Human Immunodeficiency Virus (HIV) prevention and care for health workers by enhancing workplace assessment capability, addressing concerns about confidentiality and stigma, and providing onsite counseling, testing, and treatment. An iterative framework was used to synthesize this analysis with lessons taken from other studies. Results The checklist of critical factors was grouped into eight categories: 1) Building trust and shared ownership; 2) Conducting feasibility studies throughout the process; 3) Building capacity; 4) Creating an appropriate information system; 5) Conducting pilot studies; 6) Securing stakeholder support, with a view to scale-up; 7) Continuously refining methodological rigor; and 8) Explicitly addressing all ethical issues both at the start and continuously as they arise. Conclusion Researchers should allow for the significant investment of time and resources required for successful implementation of population health RCTs within North–South collaborations, recognize the iterative nature of the process, and be prepared to revise protocols as challenges emerge. PMID:24802561
van den Eertwegh, Valerie; van Dulmen, Sandra; van Dalen, Jan; Scherpbier, Albert J J A; van der Vleuten, Cees P M
2013-02-01
In order to reduce the inconsistencies of findings and the apparent low transfer of communication skills from training to medical practice, this narrative review identifies some main gaps in research on medical communication skills training and presents insights from theories on learning and transfer to broaden the view for future research. Relevant literature was identified using PubMed, Google Scholar, the Cochrane database, and Web of Science, and analyzed using an iterative procedure. Research findings on the effectiveness of medical communication training still show inconsistencies and variability. Contemporary theories on learning based on a constructivist paradigm offer the following insights: acquisition of knowledge and skills should be viewed as an ongoing process of exchange between the learner and his environment, so-called lifelong learning. This process can neither be atomized nor separated from the context in which it occurs. Four contemporary approaches are presented as examples. The following shift in focus for future research is proposed: beyond isolated single-factor effectiveness studies toward constructivist, non-reductionistic studies integrating the context. Future research should investigate how constructivist approaches can be used in the medical context to increase effective learning and transfer of communication skills. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Plasma-surface interaction in the Be/W environment: Conclusions drawn from the JET-ILW for ITER
NASA Astrophysics Data System (ADS)
Brezinsek, S.; JET-EFDA contributors
2015-08-01
The JET ITER-Like Wall experiment (JET-ILW) provides an ideal test bed to investigate plasma-surface interaction (PSI) and plasma operation with the ITER plasma-facing material selection, employing beryllium in the main chamber and tungsten in the divertor. The main PSI processes, (a) material erosion and migration, (b) fuel recycling and retention, and (c) impurity concentration and radiation, have been studied and compared between JET-C and JET-ILW. The current physics understanding of these key processes in the JET-ILW revealed that both the interpretation of previously obtained carbon results (JET-C) and predictions for ITER need to be revisited. The impact of the first-wall material on the plasma was underestimated. The main observations are: (a) a low primary erosion source in H-mode plasmas and reduction of the material migration from the main chamber to the divertor (factor 7) as well as within the divertor from plasma-facing to remote areas (factor 30-50). The energetic threshold for beryllium sputtering minimises the primary erosion source and inhibits multi-step re-erosion in the divertor. The physical sputtering yield of tungsten is as low as 10^-5 and is determined by beryllium ions. (b) Reduction of the long-term fuel retention (factor 10-20) in JET-ILW with respect to JET-C. The remaining retention is caused by implantation and co-deposition with beryllium and residual impurities. Outgassing has gained importance and impacts the recycling properties of beryllium and tungsten. (c) The low effective plasma charge (Zeff = 1.2) and low radiation capability of beryllium reveal bare deuterium plasma physics. Moderate nitrogen seeding, reaching Zeff = 1.6, restores in particular the confinement and the L-H threshold behaviour. ITER-compatible divertor conditions with stable semi-detachment were obtained owing to a higher density limit with the ILW. Overall, JET demonstrated successful plasma operation with the Be/W material combination, confirms its advantageous PSI behaviour, and gives strong support to the ITER material selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Hoffman, J; McNitt-Gray, M
Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph’s method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5GB, and reconstruction time was 50 seconds per iteration. Conclusion: Our reconstruction method shows potential for furthering research in low-dose helical CT, in particular as part of our ongoing development of an acquisition/reconstruction pipeline for generating images under a wide range of conditions. Our algorithm will be made available open-source as “FreeCT-ICD”. NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
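As an illustration of the coordinate-descent idea in the abstract above, here is a toy penalized-least-squares ICD solver. The 1-D two-neighbor penalty and all names are simplifying assumptions, not FreeCT-ICD itself; the paper's 8-neighbor, voxel-based version follows the same per-coordinate closed form.

```python
import numpy as np

def icd_pls(A, b, beta=0.1, n_iter=50):
    """Iterative coordinate descent for
    min_x ||Ax - b||^2 + beta * sum_i sum_{j in N(i)} (x_i - x_j)^2.
    Each coordinate has a closed-form minimizer, visited sequentially,
    with the residual r = b - Ax maintained incrementally."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x
    col_sq = (A ** 2).sum(axis=0)           # ||a_i||^2 per column
    for _ in range(n_iter):
        for i in range(n):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            a_i = A[:, i]
            num = a_i @ r + col_sq[i] * x[i] + beta * sum(x[j] for j in nbrs)
            den = col_sq[i] + beta * len(nbrs)
            x_new = num / den
            r -= a_i * (x_new - x[i])       # keep the residual consistent
            x[i] = x_new
    return x

rng = np.random.default_rng(0)
A = rng.random((30, 10))
b = A @ np.ones(10)
print(np.round(icd_pls(A, b, beta=0.01), 2))  # near all-ones, smoothed by the penalty
```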
Virtual patients: practical advice for clinical authors using Labyrinth.
Begg, Michael
2010-09-01
Labyrinth is a tool originally developed in the University of Edinburgh's Learning Technology Section for authoring and delivering branching case scenarios. The scenarios can incorporate game-informed elements such as scoring, randomising, avatars and counters. Labyrinth has grown more popular internationally since a version of the build was made available on the open source network Source Forge. This paper offers help and advice for clinical educators interested in creating cases. Labyrinth is increasingly recognised as a tool offering great potential for delivering cases that promote rich, situated learning opportunities for learners. There are, however, significant challenges to generating such cases, not least of which is the challenge for potential authors in approaching the process of constructing narrative-rich, context-sensitive cases in an unfamiliar authoring environment. This paper offers a brief overview of the principles informing Labyrinth cases (game-informed learning), and offers some practical advice to better prepare educators with little or no prior experience. Labyrinth has continued to grow and develop, from its roots as a research and development environment to one that is optimised for use by non-technical clinical educators. The process becomes increasingly iterative and better informed as the teaching community push the software further. The positive implications of providing practical advice and concept insight to new case authors is that it ideally leads to a broader base of users who will inform future iterations of the software. © Blackwell Publishing Ltd 2010.
Curvelet-domain multiple matching method combined with cubic B-spline function
NASA Astrophysics Data System (ADS)
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
Because the large number of surface-related multiples in marine data seriously affects the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method is based on data-driven theory. However, its elimination effect is unsatisfactory owing to amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constrained solving equation is built based on the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize the fast sparse-constrained solution of the multiple matching algorithm. Moreover, the soft-threshold method is used to improve performance further. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
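A rough sketch of the matching idea under stated assumptions: a smooth gain curve parameterized by values at a few basis points, reconstructed with a cubic spline and fitted with BFGS. scipy's CubicSpline stands in for the paper's cubic B-spline machinery, and the L1/soft-threshold refinement is omitted; all names and signals are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

# Estimate a slowly varying amplitude-matching curve g(t) that maps a
# predicted multiple onto the recorded data.
t = np.linspace(0.0, 1.0, 400)
predicted = np.sin(40 * t) * np.exp(-2 * t)        # mock predicted multiple
true_gain = 1.0 + 0.5 * t                          # mock amplitude error
data = true_gain * predicted + 0.01 * np.random.randn(t.size)

knots = np.linspace(0.0, 1.0, 6)                   # few unknowns, per the abstract

def misfit(g_at_knots):
    g = CubicSpline(knots, g_at_knots)(t)          # reconstruct the matching array
    return np.sum((data - g * predicted) ** 2)

res = minimize(misfit, np.ones(knots.size), method='BFGS')
print("estimated gains at the basis points:", np.round(res.x, 3))
```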
Lennox, Laura; Doyle, Cathal; Reed, Julie E
2017-01-01
Objectives Although improvement initiatives show benefits to patient care, they often fail to sustain. Models and frameworks exist to address this challenge, but issues with design, clarity and usability have been barriers to use in healthcare settings. This work aimed to collaborate with stakeholders to develop a sustainability tool relevant to people in healthcare settings and practical for use in improvement initiatives. Design Tool development was conducted in six stages. A scoping literature review, group discussions and a stakeholder engagement event explored literature findings and their resonance with stakeholders in healthcare settings. Interviews, small-scale trialling and piloting explored the design and tested the practicality of the tool in improvement initiatives. Setting National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Northwest London (CLAHRC NWL). Participants CLAHRC NWL improvement initiative teams and staff. Results The iterative design process and engagement of stakeholders informed the articulation of the sustainability factors identified from the literature and guided tool design for practical application. Key iterations of factors and tool design are discussed. From the development process, the Long Term Success Tool (LTST) has been designed. The Tool supports those implementing improvements to reflect on 12 sustainability factors to identify risks to increase chances of achieving sustainability over time. The Tool is designed to provide a platform for improvement teams to share their own views on sustainability as well as learn about the different views held within their team to prompt discussion and actions. Conclusion The development of the LTST has reinforced the importance of working with stakeholders to design strategies which respond to their needs and preferences and can practically be implemented in real-world settings. Further research is required to study the use and effectiveness of the tool in practice and assess engagement with the method over time. PMID:28947436
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:
- It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed first, to validate the architecture concept very early without the details.
- A software prototype is very quickly available. It improves the communication between system and software teams, as it enables checking the common understanding of the system requirements very early and efficiently.
- It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it greatly improves the learning curve of the software team.
These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and raises difficulties such as:
- How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable?
- How to distinguish stable/unstable and dimensioning/standard requirements?
- How to plan the development of each increment?
- How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc.
Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, from both a methodological and a technological point of view:
- How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way.
- How the CMM approach can help by better formalizing the Requirements Management and Planning processes.
- How Automatic Code Generation with "certified" tools (SCADE) can further dramatically shorten the development cycle.
The presentation concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures from two similar projects: one using the classical waterfall process, the other an iterative and incremental approach.
Non-axisymmetric ideal equilibrium and stability of ITER plasmas with rotating RMPs
NASA Astrophysics Data System (ADS)
Ham, C. J.; Cramp, R. G. J.; Gibson, S.; Lazerson, S. A.; Chapman, I. T.; Kirk, A.
2016-08-01
The magnetic perturbations produced by the resonant magnetic perturbation (RMP) coils will be rotated in ITER so that the spiral patterns due to strike point splitting which are locked to the RMP also rotate. This is to ensure even power deposition on the divertor plates. VMEC equilibria are calculated for different phases of the RMP rotation. It is demonstrated that the off harmonics rotate in the opposite direction to the main harmonic. This is an important topic for future research to control and optimize ITER appropriately. High confinement mode (H-mode) is favourable for the economics of a potential fusion power plant and its use is planned in ITER. However, the high pressure gradient at the edge of the plasma can trigger periodic eruptions called edge localized modes (ELMs). ELMs have the potential to shorten the life of the divertor in ITER (Loarte et al 2003 Plasma Phys. Control. Fusion 45 1549) and so methods for mitigating or suppressing ELMs in ITER will be important. Non-axisymmetric RMP coils will be installed in ITER for ELM control. Sampling theory is used to show that there will be a significant n_coils - n_rmp harmonic sideband. There are nine coils toroidally in ITER, so n_coils = 9. This results in a significant n = 6 component of the n_rmp = 3 applied field and a significant n = 5 component of the n_rmp = 4 applied field. Although the vacuum field has similar amplitudes of these harmonics, the plasma response to the various harmonics dictates the final equilibrium. Magnetic perturbations with toroidal mode number n = 3 and n = 4 are applied to a 15 MA, q95 ≈ 3 burning ITER plasma. We use a three-dimensional ideal magnetohydrodynamic model (VMEC) to calculate ITER equilibria with applied RMPs and to determine growth rates of infinite-n ballooning modes (COBRA). The n_rmp = 4 case shows little change in ballooning mode growth rate as the RMP is rotated; however, there is a change with rotation for the n_rmp = 3 case.
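The sideband claim can be checked with a few lines of arithmetic: sampling an n = 3 pattern at nine discrete coil positions also excites n = 9 - 3 = 6. A toy numpy check of this aliasing, not an ITER field calculation:

```python
import numpy as np

# Nine equally spaced toroidal coils driven with an n_rmp = 3 phase pattern.
n_coils, n_rmp = 9, 3
phi_coils = 2 * np.pi * np.arange(n_coils) / n_coils
currents = np.cos(n_rmp * phi_coils)

# Harmonic content of the discrete current set ~ its DFT; aliasing maps
# n_rmp onto n_rmp + k * n_coils and its negatives.
spectrum = np.fft.fft(currents) / n_coils
for n in range(n_coils):
    print(n, round(abs(spectrum[n]), 3))   # peaks at n = 3 and n = 6 (= 9 - 3)
```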
A Dynamic Model of the Initial Spares Support List Development Process
1979-06-01
PUSER.L = (NREI.K)(QPEI)(PUSERF.K) + OPUR
where PUSER = parts use rate, NREI = not-ready end items, QPEI = quantity of parts per end item, PUSERF = parts use rate factor, and OPUR = other parts use rate (the only content recoverable from the OCR-damaged model listing).
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. To substantiate this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations is also presented.
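A minimal sketch of the region-of-interest idea: crop first, so each restoration pass touches only the ROI and per-iteration cost scales with the crop rather than the full frame. The Van Cittert iteration here is a generic stand-in for the paper's set-theoretic/superresolution iterations, and every name in the snippet is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

def restore_roi(image, psf, roi, n_iter=30, relax=0.5):
    """Crop to the ROI, then run the cheap Van Cittert iteration
        f_{k+1} = f_k + relax * (g - h * f_k)
    only on the crop. roi = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    g = image[r0:r1, c0:c1].astype(float)   # observed ROI
    f = g.copy()                            # initial estimate
    for _ in range(n_iter):
        f += relax * (g - convolve(f, psf, mode='reflect'))
    return f

# Example: blur a synthetic frame with a Gaussian PSF, restore a 64x64 ROI.
w = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
w /= w.sum()
psf = np.outer(w, w)                        # separable Gaussian kernel
frame = np.random.rand(512, 512)
blurred = convolve(frame, psf, mode='reflect')
sharp_roi = restore_roi(blurred, psf, (100, 164, 200, 264))
```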
Systems of Selves: the Construction of Meaning in Multiple Personality Disorder
NASA Astrophysics Data System (ADS)
Hughes, Dureen Jean
Current models for understanding both Multiple Personality Disorder and human mentation in general are both linear in nature and self-perpetuating insofar as most research in this area has been informed and shaped by extant psychological concepts, paradigms and methods. The research for this dissertation made use of anthropological concepts and methods in an attempt to gain a richer understanding of both multiple personality and fundamental universal processes of the mind. Intensive fieldwork using in-depth, open-ended interviewing techniques was conducted with people diagnosed with Multiple Personality Disorder with the purpose of mapping their personality systems in order to discover the nature of the relationships between the various alternate personalities and subsystems comprising the overall personality systems. These data were then analyzed in terms of dynamical systems theory ("Chaos Theory") as a way of understanding various phenomena of multiple personality disorder as well as the overall structure of each system. It was found that the application of the formal characteristics of nonlinear models and equations to multiple personality systems provided a number of new perspectives on mental phenomena. The underlying organizational structure of multiple personality systems can be understood as a phenomenon of spontaneous self-organization in far-from-equilibrium states, which characterizes dissipative structures. Chaos Theory allows the perspective that the nature of the process of the self and the nature of relationship are one and the same, and that both can be conceived as ideas in struggle at a fractal boundary. Further, such application makes it possible to postulate an iterative process which would have as one of its consequences the formation of a processual self who is conscious of self as separate self. Finally, given that the iterative application of a few simple rules (or instructions) can result in complex systems, an attempt was made to discern what the rules pertaining to human mentation might be.
Development of a set of community-informed Ebola messages for Sierra Leone
de Bruijne, Kars; Jalloh, Alpha M.; Harris, Muriel; Abdullah, Hussainatu; Boye-Thompson, Titus; Sankoh, Osman; Jalloh, Abdul K.; Jalloh-Vos, Heidi
2017-01-01
The West African Ebola epidemic of 2013–2016 was by far the largest outbreak of the disease on record. Sierra Leone suffered nearly half of the 28,646 reported cases. This paper presents a set of culturally contextualized Ebola messages that are based on the findings of qualitative interviews and focus group discussions conducted in 'hotspot' areas of rural Bombali District and urban Freetown in Sierra Leone, between January and March 2015. An iterative approach was taken in the message development process, whereby (i) data from formative research was subjected to thematic analysis to identify areas of community concern about Ebola and the national response; (ii) draft messages to address these concerns were produced; (iii) the messages were field tested; (iv) the messages were refined; and (v) a final set of messages on 14 topics was disseminated to relevant national and international stakeholders. Each message included details of its rationale, audience, dissemination channels, messengers, and associated operational issues that need to be taken into account. While developing the 14 messages, a set of recommendations emerged that could be adopted in future public health emergencies. These included the importance of embedding systematic, iterative qualitative research fully into the message development process; communication of the subsequent messages through a two-way dialogue with communities, using trusted messengers, and not only through a one-way, top-down communication process; provision of good, parallel operational services; and engagement with senior policy makers and managers as well as people in key operational positions to ensure national ownership of the messages, and to maximize the chance of their being utilised. The methodological approach that we used to develop our messages along with our suggested recommendations constitute a set of tools that could be incorporated into international and national public health emergency preparedness and response plans. PMID:28787444
Berendonk, Christoph; Schirlo, Christian; Balestra, Gianmarco; Bonvin, Raphael; Feller, Sabine; Huber, Philippe; Jünger, Ernst; Monti, Matteo; Schnabel, Kai; Beyeler, Christine; Guttormsen, Sissel; Huwendiek, Sören
2015-01-01
Objective: Since 2011, the new national final examination in human medicine has been implemented in Switzerland, with a structured clinical-practical part in the OSCE format. From the perspective of the national Working Group, the current article describes the essential steps in the development, implementation and evaluation of the Federal Licensing Examination Clinical Skills (FLE CS) as well as the applied quality assurance measures. Finally, central insights gained from the last years are presented. Methods: Based on the principles of action research, the FLE CS is in a constant state of further development. On the foundation of systematically documented experiences from previous years, in the Working Group, unresolved questions are discussed and resulting solution approaches are substantiated (planning), implemented in the examination (implementation) and subsequently evaluated (reflection). The presented results are the product of this iterative procedure. Results: The FLE CS is created by experts from all faculties and subject areas in a multistage process. The examination is administered in German and French on a decentralised basis and consists of twelve interdisciplinary stations per candidate. As important quality assurance measures, the national Review Board (content validation) and the meetings of the standardised patient trainers (standardisation) have proven worthwhile. The statistical analyses show good measurement reliability and support the construct validity of the examination. Among the central insights of the past years, it has been established that the consistent implementation of the principles of action research contributes to the successful further development of the examination. Conclusion: The centrally coordinated, collaborative-iterative process, incorporating experts from all faculties, makes a fundamental contribution to the quality of the FLE CS. The processes and insights presented here can be useful for others planning a similar undertaking. PMID:26483853
Evolution Of USDOE Performance Assessments Over 20 Years
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitz, Roger R.; Suttora, Linda C.
2013-02-26
Performance assessments (PAs) have been used for many years for the analysis of post-closure hazards associated with a radioactive waste disposal facility and to provide a reasonable expectation of the ability of the site and facility design to meet objectives for the protection of members of the public and the environment. The use of PA to support decision-making for LLW disposal facilities has been mandated in United States Department of Energy (USDOE) directives governing radioactive waste management since 1988 (currently DOE Order 435.1, Radioactive Waste Management). Prior to that time, PAs were also used in a less formal role. Over the past 20+ years, the USDOE approach to conduct, review and apply PAs has evolved into an efficient, rigorous and mature process that includes specific requirements for continuous improvement and independent reviews. The PA process has evolved through refinement of a graded and iterative approach designed to help focus efforts on those aspects of the problem expected to have the greatest influence on the decision being made. Many of the evolutionary changes to the PA process are linked to the refinement of the PA maintenance concept that has proven to be an important element of USDOE PA requirements in the context of supporting decision-making for safe disposal of LLW. The PA maintenance concept represents the evolution of the graded and iterative philosophy and has helped to drive the evolution of PAs from a deterministic compliance calculation into a systematic approach that helps to focus on critical aspects of the disposal system in a manner designed to provide a more informed basis for decision-making throughout the life of a disposal facility (e.g., monitoring, research and testing, waste acceptance criteria, design improvements, data collection, model refinements). A significant evolution in PA modeling has been associated with improved use of uncertainty and sensitivity analysis techniques to support efficient implementation of the graded and iterative approach. Rather than attempt to exactly predict the migration of radionuclides in a disposal unit, the best PAs have evolved into tools that provide a range of results to guide decision-makers in planning the most efficient, cost effective, and safe disposal of radionuclides.
Koivisto, J-M; Haavisto, E; Niemi, H; Haho, P; Nylund, S; Multisilta, J
2018-01-01
Nurses sometimes lack the competence needed for recognising deterioration in patient conditions and this is often due to poor clinical reasoning. There is a need to develop new possibilities for learning this crucial competence area. In addition, educators need to be future oriented; they need to be able to design and adopt new pedagogical innovations. The purpose of the study is to describe the development process and to generate principles for the design of nursing simulation games. A design-based research methodology is applied in this study. Iterative cycles of analysis, design, development, testing and refinement were conducted via collaboration among researchers, educators, students, and game designers. The study facilitated the generation of reusable design principles for simulation games to guide future designers when designing and developing simulation games for learning clinical reasoning. This study makes a major contribution to research on simulation game development in the field of nursing education. The results of this study provide important insights into the significance of involving nurse educators in the design and development process of educational simulation games for the purpose of nursing education. Copyright © 2017 Elsevier Ltd. All rights reserved.
The electrostatics of parachutes
NASA Astrophysics Data System (ADS)
Yu, Li; Ming, Xiao
2007-12-01
In parachute research, modeling the canopy inflation process is one of the most complicated tasks. Because the canopy often experiences its largest deformations and loadings during a very short time, both theoretical analysis and experimental measurement are very difficult. In this paper, aerodynamic equations and structural dynamics equations were developed to describe the parachute opening process, and an iterative coupling solving strategy incorporating the above equations was proposed for a small-scale, flexible, flat-circular parachute. Analyses were then carried out for the canopy geometry, the time-dependent pressure difference between the inside and outside of the canopy, the transient vortex around the canopy, and the flow field in the radial plane as a sequence in the opening process. The mechanism of the canopy shape development was explained from the perspective of transient flow fields during inflation. Experiments on the parachute opening process were conducted in a wind tunnel, in which the instantaneous shape of the canopy was measured by a high-speed camera and the opening loading was measured by a dynamometer balance. The theoretical predictions were found to be in good agreement with the experimental results, validating the proposed approach. This numerical method can reduce the strong dependence of parachute research on wind tunnel tests, and is of significance to the understanding of the mechanics of the parachute inflation process.
NASA Astrophysics Data System (ADS)
Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.
1999-04-01
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun., 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed.
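A toy analogue of the two acceleration ideas: a contractive fixed-point iteration x_{k+1} = (1 - alpha) x_k + alpha F(x_k) whose blending parameter alpha cycles through a periodic schedule, with the iteration count compared against a constant schedule. F is an arbitrary stand-in, not the PIES operator, and the schedule values are illustrative.

```python
import numpy as np

F = lambda x: np.cos(x)                 # fixed point near 0.739

def solve(alphas, x0=3.0, tol=1e-12, max_iter=10_000):
    """Blended fixed-point iteration with a periodic alpha schedule."""
    x, k = x0, 0
    while abs(F(x) - x) > tol and k < max_iter:
        alpha = alphas[k % len(alphas)]  # periodic blending schedule
        x = (1 - alpha) * x + alpha * F(x)
        k += 1
    return k

# The second PIES trick, a better initial guess (e.g. seeded from VMEC),
# corresponds here to starting x0 closer to the fixed point.
print("constant alpha=0.5:", solve([0.5]), "iterations")
print("cycled alphas:     ", solve([0.3, 0.9, 0.6]), "iterations")
```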
Post-Stall Aerodynamic Modeling and Gain-Scheduled Control Design
NASA Technical Reports Server (NTRS)
Wu, Fen; Gopalarathnam, Ashok; Kim, Sungwan
2005-01-01
A multidisciplinary research effort that combines aerodynamic modeling and gain-scheduled control design for aircraft flight at post-stall conditions is described. The aerodynamic modeling uses a decambering approach for rapid prediction of post-stall aerodynamic characteristics of multiple-wing configurations using known section data. The approach is successful in bringing to light multiple solutions at post-stall angles of attack during the iteration process. The predictions agree fairly well with experimental results from wind tunnel tests. The control research was focused on actuator saturation and flight transition between low and high angles of attack regions for near- and post-stall aircraft using advanced LPV control techniques. The new control approaches maintain adequate control capability to handle high angle of attack aircraft control with stability and performance guarantees.
Tyler, Carl; Werner, James J.
2016-01-01
There is often a rich but untold history of events that occurred and relationships that formed prior to the launching of a practice-based research network (PBRN). This is particularly the case in PBRNs that are community-based and comprised of partnerships outside of the health care system. In this article we summarize an organizational "prenatal history" prior to the birth of a PBRN devoted to persons with developmental disabilities. Using a case study approach, this article describes the historical events that preceded and fostered the evolution of this PBRN and contrasts how the processes leading to the creation of this multi-stakeholder community-based PBRN differ from those of typical academic-clinical practice PBRNs. We propose potential advantages and complexities inherent to this newest iteration of PBRNs. PMID:25381081
Haji, Faizal A; Da Silva, Celina; Daigle, Delton T; Dubrowski, Adam
2014-08-01
Presently, health care simulation research is largely conducted on a study-by-study basis. Although such "project-based" research generates a plethora of evidence, it can be chaotic and contradictory. A move toward sustained, thematic, theory-based programs of research is necessary to advance knowledge in the field. Recognizing that simulation is a complex intervention, we present a framework for developing research programs in simulation-based education adapted from the Medical Research Council (MRC) guidance. This framework calls for an iterative approach to developing, refining, evaluating, and implementing simulation interventions. The adapted framework guidance emphasizes: (1) identification of theory and existing evidence; (2) modeling and piloting interventions to clarify active ingredients and identify mechanisms linking the context, intervention, and outcomes; and (3) evaluation of intervention processes and outcomes in both the laboratory and real-world setting. The proposed framework will aid simulation researchers in developing more robust interventions that optimize simulation-based education and advance our understanding of simulation pedagogy.
Acceleration of linear stationary iterative processes in multiprocessor computers. II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romm, Ya.E.
1982-05-01
For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982); English translation: Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n×n matrix and b is a real vector (the common Euclidean norm is used). Existence and uniqueness of the solution are assumed: det(E - A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = f x^(k), k = 0, 1, 2, ..., where the operator f maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant, and various values of the latter are investigated; it is assumed in addition that the processors perform elementary binary arithmetic operations of addition and multiplication, and the time estimates include only the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k + 1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k + 1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps in a time comparable to the operation time of logical elements. 6 references.
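One standard way to realize such a reduction in sequential steps, sketched under the assumption of a contractive iteration matrix: the k-step map of x^(k+1) = A x^(k) + b is itself affine, x^(k) = M x^(0) + c, and the pair (M, c) can be composed with itself, giving (M @ M, M @ c + c), so 2^m iterations cost only m sequential (internally parallelizable) matrix products.

```python
import numpy as np

def iterate_doubling(A, b, x0, m):
    """Apply 2^m steps of x -> A x + b using m squaring steps."""
    M, c = A.copy(), b.copy()           # one step: x -> A x + b
    for _ in range(m):                  # after the loop: 2^m steps composed
        M, c = M @ M, M @ c + c
    return M @ x0 + c

rng = np.random.default_rng(0)
A = 0.4 * rng.random((4, 4)) / 4        # contractive, so the IP converges
b = rng.random(4)
x0 = np.zeros(4)

# Compare against 16 plain sequential steps.
x = x0.copy()
for _ in range(16):
    x = A @ x + b
print(np.allclose(x, iterate_doubling(A, b, x0, 4)))   # 2^4 = 16 steps
```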
Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide
NASA Astrophysics Data System (ADS)
Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.
Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel waveoptics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently postprocesses the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
Siek, Katie A; Khan, Danish U; Ross, Stephen E; Haverhals, Leah M; Meyers, Jane; Cali, Steven R
2011-10-01
Older adults with multiple chronic conditions often go through care transitions where they move between care facilities or providers during their treatment. These transitions are often uncoordinated and can imperil patients by omitted, duplicative, or contradictory care plans. Older adults sometimes feel overwhelmed with the new responsibility of coordinating the care plan with providers and changing their medication regimes. In response, we developed a Lesser General Public License (LGPL) open source, web-based Personal Health Application (PHA) using an iterative participatory design process that provided older adults and their caregivers the ability to manage their personal health information. In this paper, we document the PHA design process from low-fidelity prototypes to high-fidelity prototypes over the course of six user studies. Our findings establish the imperative need for interdisciplinary research and collaboration among all stakeholders to create effective PHAs. We conclude with design guidelines that encourage researchers to gradually increase functionality as users become more proficient.
Brown, Ottilia; Goliath, Veonna; van Rooyen, Dalena R M; Aldous, Colleen; Marais, Leonard Charles
2017-01-01
Communicating the diagnosis of cancer in cross-cultural clinical settings is a complex task. This qualitative research article describes the content and process of informing Zulu patients in South Africa of a diagnosis of cancer, using osteosarcoma as the index diagnosis. We used a descriptive research design with census sampling and focus group interviews. We used an iterative thematic data analysis process and Guba's model of trustworthiness to ensure scientific rigor. Our results reinforced the use of well-accepted strategies for communicating the diagnosis of cancer. In addition, new strategies emerged which may be useful in other cross-cultural settings. These strategies included using the stages of cancer to explain the disease and its progression, and instilling hope using a multidisciplinary team care model. We identified several patient, professional, and organizational factors that complicate cross-cultural communication. We conclude by recommending the development of protocols for communication in these cross-cultural clinical settings.
Parameter Identification Of Multilayer Thermal Insulation By Inverse Problems
NASA Astrophysics Data System (ADS)
Nenarokomov, Aleksey V.; Alifanov, Oleg M.; Gonzalez, Vivaldo M.
2012-07-01
The purpose of this paper is to introduce an iterative regularization method for the research of radiative and thermal properties of materials, with further applications in the design of Thermal Control Systems (TCS) of spacecraft. The radiative and thermal properties (heat capacity, emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), a screen-vacuum thermal insulation that forms part of the TCS for prospective spacecraft, are estimated. The properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of an Inverse Heat Transfer Problem (IHTP). Physical and mathematical models of the heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given. A mathematical formulation of the IHTP, based on the sensitivity function approach, is presented as well. Practical testing was performed on a specimen of a real MLI. This paper builds on recent research that developed the approach suggested in [1].
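A linear toy of iterative regularization, with the stopping index acting as the regularization parameter via the discrepancy principle. The paper's IHTP is nonlinear and sensitivity-function based, so this only conveys the principle; the matrices, noise level, and names below are all made up.

```python
import numpy as np

# Landweber iterations p <- p + w K^T (g - K p) for g = K p + noise,
# stopped early when the residual drops to the noise level.
rng = np.random.default_rng(3)
K = rng.random((60, 3))                 # mock sensitivity matrix
p_true = np.array([1.0, 0.5, 2.0])      # mock properties (e.g. C, eps, k)
noise = 0.01
g = K @ p_true + noise * rng.standard_normal(60)

p = np.zeros(3)
w = 1.0 / np.linalg.norm(K, 2) ** 2     # step size small enough for stability
for it in range(10_000):
    r = g - K @ p
    if np.linalg.norm(r) <= noise * np.sqrt(60):   # discrepancy principle
        break
    p += w * K.T @ r
print(it, p)                            # stops near p_true without over-fitting
```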
2015-12-01
AFRL-RY-WP-TR-2015-0144: Cognitive Radio Low-Energy Signal Analysis Sensor Integrated Circuits (CLASIC), A Broadband Mixed-Signal Iterative Down... Air Force Research Laboratory, Sensors Directorate, Wright-Patterson Air Force Base.
A Participatory Research Approach to develop an Arabic Symbol Dictionary.
Draffan, E A; Kadous, Amatullah; Idris, Amal; Banes, David; Zeinoun, Nadine; Wald, Mike; Halabi, Nawar
2015-01-01
The purpose of the Arabic Symbol Dictionary research discussed in this paper is to provide a resource of culturally, environmentally and linguistically suitable symbols to aid communication and literacy skills. A participatory approach using online social media and a bespoke symbol management system has been established to enhance the process of matching a user-based Arabic and English core vocabulary with appropriate imagery. Participants, including AAC users, their families, carers, teachers and therapists, have been involved in the research from the outset, collating the vocabularies, debating cultural nuances for symbols and critiquing the design of technologies for selection procedures. The positive reaction of those who have voted on the symbols, along with requests for early use, has justified the iterative nature of the methodologies used for this part of the project. However, constant re-evaluation will be necessary, and in-depth analysis of all the data received has yet to be completed.
Nelson, Peter M; Demers, Joseph A; Christ, Theodore J
2014-06-01
This study details the initial development of the Responsive Environmental Assessment for Classroom Teachers (REACT). REACT was developed as a questionnaire to evaluate student perceptions of the classroom teaching environment. Researchers engaged in an iterative process to develop, field test, and analyze student responses on 100 rating-scale items. Participants included 1,465 middle school students across 48 classrooms in the Midwest. Item analysis, including exploratory and confirmatory factor analysis, was used to refine a 27-item scale with a second-order factor structure. Results support the interpretation of a single general dimension of the Classroom Teaching Environment with 6 subscale dimensions: Positive Reinforcement, Instructional Presentation, Goal Setting, Differentiated Instruction, Formative Feedback, and Instructional Enjoyment. Applications of REACT in research and practice are discussed along with implications for future research and the development of classroom environment measures. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Akit’s house: identification of vernacular coastal architecture in Meranti Island
NASA Astrophysics Data System (ADS)
Faisal, G.; Amanati, R.
2018-03-01
Akit people can be found on the Meranti Islands near the east coast of Sumatra. Their houses are mainly of wooden construction, built as stilt houses. The roofs are made of leaves, and tree bark is used for the walls. Nowadays, some changes have occurred in this vernacular house. The changes are a response not only to the environment but also to their way of life. In turn, this change becomes an interesting phenomenon, particularly in comparison with houses on other islands. This research was conducted using a qualitative approach to identify how the houses have changed. Field data were gathered by a range of methods such as observation, story-telling, and documentation. The data are analyzed and interpreted within an iterative process to deepen understanding of the changes. This research offers an architectural insight into how these vernacular houses are changing.
Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks
Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng
2017-01-01
In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636
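To make the paper's central trade-off concrete, the toy sketch below searches over the MUD iteration count for a single receive node: more iterations relax the SNR needed for information decoding but require more harvested energy to run. The geometric SNR-requirement model, the per-iteration decoding energy, and all constants are illustrative assumptions; this is not the authors' sub-optimal algorithm, which also allocates power across OFDM subcarriers and multiple nodes.

```python
import numpy as np

def min_transmit_power(g, sigma2=1e-9, eta=0.6, e_iter=1e-6,
                       snr0=100.0, gamma=0.7, k_max=10):
    """Toy joint choice of transmit power, PS ratio and MUD iterations.

    Illustrative assumptions (not from the paper):
      - required ID SNR falls geometrically with iterations: snr_req = snr0 * gamma**K
      - harvested energy must cover K decoding iterations: E_min = K * e_iter
      - both constraints are tight at the optimum, so P*g = snr_req*sigma2 + E_min/eta
    g: channel power gain, sigma2: noise power, eta: energy-harvesting efficiency.
    """
    best = None
    for k in range(1, k_max + 1):
        snr_req = snr0 * gamma ** k          # MUD tolerates lower SNR with more iterations...
        e_min = k * e_iter                   # ...but needs more harvested energy to run them
        p = (snr_req * sigma2 + e_min / eta) / g
        rho = snr_req * sigma2 / (p * g)     # fraction of received power routed to ID
        if best is None or p < best[0]:
            best = (p, rho, k)
    return best

p, rho, k = min_transmit_power(g=1e-3)
print(f"P = {p:.3e} W, PS ratio to ID = {rho:.3f}, MUD iterations = {k}")
```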
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constraining PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces, improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
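Schematically, and in our notation rather than the paper's: with a Lagrangian \(\mathcal{L}(u,\lambda,\alpha) = J(u,\alpha) + \langle \lambda,\, N(u,\alpha) \rangle\) for state \(u\), costate \(\lambda\) and design \(\alpha\), optimality requires
\[
\mathcal{L}_\lambda = 0 \ \ (\text{state}), \qquad \mathcal{L}_u = 0 \ \ (\text{costate}), \qquad \mathcal{L}_\alpha = 0 \ \ (\text{design}).
\]
Classical methods satisfy the first two conditions at every step and march the design, \(\alpha^{k+1} = \alpha^{k} - \Delta\tau\,\mathcal{L}_\alpha\); the pseudo-time method instead keeps \(\mathcal{L}_\alpha = 0\) enforced and evolves the state and costate residuals,
\[
\frac{\partial u}{\partial \tau} = -\mathcal{L}_\lambda, \qquad \frac{\partial \lambda}{\partial \tau} = -\mathcal{L}_u,
\]
with signs and any preconditioning chosen so that the energy estimates guarantee decay.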
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing
2004-12-01
The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution and F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate, and these assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data: we showed that a linear, non-iterative solution to the problem of attenuating free-surface multiples, one as accurate as the iterative non-linear solutions, can be constructed for OBS data. Here we present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.
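For orientation, the series character of the general non-linear solutions can be written in textbook surface-related multiple elimination (SRME) style (our notation and sign convention, not necessarily the authors'): with recorded data \(P\), free-surface-multiple-free data \(P_0\), and a surface operator \(A\),
\[
P = P_0 + P_0\,A\,P \quad\Longrightarrow\quad P_0 = P - P\,A\,P + P\,A\,P\,A\,P - \cdots,
\]
so the usual choice is between truncating this series or iterating on it; the linear solutions discussed in this entry avoid both.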
NASA Astrophysics Data System (ADS)
Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram
2018-02-01
Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single-channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with blink activity in a single-channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents adequate performance in detecting and suppressing blink-artifacts from a single-channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, in both clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the MATLAB script implementing the algorithm, provided as supporting material.
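The essential loop of the method is easy to picture. The sketch below is an illustrative re-implementation of the idea in the abstract (not the authors' MATLAB code): iteratively detect blink-like peaks, average the surrounding epochs into a subject-specific template, then least-squares scale and subtract that template at each detection. The threshold rule and window length are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def itms_like_suppress(eeg, fs, n_iter=3, half_win=0.3):
    """ITMS-style blink suppression for single-channel EEG (sketch)."""
    x = eeg.copy()
    w = int(half_win * fs)                       # samples on each side of a blink peak
    for _ in range(n_iter):
        thr = 4 * np.median(np.abs(x)) / 0.6745  # robust amplitude threshold (assumed)
        peaks, _ = find_peaks(np.abs(x), height=thr, distance=2 * w)
        peaks = peaks[(peaks > w) & (peaks < len(x) - w)]
        if len(peaks) == 0:
            break
        epochs = np.stack([x[p - w:p + w] for p in peaks])
        template = epochs.mean(axis=0)           # subject-specific blink waveform
        for p in peaks:                          # scale and subtract per event
            seg = x[p - w:p + w]
            a = (seg @ template) / (template @ template)
            x[p - w:p + w] = seg - a * template
    return x
```

Note that segments far from any detected blink are never modified, which is the "low-invasive" property the abstract highlights.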
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of iterative methods. One of the main purposes for developing the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats accommodate matrices with a wide range of structures, from highly structured ones, such as those with all nonzeros along a relatively small number of diagonals, to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program without being required to copy the matrix into one of the package formats, which is particularly advantageous when memory space is limited. Some of the basic preconditioners available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation, as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
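NSPCG itself is a Fortran package, but the preconditioner/accelerator pairing it modularizes has a close modern analogue in SciPy. The sketch below pairs an incomplete LU preconditioner with a GMRES accelerator on a small nonsymmetric system; it illustrates the concept only, using SciPy's API rather than NSPCG's, and the test matrix is an arbitrary stand-in.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small nonsymmetric sparse system (a convection-diffusion-like tridiagonal stencil).
n = 100
A = sp.diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner, in the spirit of
# pairing NSPCG's ILU preconditioner with its GMRES accelerator.
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"info = {info}",
      "residual =", np.linalg.norm(A @ x - b))
```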
System Matrix Analysis for Computed Tomography Imaging
Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo
2015-01-01
In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. At the same time, the high radiation dose imposed on patients is also undesirable. These issues demand that high-quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate the elements of the matrix, and we present results based on real projection data. PMID:26575482
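A minimal 2D version of the Siddon idea shows how one row of the system matrix is built: collect the parametric values at which a ray crosses the grid lines, and each consecutive pair of values yields one pixel and its intersection length. This simplified sketch (unit pixels, ray not parallel to an axis) is ours, not the authors' implementation.

```python
import numpy as np

def siddon_row(p0, p1, nx, ny):
    """One system-matrix row via Siddon-style ray tracing (2D sketch).

    Pixels are unit squares covering [0, nx] x [0, ny]; p0 and p1 are the
    source and detector positions. Returns {pixel_index: intersection_length}.
    Assumes the ray is not parallel to either axis.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # Parametric values (alphas) where the ray crosses each grid line.
    ax = (np.arange(nx + 1) - p0[0]) / d[0]
    ay = (np.arange(ny + 1) - p0[1]) / d[1]
    alphas = np.unique(np.concatenate([[0.0, 1.0], ax, ay]))
    alphas = alphas[(alphas >= 0.0) & (alphas <= 1.0)]
    length = np.linalg.norm(d)
    row = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d               # segment midpoint locates the pixel
        i, j = int(np.floor(mid[0])), int(np.floor(mid[1]))
        if 0 <= i < nx and 0 <= j < ny:
            idx = j * nx + i
            row[idx] = row.get(idx, 0.0) + (a1 - a0) * length
    return row

print(siddon_row((-1.0, 0.5), (5.0, 3.5), nx=4, ny=4))
```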
Stotz, Sarah; Lee, Jung Sun
2018-01-01
The objective of this report was to describe the development process of an innovative smartphone-based electronic learning (eLearning) nutrition education program targeted to Supplemental Nutrition Assistance Program-Education-eligible individuals, entitled Food eTalk. Lessons learned from the Food eTalk development process suggest that it is critical to include all key team members from the program's inception using effective inter-team communication systems, understand the unique resources needed, budget ample time for development, and employ an iterative development and evaluation model. These lessons have implications for researchers and funding agencies developing innovative evidence-based eLearning nutrition education programs for an increasingly technology-savvy, low-income audience.
A Biopsychosocial Model of the Development of Chronic Conduct Problems in Adolescence
Dodge, Kenneth A.; Pettit, Gregory S.
2009-01-01
A biopsychosocial model of the development of adolescent chronic conduct problems is presented and supported through a review of empirical findings. This model posits that biological dispositions and sociocultural contexts place certain children at risk in early life but that life experiences with parents, peers, and social institutions increment and mediate this risk. A transactional developmental model is best equipped to describe the emergence of chronic antisocial behavior across time. Reciprocal influences among dispositions, contexts, and life experiences lead to recursive iterations across time that exacerbate or diminish antisocial development. Cognitive and emotional processes within the child, including the acquisition of knowledge and social-information-processing patterns, mediate the relation between life experiences and conduct problem outcomes. Implications for prevention research and public policy are noted. PMID:12661890