NASA Astrophysics Data System (ADS)
Wang, Hao; Wang, Qunwei; He, Ming
2018-05-01
In order to investigate and improve the level of water-content detection technology in liquid chemical reagents among domestic laboratories, proficiency testing provider PT0031 (CNAS) organized a proficiency testing program for water content in toluene; 48 laboratories from 18 provinces and municipalities took part. This paper describes the implementation of the proficiency test, including sample preparation, homogeneity and stability testing, and the statistical treatment of results using an iterative robust statistical technique. It also summarizes and analyses the different test standards widely used in the participating laboratories, and puts forward technical suggestions for improving the quality of water-content testing. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the total participating laboratories.
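Iterative robust statistics for proficiency testing typically follow Algorithm A of ISO 13528: a winsorized robust mean and standard deviation, from which laboratory z-scores are derived. A minimal Python sketch, assuming Algorithm A was the procedure used (the abstract does not name the exact variant):

```python
import statistics

def robust_stats(values, tol=1e-6, max_iter=100):
    """Iterative robust mean/std in the spirit of ISO 13528 Algorithm A
    (sketch; not necessarily the exact procedure of the PT0031 scheme)."""
    x_star = statistics.median(values)
    s_star = 1.483 * statistics.median(abs(v - x_star) for v in values)
    for _ in range(max_iter):
        delta = 1.5 * s_star
        # Winsorize: clip values to [x* - 1.5 s*, x* + 1.5 s*].
        clipped = [min(max(v, x_star - delta), x_star + delta) for v in values]
        new_x = sum(clipped) / len(clipped)
        new_s = 1.134 * (sum((v - new_x) ** 2 for v in clipped)
                         / (len(clipped) - 1)) ** 0.5
        converged = abs(new_x - x_star) < tol and abs(new_s - s_star) < tol
        x_star, s_star = new_x, new_s
        if converged:
            break
    return x_star, s_star

def z_scores(values):
    """Laboratory z-scores against the robust assigned value."""
    x_star, s_star = robust_stats(values)
    return [(v - x_star) / s_star for v in values]
```

With |z| <= 2 conventionally judged satisfactory, an outlying laboratory result stands out clearly against the robust consensus.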
de Vries, Peter C.; Luce, Timothy C.; Bae, Young-soon; ...
2017-11-22
To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramps down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.
NASA Astrophysics Data System (ADS)
de Vries, P. C.; Luce, T. C.; Bae, Y. S.; Gerhardt, S.; Gong, X.; Gribov, Y.; Humphreys, D.; Kavin, A.; Khayrutdinov, R. R.; Kessel, C.; Kim, S. H.; Loarte, A.; Lukash, V. E.; de la Luna, E.; Nunes, I.; Poli, F.; Qian, J.; Reinke, M.; Sauter, O.; Sips, A. C. C.; Snipes, J. A.; Stober, J.; Treutterer, W.; Teplukhina, A. A.; Voitsekhovitch, I.; Woo, M. H.; Wolfe, S.; Zabeo, L.; the Alcator C-Mod Team; the ASDEX Upgrade Team; the DIII-D Team; the EAST Team; JET contributors; the KSTAR Team; the NSTX-U Team; the TCV Team; and ITPA IOS members and experts
2018-02-01
To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramps down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.
Iterative categorization (IC): a systematic technique for analysing qualitative data
2016-01-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155
Anderson, S J; Dewhirst, T; Ling, P M
2006-06-01
In this article we present communication theory as a conceptual framework for conducting documents research on tobacco advertising strategies, and we discuss two methods for analysing advertisements: semiotics and content analysis. We provide concrete examples of how we have used tobacco industry documents archives and tobacco advertisement collections iteratively in our research to yield a synergistic analysis of these two complementary data sources. Tobacco promotion researchers should consider adopting these theoretical and methodological approaches.
Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.
Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris
2010-07-15
The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
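The correct-and-realign loop at the heart of iCORN can be illustrated with a toy example: align reads, build a per-base pileup, substitute the consensus base wherever it disagrees with the reference, and repeat until the sequence stops changing. The exact-scan "aligner" below is a hypothetical stand-in (real iCORN drives an external short-read mapper, evaluates corrections by remapping, and also handles indels):

```python
from collections import Counter

def best_offset(ref, read):
    # Toy aligner: place the read at the offset with fewest mismatches.
    best, best_mm = 0, len(read) + 1
    for off in range(len(ref) - len(read) + 1):
        mm = sum(1 for a, b in zip(ref[off:off + len(read)], read) if a != b)
        if mm < best_mm:
            best, best_mm = off, mm
    return best

def correct_reference(ref, reads, max_iter=10):
    """iCORN-style iterative correction (toy sketch): align, vote per
    reference position, take the consensus, and iterate to convergence."""
    for _ in range(max_iter):
        votes = [Counter() for _ in ref]
        for read in reads:
            off = best_offset(ref, read)
            for i, base in enumerate(read):
                votes[off + i][base] += 1
        new_ref = "".join(
            v.most_common(1)[0][0] if v else old
            for old, v in zip(ref, votes))
        if new_ref == ref:
            break
        ref = new_ref
    return ref
```

Iterating matters because each corrected base can improve the placement of reads in the next round, exposing further errors.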
Iterative categorization (IC): a systematic technique for analysing qualitative data.
Neale, Joanne
2016-06-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
Anderson, S J; Dewhirst, T; Ling, P M
2006-01-01
In this article we present communication theory as a conceptual framework for conducting documents research on tobacco advertising strategies, and we discuss two methods for analysing advertisements: semiotics and content analysis. We provide concrete examples of how we have used tobacco industry documents archives and tobacco advertisement collections iteratively in our research to yield a synergistic analysis of these two complementary data sources. Tobacco promotion researchers should consider adopting these theoretical and methodological approaches. PMID:16728758
Embodied Design: Constructing Means for Constructing Meaning
ERIC Educational Resources Information Center
Abrahamson, Dor
2009-01-01
Design-based research studies are conducted as iterative implementation-analysis-modification cycles, in which emerging theoretical models and pedagogically plausible activities are reciprocally tuned toward each other as a means of investigating conjectures pertaining to mechanisms underlying content teaching and learning. Yet this approach, even…
Comparisons of Observed Process Quality in German and American Infant/Toddler Programs
ERIC Educational Resources Information Center
Tietze, Wolfgang; Cryer, Debby
2004-01-01
Observed process quality in infant/toddler classrooms was compared in Germany (n = 75) and the USA (n = 219). Process quality was assessed with the Infant/Toddler Environment Rating Scale (ITERS) and parent attitudes about ITERS content with the ITERS Parent Questionnaire (ITERSPQ). The ITERS had comparable reliabilities in the two countries and…
Chatterji, Madhabi
2002-01-01
This study examines validity of data generated by the School Readiness for Reforms: Leader Questionnaire (SRR-LQ) using an iterative procedure that combines classical and Rasch rating scale analysis. Following content-validation and pilot-testing, principal axis factor extraction and promax rotation of factors yielded a five factor structure consistent with the content-validated subscales of the original instrument. Factors were identified based on inspection of pattern and structure coefficients. The rotated factor pattern, inter-factor correlations, convergent validity coefficients, and Cronbach's alpha reliability estimates supported the hypothesized construct properties. To further examine unidimensionality and efficacy of the rating scale structures, item-level data from each factor-defined subscale were subjected to analysis with the Rasch rating scale model. Data-to-model fit statistics and separation reliability for items and persons met acceptable criteria. Rating scale results suggested consistency of expected and observed step difficulties in rating categories, and correspondence of step calibrations with increases in the underlying variables. The combined approach yielded more comprehensive diagnostic information on the quality of the five SRR-LQ subscales; further research is continuing.
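One of the classical statistics reported for each subscale, Cronbach's alpha, can be computed directly from a persons-by-items score matrix. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha internal-consistency estimate.
    `items` is an (n_persons, n_items) array of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)
```

Perfectly correlated items give alpha = 1; independent items push it toward 0, which is why alpha complements the factor-analytic and Rasch evidence for each subscale.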
NASA Astrophysics Data System (ADS)
Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato
2009-06-01
As the primary candidate ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water-cooled solid breeder (WCSB) TBM is being developed. This paper presents recent achievements towards the pre-installation milestones of the ITER TBMs, which consist of design integration in ITER, module qualification and safety assessment. With respect to design integration, targeting the detailed design final report in 2012, the structural designs of the WCSB TBM and its interfacing components (common frame and backside shielding) that are placed in an ITER test port, together with the layout of the cooling system, are presented. As for module qualification, a real-scale first-wall mock-up fabricated from the reduced-activation martensitic/ferritic steel F82H by hot isostatic pressing, and flow and irradiation tests of the mock-up, are presented. As for the safety milestones, the contents of the 2008 preliminary safety report, consisting of source-term identification, failure mode and effect analysis (FMEA), identification of postulated initiating events (PIEs), and safety analyses, are presented.
ERIC Educational Resources Information Center
Shea, Peter; Hayes, Suzanne; Smith, Sedef Uzuner; Vickers, Jason; Bidjerano, Temi; Gozza-Cohen, Mary; Jian, Shou-Bang; Pickett, Alexandra M.; Wilde, Jane; Tseng, Chi-Hua
2013-01-01
This paper presents an extension of an ongoing study of online learning framed within the community of inquiry (CoI) model (Garrison, Anderson, & Archer, 2001) in which we further examine a new construct labeled as "learning presence." We use learning presence to refer to the iterative processes of forethought and planning,…
Advanced Agent Methods in Adversarial Environment
2005-11-30
[Front matter only: the record contains a garbled table of contents, listing sections including "Introduction – Technical Statement of Work", "Deriving Trust Observations from Coalition Cooperation Results", "Iterative Learning of …", and an appendix entry "Class Finder".]
Health in All Social Work Programs: Findings From a US National Analysis
Wachman, Madeline K.; Marshall, Jamie W.; Backman, Allison R.; Harrington, Calla B.; Schultz, Neena S.; Ouimet, Kaitlyn J.
2017-01-01
Objectives. To establish a baseline of health content in 4 domains of US social work education—baccalaureate, master's, doctoral, and continuing education programs—and to introduce the Social Work Health Impact Model, illustrating social work's multifaceted health services, from clinical to wide-lens population health approaches. Methods. We analyzed US social work programs' Web site content to determine the amount and types of health content in mission statements, courses, and specializations. Coding criteria determined whether content (1) was health or health-related (HHR) and (2) had a wide-lens health (WLH) emphasis. A second iteration categorized HHR and WLH courses into health topics. Results. We reviewed 4831 courses. We found broad HHR content in baccalaureate, master's, and continuing education curricula; doctoral programs had limited health content. We identified minimal WLH content across all domains. Topical analysis indicated that more than 50% of courses concentrated on 3 areas: mental and behavioral health, abuse and violence, and substance use and addictions. Conclusions. As a core health profession, social work must strengthen its health and wide-lens content to better prepare graduates for integrated practice and collaboration in the changing health environment. PMID:29236538
EC assisted start-up experiments reproduction in FTU and AUG for simulations of the ITER case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granucci, G.; Ricci, D.; Farina, D.
The breakdown and plasma start-up in ITER are well-known issues studied in the last few years in many tokamaks with the aid of calculations based on simplified modeling. The thickness of the ITER metallic wall and the voltage limits of the Central Solenoid Power Supply strongly limit the maximum achievable toroidal electric field (0.3 V/m), well below the level used in the present generation of tokamaks. In order to have a safe and robust breakdown, the use of Electron Cyclotron (EC) power to assist plasma formation and current ramp-up has been foreseen. This has focused attention on the plasma formation phase in the presence of EC waves, especially in order to predict the power required for a robust breakdown in ITER. Few detailed theoretical studies have been performed to date, owing to the complexity of the problem. A simplified approach, extended from that proposed in ref. [1], has been developed, including an impurity multi-species distribution and EC wave propagation and absorption based on the GRAY code. This integrated model (BK0D) has been benchmarked against ohmic and EC-assisted experiments on FTU and AUG, identifying the key aspects for a good reproduction of the data. On this basis, the simulation has been devoted to understanding the best configuration for the ITER case. The dependence on the impurity distribution content and the limits on neutral gas pressure have been considered. As a result of the analysis, a reasonable amount of power (1-2 MW) seems to be enough to extend in a significant way the breakdown and current start-up capability of ITER. The work reports the FTU data reproduction and the ITER case simulations.
Haraldseid, Cecilie; Friberg, Febe; Aase, Karina
2016-01-01
Policy initiatives and an increasing amount of the literature within higher education both call for students to become more involved in creating their own learning. However, there is a lack of studies in undergraduate nursing education that actively involve students in developing such learning material, with descriptions of the students' roles in these interactive processes. Explorative qualitative study, using data from focus group interviews, field notes and student notes. The data were subjected to qualitative content analysis. Active student involvement through an iterative process identified five different learning needs that are especially important to the students: clarification of learning expectations, help to recognize the bigger picture, stimulation of interaction, creation of structure, and receiving context-specific content. The iterative involvement of students during the development of new technological learning material will enhance the identification of important learning needs for students. The use of student and teacher knowledge through an adapted co-design process is the most optimal level of that involvement.
Department of Defense Costing References Web. Phase 1. Establishing the Foundation.
1997-03-01
a functional economic analysis under one set of constraints and having to repeat the entire process for the MAISRC. Recommendations for automated … MAISRC's acquisition oversight process. The cost and cycle time for each iteration can be on the order of $300,000 and 6 months, respectively. … Institute resources were expected to become available at the conclusion of another BPR project. The contents list for the first Business Process
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
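The alternating structure of a tare-load iteration (re-fit the calibration with the current tare estimate added to the hand loads, then re-estimate the tare from the wind-off gage outputs) can be sketched as below. This is a simplified, purely linear stand-in, not Galway's actual algorithm, which handles non-linear terms and intermediate estimates:

```python
import numpy as np

def fit_matrix(loads, outputs):
    # Linear calibration fit: outputs ~ loads @ C (least squares).
    C, *_ = np.linalg.lstsq(loads, outputs, rcond=None)
    return C

def estimate_tare(hand_loads, outputs, zero_outputs, tol=1e-12, max_iter=100):
    """Sketch of a tare-load fixed-point iteration: the true applied load
    is hand load plus an unknown tare, so alternately (i) refit the
    calibration matrix with the current tare estimate and (ii) invert the
    model at the wind-off ('zero') outputs to update the tare."""
    tare = np.zeros(hand_loads.shape[1])
    for _ in range(max_iter):
        C = fit_matrix(hand_loads + tare, outputs)
        # Solve C.T @ tare = zero_outputs for the new tare estimate.
        new_tare, *_ = np.linalg.lstsq(C.T, zero_outputs, rcond=None)
        if np.max(np.abs(new_tare - tare)) < tol:
            return new_tare
        tare = new_tare
    return tare
```

In the exactly linear, noise-free case the true tare is a fixed point of this map, and for tares small relative to the hand loads the iteration contracts toward it.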
NASA Astrophysics Data System (ADS)
Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.
2017-12-01
The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst-case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.
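For an ideal coil, the classical flux-linkage expression validated in the paper reduces to V = -mu0 * n * A * dI/dt, so the enclosed current is recovered by integrating the coil voltage. A minimal sketch (real systems such as the CER must additionally manage integrator drift, calibration and joint effects, which the paper quantifies):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def rogowski_current(voltage, t, turn_density, turn_area):
    """Recover the enclosed current from a Rogowski coil voltage trace
    using the classical model V = -mu0 * n * A * dI/dt, i.e.
    I(t) = -(1 / (mu0 n A)) * integral of V dt (trapezoidal rule).
    turn_density n in turns/m, turn_area A in m^2."""
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (voltage[1:] + voltage[:-1]) * np.diff(t))))
    return -integral / (MU0 * turn_density * turn_area)
```

Because the measurement is an integral, any voltage offset accumulates linearly in time, which is why integration drift appears in the paper's error budget.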
Syntax-directed content analysis of videotext: application to a map detection recognition system
NASA Astrophysics Data System (ADS)
Aradhye, Hrishikesh; Herson, James A.; Myers, Gregory
2003-01-01
Video is an increasingly important and ever-growing source of information to the intelligence and homeland defense analyst. A capability to automatically identify the contents of video imagery would enable the analyst to index relevant foreign and domestic news videos in a convenient and meaningful way. To this end, the proposed system aims to help determine the geographic focus of a news story directly from video imagery by detecting and geographically localizing political maps from news broadcasts, using the results of videotext recognition in lieu of a computationally expensive, scale-independent shape recognizer. Our novel method for the geographic localization of a map is based on the premise that the relative placement of text superimposed on a map roughly corresponds to the geographic coordinates of the locations the text represents. Our scheme extracts and recognizes videotext, and iteratively identifies the geographic area, while allowing for OCR errors and artistic freedom. The fast and reliable recognition of such maps by our system may provide valuable context and supporting evidence for other sources, such as speech recognition transcripts. The concepts of syntax-directed content analysis of videotext presented here can be extended to other content analysis systems.
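The premise that label placement roughly tracks geographic coordinates suggests fitting an affine pixel-to-(lon, lat) transform to the recognized place names by least squares, then iteratively discarding the worst-fitting labels to tolerate OCR errors and artistic freedom. The following is a hypothetical simplification of such a localization step, not the system's actual algorithm:

```python
import numpy as np

def fit_map_transform(pixels, latlons, n_iter=5, keep=0.8):
    """Fit an affine transform from text pixel positions to geographic
    coordinates, iteratively pruning the worst-fitting labels (sketch).
    pixels: (n, 2) label positions; latlons: (n, 2) gazetteer coords."""
    P = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
    idx = np.arange(len(pixels))
    A = None
    for _ in range(n_iter):
        A, *_ = np.linalg.lstsq(P[idx], latlons[idx], rcond=None)
        resid = np.linalg.norm(P @ A - latlons, axis=1)
        n_keep = max(6, int(keep * len(idx)))  # keep the best-fitting labels
        idx = np.argsort(resid)[:n_keep]
    return A
```

The fitted transform both localizes the map and flags labels whose placement (or OCR result) is inconsistent with the rest.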
The ATLAS Public Web Pages: Online Management of HEP External Communication Content
NASA Astrophysics Data System (ADS)
Goldfarb, S.; Marcelloni, C.; Eli Phoboo, A.; Shaw, K.
2015-12-01
The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal [1] content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves applying the HTML design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through implementation of clearly presented themes, consistent audience targeting and messaging, and the enforcement of a well-defined visual identity.
Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
An elastic-plastic algorithm based on Von Mises and associative flow criteria is implemented in MHOST-a mixed iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating the implementation of elastic-plastic mixed-iterative analysis is appropriate.
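The Von Mises / associative-flow update implemented in such codes is commonly realized as the textbook radial-return (elastic predictor / plastic corrector) algorithm. A sketch for linear isotropic hardening follows; the material constants are illustrative and this is not the MHOST implementation itself:

```python
import numpy as np

def radial_return(eps_inc, sig_old, ep_old,
                  E=200e3, nu=0.3, sy0=250.0, H=1000.0):
    """One radial-return step for Von Mises plasticity with associative
    flow and linear isotropic hardening (textbook algorithm, consistent
    units, e.g. MPa). eps_inc/sig_old are 3x3 tensors; ep_old is the
    accumulated equivalent plastic strain."""
    G = E / (2 * (1 + nu))          # shear modulus
    K = E / (3 * (1 - 2 * nu))      # bulk modulus
    I = np.eye(3)
    # Elastic trial stress (predictor).
    sig_tr = (sig_old + 2 * G * (eps_inc - np.trace(eps_inc) / 3 * I)
              + K * np.trace(eps_inc) * I)
    s_tr = sig_tr - np.trace(sig_tr) / 3 * I          # deviatoric part
    q_tr = np.sqrt(1.5 * np.tensordot(s_tr, s_tr))    # Von Mises stress
    f = q_tr - (sy0 + H * ep_old)                     # yield function
    if f <= 0:
        return sig_tr, ep_old                         # elastic step
    dgamma = f / (3 * G + H)                          # plastic multiplier
    # Corrector: return radially to the updated yield surface.
    sig_new = sig_tr - 3 * G * dgamma * s_tr / q_tr
    return sig_new, ep_old + dgamma
```

After a plastic step the updated stress sits exactly on the hardened yield surface, which is the consistency condition the convergence studies exercise.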
Comparing direct and iterative equation solvers in a large structural analysis software system
NASA Technical Reports Server (NTRS)
Poole, E. L.
1991-01-01
Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
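Of the solvers compared, the Jacobi conjugate gradient method is the simplest to illustrate: precondition CG with the inverse of the matrix diagonal. A dense-matrix sketch for a symmetric positive definite system (the structural code itself uses variable-band and sparse storage):

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Jacobi (diagonal) preconditioned conjugate gradient solver for a
    symmetric positive definite system A x = b (dense sketch)."""
    Minv = 1.0 / np.diag(A)        # Jacobi preconditioner: M = diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    z = Minv * r                   # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate direction update
        rz = rz_new
    return x
```

The incomplete Choleski preconditioner mentioned in the abstract replaces `Minv` with a sparse approximate factorization, typically cutting the iteration count at the cost of extra setup work.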
Ferguson, Melanie; Leighton, Paul; Brandreth, Marian; Wharrad, Heather
2018-05-02
To develop content for a series of interactive video tutorials (or reusable learning objects, RLOs) for first-time adult hearing aid users, to enhance knowledge of hearing aids and communication. RLO content was based on an electronically-delivered Delphi review, workshops, and iterative peer-review and feedback using a mixed-methods participatory approach. An expert panel of 33 hearing healthcare professionals, and workshops involving 32 hearing aid users and 11 audiologists. This ensured that social, emotional and practical experiences of the end-user alongside clinical validity were captured. Content for evidence-based, self-contained RLOs based on pedagogical principles was developed for delivery via DVD for television, PC or internet. Content was developed based on Delphi review statements about essential information that reached consensus (≥90%), visual representations of relevant concepts relating to hearing aids and communication, and iterative peer-review and feedback of content. This participatory approach recognises and involves key stakeholders in the design process to create content for a user-friendly multimedia educational intervention, to supplement the clinical management of first-time hearing aid users. We propose participatory methodologies are used in the development of content for e-learning interventions in hearing-related research and clinical practice.
ERIC Educational Resources Information Center
Mozelius, Peter; Hettiarachchi, Enosha
2012-01-01
This paper describes the iterative development process of a Learning Object Repository (LOR), named eNOSHA. Discussions on a project for a LOR started at the e-Learning Centre (eLC) at The University of Colombo, School of Computing (UCSC) in 2007. The eLC has during the last decade been developing learning content for a nationwide e-learning…
Using Gender Schema Theory to Examine Gender Equity in Computing: a Preliminary Study
NASA Astrophysics Data System (ADS)
Agosto, Denise E.
Women continue to constitute a minority of computer science majors in the United States and Canada. One possible contributing factor is that most Web sites, CD-ROMs, and other digital resources do not reflect girls' design and content preferences. This article describes a pilot study that considered whether gender schema theory can serve as a framework for investigating girls' Web site design and content preferences. Eleven 14- and 15-year-old girls participated in the study. The methodology included the administration of the Children's Sex-Role Inventory (CSRI), Web-surfing sessions, interviews, and data analysis using iterative pattern coding. On the basis of their CSRI scores, the participants were divided into feminine-high (FH) and masculine-high (MH) groups. Data analysis uncovered significant differences in the criteria the groups used to evaluate Web sites. The FH group favored evaluation criteria relating to graphic and multimedia design, whereas the MH group favored evaluation criteria relating to subject content. Models of the two groups' evaluation criteria are presented, and the implications of the findings are discussed.
Material nonlinear analysis via mixed-iterative finite element method
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1992-01-01
The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.
Preparing the next generation of genomicists: a laboratory-style course in medical genomics.
Linderman, Michael D; Bashir, Ali; Diaz, George A; Kasarskis, Andrew; Sanderson, Saskia C; Zinberg, Randi E; Mahajan, Milind; Shah, Hardik; Suckiel, Sabrina; Zweig, Micol; Schadt, Eric E
2015-08-12
The growing gap between the demand for genome sequencing and the supply of trained genomics professionals is creating an acute need to develop more effective genomics education. In response we developed "Practical Analysis of Your Personal Genome", a novel laboratory-style medical genomics course in which students have the opportunity to obtain and analyze their own whole genome. This report describes our motivations for and the content of a "practical" genomics course that incorporates personal genome sequencing and the lessons we learned during the first three iterations of this course.
First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems
2014-03-01
accuracy, with rapid convergence over each physical time step, typically fewer than five Newton iterations. … However, we employ the Gauss-Seidel (GS) relaxation, which is also an O(N) method for the discretization arising from the hyperbolic advection-diffusion system … advection-diffusion scheme. The linear dependency of the iterations on … [Table 1: Boundary layer problem (convergence criterion: residuals < 10^-8), tabulated against log10 Re]
Discrete-Time Deterministic $Q$-Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all states and controls, instead of for a single state and a single control as in traditional Q-learning algorithms. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, and the convergence criterion on the learning rates required by traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties of the undiscounted case of the deterministic Q-learning algorithm are developed first. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
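The core update described above, a Bellman backup swept over all state/control pairs per iteration rather than a single visited pair, can be sketched on a toy problem. The four-state deterministic MDP, its costs, and the discount factor below are assumptions for illustration, not taken from the paper, and a tabular sweep stands in for the paper's neural-network approximation.

```python
import numpy as np

# Toy deterministic MDP (assumed for illustration): 4 states, 2 controls.
# next_state[s, a] is the successor state, cost[s, a] the stage cost;
# state 3 is absorbing with zero cost.
next_state = np.array([[1, 2], [3, 0], [3, 1], [3, 3]])
cost = np.array([[1.0, 2.0], [0.5, 1.0], [1.0, 0.2], [0.0, 0.0]])
gamma = 0.9                              # discount factor (assumed)

Q = np.zeros((4, 2))                     # iterative Q function
for it in range(1000):
    # Update Q for ALL state/control pairs in one synchronous sweep
    # (the point of the deterministic algorithm), not a single visited pair.
    V = Q.min(axis=1)                    # greedy cost-to-go per state
    Q_new = cost + gamma * V[next_state] # Bellman backup
    if np.max(np.abs(Q_new - Q)) < 1e-12:  # convergence criterion
        Q = Q_new
        break
    Q = Q_new

policy = Q.argmin(axis=1)                # greedy iterative control law
```

Because the backup is a gamma-contraction, the sweep converges geometrically regardless of which pairs a trajectory would actually visit.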
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation: it simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.
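The augmentation trick described above, reusing an extra independent variable as an extra dependent variable so the iteration scheme sees a square system, can be illustrated with a small synthetic example. The linear two-gage model, its coefficients, and the role of temperature T below are invented for illustration; the real balance analysis uses nonlinear regression models and the NASA data-reduction formalism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear calibration model (illustrative, not NASA balance data):
# two gage outputs depend on two loads (N, A) and temperature T.
C_true = np.array([[2.0, 0.5, 0.1],    # R1 = 2.0*N + 0.5*A + 0.1*T
                   [0.3, 1.5, -0.2]])  # R2 = 0.3*N + 1.5*A - 0.2*T

X = rng.uniform(-1, 1, size=(50, 3))   # calibration points: columns N, A, T
Y = X @ C_true.T                       # measured gage outputs (noise-free here)

# Two dependent variables but three independent ones: the iteration scheme
# needs a square system, so add the extra independent variable T as an
# additional dependent variable.
Y_aug = np.column_stack([Y, X[:, 2]])  # [R1, R2, T]

# Fit each augmented dependent variable as a linear function of (N, A, T).
C_fit, *_ = np.linalg.lstsq(X, Y_aug, rcond=None)
C_fit = C_fit.T                        # rows: R1, R2, T

# The added row is trivially T = T, so the gage-output fits are unchanged,
# but the square matrix can now be inverted to predict loads from measured
# outputs plus the known temperature.
x_test = np.array([0.4, 0.2, 0.5])                     # true N, A, T
measured = np.append(C_true @ x_test, x_test[2])       # [R1, R2, T]
recovered = np.linalg.solve(C_fit, measured)           # [N, A, T]
```

The last row of `C_fit` comes out as (0, 0, 1), confirming that augmentation changes nothing about the gage-output regressions themselves.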
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony
Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...
2018-04-20
Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
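A minimal numerical check of the Kramers–Kronig relation that the non-iterative method relies on can be made with a model response function. This sketch only demonstrates the real-to-imaginary reconstruction, not the crystal truncation rod analysis itself; the Lorentzian response and the grid below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import hilbert

# Kramers-Kronig check: for a causal response the imaginary part follows
# from the real part.  Model response (not CTR data): chi(w) = 1/(1 - i*w).
w = np.linspace(-200, 200, 40001)
gamma = 1.0
re = gamma / (gamma**2 + w**2)          # known real part
im_exact = w / (gamma**2 + w**2)        # imaginary part to be recovered

# scipy's hilbert() returns the analytic signal re + i*H[re]; its imaginary
# part is the discrete Hilbert transform, i.e. the KK estimate of Im(chi).
im_kk = np.imag(hilbert(re))

# Compare away from the grid edges, where truncation error dominates.
center = np.abs(w) < 10
max_err = np.max(np.abs(im_kk[center] - im_exact[center]))
```

The residual error comes from discretization and the finite window, and shrinks as the grid is widened and refined.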
Parallel computation of multigroup reactivity coefficient using iterative method
NASA Astrophysics Data System (ADS)
Susmikanti, Mike; Dewayatna, Winter
2013-09-01
One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets are stainless steel tubes containing layers of high-enriched uranium, and the irradiation tube is intended to produce fission products; the fission product is widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core can disturb core performance, in particular through changes in flux or reactivity. A method is therefore needed for calculating safety margins under the configuration changes that occur over the life of the reactor, and making the code faster is essential. A neutron safety margin for the research reactor can be reused without modification in the reactivity calculation, which is an advantage of the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model is computationally demanding, and several parallel iterative algorithms have been developed for the resulting large sparse matrix systems. The red-black Gauss-Seidel iteration and a parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficients. In this research, a code for reactivity calculation, one element of the safety analysis, was developed with parallel processing; the calculation can be performed more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.
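The red-black Gauss-Seidel sweep mentioned above can be sketched on a much simpler model problem. The 1D diffusion equation, grid, and source below are illustrative stand-ins for the multigroup diffusion system; the point is that every point of one color depends only on points of the other color, so each half-sweep updates all of its points independently and can run in parallel.

```python
import numpy as np

# Red-black (odd-even) Gauss-Seidel for the 1D model problem -u'' = f on
# (0, 1) with u(0) = u(1) = 0, second-order finite differences.
n = 31                                  # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.ones(n)                          # simple source; exact u = x(1-x)/2

u = np.zeros(n + 2)                     # boundary zeros at u[0] and u[-1]
for sweep in range(5000):
    for color in (1, 2):                # odd interior points, then even ones
        idx = np.arange(color, n + 1, 2)
        # All points of one color use only other-color neighbors, so this
        # vectorized simultaneous update is an exact Gauss-Seidel half-sweep.
        u[idx] = 0.5 * (u[idx - 1] + u[idx + 1] + h**2 * f[idx - 1])

err = np.max(np.abs(u[1:-1] - x * (1.0 - x) / 2.0))
```

Since the exact solution here is quadratic, the second-order discrete solution matches it exactly, and `err` measures only the iteration error.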
Novel aspects of plasma control in ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, D.; Jackson, G.; Walker, M.
2015-02-15
ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.
Novel aspects of plasma control in ITER
Humphreys, David; Ambrosino, G.; de Vries, Peter; ...
2015-02-12
ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g. current profile regulation, tearing mode (TM) suppression), control mathematics (e.g. algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g. methods for management of highly-subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Finally, issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.
Nobile, Hélène; Bergmann, Manuela M; Moldenhauer, Jennifer; Borry, Pascal
2016-07-01
Reliable participation and sustained retention rates are crucial in longitudinal studies involving human subjects and biomaterials. Understanding the decision to enroll is an essential step to develop adequate strategies promoting long-term participation. Semi-structured interviews were implemented with newly recruited and long-term participants randomly drawn from two ongoing longitudinal studies with a biobank component in Germany. Iterative qualitative content analysis was applied to the transcribed interviews. Participants (n = 31) expressed their decision to enroll or remain in the study as the result of the complex interplay of individual factors, institutional cues, study-related features, and societal dynamics. Different forms of trust were identified as central within the elements used to explain participation and could be compared to Dibben, Morris, and Lean's dynamic model of interpersonal trust. Given these high levels of trust, an investigation of the morality of the trustful relationship at stake between participants and research(ers) is warranted. © The Author(s) 2016.
Examination of the Entry to Burn and Burn Control for the ITER 15 MA Baseline and Other Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kessel, Charles E.; Kim, S-H.; Koechl, F.
2014-09-01
The entry to burn and flattop burn control in ITER will be a critical need from the first DT experiments. Simulations are used to address time-dependent behavior under a range of possible conditions that include injected power level, impurity content (W, Ar, Be), density evolution, H-mode regimes, controlled parameter (Wth, Pnet, Pfusion), and actuator (Paux, fueling, fAr), with a range of transport models. A number of physics issues at the L-H transition require better understanding to project to ITER; however, simulations indicate viable control with sufficient auxiliary power (up to 73 MW), while lower powers become marginal (as low as 43 MW).
Henman, Lita Jo; Corrigan, Robert; Carrico, Ruth; Suh, Kathryn N
2015-07-01
The Certification Board of Infection Control and Epidemiology, Inc (CBIC) is a voluntary autonomous multidisciplinary board that provides direction and administers the certification process for professionals who are responsible for the infection prevention and control program in a health care facility. The CBIC performs a practice analysis approximately every 4-5 years. The practice analysis is an integral part of the certification examination development process and serves as the backbone of the test content outline. In 2013, the CBIC determined that a practice analysis was required and contracted with Prometric to facilitate the process. The practice analysis was carried out in 2014 by a diverse group of subject matter experts from the United States and Canada. The practice analysis results showed a significant change in the number of tasks and associated knowledge required for the competent practice of infection prevention. As authorized by the CBIC, the test committee is currently reclassifying the bank of examination questions as required and is writing and reviewing questions based on the updated test specifications and content outline. The new content outline will be reflected in examinations that are taken beginning in July 2015. This iterative process of assessing and updating the certification examination ensures not only a valid competency tool but a true reflection of current practices. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Solving Upwind-Biased Discretizations: Defect-Correction Iterations
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
1999-01-01
This paper considers defect-correction solvers for a second-order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in the defect-correction iterations have different approximation orders, the initial convergence rates may be very slow, and the number of iterations required to reach the asymptotic convergence regime may grow on fine grids as a negative power of h. In the case of a second-order target operator and a first-order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver reaches the asymptotic convergence rate after at most three iterations; the same three iterations suffice to converge the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior: it predicts the convergence rate for each iteration as well as the asymptotic convergence rate. Building on this analysis, a new, very efficient adaptive multigrid algorithm that solves the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm, and the results of the numerical tests are reported.
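The defect-correction iteration studied above can be sketched in its generic matrix form: a cheap first-order upwind driver A1 repeatedly corrects the solution of the second-order upwind-biased target A2. The 1D model convection problem below is an assumed simplification of the paper's 2D setting. In this 1D sketch the iteration matrix is triangular with diagonal -1/2, consistent with the ~0.5 asymptotic rate reported, and the slow pre-asymptotic phase (feature 2) also appears, which is why the sketch runs many iterations.

```python
import numpy as np

# Defect correction: solve the second-order target A2 u = b by repeatedly
# inverting a cheap first-order upwind driver A1:
#     u_{k+1} = u_k + A1^{-1} (b - A2 u_k)
# Model problem: u'(x) = f(x) on (0, 1] with inflow condition u(0) = 0.
n = 32
h = 1.0 / n
xc = h * np.arange(1, n + 1)
b = np.cos(np.pi * xc)                   # source term

# First-order upwind driver: (u_i - u_{i-1}) / h.
A1 = (np.eye(n) - np.eye(n, k=-1)) / h
# Second-order upwind-biased target: (3u_i - 4u_{i-1} + u_{i-2}) / (2h);
# missing neighbors refer to the zero inflow boundary, and the first row
# falls back to first order.
A2 = (3 * np.eye(n) - 4 * np.eye(n, k=-1) + np.eye(n, k=-2)) / (2 * h)
A2[0, :] = A1[0, :]

u_target = np.linalg.solve(A2, b)        # direct solve, for reference
u = np.zeros(n)
for k in range(200):                     # defect-correction iterations
    u = u + np.linalg.solve(A1, b - A2 @ u)

err = np.max(np.abs(u - u_target))       # iteration error vs. direct solve
```

Each iteration only requires solving with the bidiagonal driver, which is the practical appeal of defect correction over inverting the wider-stencil target directly.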
A Monte Carlo Study of an Iterative Wald Test Procedure for DIF Analysis
ERIC Educational Resources Information Center
Cao, Mengyang; Tay, Louis; Liu, Yaowu
2017-01-01
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
Simulation and Analysis of Launch Teams (SALT)
NASA Technical Reports Server (NTRS)
2008-01-01
A SALT effort was initiated in late 2005 with seed funding from the Office of Safety and Mission Assurance Human Factors organization. Its objectives included demonstrating human behavior and performance modeling and simulation technologies for launch team analysis, training, and evaluation. The goal of the research is to improve future NASA operations and training. The project employed an iterative approach, with the first iteration focusing on the last 70 minutes of a nominal-case Space Shuttle countdown, the second iteration focusing on aborts and launch commit criteria violations, the third iteration focusing on Ares I-X communications, and the fourth iteration focusing on Ares I-X Firing Room configurations. SALT applied new commercial off-the-shelf technologies from industry and the Department of Defense in the spaceport domain.
Calibration and Data Analysis of the MC-130 Air Balance
NASA Technical Reports Server (NTRS)
Booth, Dennis; Ulbrich, N.
2012-01-01
Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.
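The two analysis approaches compared above can be contrasted on a toy example: fitting gage outputs as a function of loads and recovering loads by a fixed-point iteration (the Iterative Method), versus fitting loads directly as a function of measured outputs with no iteration (the Non-Iterative Method). The two-component model, its coefficients, and the single cross term below are invented for illustration and are far simpler than the MC-130 regression models; bellows pressures are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-component balance model (made-up coefficients):
# gage outputs depend linearly on loads plus one quadratic cross term.
A_lin = np.array([[1.8, 0.3],
                  [0.2, 1.4]])
A_nl = np.array([0.05, -0.08])           # coefficient of the L1*L2 term

def outputs(L):
    return A_lin @ L + A_nl * (L[0] * L[1])

# --- Iterative Method: fit outputs as a function of loads ----------------
Ls = rng.uniform(-1, 1, size=(100, 2))
X = np.column_stack([Ls, Ls[:, 0] * Ls[:, 1]])       # terms [L1, L2, L1*L2]
R = np.array([outputs(L) for L in Ls])
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
C_lin, C_nl = coef[:2].T, coef[2]

# Load prediction needs a fixed-point iteration: move the nonlinear term
# to the right-hand side and solve the linear part repeatedly.
L_true = np.array([0.6, -0.4])
r_meas = outputs(L_true)
L = np.zeros(2)
for k in range(50):
    L = np.linalg.solve(C_lin, r_meas - C_nl * (L[0] * L[1]))

# --- Non-Iterative Method: fit loads directly as a function of outputs ---
Xr = np.column_stack([R, R[:, 0] * R[:, 1]])
coef2, *_ = np.linalg.lstsq(Xr, Ls, rcond=None)
L_direct = np.array([r_meas[0], r_meas[1], r_meas[0] * r_meas[1]]) @ coef2
```

With this mild nonlinearity the two load estimates agree closely; the direct fit is only approximate here because its simple output basis omits pure-square terms of the inverse map.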
Fostering learners' interaction with content: A learner-centered mobile device interface
NASA Astrophysics Data System (ADS)
Abdous, M.
2015-12-01
With the ever-increasing omnipresence of mobile devices in student life, leveraging smart devices to foster students' interaction with course content is critical. Following a learner-centered design iterative approach, we designed a mobile interface that may enable learners to access and interact with online course content efficiently and intuitively. Our design process leveraged recent technologies, such as bootstrap, Google's Material Design, HTML5, and JavaScript to design an intuitive, efficient, and portable mobile interface with a variety of built-in features, including context sensitive bookmarking, searching, progress tracking, captioning, and transcript display. The mobile interface also offers students the ability to ask context-related questions and to complete self-checks as they watch audio/video presentations. Our design process involved ongoing iterative feedback from learners, allowing us to refine and tweak the interface to provide learners with a unified experience across platforms and devices. The innovative combination of technologies built around well-structured and well-designed content seems to provide an effective learning experience to mobile learners. Early feedback indicates a high level of satisfaction with the interface's efficiency, intuitiveness, and robustness from both students and faculty.
Fourier mode analysis of slab-geometry transport iterations in spatially periodic media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E; Zika, M
1999-04-01
We describe a Fourier analysis of the diffusion-synthetic acceleration (DSA) and transport-synthetic acceleration (TSA) iteration schemes for a spatially periodic, but otherwise arbitrarily heterogeneous, medium. Both DSA and TSA converge more slowly in a heterogeneous medium than in a homogeneous medium composed of the volume-averaged scattering ratio. In the limit of a homogeneous medium, our heterogeneous analysis contains eigenvalues of multiplicity two at "resonant" wave numbers. In the presence of material heterogeneities, error modes corresponding to these resonant wave numbers are "excited" more than other error modes. For DSA and TSA, the iteration spectral radius may occur at these resonant wave numbers, in which case the material heterogeneities most strongly affect iterative performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Hongxing; Fang, Hengrui; Miller, Mitchell D.
2016-07-15
An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.
Kuijpers, Wilma; Groen, Wim G; Oldenburg, Hester Sa; Wouters, Michel Wjm; Aaronson, Neil K; van Harten, Wim H
2015-01-22
MijnAVL (MyAVL) is an interactive portal being developed to empower cancer survivors. Literature review and focus groups yielded the selection of features such as access to the electronic medical record (EMR), patient reported outcomes (PROs) and related feedback, and a physical activity support program. Our aim was to present a final design of MijnAVL based on (1) health professionals' evaluation of proposed features, (2) cancer survivors' evaluation of a first draft, and (3) cancer survivors' evaluation of a functional online prototype. Professionals from various disciplines gave input to the content of and procedures related to MijnAVL. Subsequently, 16 cancer survivors participated in an interview to evaluate content and graphic design of a first draft (shown with screenshots). Finally, 7 survivors participated in a usability test with a fully functional prototype. They performed predefined tasks (eg, logging in, finding a test result, completing a questionnaire) while thinking aloud. Descriptive statistics and simple content analysis were used to analyze the data of both the interviews and the usability tests. Professionals supported access to the EMR (eg, histology reports, lab results, and their letters to general practitioners). They also informed the development of PROs and the physical activity support program. Based on the first draft, survivors selected the preferred graphic design, approved the features and provided suggestions for the content (eg, explanation of medical jargon, more concise texts, notification by emails). Usability tests revealed that it was relatively easy to navigate the website and use the different features. Recommendations included, among others, a frequently asked questions section and the use of hyperlinks between different parts of the website. The development of MijnAVL, an interactive portal to empower breast and lung cancer survivors, was performed iteratively and involved multiple groups of end-users. 
This approach resulted in a usable and understandable final version. Its effectiveness should be determined in further research.
Kuijpers, Wilma; Groen, Wim G; Oldenburg, Hester SA; Wouters, Michel WJM; Aaronson, Neil K
2015-01-01
Background MijnAVL (MyAVL) is an interactive portal being developed to empower cancer survivors. Literature review and focus groups yielded the selection of features such as access to the electronic medical record (EMR), patient reported outcomes (PROs) and related feedback, and a physical activity support program. Objective Our aim was to present a final design of MijnAVL based on (1) health professionals' evaluation of proposed features, (2) cancer survivors’ evaluation of a first draft, and (3) cancer survivors’ evaluation of a functional online prototype. Methods Professionals from various disciplines gave input to the content of and procedures related to MijnAVL. Subsequently, 16 cancer survivors participated in an interview to evaluate content and graphic design of a first draft (shown with screenshots). Finally, 7 survivors participated in a usability test with a fully functional prototype. They performed predefined tasks (eg, logging in, finding a test result, completing a questionnaire) while thinking aloud. Descriptive statistics and simple content analysis were used to analyze the data of both the interviews and the usability tests. Results Professionals supported access to the EMR (eg, histology reports, lab results, and their letters to general practitioners). They also informed the development of PROs and the physical activity support program. Based on the first draft, survivors selected the preferred graphic design, approved the features and provided suggestions for the content (eg, explanation of medical jargon, more concise texts, notification by emails). Usability tests revealed that it was relatively easy to navigate the website and use the different features. Recommendations included, among others, a frequently asked questions section and the use of hyperlinks between different parts of the website. 
Conclusions The development of MijnAVL, an interactive portal to empower breast and lung cancer survivors, was performed iteratively and involved multiple groups of end-users. This approach resulted in a usable and understandable final version. Its effectiveness should be determined in further research. PMID:25614924
Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.
Wei, Qinglai; Liu, Derong; Lin, Hanquan
2016-03-01
In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
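Value iteration with a non-zero initial value function, the setting analyzed above, can be sketched on a toy deterministic shortest-path problem. The MDP below is an assumption for illustration (the paper treats general discrete-time nonlinear systems with neural-network approximation); the sketch shows only that different admissible initializations converge to the same optimal cost-to-go.

```python
import numpy as np

# Minimal undiscounted value-iteration sketch on a toy deterministic
# shortest-path MDP (illustrative, far simpler than the paper's setting).
# State 3 is an absorbing goal with zero cost; all other costs are positive.
next_state = np.array([[1, 2], [3, 0], [3, 1], [3, 3]])
cost = np.array([[1.0, 2.0], [0.5, 1.0], [1.0, 0.2], [0.0, 0.0]])

def value_iteration(V0, sweeps=50):
    V = V0.astype(float).copy()
    for _ in range(sweeps):
        V = np.min(cost + V[next_state], axis=1)   # Bellman backup
    return V

# Key point of the abstract: the algorithm need not start from zero.
# A zero initialization and a large positive one (vanishing at the goal)
# converge to the same optimum here.
V_from_zero = value_iteration(np.zeros(4))
V_from_large = value_iteration(np.array([100.0, 100.0, 100.0, 0.0]))
policy = np.argmin(cost + V_from_zero[next_state], axis=1)
```

The monotonicity results (nondecreasing, nonincreasing, or nonmonotonic sequences depending on the initial function) are proven in the paper under its conditions; this sketch verifies only the shared limit.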
User-oriented evaluation of a medical image retrieval system for radiologists.
Markonis, Dimitrios; Holzer, Markus; Baroz, Frederic; De Castaneda, Rafael Luis Ruiz; Boyer, Célia; Langs, Georg; Müller, Henning
2015-10-01
This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goal of the tests is to assess the usability of the system, identify system and interface aspects that need improvement and useful additions. Another objective is to investigate the system's added value to radiology information retrieval. The study provides an insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. User tests with a working image retrieval system of images from the biomedical literature were performed in an iterative manner, where each iteration had the participants perform radiology information seeking tasks and then refining the system as well as the user study design itself. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates recorded and feedback was collected through survey forms. In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 min in both cases. Users felt quickly comfortable with the novel techniques and tools (after 5 to 15 min), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities while the user feedback helped identifying the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and search for articles using image examples. The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects. 
Radiologists are quickly familiar with the functionalities but have several comments on desired functionalities. The analysis of the results can potentially assist system refinement for future medical information retrieval systems. Moreover, the methodology presented as well as the discussion on the limitations and challenges of such studies can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive system is still only rarely performed. Such interactive evaluations can be limited in effort if done iteratively and can give many insights for developing better systems. Copyright © 2015. Published by Elsevier Ireland Ltd.
A Model and Simple Iterative Algorithm for Redundancy Analysis.
ERIC Educational Resources Information Center
Fornell, Claes; And Others
1988-01-01
This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
The presence of a perseverative iterative style in poor vs. good sleepers.
Barclay, N L; Gregory, A M
2010-03-01
Catastrophizing is present in worriers and poor sleepers. This study investigates whether poor sleepers possess a 'perseverative iterative style' which predisposes them to catastrophize any topic, regardless of content or affective valence, a style previously found to occur more commonly in worriers as compared to others. Poor (n=23) and good sleepers (n=37) were distinguished using the Pittsburgh Sleep Quality Index (PSQI), from a sample of adults in the general population. Participants were required to catastrophize 2 topics: worries about sleep, and a current personal worry; and to iterate the positive aspects of a hypothetical topic. Poor sleepers catastrophized/iterated more steps than good sleepers across these three interviews (F(1, 58)=7.35, p<.05). However, after controlling for anxiety and worry, this effect was reduced to non-significance for the 'sleep' and 'worry' topics, suggesting that anxiety may mediate some of the association between catastrophizing and sleep. However, there was still a tendency for poor sleepers to iterate more steps on the 'hypothetical' topic after controlling for anxiety and worry, which suggests that poor sleepers possess a cognitive style that may predispose them to continue iterating consecutive steps in open-ended tasks regardless of anxiety and worry. Future research should examine whether this cognitive style is significant in leading to or maintaining insomnia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henning, C.
This report contains papers on the following topics: conceptual design; radiation damage of ITER magnet systems; insulation system of the magnets; critical current density and strain sensitivity; toroidal field coil structural analysis; stress analysis for the ITER central solenoid; and volt-second capabilities and PF magnet configurations.
Wen, Kuang-Yi; Miller, Suzanne M; Stanton, Annette L; Fleisher, Linda; Morra, Marion E; Jorge, Alexandra; Diefenbach, Michael A; Ropka, Mary E; Marcus, Alfred C
2012-08-01
This paper describes the development of a theory-guided and evidence-based multimedia training module to facilitate breast cancer survivors' preparedness for effective communication with their health care providers after active treatment. The iterative developmental process used included: (1) theory and evidence-based content development and vetting; (2) user testing; (3) usability testing; and (4) participant module utilization. Formative evaluation of the training module prototype occurred through user testing (n = 12), resulting in modification of the content and layout. Usability testing (n = 10) was employed to improve module functionality. Preliminary web usage data (n = 256, mean age = 53, 94.5% White, 75% college graduate and above) showed that 59% of the participants accessed the communication module, for an average of 7 min per login. The iterative developmental process was informative in enhancing the relevance of the communication module. Preliminary web usage results demonstrate the potential feasibility of such a program. Our study demonstrates survivors' openness to the use of a web-based communication skills training module and outlines a systematic iterative user and interface program development and testing process, which can serve as a prototype for others considering such an approach. Copyright © 2012. Published by Elsevier Ireland Ltd.
NASA Astrophysics Data System (ADS)
Clayton, N.; Crouchen, M.; Devred, A.; Evans, D.; Gung, C.-Y.; Lathwell, I.
2017-04-01
It is planned that the high voltage electrical insulation on the ITER feeder busbars will consist of interleaved layers of epoxy resin pre-impregnated glass tapes ('pre-preg') and polyimide. In addition to its electrical insulation function, the busbar insulation must have adequate mechanical properties to sustain the loads imposed on it during ITER magnet operation. This paper reports an investigation into suitable materials for manufacturing the high voltage insulation for the ITER superconducting busbars and pipework. An R&D programme was undertaken in order to identify suitable pre-preg and polyimide materials from a range of suppliers. Pre-preg materials were obtained from 3 suppliers and used with Kapton HN to make mouldings using the desired insulation architecture. Two main processing routes for pre-pregs have been investigated, namely vacuum bag processing (out-of-autoclave processing) and processing using a material with a high coefficient of thermal expansion (silicone rubber) to apply the compaction pressure on the insulation. The insulation must have adequate mechanical properties to cope with the stresses induced by the operating environment and a low void content, which is necessary in a high voltage application. The quality of the mouldings was assessed by mechanical testing at 77 K and by measurement of the void content.
Wen, Kuang-Yi; Miller, Suzanne M.; Stanton, Annette L.; Fleisher, Linda; Morra, Marion E.; Jorge, Alexandra; Diefenbach, Michael A.; Ropka, Mary E.; Marcus, Alfred C.
2012-01-01
Objective This paper describes the development of a theory-guided and evidence-based multimedia training module to facilitate breast cancer survivors’ preparedness for effective communication with their health care providers after active treatment. Methods The iterative developmental process used included: (1) theory and evidence-based content development and vetting; (2) user testing; (3) usability testing; and (4) participant module utilization. Results Formative evaluation of the training module prototype occurred through user testing (n = 12), resulting in modification of the content and layout. Usability testing (n = 10) was employed to improve module functionality. Preliminary web usage data (n = 256, mean age = 53, 94.5% White, 75% college graduate and above) showed that 59% of the participants accessed the communication module, for an average of 7 min per login. Conclusion The iterative developmental process was informative in enhancing the relevance of the communication module. Preliminary web usage results demonstrate the potential feasibility of such a program. Practice implications Our study demonstrates survivors’ openness to the use of a web-based communication skills training module and outlines a systematic iterative user and interface program development and testing process, which can serve as a prototype for others considering such an approach. PMID:22770812
IMatter: validation of the NHS Scotland Employee Engagement Index.
Snowden, Austyn; MacArthur, Ewan
2014-11-08
Employee engagement is a fundamental component of quality healthcare. In order to provide empirical data on engagement in NHS Scotland, an Employee Engagement Index was co-constructed with staff. 'iMatter' consists of 25 Likert questions developed iteratively from the literature and a series of validation events with NHS Scotland staff. The aim of this study was to test the face, content and construct validity of iMatter. Cross-sectional survey of NHS Scotland staff. In January 2013 iMatter was sent to 2300 staff across all disciplines in NHS Scotland; 1280 staff completed it. Demographic data were collected. Internal consistency of the scale was calculated. Construct validity consisted of concurrent application of factor analysis and Rasch analysis. Face and content validity were checked using 3 focus groups. The sample was representative of the NHS Scotland population. iMatter showed very strong reliability (α = 0.958). Factor analysis revealed a four-factor structure. Overall, iMatter showed evidence of high reliability and validity. It is a popular measure of staff engagement in NHS Scotland. Implications for practice focus on the importance of co-production in psychometric development.
Steel, Emily J
2018-06-08
Reforms to Australia's disability and rehabilitation sectors have espoused the potential of assistive technology as an enabler. As new insurance systems are being developed, it is timely to examine the structure of existing systems. This exploratory study examined the policies guiding assistive technology provision in the motor accident insurance sector of one Australian state. Policy documents were analyzed iteratively with a set of qualitative questions to understand the intent and interpretation of policies guiding assistive technology provision. Content analysis identified relevant sections and meaningful terminology, and context analysis explored the dominant perspectives informing policy. The concepts and language of assistive technology are not part of the policy frameworks guiding rehabilitation practice in Queensland's motor accident insurance sector. The definition of rehabilitation in the legislation is consistent with contemporary international interpretations that focus on optimizing functioning in interaction with the environment. However, the supporting documents are focused on recovery from injuries, where decisions are guided by clinical need and affordability. The policies frame rehabilitation in a medical model that excludes assistive technology provision from the rehabilitation plan. The legislative framework provides opportunities to develop and improve assistive technology provision as part of an integrated approach to rehabilitation.
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
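The hybrid schemes above build on a stationary preconditioned Richardson iteration. A minimal sketch of that deterministic baseline follows; the Monte Carlo acceleration layer is omitted, and the Jacobi-style splitting and toy system are illustrative choices, not taken from the paper.

```python
import numpy as np

# Richardson iteration from a splitting A = M - N:
#   x_{k+1} = x_k + M^{-1} (b - A x_k)
# Converges when the iteration matrix I - M^{-1} A has
# spectral radius below one (a "convergent splitting").

def richardson(A, b, M_inv, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x              # current residual
        x = x + M_inv @ r          # preconditioned update
        if np.linalg.norm(r) < tol:
            break
    return x

# Diagonally dominant test system; M = diag(A) (Jacobi splitting).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))
x = richardson(A, b, M_inv)
```

With a convergent splitting the residual contracts geometrically, which is the property the Monte Carlo acceleration schemes then exploit.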
A non-iterative extension of the multivariate random effects meta-analysis.
Makambi, Kepher H; Seung, Hyunuk
2015-01-01
Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues in some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed, including the multivariate DerSimonian and Laird method by Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method by Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.
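For context, the non-iterative DerSimonian-Laird moment estimator that these multivariate methods generalize can be sketched in its univariate form. The function name and toy data below are illustrative, not taken from the paper.

```python
import numpy as np

# Univariate DerSimonian-Laird estimator: a closed-form (non-iterative)
# moment estimate of the between-study variance tau^2, followed by a
# random-effects weighted mean.

def dersimonian_laird(y, v):
    """y: study effect estimates; v: within-study variances."""
    w = 1.0 / v                          # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)     # fixed-effect pooled mean
    Q = np.sum(w * (y - y_fe) ** 2)      # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)   # truncated moment estimate
    w_re = 1.0 / (v + tau2)              # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    return mu, tau2

y = np.array([0.10, 0.30, 0.35, 0.65])   # synthetic study effects
v = np.array([0.03, 0.02, 0.05, 0.01])   # synthetic variances
mu, tau2 = dersimonian_laird(y, v)
```

The multivariate extensions replace the scalar weights with weight matrices, but the moment-based, non-iterative character is the same.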
Iterative inversion of deformation vector fields with feedback control.
Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei
2018-05-14
Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both the accuracy and efficiency of iterative algorithms for DVF inversion, and at advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective of enlarging the convergence area and expediting convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions.
Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
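The fixed-point iteration with feedback control described above can be sketched in one dimension, where the inverse displacement v must satisfy the inverse-consistency condition v(y) + u(y + v(y)) = 0. The smooth analytic field u and the constant control value mu below are illustrative assumptions, not the paper's clinical DVFs.

```python
import numpy as np

# Fixed-point DVF inversion with a constant feedback (relaxation)
# parameter mu: the IC residual of the current iterate is fed back
# into the next update.

def invert_dvf(u, y, mu=0.8, n_iter=50):
    """Find v(y) satisfying v(y) + u(y + v(y)) = 0."""
    v = np.zeros_like(y)
    for _ in range(n_iter):
        r = v + u(y + v)      # inverse-consistency (IC) residual
        v = v - mu * r        # feedback-controlled update
    return v

u = lambda x: 0.3 * np.sin(x)          # smooth forward displacement
y = np.linspace(0.0, 2 * np.pi, 9)
v = invert_dvf(u, y)
residual = np.max(np.abs(v + u(y + v)))
```

With mu = 1 the update reduces to the plain fixed-point iteration v ← -u(y + v); a constant control value like this corresponds to the simplest of the three feedback settings studied.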
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y.; Loesser, G.; Smith, M.
ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and to provide structural support while allowing diagnostic access to the plasma. The design of DFWs and DSMs is driven by (1) plasma radiation and nuclear heating during normal operation and (2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory, and it was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis, and three major issues driving the mechanical design of the ITER DFWs, are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.
The development of Drink Less: an alcohol reduction smartphone app for excessive drinkers.
Garnett, Claire; Crane, David; West, Robert; Brown, Jamie; Michie, Susan
2018-05-04
Excessive alcohol consumption poses a serious problem for public health. Digital behavior change interventions have the potential to help users reduce their drinking. In accordance with Open Science principles, this paper describes the development of a smartphone app to help individuals who drink excessively to reduce their alcohol consumption. Following the UK Medical Research Council's guidance and the Multiphase Optimization Strategy, development consisted of two phases: (i) selection of intervention components and (ii) design and development work to implement the chosen components into modules to be evaluated further for inclusion in the app. Phase 1 involved a scoping literature review, expert consensus study and content analysis of existing alcohol apps. Findings were integrated within a broad model of behavior change (Capability, Opportunity, Motivation-Behavior). Phase 2 involved a highly iterative process and used the "Person-Based" approach to promote engagement. From Phase 1, five intervention components were selected: (i) Normative Feedback, (ii) Cognitive Bias Re-training, (iii) Self-monitoring and Feedback, (iv) Action Planning, and (v) Identity Change. Phase 2 indicated that each of these components presented different challenges for implementation as app modules; all required multiple iterations and design changes to arrive at versions that would be suitable for inclusion in a subsequent evaluation study. The development of the Drink Less app involved a thorough process of component identification with a scoping literature review, expert consensus, and review of other apps. Translation of the components into app modules required a highly iterative process involving user testing and design modification.
User and Content Characteristics of Public Tweets Referencing Little Cigars.
Step, Mary M; Bracken, Cheryl C; Trapl, Erika S; Flocke, Susan A
2016-01-01
Compared to cigarettes, little cigars and cigarillos (LCC) are minimally regulated, affordable, and widely available to young people. Because Twitter is a preferred mode of communication among younger people, product portrayals may be useful for informing both interventions and public health or tobacco policy. A mixed-methods study was implemented to analyze the content of public tweets (N = 288) and profile photos sampled from a search of 2 LCC brands (Black & Mild and Swisher Sweets). Metadata and manifest attributes of profile photo demographic features and tweet message features were coded and analyzed. Thematic analysis of the tweets was conducted using an iterative immersion/crystallization method. Tweeters were most often boys or men (63%) and appeared young (76%). Prevalent content themes included expressing affiliation for the LCC product and reporting smoking activity. Although men and women tweeted affiliation for LCC products and reported smoking activity in similar numbers, women were significantly less likely to tweet about blunting than men. Twitter provides a potentially potent source of nuanced information about how young people are using little cigars. These observed characteristics may be useful to inform counter-messaging strategies and interventions.
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
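The damped Newton iteration central to the trim analysis above can be sketched with a fixed damping parameter; the paper selects the damping optimally, so the fixed value and the toy nonlinear system here are illustrative only.

```python
import numpy as np

# Damped Newton iteration with fixed damping lam in (0, 1]:
#   x_{k+1} = x_k - lam * J(x_k)^{-1} f(x_k)
# Damping trades per-step progress for robustness against divergence.

def damped_newton(f, jac, x0, lam=0.5, tol=1e-12, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - lam * np.linalg.solve(jac(x), fx)
    return x

# Toy system: x0^2 + x1^2 = 4 and x0 = x1, root at (sqrt 2, sqrt 2).
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = damped_newton(f, jac, [1.0, 2.0])
```

Near the root the residual contracts by roughly a factor (1 - lam) per step, so smaller damping means slower but more robust convergence, consistent with its use to suppress divergence in trim analysis.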
High recall document content extraction
NASA Astrophysics Data System (ADS)
An, Chang; Baird, Henry S.
2011-01-01
We report methodologies for computing high-recall masks for document image content extraction, that is, the location and segmentation of regions containing handwriting, machine-printed text, photographs, blank space, etc. The resulting segmentation is pixel-accurate, which accommodates arbitrary zone shapes (not merely rectangles). We describe experiments showing that iterated classifiers can increase recall of all content types, with little loss of precision. We also introduce two methodological enhancements: (1) a multi-stage voting rule; and (2) a scoring policy that views blank pixels as a "don't care" class with other content classes. These enhancements improve both recall and precision, achieving at least 89% recall and at least 87% precision among three content types: machine-print, handwriting, and photo.
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
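A conventional tabular value iteration with a sup-norm termination criterion illustrates the baseline that the local variant modifies; the "local" algorithm would update the value function only on a subset of states per sweep. The toy MDP below is illustrative, not from the paper.

```python
import numpy as np

# Tabular value iteration: repeatedly apply the Bellman optimality
# operator until the sup-norm change falls below a threshold, then
# extract the greedy control law (policy).

def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iter=1000):
    """P[a][s, s']: transition probabilities; R[a][s]: rewards."""
    n = P[0].shape[0]
    V = np.zeros(n)
    for _ in range(max_iter):
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        done = np.max(np.abs(V_new - V)) < tol   # termination criterion
        V = V_new
        if done:
            break
    Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
    return V, Q.argmax(axis=0)

# Two-state, two-action toy problem.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.1, 0.9], [0.7, 0.3]])]
R = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
V, policy = value_iteration(P, R)
```

The paper's contribution concerns when such an iteration may be terminated while still guaranteeing an admissible (stabilizing) control law, a property the plain sup-norm test alone does not certify.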
Improvements in surface singularity analysis and design methods. [applicable to airfoils
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
Establishing Factor Validity Using Variable Reduction in Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Hofmann, Rich
1995-01-01
Using a 21-statement attitude-type instrument, an iterative procedure for improving confirmatory model fit is demonstrated within the context of the EQS program of P. M. Bentler and maximum likelihood factor analysis. Each iteration systematically eliminates the poorest fitting statement as identified by a variable fit index. (SLD)
Performance analysis of improved iterated cubature Kalman filter and its application to GNSS/INS.
Cui, Bingbo; Chen, Xiyuan; Xu, Yuan; Huang, Haoqian; Liu, Xiao
2017-01-01
In order to improve the accuracy and robustness of GNSS/INS navigation systems, an improved iterated cubature Kalman filter (IICKF) is proposed by considering state-dependent noise and system uncertainty. First, a simplified framework of the iterated Gaussian filter is derived by using a damped Newton-Raphson algorithm and an online noise estimator. Then the effect of state-dependent noise arising from the iterated update is analyzed theoretically, and an augmented form of the CKF algorithm is applied to improve the estimation accuracy. The performance of the IICKF is verified by field test and numerical simulation, and the results reveal that, compared with the non-iterated filter, the iterated filter is less sensitive to system uncertainty, and the IICKF improves the accuracy of yaw, roll and pitch by 48.9%, 73.1% and 83.3%, respectively, compared with the traditional iterated KF. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Non-iterative determination of the stress-density relation from ramp wave data through a window
NASA Astrophysics Data System (ADS)
Dowling, Evan; Fratanduono, Dayne; Swift, Damian
2017-06-01
In the canonical ramp compression experiment, a smoothly-increasing load is applied to the surface of the sample, and the particle velocity history is measured at interfaces two or more different distances into the sample. The velocity histories are used to deduce a stress-density relation by correcting for perturbations caused by reflected release waves, usually via the iterative Lagrangian analysis technique of Rothman and Maw. We previously described a non-iterative (recursive) method of analysis, which was more stable and orders of magnitude faster than iteration, but was subject to the limitation that the free surface velocity had to be sampled at uniform intervals. We have now developed more general recursive algorithms suitable for analyzing ramp data through a finite-impedance window. Free surfaces can be treated seamlessly, and the need for uniform velocity sampling has been removed. These calculations require interpolation of partially-released states using the partially-constructed isentrope, making them slower than the previous free-surface scheme, but they are still much faster than iterative analysis. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis can be competitive with, if not superior to, approaches involving direct solvers.
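The reanalysis idea above, warm-starting a preconditioned iterative solver with the previously computed solution vector, can be sketched with Jacobi-preconditioned conjugate gradients on a toy symmetric system. This is a stand-in: actual BEA systems are dense and generally nonsymmetric, so a solver such as preconditioned GMRES would be used in practice.

```python
import numpy as np

def pcg(A, b, x0, M_inv, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned CG; returns (solution, iteration count)."""
    x = x0.copy()
    r = b - A @ x
    z = M_inv * r                    # preconditioned residual
    p = z.copy()
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, max_iter

rng = np.random.default_rng(0)
B = rng.standard_normal((40, 40))
A = B @ B.T + 40 * np.eye(40)        # SPD "baseline" system
b = rng.standard_normal(40)
M_inv = 1.0 / np.diag(A)             # Jacobi preconditioner

x_base, _ = pcg(A, b, np.zeros(40), M_inv)
A_pert = A + 0.01 * np.eye(40)       # small "shape perturbation"
x_cold, it_cold = pcg(A_pert, b, np.zeros(40), M_inv)
x_warm, it_warm = pcg(A_pert, b, x_base, M_inv)
```

Because the perturbed solution is close to the baseline one, the warm-started solve begins with a much smaller residual and needs no more iterations than the cold start, which is the mechanism behind the reported reanalysis savings.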
Online Classrooms: Powerful Tools for Rapid-Iteration Pedagogical Improvements
NASA Astrophysics Data System (ADS)
Horodyskyj, L.; Semken, S.; Anbar, A.; Buxner, S.
2015-11-01
Online education offers the opportunity to reach a variety of students, including non-traditional and geographically diverse students. Research has shown that online courses modeled after traditional lecture-exam courses are ineffective. Over the past three years, Arizona State University developed and offered Habitable Worlds, an online-only astrobiology lab course featuring active learning tools. The course is offered in an intelligent tutoring system (ITS) that records a wealth of student data. In analyzing data from the Fall 2013 offering of the course, we identified suboptimal pre-post quiz results and pinpointed where in the lesson, and how precisely, students were missing concepts. The problem areas were redesigned, and the improved lessons were deployed a few months later. We saw significant improvements in our pre-post quiz results due to the implemented changes. This demonstrates the effectiveness of using a robust ITS not only to present content online, but also to provide instantaneous data for rapid iteration and improvement of existing content.
ITER L-mode confinement database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaye, S.M.
This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated only (OH). Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER.
NASA Astrophysics Data System (ADS)
Fu, Linyun; Ma, Xiaogang; Zheng, Jin; Goldstein, Justin; Duggan, Brian; West, Patrick; Aulenbach, Steve; Tilmes, Curt; Fox, Peter
2014-05-01
This poster will show how we used a use-case-driven iterative methodology to develop an ontology to represent the content structure and the associated provenance information in a National Climate Assessment (NCA) report of the US Global Change Research Program (USGCRP). We applied the W3C PROV-O ontology to implement a formal representation of provenance. We argue that the use-case-driven, iterative development process and the application of a formal provenance ontology help efficiently incorporate domain knowledge from earth and environmental scientists in a well-structured model interoperable in the context of the Web of Data.
Perturbation-iteration theory for analyzing microwave striplines
NASA Technical Reports Server (NTRS)
Kretch, B. E.
1985-01-01
A perturbation-iteration technique is presented for determining the propagation constant and characteristic impedance of an unshielded microstrip transmission line. The method converges to the correct solution with a few iterations at each frequency and is equivalent to a full wave analysis. The perturbation-iteration method gives a direct solution for the propagation constant without having to find the roots of a transcendental dispersion equation. The theory is presented in detail along with numerical results for the effective dielectric constant and characteristic impedance for a wide range of substrate dielectric constants, stripline dimensions, and frequencies.
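The abstract's key point is that the propagation constant is obtained by direct iteration rather than by root-finding on a transcendental dispersion equation. The paper's actual recurrence is not reproduced here; as a hedged illustration only, the skeleton of such a scheme is ordinary fixed-point iteration, sketched below with a made-up update function standing in for the perturbation step.

```python
def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Hypothetical stand-in for a dispersion relation rearranged as x = g(x):
# iterating x = sqrt(2 + x) converges to the root x = 2 of x**2 - x - 2 = 0.
root = fixed_point(lambda x: (2.0 + x) ** 0.5, x0=1.0)
print(round(root, 8))  # → 2.0
```

Because the update contracts toward the solution (|g'| < 1 near the root), a few iterations per frequency suffice, which is the behavior the abstract describes.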
Bragg x-ray survey spectrometer for ITER.
Varshney, S K; Barnsley, R; O'Mullane, M G; Jakhar, S
2012-10-01
Several potential impurity ions in the ITER plasmas will lead to loss of confined energy through line and continuum emission. For real time monitoring of impurities, a seven channel Bragg x-ray spectrometer (XRCS survey) is considered. This paper presents design and analysis of the spectrometer, including x-ray tracing by the Shadow-XOP code, sensitivity calculations for reference H-mode plasma and neutronics assessment. The XRCS survey performance analysis shows that the ITER measurement requirements of impurity monitoring in 10 ms integration time at the minimum levels for low-Z to high-Z impurity ions can largely be met.
Simultaneous and iterative weighted regression analysis of toxicity tests using a microplate reader.
Galgani, F; Cadiou, Y; Gilbert, F
1992-04-01
A system is described for determination of LC50 or IC50 by an iterative process based on data obtained from a plate reader using a marine unicellular alga as a target species. The esterase activity of Tetraselmis suecica on fluorescein diacetate as a substrate was measured using a fluorescence microplate reader. Simultaneous analysis of results was performed using an iterative process adopting the sigmoid function Y = y/[1 + (dose of toxicant/IC50)^slope] for dose-response relationships. IC50 (+/- SEM) was estimated (P less than 0.05). An application with phosalone as a toxicant is presented.
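A sigmoid dose-response model of this form is typically fitted by iterative nonlinear least squares. The sketch below is illustrative, not the authors' 1992 procedure: it uses scipy.optimize.curve_fit on hypothetical noise-free data generated with an assumed IC50 of 2.0, just to show the model Y = ymax / (1 + (dose/IC50)^slope) being recovered.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(dose, ymax, ic50, slope):
    # Dose-response model: activity falls to ymax/2 at dose = IC50
    return ymax / (1.0 + (dose / ic50) ** slope)

# Hypothetical dose series with noise-free responses generated at IC50 = 2.0
doses = np.array([0.1, 0.3, 1.0, 2.0, 4.0, 10.0, 30.0])
responses = sigmoid(doses, ymax=100.0, ic50=2.0, slope=1.5)

# Iterative least-squares fit starting from a rough initial guess
params, _ = curve_fit(sigmoid, doses, responses, p0=[90.0, 1.5, 1.2])
ymax_fit, ic50_fit, slope_fit = params
```

With real, noisy plate-reader data the fitted IC50 would carry a standard error, which curve_fit reports through the covariance matrix (the second return value, discarded here).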
Novel approach in k0-NAA for highly concentrated REE Samples.
Abdollahi Neisiani, M; Latifi, M; Chaouki, J; Chilian, C
2018-04-01
The present paper presents a new approach to k0-NAA for accurate quantification with short turnaround analysis times for rare earth elements (REEs) in high-content mineral matrices. REE k0 and Q0 values, spectral interferences and nuclear interferences were experimentally evaluated and improved with Alfa Aesar Specpure Plasma Standard 1000 mg kg⁻¹ mono-rare-earth solutions. The new iterative gamma-ray self-attenuation and neutron self-shielding methods were investigated with powder standards prepared from 100 mg of 99.9% Alfa Aesar mono-rare-earth oxide diluted with silica oxide. The overall performance of the new k0-NAA method for REEs was validated using a certified reference material (CRM) from the Canadian Certified Reference Materials Project (REE-2) with REE content ranging from 7.2 mg kg⁻¹ for Yb to 9610 mg kg⁻¹ for Ce. The REE concentration was determined with uncertainty below 7% (at 95% confidence level) and proved in good agreement with the CRM certified concentrations.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid, and an aqueous protein solution.
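DIIS accelerates a self-consistent iteration x = g(x) by keeping a short history of iterates and residuals r_k = g(x_k) - x_k and extrapolating with the residual-minimizing combination. The sketch below is a generic DIIS driver applied to a toy linear fixed-point problem, not the WHAM equations themselves; the test matrix and vectors are hypothetical.

```python
import numpy as np

def diis_solve(g, x0, max_hist=5, tol=1e-10, max_iter=200):
    """Solve the fixed-point problem x = g(x) with DIIS (Pulay) extrapolation.

    Keeps a history of updates g(x_k) and residuals r_k = g(x_k) - x_k, and
    forms sum_k c_k g(x_k) where the c_k minimize the norm of the combined
    residual subject to sum_k c_k = 1 (a small Lagrange-multiplier system).
    """
    x = np.asarray(x0, dtype=float)
    xs, rs = [], []
    for _ in range(max_iter):
        gx = g(x)
        r = gx - x
        if np.linalg.norm(r) < tol:
            return x
        xs.append(gx)
        rs.append(r)
        if len(xs) > max_hist:       # bound the history length
            xs.pop(0)
            rs.pop(0)
        m = len(rs)
        # Lagrange system: [[B, 1], [1^T, 0]] [c, lam]^T = [0, 1]
        B = np.array([[np.dot(ri, rj) for rj in rs] for ri in rs])
        A = np.zeros((m + 1, m + 1))
        A[:m, :m] = B
        A[:m, m] = 1.0
        A[m, :m] = 1.0
        rhs = np.zeros(m + 1)
        rhs[m] = 1.0
        c = np.linalg.lstsq(A, rhs, rcond=None)[0][:m]
        x = sum(ci * xi for ci, xi in zip(c, xs))
    raise RuntimeError("DIIS did not converge")

# Toy self-consistency problem: x = M x + b with spectral radius of M below 1
M = np.array([[0.5, 0.2], [0.1, 0.6]])
b = np.array([1.0, 2.0])
x = diis_solve(lambda v: M @ v + b, np.zeros(2))
# The exact answer solves (I - M) x = b
```

With m = 1 the scheme reduces to plain successive substitution; the extrapolation over the history is what produces the faster convergence the abstract reports for WHAM and MBAR.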
Gaile, Jacqueline; Adams, Catherine
2018-01-01
Metacognition is a significant component of complex interventions for children who have developmental language disorders. Research into how metacognition operates in the content or process of developmental language therapy delivery is limited. Identification and description of proposed active therapy components, such as metacognition, may contribute to our understanding of how to deliver complex communication interventions in an optimal manner. The aims were to analyse aspects of metacognition in therapy derived from a manualized speech and language intervention (the Social Communication Intervention Programme, SCIP) as delivered to children who have social (pragmatic) communication disorder (SPCD), and to examine the dynamic process of delivering therapy. A purposive sample of eight filmed therapy sessions was selected from the video data corpus of intervention-arm participants within a randomized controlled trial. The child-therapist interactions during therapy sessions from five children (aged between 5;11 and 10;3) in the SCIP trial were transcribed. Filmed sessions represented a variety of communication profiles and SCIP therapy content. Starting from existing theory on metacognition, cycles of iterative analysis were performed using a mixed inductive-deductive qualitative analysis. A preliminary list of metacognitive content embedded in the intervention was developed into a metacognitive coding framework (MCF). A thematic analysis of the identified metacognitive content of the intervention was then carried out across the whole sample. Thematic analysis revealed the presence of metacognition in the content and delivery of SCIP intervention. Four main themes of metacognitive person, task and strategy knowledge, and monitoring/control were identified. Metacognition featured in how the intervention developed children's ability to monitor language, pragmatic, and social interaction skills in themselves and in other people.
Task design and delivery methods were found to play a particular role in adjusting the metacognitive content of the therapy activities. This study makes explicit the metacognitive content and delivery within a complex developmental communication intervention. Discussion of the findings about metacognitive content provides an explanation of how the skilled speech and language therapist manipulates task demands, person knowledge and therapy methods towards the therapy goal. Clinical applications of the metacognitive framework are discussed. We suggest that the process of making the tacit knowledge of the therapist explicit can contribute to the implementation of complex evidence-based interventions. © 2017 Royal College of Speech and Language Therapists.
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
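The Gauss-Seidel relaxation used above sweeps through the grid, updating each unknown in place from its neighbors. As a minimal sketch (a 1D steady model problem on a uniform grid, not the paper's axisymmetric FVE discretization), the update for the standard three-point stencil looks like this:

```python
import numpy as np

def gauss_seidel_step(u, f, h):
    """One Gauss-Seidel sweep for -u'' = f on a uniform grid.

    Dirichlet boundary values live in u[0] and u[-1]; interior points are
    updated in place, so each update immediately uses the newest neighbors
    (unlike Jacobi, which would use only values from the previous sweep).
    """
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

# Model problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0; exact u(x) = x(1-x)/2
n = 33
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)
f = np.ones(n)
for _ in range(2000):
    gauss_seidel_step(u, f, h)
exact = 0.5 * x * (1.0 - x)
```

The slow convergence of such sweeps on fine grids (the spectral radius approaches 1 as h shrinks) is exactly what motivates the multigrid V-cycles the abstract compares against.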
Design of Chemistry Teacher Education Course on Nature of Science
ERIC Educational Resources Information Center
Vesterinen, Veli-Matti; Aksela, Maija
2013-01-01
To enhance students' understanding of nature of science (NOS), teachers need adequate pedagogical content knowledge related to NOS. The educational design research study presented here describes the design and development of a pre-service chemistry teacher education course on NOS instruction. The study documents two iterative cycles of…
76 FR 43374 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-20
...'s design and content, using an iterative process to improve the draft form to make it easier for... considered private. Type of Review: New Collection. Affected Public: Individuals and businesses or other for... other forms of information technology; and (e) estimates of capital or start-up costs and costs of...
Supporting Mathematics Instruction through Community
ERIC Educational Resources Information Center
Amidon, Joel C.; Trevathan, Morgan L.
2016-01-01
Raising expectations is nothing new. Every iteration of standards elevates the expectations for what students should know and be able to do. The Common Core State Standards for Mathematics (CCSSM) is no exception, with standards for content and practice that move beyond memorization of traditional algorithms to "make sense of problems and…
Data from: Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Methods
National Agricultural Library, Ag Data Commons (United States Department of Agriculture). License: U.S. Public Domain. Funding source(s): National Science Foundation IOS-1339211; Agricultural Research
Arc detection for the ICRF system on ITER
NASA Astrophysics Data System (ADS)
D'Inca, R.
2011-12-01
The ICRF system for ITER is designed to respect the high-voltage breakdown limits. However, arcs can still statistically happen and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, the analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issues of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating the fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new, theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns about extrapolating the results from basic experiments and present machines to the ITER-scale ICRF system and about conducting a relevant risk analysis.
Joanny, M; Salasca, S; Dapena, M; Cantone, B; Travère, J M; Thellier, C; Fermé, J J; Marot, L; Buravand, O; Perrollaz, G; Zeile, C
2012-10-01
ITER first mirrors (FMs), as the first components of most ITER optical diagnostics, will be exposed to high plasma radiation flux and neutron load. To reduce the FM heating and optical surface deformation induced during ITER operation, the use of relevant materials and a cooling system is foreseen. The calculations carried out on different materials and FM designs and geometries (100 mm and 200 mm) show that the use of CuCrZr and TZM, and a complex integrated cooling system, can efficiently limit the FM heating and reduce their optical surface deformation under plasma radiation flux and neutron load. These investigations were used to evaluate, for the ITER equatorial port visible/infrared wide-angle viewing system, the impact of the FM property changes during operation on the instrument's main optical performances. The results obtained are presented and discussed.
Nitsch, Martina; Dimopoulos, Christina N; Flaschberger, Edith; Saffran, Kristina; Kruger, Jenna F; Garlock, Lindsay; Wilfley, Denise E; Taylor, Craig B; Jones, Megan
2016-01-11
Numerous digital health interventions have been developed for mental health promotion and intervention, including for eating disorders. Efficacy of many interventions has been evaluated, yet knowledge about reasons for dropout and poor adherence is scarce. Most digital health intervention studies lack appropriate research design and methods to investigate individual engagement issues. User engagement and program usability are inextricably linked, making usability studies vital in understanding and improving engagement. The aim of this study was to explore engagement and corresponding usability issues of the Healthy Body Image Program, a guided online intervention for individuals with body image concerns or eating disorders. The secondary aim was to demonstrate the value of usability research in order to investigate engagement. We conducted an iterative usability study based on a mixed-methods approach, combining cognitive and semistructured interviews as well as questionnaires, prior to program launch. Two separate rounds of usability studies were completed with a total of 9 potential users. Thematic analysis and descriptive statistics were used to analyze the think-aloud tasks, interviews, and questionnaires. Participants were satisfied with the overall usability of the program. The average usability score was 77.5/100 for the first test round and improved to 83.1/100 after applying modifications for the second iteration. The analysis of the qualitative data revealed five central themes: layout, navigation, content, support, and engagement conditions. The first three themes highlight usability aspects of the program, while the latter two highlight engagement issues. An easy-to-use format, clear wording, the nature of guidance, and opportunity for interactivity were important issues related to usability.
The coach support, time investment, and severity of users' symptoms, the program's features and effectiveness, trust, anonymity, and affordability were relevant to engagement. This study identified salient usability and engagement features associated with participant motivation to use the Healthy Body Image Program and ultimately helped improve the program prior to its implementation. This research demonstrates that improvements in usability and engagement can be achieved by testing and adjusting intervention design and content prior to program launch. The results are consistent with related research and reinforce the need for further research to identify usage patterns and effective means for reducing dropout. Digital health research should include usability studies prior to efficacy trials to help create more user-friendly programs that have a higher likelihood of "real-world" adoption.
MacNeil, Cheryl; Hand, Theresa
2014-01-01
This article discusses a 1-yr evaluation study of a master of science in occupational therapy program to examine curriculum content and pedagogical practices as a way to gauge program preparedness to move to a clinical doctorate. Faculty members participated in a multitiered qualitative study that included curriculum mapping, semistructured individual interviewing, and iterative group analysis. Findings indicate that curriculum mapping and authentic dialogue helped the program formulate a more streamlined and integrated curriculum with increased faculty collaboration. Curriculum mapping and collaborative pedagogical reflection are valuable evaluation strategies for examining preparedness to offer a clinical doctorate, enhancing a self-study process, and providing information for ongoing formative curriculum review. Copyright © 2014 by the American Occupational Therapy Association, Inc.
Vandermause, Roxanne; Barbosa-Leiker, Celestina; Fritz, Roschelle
2014-12-01
This multimethod, qualitative study provides results for educators of nursing doctoral students to consider. Combining the expertise of an empirical analytical researcher (who uses statistical methods) and an interpretive phenomenological researcher (who uses hermeneutic methods), a course was designed that would place doctoral students in the midst of multiparadigmatic discussions while learning fundamental research methods. Field notes and iterative analytical discussions led to patterns and themes that highlight the value of this innovative pedagogical application. Using content analysis and interpretive phenomenological approaches, together with one of the students, data were analyzed from field notes recorded in real time over the period the course was offered. This article describes the course and the study analysis, and offers the pedagogical experience as transformative. A link to a sample syllabus is included in the article. The results encourage nurse educators of doctoral nursing students to focus educational practice on multiple methodological perspectives. Copyright 2014, SLACK Incorporated.
Li, Xiongwei; Wang, Zhe; Fu, Yangting; Li, Zheng; Liu, Jianmin; Ni, Weidou
2014-01-01
Measurement of coal carbon content using laser-induced breakdown spectroscopy (LIBS) is limited by its low precision and accuracy. A modified spectrum standardization method was proposed to achieve both reproducible and accurate results for the quantitative analysis of carbon content in coal using LIBS. The proposed method used the molecular emissions of diatomic carbon (C2) and cyanide (CN) to compensate for the diminution of atomic carbon emissions in high volatile content coal samples caused by matrix effect. The compensated carbon line intensities were further converted into an assumed standard state with standard plasma temperature, electron number density, and total number density of carbon, under which the carbon line intensity is proportional to its concentration in the coal samples. To obtain better compensation for fluctuations of total carbon number density, the segmental spectral area was used and an iterative algorithm was applied that is different from our previous spectrum standardization calculations. The modified spectrum standardization model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrate that the proposed method has superior performance over the generally applied normalization methods. The average relative standard deviation was 3.21%, the coefficient of determination was 0.90, the root mean square error of prediction was 2.24%, and the average maximum relative error for the modified model was 12.18%, showing an overall improvement over the corresponding values for the normalization with segmental spectrum area, 6.00%, 0.75, 3.77%, and 15.40%, respectively.
Exploiting parallel computing with limited program changes using a network of microcomputers
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.
1985-01-01
Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.
2011-01-01
Background: Available measures of patient-reported outcomes for complementary and alternative medicine (CAM) inadequately capture the range of patient-reported treatment effects. The Self-Assessment of Change questionnaire was developed to measure multi-dimensional shifts in well-being for CAM users. With content derived from patient narratives, items were subsequently focused through interviews on a new cohort of participants. Here we present the development of the final version, in which the content and format are refined through cognitive interviews. Methods: We conducted cognitive interviews across five iterations of questionnaire refinement with a culturally diverse sample of 28 CAM users. In each iteration, participant critiques were used to revise the questionnaire, which was then re-tested in subsequent rounds of cognitive interviews. Following all five iterations, transcripts of cognitive interviews were systematically coded and analyzed to examine participants' understanding of the format and content of the final questionnaire. Based on these data, we established summary descriptions and selected exemplar quotations for each word pair on the final questionnaire. Results: The final version of the Self-Assessment of Change questionnaire (SAC) includes 16 word pairs, nine of which remained unchanged from the original draft. Participants consistently said that these stable word pairs represented opposite ends of the same domain of experience, and the meanings of these terms were stable across the participant pool. Five pairs underwent revision and two word pairs were added. Four word pairs were eliminated for redundancy or because participants did not agree on the meaning of the terms. Cognitive interviews indicate that participants understood the format of the questionnaire and considered each word pair to represent opposite poles of a shared domain of experience.
Conclusions: We have placed lay language and direct experience at the center of questionnaire revision and refinement. In so doing, we provide an innovative model for the development of truly patient-centered outcome measures. Although this instrument was designed and tested in a CAM-specific population, it may be useful in assessing multi-dimensional shifts in well-being across a broader patient population. PMID:22206409
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.
1990-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
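The first preconditioner described above, a diagonal matrix, corresponds to the classic Jacobi-preconditioned conjugate gradient method. As a hedged sketch (a generic textbook PCG on a hypothetical stiffness-like matrix, not the paper's vectorized implementation):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner.

    M_inv_diag holds the elementwise inverse of diag(A); applying the
    preconditioner is just an elementwise multiply, which is why this
    variant achieves high computation rates on vector hardware.
    """
    x = np.zeros_like(b)
    r = b - A @ x                # initial residual
    z = M_inv_diag * r           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Hypothetical SPD test system: 1D Laplacian stencil plus a diagonal shift
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
```

A sparse incomplete-factorization preconditioner would replace the elementwise multiply with a triangular solve, trading per-iteration cost for the reduced iteration counts the abstract mentions.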
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.
1992-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
Long-term evolution of the impurity composition and impurity events with the ITER-like wall at JET
NASA Astrophysics Data System (ADS)
Coenen, J. W.; Sertoli, M.; Brezinsek, S.; Coffey, I.; Dux, R.; Giroud, C.; Groth, M.; Huber, A.; Ivanova, D.; Krieger, K.; Lawson, K.; Marsen, S.; Meigs, A.; Neu, R.; Puetterich, T.; van Rooij, G. J.; Stamp, M. F.; Contributors, JET-EFDA
2013-07-01
This paper covers aspects of long-term evolution of intrinsic impurities in the JET tokamak with respect to the newly installed ITER-like wall (ILW). First, the changes related to the changeover from JET-C to the JET-ILW, with beryllium (Be) as the main wall material and tungsten (W) in the divertor, are discussed. The evolution of impurity fluxes in the newly installed W divertor with respect to studying material migration is described. In addition, a statistical analysis of transient impurity events causing significant plasma contamination and radiation losses is shown. The main findings comprise a drop in carbon content (×20) (see also Brezinsek et al (2013 J. Nucl. Mater. 438 S303)), low oxygen content (×10) due to the Be first wall (Douai et al 2013 J. Nucl. Mater. 438 S1172-6) as well as the evolution of the material mix in the divertor. Initially, a short period of repetitive ohmic plasmas was carried out to study material migration (Krieger et al 2013 J. Nucl. Mater. 438 S262). After the initial 1600 plasma seconds the material surface composition is, however, still evolving. With operational time, the levels of recycled C are increasing slightly, by 20%, while the Be levels in the deposition-dominated inner divertor are dropping, hinting at changes in the surface layer material mix made of Be, C and W. A steady number of transient impurity events, consisting of W and constituents of Inconel, is observed despite the increasing variation in machine operation, changes in magnetic configuration, and the increase in auxiliary power.
Rosneck, James S; Hughes, Joel; Gunstad, John; Josephson, Richard; Noe, Donald A; Waechter, Donna
2014-01-01
This article describes the systematic construction and psychometric analysis of a knowledge assessment instrument for phase II cardiac rehabilitation (CR) patients, measuring risk-modification and disease-management knowledge and behavioral outcomes derived from national standards relevant to secondary prevention and management of cardiovascular disease. First, using an adult curriculum based on disease-specific learning outcomes and competencies, a systematic test item development process was completed by clinical staff. Second, a panel of educational and clinical experts used an iterative process to identify the test content domain and arrive at consensus in selecting items meeting criteria. Third, the resulting 31-question instrument, the Cardiac Knowledge Assessment Tool (CKAT), was piloted in CR patients to ensure ease of application. Validity and reliability analyses were performed on pretest administrations from 3638 adults, with additional focused analyses of 1999 individuals completing both pretreatment and posttreatment administrations within 6 months. Evidence of CKAT content validity was substantiated, with 85% agreement among content experts. Evidence of construct validity was demonstrated via factor analysis identifying key underlying factors. Estimates of internal consistency, for example, Cronbach's α = .852 and Spearman-Brown split-half reliability = 0.817 on pretesting, support test reliability. Item analysis, using point biserial correlation, measured relationships between performance on single items and total score (P < .01). Analyses using item difficulty and item discrimination indices further verified item stability and validity of the CKAT. A knowledge instrument specifically designed for an adult CR population was systematically developed and tested in a large representative patient population, satisfying psychometric parameters, including validity and reliability.
Hyperchromatic laser scanning cytometry
NASA Astrophysics Data System (ADS)
Tárnok, Attila; Mittag, Anja
2007-02-01
In the emerging fields of high-content and high-throughput single cell analysis for Systems Biology and Cytomics, multi- and polychromatic analysis of biological specimens has become increasingly important. Combining different technologies and staining methods, polychromatic analysis (i.e., using 8 or more fluorescent colors at a time) can be pushed forward to measure anything stainable in a cell, an approach termed hyperchromatic cytometry. For cytometric cell analysis, microscope-based Slide Based Cytometry (SBC) technologies are ideal as, unlike flow cytometry, they are non-consumptive, i.e., the analyzed sample remains fixed on the slide. Based on the relocation feature, identical cells can subsequently be reanalyzed. In this manner data on the single cell level after manipulation steps can be collected. In this overview various components for hyperchromatic cytometry are demonstrated for an SBC instrument, the Laser Scanning Cytometer (Compucyte Corp., Cambridge, MA): 1) polychromatic cytometry, 2) iterative restaining (using the same fluorochrome for restaining and subsequent reanalysis), 3) differential photobleaching (differentiating fluorochromes by their different photostability), 4) photoactivation (activating fluorescent nanoparticles or photocaged dyes), and 5) photodestruction (destruction of FRET dyes). With the intelligent combination of several of these techniques, hyperchromatic cytometry allows virtually all components of relevance on the identical cell to be quantified and analyzed. The combination of high-throughput and high-content SBC analysis with high-resolution confocal imaging allows clear verification of phenotypically distinct subpopulations of cells with structural information. The information gained per specimen is only limited by the number of available antibodies and by steric hindrance.
Stokes-Doppler coherence imaging for ITER boundary tomography.
Howard, J; Kocan, M; Lisgo, S; Reichle, R
2016-11-01
An optical coherence imaging system is presently being designed for impurity transport studies and other applications on ITER. The wide variation in magnetic field strength and pitch angle (assumed known) across the field of view generates additional Zeeman-polarization-weighting information that can improve the reliability of tomographic reconstructions. Because background reflected light will be somewhat depolarized, analysis of only the polarized fraction may be enough to provide a level of background suppression. We present the principles behind these ideas and some simulations that demonstrate how the approach might work on ITER. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
Efficient solution of the simplified P N equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
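To make the solver comparison concrete, the sketch below runs plain power iteration and Rayleigh quotient iteration on a small synthetic symmetric matrix standing in for the assembled operator; the matrix, its spectrum, the iteration counts, and the tolerances are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Synthetic symmetric test matrix with a known spectrum (illustrative only):
# eigenvalues linspace(1, 2), so the dominant eigenvalue is exactly 2.0.
rng = np.random.default_rng(0)
d = np.linspace(1.0, 2.0, 50)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(d) @ Q.T

def power_iteration(A, iters=2000):
    """Plain power iteration: converges to the dominant eigenvalue, slowly
    when the spectral gap is small (rate |lambda_2/lambda_1| per step)."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x  # Rayleigh quotient of the converged vector

def rayleigh_quotient_iteration(A, iters=20):
    """Inverse iteration with a Rayleigh-quotient shift: locally cubic
    convergence to *some* eigenpair (the one nearest the running shift)."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    mu = x @ A @ x
    for _ in range(iters):
        try:
            x = np.linalg.solve(A - mu * np.eye(n), x)
        except np.linalg.LinAlgError:
            break  # shift hit an eigenvalue to machine precision
        x /= np.linalg.norm(x)
        mu = x @ A @ x
    return mu

lam_pi = power_iteration(A)               # dominant eigenvalue, many cheap multiplies
lam_rqi = rayleigh_quotient_iteration(A)  # an eigenvalue near the initial shift, few solves
```

The cost contrast (thousands of matrix-vector products versus a handful of shifted solves) is the basic trade-off that the preconditioned Arnoldi and Davidson methods compared in the abstract are designed to improve on.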
TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB
2016-06-15
Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures, based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of the anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed, as these have been shown to produce greater stability than source iteration.
Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
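The quoted infinite-medium result (spectral radius of source iteration equals the scattering-to-total cross-section ratio) can be checked on a scalar analogue. The sketch below is a one-group, infinite-medium caricature with illustrative cross sections, not the full LBTE with magnetic fields:

```python
import numpy as np

# One-group, infinite-medium caricature of source iteration (illustrative
# cross sections): phi_{k+1} = c*phi_k + q/sigma_t, with c = sigma_s/sigma_t.
# The iteration error shrinks by exactly c per sweep, i.e. the spectral
# radius of source iteration equals the scattering ratio.
sigma_t, sigma_s, q = 1.0, 0.8, 1.0
c = sigma_s / sigma_t
phi_exact = (q / sigma_t) / (1.0 - c)   # fixed point of the iteration

phi = 0.0
errors = []
for _ in range(30):
    phi = c * phi + q / sigma_t         # one source-iteration sweep
    errors.append(abs(phi - phi_exact))

# Observed error-reduction factor per sweep approaches c (here 0.8).
ratios = [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]
```

As c approaches 1 (weak absorption) this iteration stalls; the abstract's finding is that magnetic-field terms can push the effective spectral radius past 1, which is exactly the divergence regime the authors report.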
ERIC Educational Resources Information Center
Morlier, Rebecca
2012-01-01
The purpose of this paper is to evaluate the effectiveness of the 2009-2010 iteration of the Correlated Science and Mathematics (CSM) professional development program which provides teachers and principals experience with integrated and effective science and mathematics teaching strategies and content. Archival CSM data was analyzed via mixed…
Representing Targets of Measurement within ECD
ERIC Educational Resources Information Center
Ewing, Maureen; Packman, Sheryl; Hamen, Cynthia; Clark, Allison
2009-01-01
Presented at the Annual Meeting of National Council on Measurement in Education (NCME) in San Diego, CA in April 2009. This presentation describes the methodology that was used with subject-matter experts (SMEs) to articulate the content and skills important in the domain, and then the iterative processes that were used to articulate the claims…
Functional materials for breeding blankets—status and developments
NASA Astrophysics Data System (ADS)
Konishi, S.; Enoeda, M.; Nakamichi, M.; Hoshino, T.; Ying, A.; Sharafat, S.; Smolentsev, S.
2017-09-01
The development of tritium breeder, neutron multiplier and flow channel insert materials for the breeding blanket of the DEMO reactor is reviewed. Present emphasis is on the ITER test blanket module (TBM); lithium metatitanate (Li2TiO3) and lithium orthosilicate (Li4SiO4) pebbles have been developed by leading TBM parties. Beryllium pebbles have been selected as the neutron multiplier. Good progress has been made in their fabrication; however, verification of the design by experiments is in the planning stage. Irradiation data are also limited, but the decrease in thermal conductivity of beryllium due to irradiation followed by swelling is a concern. Tests at ITER are regarded as a major milestone. For the DEMO reactor, improvement of the breeder has been attempted to obtain a higher lithium content, and Be12Ti and other beryllide intermetallic compounds that have superior chemical stability have been studied. LiPb eutectic has been considered as a DEMO blanket in the liquid breeder option and is used as a coolant to achieve a higher outlet temperature; a SiC flow channel insert is used to prevent magnetohydrodynamic pressure drop and corrosion. A significant technical gap between ITER TBM and DEMO is recognized, and the world fusion community is working on ITER TBM and DEMO blanket development in parallel.
Electromagnetic Analysis of ITER Diagnostic Equatorial Port Plugs During Plasma Disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Zhai, R. Feder, A. Brooks, M. Ulrickson, C.S. Pitcher and G.D. Loesser
2012-08-27
ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing for diagnostic access to the plasma. The design of diagnostic equatorial port plugs (EPP) is largely driven by electromagnetic loads and the associated responses of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPP. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on system design integration due to electrical contact among various EPP structural components, are discussed.
Observer-based distributed adaptive iterative learning control for linear multi-agent systems
NASA Astrophysics Data System (ADS)
Li, Jinsha; Liu, Sanyang; Li, Junmin
2017-10-01
This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed in this paper. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve consensus perfectly for any undirected connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.
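The trial-to-trial contraction that iterative learning control relies on can be sketched in a stripped-down, single-agent scalar form; the plant gain, learning gain, and reference trajectory below are hypothetical, and the paper's observer-based multi-agent protocol is far richer than this caricature.

```python
import numpy as np

# Minimal iterative learning control (ILC) sketch: plant y = b*u applied over
# a finite trial, learning law u <- u + gamma*e between trials. The error
# contracts by |1 - gamma*b| per iteration, echoing the two-dimensional
# (time axis within a trial, iteration axis across trials) viewpoint.
b, gamma = 2.0, 0.3                       # plant and learning gains: |1 - gamma*b| = 0.4 < 1
T = 20                                    # samples per trial
y_ref = np.sin(np.linspace(0.0, np.pi, T))  # desired trajectory

u = np.zeros(T)
err_norms = []
for k in range(40):                       # iterations (trials)
    y = b * u                             # run the trial
    e = y_ref - y                         # tracking error for this trial
    err_norms.append(np.linalg.norm(e))
    u = u + gamma * e                     # ILC update used in the next trial
```

The geometric decay of `err_norms` across trials is the scalar analogue of the "consensus perfectly as the number of iterations tends to infinity" property claimed in the abstract.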
Numerical analysis of modified Central Solenoid insert design
Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; ...
2015-06-21
The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, current and mechanical strain. The USIPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagnetic simulations supported the structural calculations by providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current; (2) temperature 4 K, no current; (3) temperature 4 K, current 60 kA direct charge; and (4) temperature 4 K, current 60 kA reverse charge. A fatigue life assessment is performed for the alternating conditions of temperature 4 K, no current, and temperature 4 K, current 45 kA direct charge. Results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from the numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.
Ginsburg, Shiphra; Eva, Kevin; Regehr, Glenn
2013-10-01
Although scores on in-training evaluation reports (ITERs) are often criticized for poor reliability and validity, ITER comments may yield valuable information. The authors assessed across-rotation reliability of ITER scores in one internal medicine program, ability of ITER scores and comments to predict postgraduate year three (PGY3) performance, and reliability and incremental predictive validity of attendings' analysis of written comments. Numeric and narrative data from the first two years of ITERs for one cohort of residents at the University of Toronto Faculty of Medicine (2009-2011) were assessed for reliability and predictive validity of third-year performance. Twenty-four faculty attendings rank-ordered comments (without scores) such that each resident was ranked by three faculty. Mean ITER scores and comment rankings were submitted to regression analyses; dependent variables were PGY3 ITER scores and program directors' rankings. Reliabilities of ITER scores across nine rotations for 63 residents were 0.53 for both postgraduate year one (PGY1) and postgraduate year two (PGY2). Interrater reliabilities across three attendings' rankings were 0.83 for PGY1 and 0.79 for PGY2. There were strong correlations between ITER scores and comments within each year (0.72 and 0.70). Regressions revealed that PGY1 and PGY2 ITER scores collectively explained 25% of variance in PGY3 scores and 46% of variance in PGY3 rankings. Comment rankings did not improve predictions. ITER scores across multiple rotations showed decent reliability and predictive validity. Comment ranks did not add to the predictive ability, but correlation analyses suggest that trainee performance can be measured through these comments.
Østerås, Bjørn Helge; Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T
2016-08-01
Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. To evaluate qualitative and quantitative image quality for full dose and dose reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Fourteen patients undergoing follow-up head CT were included. All patients underwent full dose (FD) exam and subsequent 15% dose reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, peripheral and central gray matter. Additionally, quantitative image quality was measured in Catphan and vendor's water phantom. There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between -3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF, and -7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3 to 28% with image content. There was no significant difference in qualitative image quality between full dose FBP and dose reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality.
Be ITER-like wall at the JET tokamak under plasma
NASA Astrophysics Data System (ADS)
Tsavalas, P.; Lagoyannis, A.; Mergia, K.; Rubel, M.; Triantou, K.; Harissopulos, S.; Kokkoris, M.; Petersson, P.; Contributors, JET
2017-12-01
The JET tokamak is operated with beryllium and tungsten plasma-facing components to prepare for the exploitation of ITER. To determine beryllium erosion and migration in JET, a set of markers was installed. Specimens from different beryllium marker tiles of the main wall of the JET ITER-like wall (ILW), retrieved after the first and second D-D campaigns, were analyzed with nuclear reaction analysis, x-ray fluorescence spectroscopy, scanning electron microscopy and x-ray diffraction (XRD). Emphasis was on the determination of carbon plasma impurities deposited on beryllium surfaces. The 12C(d, p0)13C reaction was used to quantify carbon deposition and to determine depth profiles. Carbon quantities on the surface of the Be tiles are low, varying from (0.35 ± 0.07) × 1017 to (11.8 ± 0.6) × 1017 at cm-2 over deposition depths from 0.4 to 6.7 μm, respectively. In the 0.4-0.5 mm wide grooves of the castellation sides the carbon content is found to be up to (14.3 ± 2.5) × 1017 at cm-2, while it is higher (up to (38 ± 4) × 1017 at cm-2) in the wider gaps (0.8 mm) separating tile segments. Oxygen (O), titanium (Ti), chromium (Cr), manganese (Mn), iron (Fe), nickel (Ni) and tungsten (W) were detected in all samples exposed to plasma and in the reference sample, though at lower quantities in the latter. In the central part of the Inner Wall Guard Limiter from the first ILW campaign and in the Outer Poloidal Limiter from the second ILW campaign, the Ni interlayer has been completely eroded. XRD shows the formation of BeNi in most specimens.
Multi-Mbar Ramp Compression of Copper
NASA Astrophysics Data System (ADS)
Kraus, Rick; Davis, Jean-Paul; Seagle, Christopher; Fratanduono, Dayne; Swift, Damian; Eggert, Jon; Collins, Gilbert
2015-06-01
The cold curve is a critical component of equation of state models. Diamond anvil cell measurements can be used to determine isotherms, but these have generally been limited to pressures below 1 Mbar. The cold curve can also be extracted from Hugoniot data, but only with assumptions about the thermal pressure. As the National Ignition Facility will be using copper as an ablator material at pressures in excess of 10 Mbar, we need a better understanding of the high-density equation of state. Here we present ramp-wave compression experiments at the Sandia Z-Machine that we have used to constrain the isentrope of copper to a stress state of nearly 5 Mbar. We use the iterative Lagrangian analysis technique, developed by Rothman and Maw, to determine the stress-strain path. We also present a new iterative forward analysis (IFA) technique, coupled to the ARES hydrocode, that performs a non-linear optimization over the pressure drive and equation of state in order to match the free-surface velocities. The IFA technique offers an advantage over iterative Lagrangian analysis for experiments with growing shocks or systems with time-dependent strength, which violate the assumptions of iterative Lagrangian analysis. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric
2011-03-01
Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying the image based on features like the complexity of the background and the visibility of the disease (lesions). An automatic medical background classification tool for mammograms would therefore help with such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework, which was first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfying accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one "strong classifier" show good accuracy with high true-positive rates. For the four categories the results are: TP = 90.38%, TN = 67.88%, FP = 32.12% and FN = 9.62%.
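The weak-to-strong boosting step described above can be sketched as hand-rolled AdaBoost over one-dimensional threshold stumps; the data, thresholds, noise level, and round count below are illustrative stand-ins, not the MCA framework's actual texture features.

```python
import numpy as np

# Hand-rolled AdaBoost sketch: threshold "stumps" act as weak classifiers
# (global error below 50%), and the weighted vote forms the strong classifier.
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0.0, 1.0, n)
y_true = np.where(x > 0.5, 1, -1)                # ground-truth labels
flip = rng.uniform(size=n) < 0.1                 # 10% label noise
y_train = np.where(flip, -y_true, y_true)

def make_stump(theta, sign):
    # Factory function avoids the late-binding pitfall of closures in a loop.
    return lambda x: sign * np.where(x > theta, 1, -1)

candidates = [make_stump(t, s)
              for t in np.linspace(0.05, 0.95, 19) for s in (1, -1)]

w = np.full(n, 1.0 / n)                          # case weights
alphas, chosen = [], []
for _ in range(10):                              # boosting rounds
    errs = [np.sum(w[h(x) != y_train]) for h in candidates]
    best = int(np.argmin(errs))
    eps = errs[best]
    if eps <= 0.0 or eps >= 0.5:                 # no usable weak learner left
        break
    h = candidates[best]
    alpha = 0.5 * np.log((1.0 - eps) / eps)      # weak learner's vote weight
    w *= np.exp(-alpha * y_train * h(x))         # up-weight the mistakes
    w /= w.sum()
    alphas.append(alpha)
    chosen.append(h)

def strong_classifier(x):
    return np.sign(sum(a * h(x) for a, h in zip(alphas, chosen)))

accuracy = float(np.mean(strong_classifier(x) == y_true))
```

Each round drives the weighted training error down even though every individual stump is only slightly better than chance, which is exactly the weak-to-strong combination the abstract describes.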
2009 Space Shuttle Probabilistic Risk Assessment Overview
NASA Technical Reports Server (NTRS)
Hamlin, Teri L.; Canga, Michael A.; Boyer, Roger L.; Thigpen, Eric B.
2010-01-01
Loss of a Space Shuttle during flight has severe consequences, including loss of a significant national asset; loss of national confidence and pride; and, most importantly, loss of human life. The Shuttle Probabilistic Risk Assessment (SPRA) is used to identify risk contributors and their significance, thus assisting management in determining how to reduce risk. In 2006, an overview of SPRA Iteration 2.1 was presented at PSAM 8 [1]. Like all successful PRAs, the SPRA is a living PRA and has undergone revisions since PSAM 8. The latest revision to the SPRA is Iteration 3.1, and it will not be the last as the Shuttle program progresses and more is learned. This paper discusses the SPRA scope, overall methodology, and results, as well as provides risk insights. The scope, assumptions, uncertainties, and limitations of this assessment provide a risk-informed perspective to aid management's decision-making process. In addition, this paper compares the Iteration 3.1 analysis and results to the Iteration 2.1 analysis and results presented at PSAM 8.
Stability of the iterative solutions of integral equations as one phase freezing criterion.
Fantoni, R; Pastore, G
2003-10-01
A recently proposed connection between the threshold for the stability of the iterative solution of integral equations for the pair correlation functions of a classical fluid and the structural instability of the corresponding real fluid is carefully analyzed. Direct calculation of the Lyapunov exponent of the standard iterative solution of the hypernetted chain and Percus-Yevick integral equations for the one-dimensional (1D) hard-rod fluid shows the same behavior observed in 3D systems. Since no phase transition is allowed in such a 1D system, our analysis shows that the proposed one-phase criterion, at least in this case, fails. We argue that the observed proximity between the numerical and the structural instability in 3D originates from the enhanced structure present in the fluid but, in view of the arbitrary dependence on the iteration scheme, it seems difficult to relate the numerical stability analysis to a robust one-phase criterion for predicting a thermodynamic phase transition.
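The Lyapunov-exponent diagnostic used above can be illustrated on a generic one-dimensional fixed-point iteration; the maps below are arbitrary smooth functions chosen for the sketch, not actual integral-equation closures.

```python
import numpy as np

# Stability of a Picard iteration x_{n+1} = g(x_n): the iteration is stable
# iff the Lyapunov exponent (average of ln|g'| along the orbit) is negative.
def lyapunov_exponent(g, dg, x0, n=2000, burn=200):
    """Average log-derivative along the orbit; negative means the standard
    iterative solution is stable, positive means it diverges or wanders."""
    x, acc, count = x0, 0.0, 0
    for i in range(n):
        x = g(x)
        if i >= burn:                    # discard the transient
            acc += np.log(abs(dg(x)))
            count += 1
    return acc / count

# A contracting map: |g'| <= 0.7 < 1, so iteration converges to a fixed point.
g = lambda x: 0.7 * np.cos(x)
dg = lambda x: -0.7 * np.sin(x)
lam = lyapunov_exponent(g, dg, x0=0.3)

# The same machinery flags loss of stability once the map stops contracting.
g2 = lambda x: 2.5 * np.cos(x)
dg2 = lambda x: -2.5 * np.sin(x)
lam2 = lyapunov_exponent(g2, dg2, x0=0.3)
```

The sign change of the exponent as the map's "coupling" grows is the numerical-instability threshold whose physical interpretation the paper calls into question.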
Vedanthan, Rajesh; Blank, Evan; Tuikong, Nelly; Kamano, Jemima; Misoi, Lawrence; Tulienge, Deborah; Hutchinson, Claire; Ascheim, Deborah D; Kimaiyo, Sylvester; Fuster, Valentin; Were, Martin C
2015-03-01
Mobile health (mHealth) applications have recently proliferated, especially in low- and middle-income countries, complementing task-redistribution strategies with clinical decision support. Relatively few studies address usability and feasibility issues that may impact success or failure of implementation, and few have been conducted for non-communicable diseases such as hypertension. To conduct iterative usability and feasibility testing of a tablet-based Decision Support and Integrated Record-keeping (DESIRE) tool, a technology intended to assist rural clinicians taking care of hypertension patients at the community level in a resource-limited setting in western Kenya. Usability testing consisted of "think aloud" exercises and "mock patient encounters" with five nurses, as well as one focus group discussion. Feasibility testing consisted of semi-structured interviews of five nurses and two members of the implementation team, and one focus group discussion with nurses. Content analysis was performed using both deductive codes and significant inductive codes. Critical incidents were identified and ranked according to severity. A cause-of-error analysis was used to develop corresponding design change suggestions. Fifty-seven critical incidents were identified in usability testing, 21 of which were unique. The cause-of-error analysis yielded 23 design change suggestions. Feasibility themes included barriers to implementation along both human and technical axes, facilitators to implementation, provider issues, patient issues and feature requests. This participatory, iterative human-centered design process revealed previously unaddressed usability and feasibility issues affecting the implementation of the DESIRE tool in western Kenya. In addition to well-known technical issues, we highlight the importance of human factors that can impact implementation of mHealth interventions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.
Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos
2013-11-04
In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a capacity improvement of 0.1 nats over conventional chip-level OCDMA systems at a coding rate of 1/10.
Electromagnetic Analysis For The Design Of ITER Diagnostic Port Plugs During Plasma Disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y
2014-03-03
ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing for diagnostic access to the plasma. The design of diagnostic equatorial port plugs (EPP) is largely driven by electromagnetic loads and the associated response of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPP. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on system design integration due to electrical contact among various EPP structural components, are discussed.
Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals
Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A.; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A.; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R.R.; Kasukawa, Takeya; Kawaji, Hideya
2017-01-01
Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and the functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we think the scientific community will benefit from this recent update to deepen their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates in the web resource and provide future perspectives. PMID:27794045
Development and Feasibility of a Structured Goals of Care Communication Guide.
Bekelman, David B; Johnson-Koenke, Rachel; Ahluwalia, Sangeeta C; Walling, Anne M; Peterson, Jamie; Sudore, Rebecca L
2017-09-01
Discussing goals of care and advance care planning is beneficial, yet how to best integrate goals of care communication into clinical care remains unclear. To develop and determine the feasibility of a structured goals of care communication guide for nurses and social workers. Developmental study with providers in an academic and Veterans Affairs (VA) health system (n = 42) and subsequent pilot testing with patients with chronic obstructive pulmonary disease or heart failure (n = 15) and informal caregivers (n = 4) in a VA health system. During pilot testing, the communication guide was administered, followed by semistructured, open-ended questions about the content and process of communication. Changes to the guide were made iteratively, and subsequent piloting occurred until no additional changes emerged. Provider and patient feedback to the communication guide. Iterative input resulted in the goals of care communication guide. The guide included questions to elicit patient understanding of and attitudes toward the future of illness, clarify values and goals, identify end-of-life preferences, and agree on a follow-up plan. Revisions to guide content and phrasing continued during development and pilot testing. In pilot testing, patients validated the importance of the topic; none said the goals of care discussion should not be conducted. Patients and informal caregivers liked the final guide length (∼30 minutes), felt it flowed well, and was clear. In this developmental and pilot study, a structured goals of care communication guide was iteratively designed, implemented by nurses and social workers, and was feasible based on administration time and acceptability by patients and providers.
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
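The iteratively reweighted least squares scheme this abstract describes can be illustrated in a minimal one-dimensional form. The Huber weight, the tuning constant c = 1.345, and the MAD-based scale below are standard textbook choices, not details taken from Yuan and Bentler's paper:

```python
def huber_weight(r, c=1.345):
    # Huber weight: full weight inside the threshold c, downweighted outside
    return 1.0 if abs(r) <= c else c / abs(r)

def irls_mean(data, c=1.345, tol=1e-8, max_iter=100):
    """Robust location estimate via iteratively reweighted least squares."""
    mu = sum(data) / len(data)  # start from the ordinary (non-robust) mean
    med = sorted(data)[len(data) // 2]
    # median absolute deviation as a fixed robust scale
    # (1.4826 makes it consistent for normal data)
    mad = sorted(abs(x - med) for x in data)[len(data) // 2] or 1.0
    scale = 1.4826 * mad
    for _ in range(max_iter):
        w = [huber_weight((x - mu) / scale, c) for x in data]
        mu_new = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu
```

Each case's weight shrinks with its scaled distance from the current estimate, so a gross outlier barely moves the fit: on data such as [1, 2, 3, 4, 5, 100] the robust estimate stays near the bulk of the data while the ordinary mean is dragged above 15.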
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound image-based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is well suited to clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been shown to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of breast ultrasound CAD based on texture features, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared the Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.
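The iterated Laplacian regularization mentioned above can be sketched on a toy graph. The path graph, the two labelings, and the order-2 iteration below are illustrative assumptions, not the paper's ultrasound features; the point is only that the penalty f^T L^m f favors labelings that vary smoothly over the graph, more sharply as the Laplacian is iterated:

```python
def laplacian(W):
    """Graph Laplacian L = D - W for a symmetric weight matrix W (zero diagonal)."""
    n = len(W)
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def quad(f, M):
    """Regularization penalty f^T M f; small for labelings smooth on the graph."""
    return sum(f[i] * sum(M[i][j] * f[j] for j in range(len(f)))
               for i in range(len(f)))

# 4-node path graph 0-1-2-3
W = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
L = laplacian(W)
L2 = matmul(L, L)  # iterated (order-2) Laplacian regularizer
```

A labeling that flips sign once along the path incurs a much smaller penalty than one that alternates at every node, and the gap widens under the iterated regularizer.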
Berkowitz, Seth A; Eisenstat, Stephanie A; Barnard, Lily S; Wexler, Deborah J
2018-06-01
To explore the patient perspective on coordinated multidisciplinary diabetes team care among a socioeconomically diverse group of adults with type 2 diabetes. Qualitative research design using 8 focus groups (n=53). We randomly sampled primary care patients with type 2 diabetes and conducted focus groups at their primary care clinic. Discussion prompts queried current perceptions of team care. Each focus group was audio recorded, transcribed verbatim, and independently coded by three reviewers. Coding used an iterative process. Thematic saturation was achieved. Data were analyzed using content analysis. Most participants believed that coordinated multidisciplinary diabetes team care was a good approach, feeling that diabetes was too complicated for any one care team member to manage. Primary care physicians were seen as too busy to manage diabetes alone, and participants were content to be treated by other care team members, especially if there was a single point of contact and the care was coordinated. Participants suggested that an ideal multidisciplinary approach would additionally include support for exercise and managing socioeconomic challenges, components perceived to be missing from the existing approach to diabetes care. Coordinated, multidisciplinary diabetes team care is understood by and acceptable to patients with type 2 diabetes. Copyright © 2018 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
Fourier analysis of the SOR iteration
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Trefethen, L. N.
1986-01-01
The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
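The optimal overrelaxation factor for the model problem can be checked numerically. This is a generic SOR sweep for the 5-point Poisson stencil, not the authors' code; omega = 2/(1 + sin(pi*h)) is the classical optimum that the paper re-derives by Fourier analysis, and omega = 1 recovers Gauss-Seidel:

```python
import math

def sor_poisson(n, f, omega, tol=1e-8, max_sweeps=10000):
    """SOR sweeps on the 5-point discretization of -laplace(u) = f on the
    unit square with zero Dirichlet boundary values; n subdivisions per side."""
    h = 1.0 / n
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for sweep in range(1, max_sweeps + 1):
        diff = 0.0
        for i in range(1, n):
            for j in range(1, n):
                # Gauss-Seidel update for this point, then overrelax by omega
                gs = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1]
                             + u[i][j + 1] + h * h * f(i * h, j * h))
                new = (1.0 - omega) * u[i][j] + omega * gs
                diff = max(diff, abs(new - u[i][j]))
                u[i][j] = new
        if diff < tol:
            break
    return u, sweep
```

With the optimal omega the sweep count drops by roughly an order of magnitude relative to Gauss-Seidel on a 16 x 16 grid.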
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.
Street, Annette F; Swift, Kathleen; Annells, Merilyn; Woodruff, Roger; Gliddon, Terry; Oakley, Anne; Ottman, Goetz
2007-01-01
Background General Practitioners and community nurses rely on easily accessible, evidence-based online information to guide practice. To date, the methods that underpin the scoping of user-identified online information needs in palliative care have remained under-explored. This paper describes the benefits and challenges of a collaborative approach involving users and experts that informed the first stage of the development of a palliative care website [1]. Method The action research-inspired methodology included a panel assessment of an existing palliative care website based in Victoria, Australia; a pre-development survey (n = 197) scoping potential audiences and palliative care information needs; working parties conducting a needs analysis about necessary information content for a redeveloped website targeting health professionals and caregivers/patients; an iterative evaluation process involving users and experts; as well as a final evaluation survey (n = 166). Results Involving users in the identification of content and links for a palliative care website is time-consuming and requires initial resources, strong networking skills and commitment. However, user participation provided crucial information that widened the scope of the website audience and guided the development and testing of the website. The needs analysis underpinning the project suggests that palliative care peak bodies need to address three distinct audiences (clinicians, allied health professionals as well as patients and their caregivers). Conclusion Web developers should pay close attention to the content, language, and accessibility needs of these groups. Given the substantial cost associated with the maintenance of authoritative health information sites, the paper proposes a more collaborative development in which users can be engaged in the definition of content to ensure relevance and responsiveness, and to eliminate unnecessary detail.
Access to volunteer networks forms an integral part of such an approach. PMID:17854509
Barbara, Angela M; Dobbins, Maureen; Haynes, R Brian; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J
2016-05-11
Increasingly, older adults and their informal caregivers are using the Internet to search for health-related information. There is a proliferation of health information online, but the quality of this information varies, often based on exaggerated or dramatic findings, and not easily comprehended by consumers. The McMaster Optimal Aging Portal (Portal) was developed to provide Internet users with high-quality evidence about aging and address some of these current limitations of health information posted online. The Portal includes content for health professionals coming from three best-in-class resources (MacPLUS, Health Evidence, and Health Systems Evidence) and four types of content specifically prepared for the general public (Evidence Summaries, Web Resource Ratings, Blog Posts, and Twitter messages). Our objectives were to share the findings of the usability evaluation of the Portal with particular focus on the content features for the general public and to inform designers of health information websites and online resources for older adults about key usability themes. Data analysis included task performance during usability testing and qualitative content analyses of both the usability sessions and interviews to identify core themes. A total of 37 participants took part in 33 usability testing sessions and 21 focused interviews. Qualitative analysis revealed common themes regarding the Portal's strengths and challenges to usability. The strengths of the website were related to credibility, applicability, browsing function, design, and accessibility. The usability challenges included reluctance to register, process of registering, searching, terminology, and technical features. The study reinforced the importance of including end users during the development of this unique, dynamic, evidence-based health information website. The feedback was applied to iteratively improve website usability. Our findings can be applied by designers of health-related websites.
Dobbins, Maureen; Haynes, R. Brian; Iorio, Alfonso; Lavis, John N; Raina, Parminder
2016-01-01
Background Increasingly, older adults and their informal caregivers are using the Internet to search for health-related information. There is a proliferation of health information online, but the quality of this information varies, often based on exaggerated or dramatic findings, and not easily comprehended by consumers. The McMaster Optimal Aging Portal (Portal) was developed to provide Internet users with high-quality evidence about aging and address some of these current limitations of health information posted online. The Portal includes content for health professionals coming from three best-in-class resources (MacPLUS, Health Evidence, and Health Systems Evidence) and four types of content specifically prepared for the general public (Evidence Summaries, Web Resource Ratings, Blog Posts, and Twitter messages). Objective Our objectives were to share the findings of the usability evaluation of the Portal with particular focus on the content features for the general public and to inform designers of health information websites and online resources for older adults about key usability themes. Methods Data analysis included task performance during usability testing and qualitative content analyses of both the usability sessions and interviews to identify core themes. Results A total of 37 participants took part in 33 usability testing sessions and 21 focused interviews. Qualitative analysis revealed common themes regarding the Portal’s strengths and challenges to usability. The strengths of the website were related to credibility, applicability, browsing function, design, and accessibility. The usability challenges included reluctance to register, process of registering, searching, terminology, and technical features. Conclusions The study reinforced the importance of including end users during the development of this unique, dynamic, evidence-based health information website. The feedback was applied to iteratively improve website usability. 
Our findings can be applied by designers of health-related websites. PMID:27170443
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew; ...
2016-09-23
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve computational rates as high as those of the vectorized direct solvers, but are best suited to well-conditioned problems, which require fewer iterations to converge to the solution.
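The direct/iterative contrast the abstract draws can be sketched on a small SPD system. The Cholesky and Jacobi routines below are generic illustrations, not the Testbed solvers; on a well-conditioned tridiagonal system the two answers agree:

```python
import math

def cholesky_solve(A, b):
    """Direct solve of an SPD system A x = b via Cholesky (A = L L^T)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    y = [0.0] * n  # forward substitution: L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n  # back substitution: L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

def jacobi(A, b, sweeps=300):
    """Iterative Jacobi solve; converges quickly on well-conditioned systems."""
    n = len(A)
    x = [0.0] * n
    for _ in range(sweeps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

The design trade-off the abstract reports is visible even here: the direct solve costs a fixed factorization, while the iterative solve's cost scales with the number of sweeps needed, which grows as conditioning worsens.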
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed poses challenges with respect to scalability, and further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for the interactive analysis of patent information has been developed that addresses scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-02-01
The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
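A minimal SIRT-style iteration, shown on a tiny consistent linear system, looks like the sketch below. The row/column-sum scaling used here is one common choice (an assumption; the TOMOJ and TOMO3D implementations the paper compares differ in exactly such details, which is its point):

```python
def sirt(A, b, iters=1000):
    """SIRT/SART-type iteration for a consistent system A x = b with
    nonnegative entries: x += C A^T R (b - A x), where R and C hold the
    inverse row and column sums of A."""
    m, n = len(A), len(A[0])
    row = [sum(A[i]) for i in range(m)]
    col = [sum(A[i][j] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for _ in range(iters):
        # residual of each "projection", normalized by its row sum
        r = [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / row[i]
             for i in range(m)]
        # back-project the residuals simultaneously into all unknowns
        for j in range(n):
            x[j] += sum(A[i][j] * r[i] for i in range(m)) / col[j]
    return x
```

The iteration-number dependence the paper studies is visible even at this scale: the error contracts geometrically, so stopping early leaves a systematic bias at object edges.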
Wake Vortex Inverse Model User's Guide
NASA Technical Reports Server (NTRS)
Lai, David; Delisi, Donald
2008-01-01
NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. 
An example of an inversion input file, with preferred parameters values, is given in Appendix A. An example of the plot generated at a normal completion of the inversion is shown in Appendix B.
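The stopping rule the guide describes, rms improvement below 1 percent for two consecutive iterations, can be sketched as a generic driver. The function names and the toy forward-model step below are hypothetical, for illustration only:

```python
def run_until_converged(step, rms_of, max_iter=100, rel_tol=0.01, patience=2):
    """Iterate `step` until the relative rms improvement stays below rel_tol
    (1 percent) for `patience` (two) consecutive iterations."""
    state = step(None)          # initial forward-model run
    prev_rms = rms_of(state)
    streak = 0
    it = 0
    for it in range(1, max_iter + 1):
        state = step(state)
        rms = rms_of(state)
        improvement = (prev_rms - rms) / prev_rms if prev_rms > 0 else 0.0
        streak = streak + 1 if improvement < rel_tol else 0
        prev_rms = rms
        if streak >= patience:
            break
    return state, it

# Toy "forward model": the rms halves each iteration, then plateaus.
def toy_step(s):
    return 1.0 if s is None else (s * 0.5 if s > 0.01 else s * 0.999)
```

Requiring two consecutive sub-threshold improvements, rather than one, guards against stopping on a single anomalously flat iteration.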
Implementation on a nonlinear concrete cracking algorithm in NASTRAN
NASA Technical Reports Server (NTRS)
Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.
1976-01-01
A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct access file system was used to save results at each load step to restart within the solution module for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.
Quality in Inclusive and Noninclusive Infant and Toddler Classrooms
ERIC Educational Resources Information Center
Hestenes, Linda L.; Cassidy, Deborah J.; Hegde, Archana V.; Lower, Joanna K.
2007-01-01
The quality of care in infant and toddler classrooms was compared across inclusive (n=64) and noninclusive classrooms (n=400). Quality was measured using the Infant/Toddler Environment Rating Scale-Revised (ITERS-R). An exploratory and confirmatory factor analysis revealed four distinct dimensions of quality within the ITERS-R. Inclusive…
Representation-Independent Iteration of Sparse Data Arrays
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
An approach is defined for iterating over massively large arrays containing sparse data in a way that is independent of how the contents of the sparse arrays are laid out in memory. What is novel here is the decoupling of the iteration over the sparse set of array elements from their internal representation in memory, which makes the approach backward compatible with existing schemes for representing sparse arrays as well as with new ones. A functional interface is defined for implementing sparse arrays in any modern programming language, with a particular focus on the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix-vector product into this representation for both the distributed and non-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program, in which JPL is engaged. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA because of its computationally intensive requirements for analyzing and understanding the volumes of science data from returned missions.
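The decoupling described above can be sketched in Python (the paper targets Chapel); the `nonzero_items` interface name is an assumption for illustration. A matrix-vector product written only against the iteration interface works unchanged over two different memory layouts:

```python
class DictSparse:
    """Sparse 2-D array stored as a {(row, col): value} map."""
    def __init__(self, shape, items):
        self.shape = shape
        self._data = dict(items)
    def nonzero_items(self):
        yield from self._data.items()

class CoordSparse:
    """The same logical array stored as parallel coordinate/value lists."""
    def __init__(self, shape, items):
        self.shape = shape
        self._rows = [r for (r, c), v in items]
        self._cols = [c for (r, c), v in items]
        self._vals = [v for (r, c), v in items]
    def nonzero_items(self):
        for r, c, v in zip(self._rows, self._cols, self._vals):
            yield (r, c), v

def matvec(A, x):
    """Matrix-vector product written purely against the iteration interface,
    so it is independent of the underlying memory layout."""
    y = [0.0] * A.shape[0]
    for (r, c), v in A.nonzero_items():
        y[r] += v * x[c]
    return y
```

Swapping the representation changes no client code, which is the backward-compatibility property the abstract claims.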
Wang, G; Doyle, E J; Peebles, W A
2016-11-01
A monostatic antenna array arrangement has been designed for the microwave front-end of the ITER low-field-side reflectometer (LFSR) system. This paper presents details of the antenna coupling coefficient analyses performed using GENRAY, a 3-D ray tracing code, to evaluate the plasma height accommodation capability of such an antenna array design. Utilizing modeled data for the plasma equilibrium and profiles for the ITER baseline and half-field scenarios, a design study was performed for measurement locations varying from the plasma edge to inside the top of the pedestal. A front-end antenna configuration is recommended for the ITER LFSR system based on the results of this coupling analysis.
Tight-frame based iterative image reconstruction for spectral breast CT
Zhao, Bo; Gao, Hao; Ding, Huanjun; Molloi, Sabee
2013-01-01
Purpose: To investigate tight-frame based iterative reconstruction (TFIR) technique for spectral breast computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The experimental data were acquired with a fan-beam breast CT system based on a cadmium zinc telluride photon-counting detector. The images were reconstructed with a varying number of projections using the TFIR and filtered backprojection (FBP) techniques. The image quality between these two techniques was evaluated. The image's spatial resolution was evaluated using a high-resolution phantom, and the contrast to noise ratio (CNR) was evaluated using a postmortem breast sample. The postmortem breast samples were decomposed into water, lipid, and protein contents based on images reconstructed from TFIR with 204 projections and FBP with 614 projections. The volumetric fractions of water, lipid, and protein from the image-based measurements in both TFIR and FBP were compared to the chemical analysis. Results: The spatial resolution and CNR were comparable for the images reconstructed by TFIR with 204 projections and FBP with 614 projections. Both reconstruction techniques provided accurate quantification of water, lipid, and protein composition of the breast tissue when compared with data from the reference standard chemical analysis. Conclusions: Accurate breast tissue decomposition can be done with threefold fewer projection images by the TFIR technique without any reduction in image spatial resolution and CNR. This can result in a two-thirds reduction of the patient dose in a multislit and multislice spiral CT system, in addition to the reduced scanning time in this system. PMID:23464320
NASA Astrophysics Data System (ADS)
Chen, Hao; Lv, Wen; Zhang, Tongtong
2018-05-01
We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method such as GMRES. The corresponding preconditioner has a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure-preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
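The sum-of-two-Kronecker-products structure can be verified on the standard 2-D Laplacian, used here as a stand-in for the paper's fractional-diffusion matrices (an assumption for illustration): with T1 the 1-D second-difference matrix, the 2-D operator is I kron T1 + T1 kron I.

```python
def kron(A, B):
    """Kronecker product of dense matrices stored as lists of lists."""
    r, s = len(B), len(B[0])
    return [[A[i // r][j // s] * B[i % r][j % s]
             for j in range(len(A[0]) * s)] for i in range(len(A) * r)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

n = 3
# 1-D second-difference (tridiagonal) operator
T1 = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
       for j in range(n)] for i in range(n)]
# The 2-D operator is the sum of two Kronecker products:
T = add(kron(eye(n), T1), kron(T1, eye(n)))
```

A splitting iteration can then alternate between the two Kronecker factors, so each half-step only requires 1-D solves, which is the economy the abstract describes.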
Registered nurses' views of caring in coronary care--a deductive and inductive content analysis.
Andersson, Ewa K; Sjöström-Strand, Annica; Willman, Ania; Borglin, Gunilla
2015-12-01
To extend nurses' descriptions of how they understood caring, as reflected in the findings of an earlier study (i.e. the hierarchical outcome space), and to gain additional understandings and perspectives of nurses' views of caring in relation to a coronary care patient case. Scientific literature from the 1970s-1990s contains descriptions of caring in nursing. In contrast, the contemporary literature on this topic--particularly in the context of coronary care--is very sparse, and the few studies that do contain descriptions rarely do so from the perspective of nurses. Qualitative descriptive study. Twenty-one nurses were interviewed using the stimulated recall interview technique. The data were analysed using deductive and inductive qualitative content analysis. The results of the iterative and integrated content analysis showed that the data mainly reproduced the content of the hierarchical outcome space describing how nurses could understand caring; however, in the outcome space, the relationship broke up (i.e. flipped). The nurses' views of caring could now also be understood as: person-centredness 'lurking' in the shadows; limited 'potential' for safeguarding patients' best interests; counselling as virtually the 'only' nursing intervention; and caring preceded by the 'almighty' context. Their views offered alternative and, at times, contrasting perspectives of caring, thereby adding to our understanding of it. Caring was described as operating somewhere between the nurses' caring values and the contextual conditions in which caring occurred. This challenged their ability to sustain caring in accordance with their values and the patients' preferences. To ensure that the essentials of caring are met at all times, nurses need to plan and deliver caring in a systematic way. The use of systematic structures in caring, such as the nursing process, can help nurses to work in a person-centred way, while sustaining their professional values. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
1990-09-01
The main purpose of the International Thermonuclear Experimental Reactor (ITER) is to develop an experimental fusion reactor through the united efforts of many technologically advanced countries. The ITER terms of reference, issued jointly by the European Community, Japan, the USSR, and the United States, call for an integrated international design activity and constitute the basis of current activities. Joint work on ITER is carried out under the auspices of the International Atomic Energy Agency (IAEA), according to the terms of the quadripartite agreement reached between the European Community, Japan, the USSR, and the United States. The site for joint technical work sessions is at the Max Planck Institute of Plasma Physics, Garching, Federal Republic of Germany. The ITER activities have two phases: a definition phase performed in 1988 and the present design phase (1989 to 1990). During the definition phase, a set of ITER technical characteristics and supporting research and development (R and D) activities were developed and reported. The present conceptual design phase of ITER lasts until the end of 1990. The objectives of this phase are to develop the design of ITER, perform a safety and environmental analysis, develop site requirements, define future R and D needs, and estimate cost, manpower, and schedule for construction and operation. A final report will be submitted at the end of 1990. This paper summarizes progress in the ITER program during the 1989 design phase.
2015-12-01
AFRL-RY-WP-TR-2015-0144, Cognitive Radio Low-Energy Signal Analysis Sensor Integrated Circuits (CLASIC): A Broadband Mixed-Signal Iterative Down... Air Force Research Laboratory, Sensors Directorate, Spectrum Warfare Division, Wright-Patterson Air Force Base.
NASA Astrophysics Data System (ADS)
Li, Husheng; Betz, Sharon M.; Poor, H. Vincent
2007-05-01
This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.
Cristancho-Lacroix, Victoria; Moulin, Florence; Wrobel, Jérémy; Batrancourt, Bénédicte; Plichart, Matthieu; De Rotrou, Jocelyne; Cantegreil-Kallen, Inge; Rigaud, Anne-Sophie
2014-09-15
Web-based programs have been developed for informal caregivers of people with Alzheimer's disease (PWAD). However, these programs can prove difficult to adopt, especially for older people, who are less familiar with the Internet than other populations. Despite the fundamental role of usability testing in promoting caregivers' correct use and adoption of these programs, to our knowledge, this is the first study describing this process before evaluating a program for caregivers of PWAD in a randomized clinical trial. The objective of the study was to describe the development process of a fully automated Web-based program for caregivers of PWAD, aiming to reduce caregivers' stress, and based on the user-centered design approach. A total of 49 participants (12 health care professionals, 6 caregivers, and 31 healthy older adults) were involved in a double iterative design allowing for the adaptation of program content and for the enhancement of website usability. This process included three component parts: (1) project team workshops, (2) a proof of concept, and (3) two usability tests. The usability tests were based on a mixed methodology using behavioral analysis, semistructured interviews, and a usability questionnaire. The user-centered design approach provided valuable guidelines to adapt the content and design of the program, and to improve website usability. The professionals, caregivers (mainly spouses), and older adults considered that our project met the needs of isolated caregivers. Participants underlined that contact between caregivers would be desirable. During usability observations, user errors were also due to ergonomic issues with Internet browsers and computer interfaces. Moreover, negative self-stereotyping was evidenced when comparing interviews with the results of the behavioral analysis. Face-to-face psycho-educational programs may be used as a basis for Web-based programs.
Nevertheless, a user-centered design approach involving targeted users (or their representatives) remains crucial for their correct use and adoption. For future user-centered design studies, we recommend involving end-users from the preconception stage, using a mixed research method for usability evaluations, and implementing pilot studies to evaluate the acceptability and feasibility of programs.
NASA Technical Reports Server (NTRS)
Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.
1992-01-01
The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint that makes the solution sensitive to the a priori profile. Tradeoffs of these retrieval characteristics are presented.
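The tradeoff described above can be illustrated with a toy linear retrieval. The sketch below (assuming NumPy; the Gaussian smoothing kernel, noise level, and regularization weight are all invented stand-ins, not the paper's forward model) contrasts a constrained matrix method (Tikhonov) with an iterative method (Landweber):

```python
import numpy as np

# Toy forward model: a Gaussian smoothing kernel stands in for the
# limb-sounding weighting functions (an invented stand-in).
n = 60
z = np.linspace(0.0, 1.0, n)
K = np.exp(-0.5 * ((z[:, None] - z[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)            # rows act as smoothing kernels

x_true = np.exp(-0.5 * ((z - 0.5) / 0.1) ** 2)    # "true" constituent profile
rng = np.random.default_rng(0)
y = K @ x_true + 1e-3 * rng.standard_normal(n)    # noisy measurement

# Constrained matrix method: Tikhonov regularization, i.e. an explicit
# constraint pulling the solution toward a prior (here zero).
lam = 1e-3
x_tik = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# Iterative method: Landweber iteration, which approaches the exact
# least-squares solution as iterations accumulate (no explicit constraint).
x_it = np.zeros(n)
step = 1.0 / np.linalg.norm(K, 2) ** 2
for _ in range(500):
    x_it += step * (K.T @ (y - K @ x_it))

err_tik = np.linalg.norm(x_tik - x_true)
err_it = np.linalg.norm(x_it - x_true)
print(err_tik, err_it)
```

Stopping the iteration early plays the role of regularization for the iterative method, which is why it trades resolution against noise sensitivity differently than the explicitly constrained matrix solve.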
ITER L-Mode Confinement Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.M. Kaye and the ITER Confinement Database Working Group
This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated (OH) only. Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER. The L-mode thermal confinement time scaling was determined from a subset of 1312 entries for which the thermal confinement time was provided.
ERIC Educational Resources Information Center
Adelstein, David; Barbour, Michael
2016-01-01
In 2011, the International Association for K-12 Online Learning released the second iteration of the "National Standards for Quality Online Courses." These standards have been used by numerous institutions and states around the country to help design and create K-12 online courses. However, there has been no reported research on the…
Bass, Kristin M.; Drits-Esser, Dina; Stark, Louisa A.
2016-01-01
The credibility of conclusions made about the effectiveness of educational interventions depends greatly on the quality of the assessments used to measure learning gains. This essay, intended for faculty involved in small-scale projects, courses, or educational research, provides a step-by-step guide to the process of developing, scoring, and validating high-quality content knowledge assessments. We illustrate our discussion with examples from our assessments of high school students’ understanding of concepts in cell biology and epigenetics. Throughout, we emphasize the iterative nature of the development process, the importance of creating instruments aligned to the learning goals of an intervention or curricula, and the importance of collaborating with other content and measurement specialists along the way. PMID:27055776
Harmonics analysis of the ITER poloidal field converter based on a piecewise method
NASA Astrophysics Data System (ADS)
Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU
2017-12-01
Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
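As a hedged illustration of the piecewise idea (not the actual ITER PF converter model), the sketch below decomposes an idealized six-pulse line current into rectangular segments whose Fourier coefficients have closed forms, sums those simple-function contributions, and cross-checks the result against an FFT of the sampled waveform. The segment boundaries and amplitude are invented, and commutation overlap is neglected:

```python
import numpy as np

# Idealized six-pulse line current (commutation neglected): a sum of
# rectangular pulses, each described by (theta1, theta2, amplitude).
Id = 1.0
segments = [(np.pi / 6, 5 * np.pi / 6, Id),
            (7 * np.pi / 6, 11 * np.pi / 6, -Id)]

def harmonic(h, segments):
    """Magnitude of harmonic h, summing the closed-form coefficients of
    the simple rectangular segments (the piecewise decomposition)."""
    a = sum(A * (np.sin(h * t2) - np.sin(h * t1)) / (h * np.pi)
            for t1, t2, A in segments)
    b = sum(A * (np.cos(h * t1) - np.cos(h * t2)) / (h * np.pi)
            for t1, t2, A in segments)
    return float(np.hypot(a, b))

# Cross-check against an FFT of the sampled waveform.
N = 4096
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
i_t = np.zeros(N)
for t1, t2, A in segments:
    i_t[(theta >= t1) & (theta < t2)] = A
spec = 2.0 * np.abs(np.fft.rfft(i_t)) / N      # bin h = harmonic h

for h in (1, 5, 7, 11, 13):
    print(h, harmonic(h, segments), spec[h])
```

Only the characteristic harmonics h = 6k ± 1 survive for this ideal waveform, with magnitude falling as 1/h; the paper's method additionally accounts for firing angles and the converter's different operation modes.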
Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D
NASA Astrophysics Data System (ADS)
Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.
2017-10-01
A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.
Performance of spectral MSE diagnostic on C-Mod and ITER
NASA Astrophysics Data System (ADS)
Liao, Ken; Rowan, William; Mumgaard, Robert; Granetz, Robert; Scott, Steve; Marchuk, Oleksandr; Ralchenko, Yuri; Alcator C-Mod Team
2015-11-01
Magnetic field was measured on Alcator C-Mod by applying spectral Motional Stark Effect techniques based on line shift (MSE-LS) and line ratio (MSE-LR) to the H-alpha emission spectrum of the diagnostic neutral beam atoms. The high field of Alcator C-Mod allows measurements to be made close to ITER values of Stark splitting (~ Bv⊥) with background levels similar to those expected for ITER. Accurate modeling of the spectrum requires a non-statistical, collisional-radiative analysis of the excited beam population and quadratic and Zeeman corrections to the Stark shift. A detailed synthetic diagnostic was developed and used to estimate the performance of the diagnostic at C-Mod and ITER parameters. Our analysis includes the sensitivity to view and beam geometry, aperture and divergence broadening, magnetic field, pixel size, background noise, and signal levels. Analysis of preliminary experiments agrees with Kinetic+(polarization)MSE EFIT within ~2° in pitch angle, and simulations predict uncertainties of 20 mT in |B| and <2° in pitch angle. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG03-96ER-54373 and DE-FC02-99ER54512.
Two-dimensional over-all neutronics analysis of the ITER device
NASA Astrophysics Data System (ADS)
Zimin, S.; Takatsu, Hideyuki; Mori, Seiji; Seki, Yasushi; Satoh, Satoshi; Tada, Eisuke; Maki, Koichi
1993-07-01
The present work attempts to carry out a comprehensive neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) developed during the Conceptual Design Activities (CDA). Two-dimensional cylindrical over-all calculational models of the ITER CDA device, including the first wall, blanket, shield, vacuum vessel, magnets, cryostat, and support structures, were developed for this purpose with the help of the DOGII code. The two-dimensional DOT 3.5 code with the FUSION-40 nuclear data library was employed for transport calculations of neutron and gamma ray fluxes, tritium breeding ratio (TBR), and nuclear heating in reactor components. The induced activity calculational code CINAC was employed for calculations of the exposure dose rate after reactor shutdown around the ITER CDA device. The two-dimensional over-all calculational model includes design specifics such as the pebble bed Li2O/Be layered blanket, the thin double wall vacuum vessel, the concrete cryostat integrated with the over-all ITER design, the top maintenance shield plug, and the additional ring biological shield placed under the top cryostat lid around the above-mentioned top maintenance shield plug. Some alternative design options, such as the water-rich shielding blanket instead of the lithium-bearing one and the additional biological shield plug at the top zone between the poloidal field (PF) coil No. 5 and the maintenance shield plug, were calculated as well. Much effort has been focused on the analysis of the obtained results, aimed at obtaining recommendations for improving the ITER CDA design.
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
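A minimal sketch of the SURE-versus-MSE idea in the linear case, where the reconstruction operator's Jacobian is just the smoothing matrix H and its trace is available exactly (the paper's contribution is deriving the analogous Jacobian for nonlinear iterative algorithms). The test signal, noise level, and first-difference regularizer are invented; NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x_true = np.sin(np.linspace(0, 4 * np.pi, n))   # invented test signal
sigma = 0.3
y = x_true + sigma * rng.standard_normal(n)

# Linear smoothing reconstruction x(lam) = H(lam) y with a first-difference
# (Tikhonov-style) regularizer; H doubles as the exact Jacobian dx/dy.
D = np.diff(np.eye(n), axis=0)
DtD = D.T @ D

lams = np.logspace(-2, 3, 40)
sure, mse = [], []
for lam in lams:
    H = np.linalg.inv(np.eye(n) + lam * DtD)
    x_hat = H @ y
    df = np.trace(H)                             # divergence term via Jacobian
    sure.append(np.sum((y - x_hat) ** 2) / n - sigma ** 2
                + 2 * sigma ** 2 * df / n)       # Predicted-SURE (needs sigma)
    mse.append(np.mean((x_hat - x_true) ** 2))   # oracle MSE (needs x_true)

lam_sure = lams[int(np.argmin(sure))]            # data-driven choice
lam_mse = lams[int(np.argmin(mse))]              # oracle choice
print(lam_sure, lam_mse)
```

The SURE-selected parameter needs only the data and σ², yet its MSE tracks the oracle minimum, which is the behavior the paper demonstrates for nonlinear TV and ℓ1 reconstructions.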
NASA Astrophysics Data System (ADS)
Brooks, J. N.; Hassanein, A.; Sizyuk, T.
2013-07-01
Plasma interactions with mixed-material surfaces are being analyzed using advanced modeling of time-dependent surface evolution/erosion. Simulations use the REDEP/WBC erosion/redeposition code package coupled to the HEIGHTS package ITMC-DYN mixed-material formation/response code, with plasma parameter input from codes and data. We report here on analysis for a DIII-D Mo/C containing tokamak divertor. A DIII-D/DiMES probe experiment simulation predicts that sputtered molybdenum from a 1 cm diameter central spot quickly saturates (~4 s) in the 5 cm diameter surrounding carbon probe surface, with subsequent re-sputtering and transport to off-probe divertor regions, and with high (~50%) redeposition on the Mo spot. Predicted Mo content in the carbon agrees well with post-exposure probe data. We discuss implications and mixed-material analysis issues for Be/W mixing at the ITER outer divertor, and Li, C, Mo mixing at an NSTX divertor.
Modifying Photovoice for community-based participatory Indigenous research.
Castleden, Heather; Garvin, Theresa
2008-03-01
Scientific research occurs within a set of socio-political conditions, and in Canada research involving Indigenous communities has a historical association with colonialism. Consequently, Indigenous peoples have been justifiably sceptical and reluctant to become the subjects of academic research. Community-Based Participatory Research (CBPR) is an attempt to develop culturally relevant research models that address issues of injustice, inequality, and exploitation. The work reported here evaluates the use of Photovoice, a CBPR method that uses participant-employed photography and dialogue to create social change, which was employed in a research partnership with a First Nation in Western Canada. Content analysis of semi-structured interviews (n=45) evaluated participants' perspectives of the Photovoice process as part of a larger study on health and environment issues. The analysis revealed that Photovoice effectively balanced power, created a sense of ownership, fostered trust, built capacity, and responded to cultural preferences. The authors discuss the necessity of modifying Photovoice, by building in an iterative process, as being key to the methodological success of the project.
Walsh, Wendy E
Using a phenomenological approach, this study investigated visibility and perception of the profession of occupational therapy in three media outlets. Content analysis occurred on LexisNexis Academic (LNA), Google Images, and Twitter platforms. Analysis of LNA identified the prevalence of articles about occupational therapy in domestic newspapers and similar media avenues, MaxQDA qualitative software coded Google Images from a search on occupational therapy, and AnalyzeWords evaluated Twitter feeds of four health care professions for presence and tone in a social media context. Results indicate that although occupational therapy is 100 years old, its presence in news and online platforms could be stronger. This study suggests that a clear professional identity for occupational therapy practitioners must be strategically communicated through academic and social platforms. Such advocacy promotes the profession, meets the next iteration of occupational therapy's professional vision, and allows occupational therapy to remain a prominent and formidable stakeholder in today's health care marketplace. Copyright © 2018 by the American Occupational Therapy Association, Inc.
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.
Zelyak, O; Fallone, B G; St-Aubin, J
2017-12-14
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
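The role of the spectral radius in source-iteration convergence can be sketched numerically: for a fixed-point sweep x ← Mx + b, the asymptotic error reduction per sweep equals ρ(M), so convergence becomes arbitrarily slow as ρ(M) approaches unity. The symmetric operator below is an invented stand-in, not the discretized LBTE; NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
# Invented symmetric iteration operator for a source-iteration-like sweep
# x <- M x + b, scaled so its spectral radius is 0.8.
A = rng.standard_normal((n, n)) / np.sqrt(n)
S = A @ A.T
M = 0.8 * S / np.linalg.norm(S, 2)

b = rng.standard_normal(n)
rho = float(np.max(np.abs(np.linalg.eigvals(M))))   # spectral radius
x_star = np.linalg.solve(np.eye(n) - M, b)          # exact fixed point

x = np.zeros(n)
errs = []
for _ in range(60):
    x = M @ x + b
    errs.append(np.linalg.norm(x - x_star))

rate = errs[-1] / errs[-2]      # asymptotic error reduction per sweep ~ rho
print(rho, rate)
```

A Krylov solver such as GMRES applied to (I − M)x = b converges in far fewer sweeps when ρ(M) is close to one, which is the acceleration role it plays in the dose calculation above.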
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy
NASA Astrophysics Data System (ADS)
Zelyak, O.; Fallone, B. G.; St-Aubin, J.
2018-01-01
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".
Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel
2018-03-12
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.
Carbon charge exchange analysis in the ITER-like wall environment.
Menmuir, S; Giroud, C; Biewer, T M; Coffey, I H; Delabie, E; Hawkes, N C; Sertoli, M
2014-11-01
Charge exchange spectroscopy has long been a key diagnostic tool for fusion plasmas and is well developed in devices with Carbon Plasma-Facing Components. Operation with the ITER-like wall at JET has resulted in changes to the spectrum in the region of the Carbon charge exchange line at 529.06 nm and demonstrates the need to revise the core charge exchange analysis for this line. An investigation has been made of this spectral region in different plasma conditions and the revised description of the spectral lines to be included in the analysis is presented.
Ali, S. J.; Kraus, R. G.; Fratanduono, D. E.; ...
2017-05-18
Here, we developed an iterative forward analysis (IFA) technique with the ability to use hydrocode simulations as a fitting function for analysis of dynamic compression experiments. The IFA method optimizes over parameterized quantities in the hydrocode simulations, breaking the degeneracy of contributions to the measured material response. Velocity profiles from synthetic data generated using a hydrocode simulation are analyzed as a first-order validation of the technique. We also analyze multiple magnetically driven ramp compression experiments on copper and compare with more conventional techniques. Excellent agreement is obtained in both cases.
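A hedged sketch of the IFA loop with a cheap analytic stand-in for the hydrocode: a parameterized velocity-profile model is fitted to synthetic data by Gauss-Newton iteration with a finite-difference Jacobian, treating the "simulation" as a black-box fitting function. The model form, parameter values, and noise level are all invented; NumPy is assumed:

```python
import numpy as np

# Hypothetical stand-in "simulation": a two-parameter velocity profile
# v(t) = v_max * (1 - exp(-t / tau)); a real IFA wraps a hydrocode here.
def simulate(params, t):
    v_max, tau = params
    return v_max * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 5.0, 100)
rng = np.random.default_rng(3)
v_data = simulate((2.0, 1.5), t) + 0.02 * rng.standard_normal(t.size)

# Iterative forward analysis loop: Gauss-Newton with a finite-difference
# Jacobian, optimizing the parameterized quantities to match the profile.
p = np.array([1.0, 1.0])
for _ in range(20):
    r = v_data - simulate(p, t)
    J = np.empty((t.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = 1e-6
        J[:, j] = (simulate(p + dp, t) - simulate(p, t)) / 1e-6
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]

print(p)   # close to the generating parameters (2.0, 1.5)
```

Recovering known parameters from synthetic data generated by the same forward model is exactly the first-order validation step the abstract describes before moving to real ramp-compression measurements.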
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved not to be equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
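A minimal single-state sketch of the upper and lower iterations (a repeated discounted matrix game, not the paper's ADP algorithm for nonlinear systems). The payoff matrix is invented and has a pure-strategy saddle point, so both iterations converge to the same value; NumPy is assumed:

```python
import numpy as np

# Invented payoff matrix (to the maximizer) for a repeated discounted
# zero-sum matrix game; it has a pure-strategy saddle point with value 1.
R = np.array([[3.0, 1.0],
              [2.0, 0.0]])
gamma = 0.9

V_up, V_lo = 0.0, 0.0          # arbitrary initial value functions
for _ in range(200):
    V_up = (R + gamma * V_up).max(axis=0).min()   # upper: min_w max_u
    V_lo = (R + gamma * V_lo).min(axis=1).max()   # lower: max_u min_w
print(V_up, V_lo)
```

Because a saddle point exists here, both iterations converge to the same limit 1/(1 − γ) = 10; without a saddle point they would converge to distinct upper and lower optima, mirroring the dichotomy the abstract proves.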
College students' alcohol displays on Facebook: intervention considerations.
Moreno, Megan A; Grant, Allison; Kacvinsky, Lauren; Egan, Katie G; Fleming, Michael F
2012-01-01
The purpose of this study was to investigate college freshmen's views toward potential social networking site (SNS) screening or intervention efforts regarding alcohol. Participants were freshmen college students recruited between February 2010 and May 2011. Participants were interviewed; all interviews were audio-recorded and transcribed. Qualitative analysis was conducted using an iterative approach. A total of 132 participants completed the interview (70% response rate); the average age was 18.4 years (SD 0.49), and 64 were males (48.5%). Three themes emerged from our data. First, most participants stated they viewed displayed alcohol content as indicative of alcohol use. Second, they explained they would prefer to be approached in a direct manner by someone they knew. Third, the style of approach was considered critical. When approaching college students regarding alcohol messages on SNSs, both the relationship and the approach are key factors.
Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals.
Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R R; Kasukawa, Takeya; Kawaji, Hideya
2017-01-04
Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and the functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and that the scientific community will benefit from this recent update to deepen their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates to the web resource, and provide future perspectives. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Radioisotope Power Systems Reference Book for Mission Designers and Planners
NASA Technical Reports Server (NTRS)
Lee, Young; Bairstow, Brian
2015-01-01
The RPS Program's Program Planning and Assessment (PPA) Office commissioned the Mission Analysis team to develop the Radioisotope Power Systems (RPS) Reference Book for Mission Planners and Designers to define a baseline of RPS technology capabilities with specific emphasis on performance parameters and technology readiness. The main objective of this book is to provide RPS technology information that could be utilized by future mission concept studies and concurrent engineering practices. A progress summary from the major branches of RPS technology research provides mission analysis teams with a vital tool for assessing the RPS trade space, and provides concurrent engineering centers with a consistent set of guidelines for RPS performance characteristics. This book will be iterated when substantial new information becomes available to ensure continued relevance, serving as one of the cornerstone products of the RPS PPA Office. This book updates the original 2011 internal document, using data from the relevant publicly released RPS technology references and consultations with RPS technologists. Each performance parameter and RPS product subsection has been reviewed and cleared by at least one subject matter representative. A virtual workshop was held to reach consensus on the scope and contents of the book, and the definitions and assumptions that should be used. The subject matter experts then reviewed and updated the appropriate sections of the book. The RPS Mission Analysis Team then performed further updates and crosschecked the book for consistency. Finally, a second virtual workshop was held to ensure all subject matter experts and stakeholders concurred on the contents.
The role of simulation in the design of a neural network chip
NASA Technical Reports Server (NTRS)
Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.
1993-01-01
An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
Optimized iterative decoding method for TPC coded CPM
NASA Astrophysics Data System (ADS)
Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei
2018-05-01
The Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) system (TPC-CPM) has been widely used in aeronautical telemetry and satellite communication. This paper mainly investigates improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system and then establish an iterative decoding scheme. However, the improved system converges poorly. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. Experiments show that our method effectively improves the convergence performance.
The ITER bolometer diagnostic: Status and plans
NASA Astrophysics Data System (ADS)
Meister, H.; Giannone, L.; Horton, L. D.; Raupp, G.; Zeidner, W.; Grunda, G.; Kalvin, S.; Fischer, U.; Serikov, A.; Stickel, S.; Reichle, R.
2008-10-01
A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization, and performance analysis, as well as the design of the diagnostic components and their integration in ITER. This is complemented by a presentation of plans for future activities required to successfully implement the bolometer diagnostic, ranging from detector development through diagnostic design and prototype testing to remote handling (RH) tools for calibration.
ERIC Educational Resources Information Center
Hilchey, Christian Thomas
2014-01-01
This dissertation examines prefixation of simplex pairs. A simplex pair consists of an iterative imperfective and a semelfactive perfective verb. When prefixed, both of these verbs are perfective. The prefixed forms derived from semelfactives are labeled single act verbs, while the prefixed forms derived from iterative imperfective simplex verbs…
Iterative Ellipsoidal Trimming.
1980-02-11
Iterative ellipsoidal trimming has been investigated before by other statisticians, most notably by Gnanadesikan and his coworkers. Cited references include: Devlin, S. J., Gnanadesikan, R., and Kettenring, J. R. (1975). "Robust estimation and outlier detection with correlation coefficients." Biometrika, 62, 531-45. [6] Duda, Richard, and Hart, Peter (1973). Pattern Classification and Scene Analysis. Wiley, New York. [7] Gnanadesikan, R. (1977). Methods for…
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial order information is safe and the whole state space is explored.
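The alternation between the two tools is, at bottom, a fixed-point iteration; a generic sketch is below, with a toy transitive-closure step standing in for the real exchange of partial-order and aliasing facts (both functions are our illustration, not the paper's implementation):

```python
def iterate_to_fixed_point(initial_facts, refine, max_iter=100):
    """Apply one analysis round at a time until the fact set stops
    changing, i.e. until a fixed point is reached."""
    facts = initial_facts
    for _ in range(max_iter):
        new_facts = refine(facts)
        if new_facts == facts:
            return facts
        facts = new_facts
    raise RuntimeError("no fixed point within iteration budget")

def transitive_step(pairs):
    """Toy refine step: one round of closing ordered pairs under
    transitivity, standing in for partial-order information exchange."""
    out = set(pairs)
    for (a, b) in pairs:
        for (c, d) in pairs:
            if b == c:
                out.add((a, d))
    return frozenset(out)
```

The loop terminates exactly when one more refinement round adds no new facts, mirroring the convergence claim in the abstract.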
Parametric Thermal and Flow Analysis of ITER Diagnostic Shield Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khodak, A.; Zhai, Y.; Wang, W.
As part of the diagnostic port plug assembly, the ITER Diagnostic Shield Module (DSM) is designed to provide mechanical support and plasma shielding while allowing access to plasma diagnostics. Thermal and hydraulic analysis of the DSM was performed using a conjugate heat transfer approach, in which heat transfer was resolved in both solid and liquid parts while fluid dynamics analysis was performed only in the liquid part. The ITER Diagnostic First Wall (DFW) and cooling tubing were also included in the analysis. This allowed direct modeling of the interface between the DSM and DFW, and also direct assessment of the coolant flow distribution between the parts of the DSM and DFW, to ensure that the DSM design meets the DFW cooling requirements. The design of the DSM includes voids filled with boron carbide pellets, allowing weight reduction while preserving the shielding capability of the DSM. These voids were modeled as a continuous solid with smeared material properties, using an analytical relation for thermal conductivity. Results of the analysis led to design modifications improving the heat transfer efficiency of the DSM. Furthermore, the effect of design modifications on thermal performance, as well as the effect of boron carbide, will be presented.
Development of parallel algorithms for electrical power management in space applications
NASA Technical Reports Server (NTRS)
Berry, Frederick C.
1989-01-01
The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problem. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until convergence conditions are met.
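A minimal sketch of the Newton-Raphson iteration used for each local problem follows; the single-equation "power balance" example is purely illustrative (a hypothetical one-line circuit), not the actual multi-bus formulation:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Generic scalar Newton-Raphson iteration of the kind applied to
    each local load-flow subproblem (illustrative sketch only)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)   # Newton update: x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x

# Hypothetical single-line balance: find per-unit voltage v satisfying
# v - v**2 = 0.1 (sending voltage and load chosen for illustration).
f = lambda v: v - v * v - 0.1
df = lambda v: 1.0 - 2.0 * v
v = newton_raphson(f, df, 0.9)
```

Starting near 0.9, the iteration converges quadratically to the high-voltage root (1 + sqrt(0.6)) / 2.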
Parametric Thermal and Flow Analysis of ITER Diagnostic Shield Module
Khodak, A.; Zhai, Y.; Wang, W.; ...
2017-06-19
Creating ISO/EN 13606 archetypes based on clinical information needs.
Rinner, Christoph; Kohler, Michael; Hübner-Bloder, Gudrun; Saboor, Samrend; Ammenwerth, Elske; Duftschmid, Georg
2011-01-01
Archetypes model individual EHR contents and build the basis of the dual-model approach used in the ISO/EN 13606 EHR architecture. We present an approach to create archetypes using an iterative development process. It includes automated generation of electronic case report forms from archetypes. We evaluated our approach by developing 128 archetypes which represent 446 clinical information items from the diabetes domain.
NASA Astrophysics Data System (ADS)
Borchert, James W.; Stewart, Ian E.; Ye, Shengrong; Rathmell, Aaron R.; Wiley, Benjamin J.; Winey, Karen I.
2015-08-01
Development of thin-film transparent conductors (TC) based on percolating networks of metal nanowires has leaped forward in recent years, owing to the improvement of nanowire synthetic methods and modeling efforts by several research groups. While silver nanowires are the first commercially viable iteration of this technology, systems based on copper nanowires are not far behind. Here we present an analysis of TCs composed of copper nanowire networks on sheets of polyethylene terephthalate that have been treated with various oxide-removing post treatments to improve conductivity. A pseudo-2D rod network modeling approach has been modified to include lognormal distributions in length that more closely reflect experimental data collected from the nanowire TCs. In our analysis, we find that the copper nanowire TCs are capable of achieving comparable electrical performance to silver nanowire TCs with similar dimensions. Lastly, we present a method for more accurately determining the nanowire area coverage in a TC over a large area using Rutherford Backscattering Spectrometry (RBS) to directly measure the metal content in the TCs. These developments will aid research and industry groups alike in the characterization of nanowire based TCs. Electronic supplementary information (ESI) available: Contains calibration curve for %T vs. area fraction. See DOI: 10.1039/c5nr03671b
Hudak, R P; Brooke, P P; Finstuen, K; Riley, P
1993-01-01
This research identifies the most important domains in health care administration (HCA) from now to the year 2000 and differentiates job skill, knowledge, and ability requirements necessary for successful management. Fellows of the American College of Healthcare Executives from about half of the United States responded to two iterations of a Delphi mail inquiry. Fellows identified 102 issues that were content-analyzed into nine domains by an HCA expert panel. Domains, in order of ranked importance, were cost/finance, leadership, professional staff interactions, health care delivery concepts, accessibility, ethics, quality/risk management, technology, and marketing. In the second Delphi iteration, Fellows reviewed domain results and rated job requirements on required job importance. Results indicated that while a business orientation is needed for organizational survival, an equal emphasis on person-oriented skills, knowledge, and abilities is required.
Stålberg, Anna; Sandberg, Anette; Söderbäck, Maja; Larsson, Thomas
2016-06-01
During the last decade, interactive technology has entered mainstream society. Its many users also include children, even the youngest ones, who use the technology in different situations for both fun and learning. When designing technology for children, it is crucial to involve children in the process in order to arrive at an age-appropriate end product. In this study we describe the specific iterative process by which an interactive application was developed. This application is intended to facilitate young children's (three to five years old) participation in healthcare situations. We also describe the specific contributions of the children, who tested the prototypes in a preschool, a primary health care clinic and an outpatient unit at a hospital during the development process. The iterative phases enabled the children to be involved at different stages of the process and to evaluate modifications and improvements made after each prior iteration. The children contributed their own perspectives (the child's perspective) on the usability, content and graphic design of the application, substantially improving the software and resulting in an age-appropriate product. Copyright © 2016 Elsevier Inc. All rights reserved.
Kassam, Aliya; Donnon, Tyrone; Rigby, Ian
2014-03-01
There is a question of whether a single assessment tool can assess the key competencies of residents as mandated by the Royal College of Physicians and Surgeons of Canada CanMEDS roles framework. The objective of the present study was to investigate the reliability and validity of an emergency medicine (EM) in-training evaluation report (ITER). ITER data from 2009 to 2011 were combined for residents across the 5 years of the EM residency training program. An exploratory factor analysis with varimax rotation was used to explore the construct validity of the ITER. A total of 172 ITERs were completed on residents across their first to fifth year of training. A combined, 24-item ITER yielded a five-factor solution measuring the CanMEDS role subscales Medical Expert/Scholar, Communicator/Collaborator, Professional, Health Advocate and Manager. The factor solution accounted for 79% of the variance, and reliability coefficients (Cronbach alpha) ranged from α = 0.90 to 0.95 for each subscale and α = 0.97 overall. The combined, 24-item ITER used to assess residents' competencies in the EM residency program showed strong reliability and evidence of construct validity for assessment of the CanMEDS roles. Further research is needed to develop and test ITER items that will differentiate each CanMEDS role exclusively.
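The reliability coefficient reported here, Cronbach's alpha, can be computed directly from item-level scores; a self-contained sketch follows (the tiny sample data in the test is illustrative, not the study's ITER data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score rows, where
    items[i][j] is the score of respondent j on item i:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Two perfectly correlated items give alpha = 1, while inconsistent items drive alpha down, matching the interpretation of the 0.90-0.97 values in the abstract.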
Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier
2018-06-14
To compare radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction) in comparison to standard pitch reconstructed with filtered back projection (FBP) using dual source CT. 114 CT scans (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany) were performed: 39 thoracic, 54 thoracoabdominal and 21 abdominal scans. Analysis of three protocols was undertaken: pitch of 1 reconstructed with FBP, pitch of 3.2 reconstructed with SAFIRE, and pitch of 3.2 with stellar detectors reconstructed with SAFIRE. Objective and subjective image analyses were performed. Dose differences between the protocols were compared. Dose was reduced when comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans with a pitch of 3.2 reconstructed with SAFIRE, with a reduction of volume CT dose index of 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of stellar detectors, reflected in a 36% reduction of the dose-length product for thoracic scans. This was not to the detriment of image quality; contrast-to-noise ratio, signal-to-noise ratio and the qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high-pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans whilst maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with stellar detectors. Advances in knowledge: High-pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.
Exploring science teachers' pedagogical content knowledge in the teaching of genetics in Swaziland
NASA Astrophysics Data System (ADS)
Mthethwa-Kunene, Khetsiwe Eunice Faith
Recent trends show that learners' enrolment and performance in science at secondary school level are dwindling. Some science topics, including genetics in biology, are said to be difficult for learners to learn, and thus they perform poorly in examinations. Teacher knowledge base, particularly topic-specific pedagogical content knowledge (PCK), has been identified by many researchers as an important factor linked with learner understanding and achievement in science. This qualitative study was an attempt to explore the PCK of four successful biology teachers and how they developed it in the context of teaching genetics. The purposive sampling technique was employed to select the participating teachers based on their schools' performance in biology public examinations and recommendations by science specialists and school principals. Pedagogical content knowledge was used as a theoretical framework for the study, which guided the inquiry in data collection, analysis and discussion of the research findings. The study adopted the case study method, and various sources of evidence including concept maps, lesson plans, pre-lesson interviews, lesson observations, a post-teaching teacher questionnaire, post-lesson interviews and document analysis were used to collect data on teachers' PCK as well as how that PCK was assumed to have developed. The data were analysed in an attempt to determine the individual teachers' school genetics content knowledge, related knowledge of instructional strategies and knowledge of learners' preconceptions and learning difficulties. The analysis involved an iterative process of coding data into PCK categories of content knowledge, pedagogical knowledge and knowledge of learners' preconceptions and learning difficulties.
The findings of the study indicate that the four successful biology teachers generally have the necessary content knowledge of school genetics, used certain topic-specific instructional strategies, but lacked knowledge of genetics-related learners' preconceptions and learning difficulties despite having taught the topic for many years. There were some instructional deficits in their approaches and techniques in teaching genetics. The teachers failed to use physical models, teacher demonstration and/or learner experimentation in their lessons (or include them in their lesson plans) to assist learners in visualizing or internalizing the genetics concepts or processes located at the sub-microscopic level. The teachers' PCK in genetics teaching was assumed to have developed mainly through formal university education programmes, classroom teaching experiences, peer support and participation in in-service workshops. The implications for biology teacher education are also discussed.
Progressive content-based retrieval of image and video with adaptive and iterative refinement
NASA Technical Reports Server (NTRS)
Li, Chung-Sheng (Inventor); Turek, John Joseph Edward (Inventor); Castelli, Vittorio (Inventor); Chen, Ming-Syan (Inventor)
1998-01-01
A method and apparatus for minimizing the time required to obtain results for a content based query in a data base. More specifically, with this invention, the data base is partitioned into a plurality of groups. Then, a schedule or sequence of groups is assigned to each of the operations of the query, where the schedule represents the order in which an operation of the query will be applied to the groups in the schedule. Each schedule is arranged so that each application of the operation operates on the group which will yield intermediate results that are closest to final results.
Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui
2010-01-01
The preprocessing method of multiplicative scatter correction (MSC) was used to effectively reject noise in the original spectra produced by environmental physical factors. The principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, and the number of principal components was determined by cross-validation. The calculated principal components were used as the inputs of the artificial neural network model, which was used to find the relation between chlorophyll in winter wheat and the reflectance spectrum and thereby predict the chlorophyll content of winter wheat. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the standard deviation (SD) and relative standard deviation (RSD) were 0.145 and 4.21%, respectively. This means that the MSC-ANN algorithm can effectively reject noise in the original spectra produced by environmental physical factors and establish an accurate model to predict the chlorophyll content of living leaves, replacing the classical method and meeting the needs of fast analysis of agricultural products.
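The NIPALS step used to extract principal components can be sketched as follows (a pure-Python rendering for a single component; the study's spectra and the number of components retained are not reproduced here):

```python
import math

def nipals_pc1(X, tol=1e-12, max_iter=500):
    """First principal component of a data matrix X (list of rows,
    ideally column-centred) via the NIPALS alternation:
    loadings p = X^T t (normalised), scores t = X p."""
    n, m = len(X), len(X[0])
    t = [row[0] for row in X]            # start scores from first column
    p = [0.0] * m
    for _ in range(max_iter):
        # loadings: p = X^T t, then normalise to unit length
        p = [sum(X[i][j] * t[i] for i in range(n)) for j in range(m)]
        norm = math.sqrt(sum(v * v for v in p))
        p = [v / norm for v in p]
        # new scores: t = X p
        t_new = [sum(X[i][j] * p[j] for j in range(m)) for i in range(n)]
        if sum((a - b) ** 2 for a, b in zip(t_new, t)) < tol:
            return t_new, p
        t = t_new
    return t, p
```

On a rank-one matrix the outer product of the returned scores and loadings reconstructs the matrix exactly, which is a convenient correctness check.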
Iterative-Transform Phase Retrieval Using Adaptive Diversity
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure).
An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero during recovery. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map. However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps.
During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate resembles that of a reference flat.
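The inner-loop machinery is in essence the classical Gerchberg-Saxton iterative transform; a minimal single-image sketch follows (direct DFT for self-containment; the defocus diversity and adaptive update that are this article's contributions are not shown):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def project(z, amps):
    # Keep the current phase estimate, impose the measured modulus.
    return [a * cmath.exp(1j * cmath.phase(v)) for v, a in zip(z, amps)]

def gerchberg_saxton(amp_obj, amp_four, iters=50):
    """Classical single-image iterative-transform loop: alternate between
    enforcing the measured Fourier-plane and object-plane amplitudes."""
    z = [a + 0j for a in amp_obj]        # zero initial phase estimate
    for _ in range(iters):
        z = project(idft(project(dft(z), amp_four)), amp_obj)
    return z
```

By construction, each projection restores the corresponding measured amplitudes exactly; only the phase is iteratively refined.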
Hoskinson, Anne-Marie
2010-01-01
Biological problems in the twenty-first century are complex and require mathematical insight, often resulting in mathematical models of biological systems. Building mathematical-biological models requires cooperation among biologists and mathematicians, and mastery of building models. A new course in mathematical modeling presented the opportunity to build both content and process learning of mathematical models, the modeling process, and the cooperative process. There was little guidance from the literature on how to build such a course. Here, I describe the iterative process of developing such a course, beginning with objectives and choosing content and process competencies to fulfill the objectives. I include some inductive heuristics for instructors seeking guidance in planning and developing their own courses, and I illustrate with a description of one instructional model cycle. Students completing this class reported gains in learning of modeling content, the modeling process, and cooperative skills. Student content and process mastery increased, as assessed on several objective-driven metrics in many types of assessments.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
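The 'delta' or correction form can be sketched generically: instead of solving the full system directly, one repeatedly solves an easily inverted approximation for a correction and adds it to the running solution. In the sketch below, M = diag(A) stands in, purely for illustration, for the spatially split approximate factorization:

```python
def incremental_solve(A, b, x0=None, tol=1e-12, max_iter=200):
    """Incremental ('delta') iteration: solve M * dx = r for a
    correction dx, where r = b - A x is the residual and M is an
    easily inverted approximation to A (here simply diag(A))."""
    n = len(b)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(max_iter):
        # residual of the full system
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(v) for v in r) < tol:
            break
        # correction step: dx = M^{-1} r with M = diag(A)
        for i in range(n):
            x[i] += r[i] / A[i][i]
    return x
```

With M = diag(A) this reduces to Jacobi iteration, which converges for diagonally dominant systems; the paper's point is that driving the residual of the exact equations to zero makes the converged answer independent of the approximate operator M.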
NASA Astrophysics Data System (ADS)
Suparmi, A.; Cari, C.; Lilis Elviyanti, Isnaini
2018-04-01
Analysis of the relativistic energy and wave function of zero-spin particles using the Klein-Gordon equation with a separable non-central cylindrical potential was carried out by the asymptotic iteration method (AIM). In cylindrical coordinates, the Klein-Gordon equation for the spin-symmetry case was reduced to three one-dimensional Schrodinger-like equations that were solvable using the variable separation method. The relativistic energy was calculated numerically with Matlab software, and the general unnormalized wave function was expressed in terms of hypergeometric functions.
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
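The order-independence comes from always performing the globally best merge first; a toy sketch with scalar region values and a difference-of-means similarity criterion follows (the MPP implementation uses image-specific criteria and data structures not shown here):

```python
def best_merge_segmentation(regions, threshold):
    """Order-independent region growing: repeatedly merge the globally
    most similar pair of regions (toy criterion: difference of means)
    until no pair is closer than the threshold."""
    regions = [list(r) for r in regions]
    while len(regions) > 1:
        best = None
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                mi = sum(regions[i]) / len(regions[i])
                mj = sum(regions[j]) / len(regions[j])
                d = abs(mi - mj)
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:            # no sufficiently similar pair left
            break
        regions[i] += regions[j]     # globally best merge first
        del regions[j]
    return regions
```

Because the most similar pair is merged at every step, the final partition does not depend on the order in which regions were listed, which is the property the abstract highlights.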
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
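In modern terms, the simplest member of this family of schemes is power iteration: repeated application of the matrix drives a trial vector toward the dominant mode, and a Rayleigh quotient then estimates the characteristic value. The toy example below illustrates only this core idea; Wielandt's procedure adds shifting and deflation refinements not shown:

```python
import math

def power_iteration(A, iters=100):
    """Estimate the dominant characteristic value of a small matrix by
    repeatedly applying it to a trial vector and renormalising."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient v . (A v) for the converged unit vector
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))
    return lam, v
```

In a flutter or natural-vibration setting the matrix would come from the discretised structural problem; here a diagonal 2x2 test matrix suffices to check the iteration.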
VIMOS Instrument Control Software Design: an Object Oriented Approach
NASA Astrophysics Data System (ADS)
Brau-Nogué, Sylvie; Lucuix, Christian
2002-12-01
The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral field spectroscopy in a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper describes the analysis, design and implementation of the VIMOS Instrument Control System, using UML notation. Our control group followed an object-oriented software process while keeping in mind the ESO VLT standard control concepts. At ESO VLT a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of capturing and evaluating the requirements, visual modeling for analysis and design, implementation, testing, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations. An implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model will be presented and some implementation, integration and test issues will be discussed.
Noise models for low counting rate coherent diffraction imaging.
Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John
2012-11-05
Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretations drawn from a CDI iterative technique require a detailed understanding of the relationship between the noise model and the used inversion method. We observe that iterative algorithms often assume implicitly a noise model. For low counting rates, each noise model behaves differently. Moreover, the used optimization strategy introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.
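The difference between noise models shows up directly in the objective being minimised; a sketch of the two per-pixel negative log-likelihoods follows (unit Gaussian variance is an arbitrary illustrative choice, not a value from the paper):

```python
import math

def poisson_nll(data, model):
    """Negative log-likelihood under photon-counting (Poisson) noise,
    the natural model at low counting rates:
    sum over pixels of m - d*log(m) + log(d!)."""
    return sum(m - d * math.log(m) + math.lgamma(d + 1)
               for d, m in zip(data, model))

def gaussian_nll(data, model, sigma=1.0):
    """Negative log-likelihood (up to a constant) under the Gaussian
    model that iterative algorithms often assume implicitly."""
    return sum((d - m) ** 2 / (2 * sigma ** 2) for d, m in zip(data, model))
```

For high counts the two objectives nearly coincide, but at low counting rates they pull the iterates in different directions, which is the regime the paper analyses.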
Smoliński, Adam; Drobek, Leszek; Dombek, Václav; Bąk, Andrzej
2016-11-01
The main objective of the study presented was to investigate the differences between 20 mine waste dumps located in the Silesian Region of Poland and the Czech Republic, in terms of trace element and polycyclic aromatic hydrocarbon contents. Principal Component Analysis and Hierarchical Clustering Analysis were applied to explore the data. Since the data set was affected by outlying objects, the employment of a relevant analysis strategy was necessary. The final PCA model was constructed with the use of the Expectation-Maximization iterative approach preceded by a correct identification of outliers. The analysis of the experimental data indicated that three mine waste dumps located in Poland were characterized by the highest concentrations of dibenzo(g,h,i)anthracene and benzo(g,h,i)perylene, and six objects located in the Czech Republic and three objects in Poland were distinguished by high concentrations of chrysene and indeno(1,2,3-cd)pyrene. Three of the studied mine waste dumps, one located in the Czech Republic and two in Poland, were characterized by low concentrations of Cr, Ni, V, naphthalene, acenaphthene, fluorene, phenanthrene, anthracene, fluoranthene, benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, dibenzo(g,h,i)anthracene, benzo(g,h,i)perylene and indeno(1,2,3-cd)pyrene in comparison with the remaining ones. The analysis contributes to the assessment and prognosis of ecological and health risks related to the emission of trace elements and organic compounds (PAHs) from the waste dumps examined. No previous research of similar scope and aims has been reported for the area concerned. Copyright © 2016 Elsevier Ltd. All rights reserved.
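The Expectation-Maximization approach to PCA mentioned above, in which missing or outlier-flagged entries are iteratively re-imputed from a low-rank model, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the function name and the simple mean initialization are our own choices.

```python
import numpy as np

def em_pca_impute(X, n_components=2, n_iter=50):
    """Iteratively impute missing entries (NaN) using a low-rank PCA model."""
    X = X.copy()
    mask = np.isnan(X)
    # Initialize missing values with column means.
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        # Rank-k reconstruction of the current data estimate.
        Xhat = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        # Overwrite only the missing entries with the model's prediction.
        X[mask] = Xhat[mask]
    return X

# Toy demo: exactly rank-1 data with a few entries removed.
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X_full = t @ np.array([[1.0, 2.0, -1.0]])
X_miss = X_full.copy()
X_miss[::10, 1] = np.nan
X_rec = em_pca_impute(X_miss, n_components=1)
err = np.max(np.abs(X_rec - X_full))
```

Because the toy data are exactly rank 1, the imputed entries converge to the true values; real data would only be approximated to the chosen rank.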
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC), that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (FFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the FFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the FFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
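The core idea of refining the frequency estimate by re-evaluating the DFT on a progressively narrower local frequency window can be sketched as below. This is our own minimal version of the zooming strategy, not the published ilFT algorithm; the function names and window-shrinking rule are assumptions for illustration.

```python
import numpy as np

def zoom_dft(x, fs, f_lo, f_hi, n_bins):
    """Direct DFT of x evaluated on n_bins frequencies in [f_lo, f_hi]."""
    n = np.arange(len(x))
    freqs = np.linspace(f_lo, f_hi, n_bins)
    # One row per trial frequency: exp(-2*pi*i * f * n / fs).
    E = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, E @ x

def iterative_local_ft(x, fs, n_iter=8, n_bins=64):
    """Iteratively shrink the frequency window around the spectral peak."""
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    df = f[1] - f[0]
    f_peak = f[np.argmax(spec)]
    lo, hi = f_peak - df, f_peak + df
    for _ in range(n_iter):
        freqs, F = zoom_dft(x, fs, lo, hi, n_bins)
        f_peak = freqs[np.argmax(np.abs(F))]
        width = (hi - lo) / 4.0  # halve the window around the new peak
        lo, hi = f_peak - width, f_peak + width
    return f_peak

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 123.4567 * t)
f_est = iterative_local_ft(x, fs)
```

With 2048 samples the plain FFT bin spacing is about 0.49 Hz, while the zoomed estimate lands well inside that bin.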
Reid, Helen J; Thomson, Clare; McGlade, Kieran J
2016-07-22
Elearning is ubiquitous in healthcare professions education. Its equivalence to 'traditional' educational delivery methods is well established. There is a research imperative to clarify when and how to use elearning most effectively, lest it become merely a 'disruptive technology'. Research has begun to broadly identify challenges encountered by elearning users. In this study, we explore in depth the perceived obstacles to elearning engagement amongst medical students. The sensitising concepts of achievement emotions and the cognitive demands of multi-tasking highlight why students' deeply emotional responses to elearning may be so important in their learning. This study used focus groups as a data collection tool; a purposeful sample of 31 students participated. Iterative data gathering and analysis phases employed a constant comparative approach to generate themes firmly grounded in participant experience. Key themes that emerged from the data included a sense of injustice, passivity, and a feeling of being 'lost at sea'. The actual content of the elearning resource provided important context. The identified themes have strong emotional foundations. These responses, interpreted through the lens of achievement emotions, have not previously been described. Appreciation of their importance is of benefit to educators involved in curriculum development or delivery.
Creating a Body of Knowledge for cartography
NASA Astrophysics Data System (ADS)
Fairbairn, David
2018-05-01
The nature of knowledge is considered, notably its creation and formalisation, and some of the issues which relate to disciplinary knowledge in particular. It is suggested that cartography has particular needs: addressing its disciplinary boundaries, the role of uncertain and 'troublesome' knowledge in its subject-matter, and the enhancement of its subject-specific knowledge with more generic supporting material, including skills and attitudes. An overview of Bodies of Knowledge (BoK) in other disciplines has been undertaken, and models of BoK structure, content and usage have been assessed. BoKs in closely related subjects, including civil engineering, GIS and software engineering, give examples of good practice. The paper concentrates on the work done to date to create the cartography BoK, and the adoption of the 'Delphi' method of consultation to develop it. The Delphi method is intended to yield consensus on the scope, content, context and use of the BoK. It is regarded as a rigorous process, iterative (and therefore time-consuming), involving questionnaire surveys, opinion gathering, discourse analysis, and feedback. The participants are expected to be experts from a range of different sectors, but 'volunteer amateurs' are also important consultants.
NASA Astrophysics Data System (ADS)
Weiss, Chester J.
2013-08-01
An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and over length scales from 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration.
High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
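The matrix-free idea described above, supplying the iterative solver with a routine that applies the operator instead of a stored matrix, can be illustrated with SciPy's QMR implementation. The toy symmetric operator below (a shifted 1D Laplacian) merely stands in for the finite-volume system; it is not the solver from the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, qmr

n = 200

def apply_A(v):
    """Matrix-free shifted 1D Laplacian: 3*v_i - v_{i-1} - v_{i+1}."""
    out = 3.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

# QMR needs both A.v and A^T.v; this operator is symmetric, so they coincide.
A = LinearOperator((n, n), matvec=apply_A, rmatvec=apply_A)
b = np.ones(n)
x, info = qmr(A, b)           # info == 0 signals convergence
residual = np.linalg.norm(apply_A(x) - b)
```

No n-by-n matrix is ever formed: the Krylov iteration only sees the `apply_A` callback, which is exactly how "on the fly" element computation avoids matrix storage.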
Forward marching procedure for separated boundary-layer flows
NASA Technical Reports Server (NTRS)
Carter, J. E.; Wornom, S. F.
1975-01-01
A forward-marching procedure for separated boundary-layer flows which permits the rapid and accurate solution of flows of limited extent is presented. The streamwise convection of vorticity in the reversed flow region is neglected, and this approximation is incorporated into a previously developed (Carter, 1974) inverse boundary-layer procedure. The equations are solved by the Crank-Nicolson finite-difference scheme in which column iteration is carried out at each streamwise station. Instabilities encountered in the column iterations are removed by introducing timelike terms in the finite-difference equations. This provides both unconditional diagonal dominance and a column iterative scheme, found to be stable using the von Neumann stability analysis.
NASA Astrophysics Data System (ADS)
Philipps, V.; Malaquias, A.; Hakola, A.; Karhunen, J.; Maddaluno, G.; Almaviva, S.; Caneve, L.; Colao, F.; Fortuna, E.; Gasior, P.; Kubkowska, M.; Czarnecka, A.; Laan, M.; Lissovski, A.; Paris, P.; van der Meiden, H. J.; Petersson, P.; Rubel, M.; Huber, A.; Zlobinski, M.; Schweer, B.; Gierse, N.; Xiao, Q.; Sergienko, G.
2013-09-01
Analysis and understanding of wall erosion, material transport and fuel retention are among the most important tasks for ITER and future devices, since these questions largely determine the lifetime and availability of the fusion reactor. These data are also of extreme value to improve the understanding and validate the models of the in-vessel build-up of the T inventory in ITER and future D-T devices. So far, research in these areas is largely supported by post-mortem analysis of wall tiles. However, access to samples will be very much restricted in the next-generation devices (such as ITER, JT-60SA, W7-X, etc) with actively cooled plasma-facing components (PFC) and increasing duty cycle. This has motivated the development of methods to measure the deposition of material and retention of plasma fuel on the walls of fusion devices in situ, without removal of PFC samples. For this purpose, laser-based methods are the most promising candidates. Their feasibility has been assessed in a cooperative undertaking in various European associations under EFDA coordination. Different laser techniques have been explored both under laboratory and tokamak conditions, with the emphasis on developing a conceptual design for a laser-based wall diagnostic integrated into an ITER port plug, aiming to characterize in situ relevant parts of the inner wall, the upper region of the inner divertor, part of the dome and the upper X-point region.
Civil Restitution as an Objective of Department of Homeland Security Mission 3
2014-06-01
applicability and effectiveness, and allow for iterations accordingly. The 1975 Georgia’s Restitution Center Program, and the 1977 Georgia’s Non-Residential...the United States without lawful admission, Vermont Service Center, Application for Permission to Reapply for Admission into the United States...Types of Research Designs.” September 23, 2013. http://libguides.usc.edu/content.php?pid=83009&sid=818072. Vermont Service Center. Application
NASA Astrophysics Data System (ADS)
Sipio, Eloisa Di; Bertermann, David
2018-04-01
In engineering, agricultural and meteorological project design, sediment thermal properties are highly important parameters, and thermal conductivity plays a fundamental role when dimensioning ground heat exchangers, especially in very shallow geothermal systems. Herein, the first 2 m of depth from the surface is of critical importance. However, the heat transfer in unconsolidated material is difficult to estimate, as it depends on several factors, including particle size, bulk density, water content, mineralogical composition and ground temperature. The performance of a very shallow geothermal system, such as a horizontal collector or heat basket, is strongly correlated to the type of sediment available and rapidly decreases in the case of dry-unsaturated conditions. The available experimental data are often scattered, incomplete and do not fully support thermo-active ground structure modeling. The ITER project, funded by the European Union, contributes to a better knowledge of the relationship between thermal conductivity and water content, required for understanding the behaviour of very shallow geothermal systems in saturated and unsaturated conditions. To enhance the performance of horizontal geothermal heat exchangers, thermally enhanced backfilling materials were tested in the laboratory, and an overview of the variations in physical-thermal properties under several moisture and load conditions for different mixtures of natural material is presented here.
Iterative atmospheric correction scheme and the polarization color of alpine snow
NASA Astrophysics Data System (ADS)
Ottaviani, Matteo; Cairns, Brian; Ferrare, Rich; Rogers, Raymond
2012-07-01
Characterization of the Earth's surface is crucial to remote sensing, both to map geomorphological features and because subtracting this signal is essential during retrievals of the atmospheric constituents located between the surface and the sensor. Current operational algorithms model the surface total reflectance through a weighted linear combination of a few geometry-dependent kernels, each devised to describe a particular scattering mechanism. The information content of these measurements is overwhelmed by that of instruments with polarization capabilities: proposed models in this case are based on the Fresnel reflectance of an isotropic distribution of facets. Because of its remarkable lack of spectral contrast, the polarized reflectance of land surfaces in the shortwave infrared spectral region, where atmospheric scattering is minimal, can be used to model the surface also at shorter wavelengths, where aerosol retrievals are attempted based on well-established scattering theories. In radiative transfer simulations, straightforward separation of the surface and atmospheric contributions is not possible without approximations because of the coupling introduced by multiple reflections. Within a general inversion framework, the problem can be eliminated by linearizing the radiative transfer calculation, and making the Jacobian (i.e., the derivative expressing the sensitivity of the reflectance with respect to model parameters) available at output. We present a general methodology based on a Gauss-Newton iterative search, which automates this procedure and eliminates de facto the need for an ad hoc atmospheric correction. In this case study we analyze the color variations in the polarized reflectance measured by the NASA Goddard Institute for Space Studies Research Scanning Polarimeter during a survey of late-season snowfields in the High Sierra.
This thus far unique dataset presents challenges linked to the rugged topography of the alpine environment and a likely high water content due to melting. The analysis benefits from ancillary information provided by the NASA Langley High Spectral Resolution Lidar deployed on the same aircraft. The results obtained from the iterative scheme are contrasted against the surface polarized reflectance obtained by ignoring multiple reflections, via the simplistic subtraction of the atmospheric scattering contribution. Finally, the retrieved reflectance is modeled after the scattering properties of a dense collection of ice crystals at the surface. Confirming that the polarized reflectance of snow is spectrally flat would make it possible to extend the techniques already in use for polarimetric retrievals of aerosol properties over land to the large portion of snow-covered pixels plaguing orbital and suborbital observations.
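The Gauss-Newton iterative search at the heart of such retrievals linearizes the forward model around the current parameter estimate and solves the resulting least-squares problem for an update. The sketch below applies it to a toy exponential model, not to a radiative transfer code; the model, data and starting point are illustrative assumptions.

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=20):
    """Gauss-Newton: linearize the residual around p and solve for the step."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        # Solve the linearized least-squares problem J*step = -r.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

# Toy retrieval: fit y = a * exp(-b * x) to noisy data.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 2.0, 40)
y = 3.0 * np.exp(-1.5 * x) + rng.normal(scale=0.01, size=x.size)

def residual(p):
    a, b = p
    return a * np.exp(-b * x) - y

def jacobian(p):
    a, b = p
    e = np.exp(-b * x)
    return np.column_stack([e, -a * x * e])   # d(residual)/d(a, b)

p = gauss_newton(residual, jacobian, [1.0, 1.0])
```

The explicitly supplied Jacobian plays the same role as the linearized radiative transfer output mentioned in the abstract: it lets the search update all parameters at once instead of correcting the atmosphere separately.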
Human Factors Assessment and Redesign of the ISS Respiratory Support Pack (RSP) Cue Card
NASA Technical Reports Server (NTRS)
Byrne, Vicky; Hudy, Cynthia; Whitmore, Mihriban; Smith, Danielle
2007-01-01
The Respiratory Support Pack (RSP) is a medical pack onboard the International Space Station (ISS) that contains much of the necessary equipment for providing aid to a conscious or unconscious crewmember in respiratory distress. Inside the RSP lid pocket is a 5.5 by 11 inch paper procedural cue card, which is used by a Crew Medical Officer (CMO) to set up the equipment and deliver oxygen to a crewmember. In training, crewmembers expressed concerns about the readability and usability of the cue card; consequently, updating the cue card was prioritized as an activity to be completed. The Usability Testing and Analysis Facility at the Johnson Space Center (JSC) evaluated the original layout of the cue card, and proposed several new cue card designs based on human factors principles. The approach taken for the assessment was an iterative process. First, in order to completely understand the issues with the RSP cue card, crewmember post training comments regarding the RSP cue card were taken into consideration. Over the course of the iterative process, the procedural information was reorganized into a linear flow after the removal of irrelevant (non-emergency) content. Pictures, color coding, and borders were added to highlight key components in the RSP to aid in quickly identifying those components. There were minimal changes to the actual text content. Three studies were conducted using non-medically trained JSC personnel (total of 34 participants). Non-medically trained personnel participated in order to approximate a scenario of limited CMO exposure to the RSP equipment and training (which can occur six months prior to the mission). In each study, participants were asked to perform two respiratory distress scenarios using one of the cue card designs to simulate resuscitation (using a mannequin along with the hardware). Procedure completion time, errors, and subjective ratings were recorded. 
The last iteration of the cue card featured a schematic of the RSP, colors, borders, and simplification of the flow of information. The time to complete the RSP procedure was reduced by approximately three minutes with the new design. In an emergency situation, three minutes significantly increases the probability of saving a life. In addition, participants showed the highest preference for this design. The results of the studies and the new design were presented to a focus group of astronauts, flight surgeons, medical trainers, and procedures personnel. The final cue card was presented to a medical control board and approved for flight. The revised RSP cue card is currently onboard ISS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Recent Updates to the MELCOR 1.8.2 Code for ITER Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, Brad J
This report documents recent changes made to the MELCOR 1.8.2 computer code for application to the International Thermonuclear Experimental Reactor (ITER), as required by ITER Task Agreement ITA 81-18. There are four areas of change documented by this report. The first area is the addition to this code of a model for transporting HTO. The second area is the updating of the material oxidation correlations to match those specified in the ITER Safety Analysis Data List (SADL). The third area replaces a modification to an aerosol transport subroutine that specified the nominal aerosol density internally with one that now allows the user to specify this density through user input. The fourth area corrects an error that existed in an air condensation subroutine of previous versions of this modified MELCOR code. The appendices of this report contain FORTRAN listings of the coding for these modifications.
Uniform convergence of multigrid V-cycle iterations for indefinite and nonsymmetric problems
NASA Technical Reports Server (NTRS)
Bramble, James H.; Kwak, Do Y.; Pasciak, Joseph E.
1993-01-01
In this paper, we present an analysis of a multigrid method for nonsymmetric and/or indefinite elliptic problems. In this multigrid method various types of smoothers may be used. One type of smoother which we consider is defined in terms of an associated symmetric problem and includes point and line, Jacobi, and Gauss-Seidel iterations. We also study smoothers based entirely on the original operator. One is based on the normal form, that is, the product of the operator and its transpose. Other smoothers studied include point and line, Jacobi, and Gauss-Seidel. We show that the uniform estimates for symmetric positive definite problems carry over to these algorithms. More precisely, the multigrid iteration for the nonsymmetric and/or indefinite problem is shown to converge at a uniform rate provided that the coarsest grid in the multilevel iteration is sufficiently fine (but not depending on the number of multigrid levels).
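A minimal multigrid V-cycle of the kind analyzed above can be written down concretely for the 1D Poisson problem with Gauss-Seidel smoothing. This sketch is for the symmetric positive definite model problem only, not the nonsymmetric/indefinite setting of the paper; restriction is full weighting and prolongation is linear interpolation.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps=2):
    """Gauss-Seidel smoothing sweeps for -u'' = f with fixed zero endpoints."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h):
    """One multigrid V-cycle on a 1D grid with 2^k + 1 points."""
    u = gauss_seidel(u, f, h)                          # pre-smooth
    if len(u) <= 3:
        return u
    r = np.zeros_like(u)                               # residual of -u'' = f
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = np.zeros((len(u) + 1) // 2)                   # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)         # coarse-grid correction
    e = np.zeros_like(u)                               # linear-interpolation prolongation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return gauss_seidel(u, f, h)                       # post-smooth

n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                       # -u'' = f has u = sin(pi*x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The error after a fixed number of V-cycles is essentially the discretization error, reflecting the uniform (grid-independent) convergence rate that the paper extends to nonsymmetric problems.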
Spotting the difference in molecular dynamics simulations of biomolecules
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Kono, Hidetoshi
2016-08-01
Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
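The discriminant-projection idea underlying LDA-ITER can be illustrated with plain Fisher LDA on two synthetic point clouds standing in for trajectory snapshots. This is only the classical single-shot LDA step, not the full iterative procedure of the paper, and the toy data are our own.

```python
import numpy as np

def fisher_lda_direction(X1, X2):
    """Fisher discriminant: direction maximizing between- vs within-class spread."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
# Two "trajectories": identical scatter, means separated along the first axis.
X1 = rng.normal(size=(500, 3)) + np.array([2.0, 0.0, 0.0])
X2 = rng.normal(size=(500, 3))
w = fisher_lda_direction(X1, X2)
p1, p2 = X1 @ w, X2 @ w
# Normalized separation of the two projected ensembles.
separation = abs(p1.mean() - p2.mean()) / np.sqrt(0.5 * (p1.var() + p2.var()))
```

Projecting both ensembles onto `w` concentrates the difference between the conditions into one coordinate, which is the sense in which the method finds where two simulations "do not overlap".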
Iterative Monte Carlo analysis of spin-dependent parton distributions
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
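The replica idea behind Monte Carlo fitting with statistically rigorous uncertainties, refitting many pseudo-data sets smeared within their errors and reading parameter uncertainties off the spread of fits, can be sketched on a toy linear model. This is illustrative only; the actual procedure in the paper also iterates the fit priors, which we omit here.

```python
import numpy as np

def fit_line(x, y):
    """Least-squares fit of y = a*x + b via the normal equations."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1
y = 2.0 * x + 1.0 + rng.normal(scale=sigma, size=x.size)

# Monte Carlo replicas: smear the data within its errors and refit each replica.
fits = np.array([fit_line(x, y + rng.normal(scale=sigma, size=x.size))
                 for _ in range(500)])
a_mean, b_mean = fits.mean(axis=0)
a_err, b_err = fits.std(axis=0)      # parameter uncertainties from the spread
```

The replica spread reproduces the textbook error propagation for this linear case, but the same recipe works for fits where analytic error formulas are unavailable.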
Drawing dynamical and parameters planes of iterative families and methods.
Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim iterative family is performed on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide excellent schemes (or dreadful ones).
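A dynamical plane of the kind drawn in such studies is produced by iterating the method from a grid of complex starting points and coloring each pixel by the root it converges to. The sketch below uses the plain Newton map on z^2 - 1 rather than Kim's fourth-order family, purely to show the mechanics; grid size and iteration budget are arbitrary choices.

```python
import numpy as np

def newton_dynamical_plane(f, df, nx=200, ny=200, extent=2.0, max_iter=40):
    """Iterate the Newton map z -> z - f(z)/df(z) from a grid of seeds."""
    xs = np.linspace(-extent, extent, nx)
    ys = np.linspace(-extent, extent, ny)
    Z = xs[None, :] + 1j * ys[:, None]
    with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
        for _ in range(max_iter):
            Z = Z - f(Z) / df(Z)
    return Z

# Quadratic polynomial p(z) = z^2 - 1, whose Newton basins split at Re(z) = 0.
Z = newton_dynamical_plane(lambda z: z * z - 1.0, lambda z: 2.0 * z)
roots = np.where(Z.real > 0, 1.0, -1.0)     # nearest-root label per pixel
converged = np.abs(Z - roots) < 1e-6        # pixels that reached a root
frac = converged.mean()
```

Coloring the `roots` array (e.g. with matplotlib's `imshow`) reproduces the familiar two-basin fractal picture; for a parametric family one would redo this per parameter value to build the parameter plane.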
Viscous and Interacting Flow Field Effects.
1980-06-01
in the inviscid flow analysis using free vortex sheets whose shapes are determined by iteration. The outer iteration employs boundary layer...Methods, Inc. which replaces the source distribution in the separation zone by a vortex wake model. This model is described in some detail in (2), but...in the potential flow is obtained using linearly varying vortex singularities distributed on planar panels. The wake is represented by sheets of
Gas Flows in Rocket Motors. Volume 2. Appendix C. Time Iterative Solution of Viscous Supersonic Flow
1989-08-01
Keywords: nozzle analysis, Navier-Stokes, turbulent flow, equilibrium chemistry. ...quasi-conservative formulations lead to unacceptably large mass conservation errors. Along with the investigations of Navier-Stokes algorithms... Characteristics Splitting... Non-Iterative PNS Procedure... Comparisons of
Blind One-Bit Compressive Sampling
2013-01-17
[14] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse...methods for nonconvex optimization on the unit sphere and has provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0
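The BIHT algorithm mentioned in these fragments alternates a gradient-like step enforcing sign consistency with a hard-thresholding step enforcing sparsity. Below is a generic textbook-style sketch on synthetic data, not the report's own implementation; the step size and iteration count are arbitrary choices.

```python
import numpy as np

def biht(A, y, k, n_iter=100, tau=1.0):
    """Binary iterative hard thresholding for one-bit measurements y = sign(Ax)."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        # Gradient-like step on the sign-consistency objective.
        x = x + (tau / m) * A.T @ (y - np.sign(A @ x))
        # Hard threshold: keep only the k largest-magnitude entries.
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x / np.linalg.norm(x)   # one-bit CS recovers direction only

rng = np.random.default_rng(3)
n, m, k = 100, 500, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_true /= np.linalg.norm(x_true)
A = rng.normal(size=(m, n))
y = np.sign(A @ x_true)
x_hat = biht(A, y, k)
cos_sim = abs(x_hat @ x_true)
```

Because one-bit measurements discard all amplitude information, recovery is judged by the angle between `x_hat` and `x_true` rather than by their difference.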
Wang, An; Cao, Yang; Shi, Quan
2018-01-01
In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an H₊-matrix, respectively.
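The modulus-based family builds on the classic modulus substitution, which turns a complementarity problem into a fixed-point equation for an iteration. The sketch below applies the basic (unsplit) modulus method to a plain linear complementarity problem rather than the implicit variant studied in the paper; the test matrix is an arbitrary symmetric positive-definite example.

```python
import numpy as np

def modulus_iteration(M, q, n_iter=200):
    """Modulus method for the LCP: find z >= 0 with w = Mz + q >= 0 and z'w = 0.
    Substituting z = |x| + x and w = |x| - x turns the LCP into the fixed point
    x = (I + M)^{-1} ((I - M)|x| - q)."""
    n = len(q)
    I = np.eye(n)
    B = np.linalg.inv(I + M)   # fine for a small demo; factorize in practice
    x = np.zeros(n)
    for _ in range(n_iter):
        x = B @ ((I - M) @ np.abs(x) - q)
    z = np.abs(x) + x          # nonnegative by construction
    w = M @ z + q
    return z, w

# Symmetric positive-definite test problem.
M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
q = np.array([1.0, -5.0, 2.0])
z, w = modulus_iteration(M, q)
comp = z @ w                   # complementarity z'w should vanish
```

For positive-definite M the map is a contraction (the Cayley-transform factor has norm below one), which is the prototype of the convergence conditions generalized in the paper.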
Including information technology project management in the nursing informatics curriculum.
Sockolow, Paulina; Bowles, Kathryn H
2008-01-01
Project management is a critical skill for nurse informaticists who are in prominent roles developing and implementing clinical information systems. It should be included in the nursing informatics curriculum, as evidenced by its inclusion in informatics competencies and surveys of important skills for informaticists. The University of Pennsylvania School of Nursing includes project management in two of the four courses in the master's level informatics minor. Course content includes the phases of the project management process; the iterative unified process methodology; and related systems analysis and project management skills. During the introductory course, students learn about the project plan, requirements development, project feasibility, and executive summary documents. In the capstone course, students apply the system development life cycle and project management skills during precepted informatics projects. During this in situ experience, students learn, the preceptors benefit, and the institution better prepares its students for the real world.
Influence of ICRF heating on the stability of TAEs
NASA Astrophysics Data System (ADS)
Sears, J.; Burke, W.; Parker, R. R.; Snipes, J. A.; Wolfe, S.
2007-11-01
Unstable toroidicity-induced Alfvén eigenmodes (TAEs) can appear spontaneously due to resonant interaction with fast particles such as fusion alphas, raising concern that TAEs may threaten ITER performance. This work investigates the progression of stable TAE damping rates toward instability during a scan of ICRF heating power up to 3.1 MW. Stable eigenmodes are identified in Alcator C-Mod by the Active MHD diagnostic. Unstable TAEs are observed to appear spontaneously in C-Mod limited L-mode plasmas at sufficient tail energies generated by >3 MW of ICRF heating. However, preliminary analysis of experiments with moderate ICRF heating power shows that TAE stability may not simply degrade with overall fast particle content. There are hints that the stability of some TAEs may be enhanced in the presence of fast particle distribution tails. Furthermore, the radial profile of the energetic particle distribution relative to the safety factor profile affects the influence of ICRF power on TAE stability.
College Students’ Alcohol Displays on Facebook: Intervention Considerations
Moreno, Megan A.; Grant, Allison; Kacvinsky, Lauren; Egan, Katie G.; Fleming, Michael F.
2012-01-01
Objective: The purpose of this study was to investigate college freshmen's views towards potential social networking site (SNS) screening or intervention efforts regarding alcohol. Participants: Freshmen college students between February 2010 and May 2011. Methods: Participants were interviewed; all interviews were audio recorded and transcribed. Qualitative analysis was conducted using an iterative approach. Results: A total of 132 participants completed the interview (70% response rate); the average age was 18.4 years (SD 0.49) and 64 were males (48.5%). Three themes emerged from our data. First, most participants stated they viewed displayed alcohol content as indicative of alcohol use. Second, they explained they would prefer to be approached in a direct manner by someone they knew. Third, the style of approach was considered critical. Conclusions: When approaching college students regarding alcohol messages on SNSs, both the relationship and the approach are key factors. PMID:22686361
Migration and HIV risk: Life histories of Mexican-born men living with HIV in North Carolina
Mann, Lilli; Valera, Erik; Hightow-Weidman, Lisa B.; Barrington, Clare
2015-01-01
Latino men in the Southeastern USA are disproportionately affected by HIV, but little is known about how the migration process influences HIV-related risk. In North Carolina (NC), a relatively new immigrant destination, Latino men are predominantly young and from Mexico. We conducted 31 iterative life history interviews with 15 Mexican-born men living with HIV. We used holistic content narrative analysis methods to examine HIV vulnerability in the context of migration and to identify important turning points. Major themes included the prominence of traumatic early life experiences, migration as an ongoing process rather than a finite event, and HIV diagnosis as a final turning point in migration trajectories. Findings provide a nuanced understanding of HIV vulnerability throughout the migration process and have implications including the need for bi-national HIV prevention approaches, improved outreach around early testing and linkage to care, and attention to mental health. PMID:24866206
van der Werf, N R; Willemink, M J; Willems, T P; Greuter, M J W; Leiner, T
2017-12-28
The objective of this study was to evaluate the influence of iterative reconstruction on coronary calcium scores (CCS) at different heart rates for four state-of-the-art CT systems. Within an anthropomorphic chest phantom, artificial coronary arteries were translated in a water-filled compartment. The arteries contained three different calcifications with low (38 mg), medium (80 mg) and high (157 mg) mass. Linear velocities were applied, corresponding to heart rates of 0, < 60, 60-75 and > 75 bpm. Data were acquired on four state-of-the-art CT systems (CT1-CT4) with routinely used CCS protocols. Filtered back projection (FBP) and three increasing levels of iterative reconstruction (L1-L3) were used for reconstruction. CCS were quantified as Agatston score and mass score. An iterative reconstruction susceptibility (IRS) index was used to assess the susceptibility of the Agatston score (IRS_AS) and the mass score (IRS_MS) to iterative reconstruction. IRS values were compared between CT systems and between calcification masses. For each heart rate, differences in CCS of iteratively reconstructed images were evaluated with CCS of FBP images as reference, and classified as small (< 5%), medium (5-10%) or large (> 10%). Statistical analysis was performed with repeated measures ANOVA tests. While only subtle differences were found for Agatston scores of the low mass calcification, the medium and high mass calcifications showed CCS increased by up to 77% with increasing heart rates. IRS_AS of CT1-CT4 were 17, 41, 130 and 22% higher than IRS_MS. IRS values differed significantly not only between CT systems but also between calcification masses: up to a fourfold increase in IRS was found for the low mass calcification in comparison with the high mass calcification. With increasing iterative reconstruction strength, maximum decreases of 21 and 13% were found for Agatston and mass scores, respectively. In total, 21 large differences between Agatston scores from FBP and iterative reconstruction were found, while only five large differences were found between FBP and iterative reconstruction mass scores. Iterative reconstruction results in reduced CCS. The effect of iterative reconstruction on CCS is more prominent with low-density calcifications, high heart rates and increasing iterative reconstruction strength.
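As background for the Agatston numbers discussed above, the scoring rule itself is simple: voxels at or above 130 HU form lesions, and each lesion contributes its area times a weight set by its peak attenuation. A minimal per-slice sketch follows; the pixel size, the toy image, and the trivial labeling step are illustrative assumptions (real scoring also handles slice thickness and minimum lesion area):

```python
import numpy as np

# Agatston score sketch for one axial slice: each calcified lesion contributes
# area_mm2 * weight, where the weight is set by the lesion's peak HU value.
def density_weight(peak_hu):
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_slice(hu, pixel_area_mm2, labels):
    # labels: integer mask of connected calcified lesions (0 = background)
    score = 0.0
    for lesion in range(1, labels.max() + 1):
        mask = (labels == lesion) & (hu >= 130)
        if mask.sum() == 0:
            continue
        area = mask.sum() * pixel_area_mm2
        score += area * density_weight(hu[mask].max())
    return score

hu = np.zeros((8, 8))
hu[2:4, 2:4] = 250          # one 4-pixel lesion peaking at 250 HU -> weight 2
labels = (hu >= 130).astype(int)
score = agatston_slice(hu, pixel_area_mm2=0.25, labels=labels)  # 4*0.25*2 = 2.0
```

The density-weight thresholds make the score sensitive to small shifts in peak HU, which is one reason the Agatston score reacts more strongly to iterative reconstruction than the mass score.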
The SeaDataNet data products: regional temperature and salinity historical data collections
NASA Astrophysics Data System (ADS)
Simoncelli, Simona; Coatanoan, Christine; Bäck, Orjan; Sagen, Helge; Scoy, Serge; Myroshnychenko, Volodymyr; Schaap, Dick; Schlitzer, Reiner; Iona, Sissy; Fichaut, Michele
2016-04-01
Temperature and Salinity (TS) historical data collections covering the time period 1900-2013 were created for each European marginal sea (Arctic Sea, Baltic Sea, Black Sea, North Sea, North Atlantic Ocean and Mediterranean Sea) within the framework of the SeaDataNet2 (SDN) EU-Project, and they are now available as ODV collections through the SeaDataNet web catalog at http://sextant.ifremer.fr/en/web/seadatanet/. Two versions have been published, representing snapshots of the SDN database content at two different times: V1.1 (January 2014) and V2 (March 2015). A Quality Control Strategy (QCS) has been developed and continuously refined in order to improve the quality of the SDN database content and to create the best product deriving from SDN data. The QCS was originally implemented in collaboration with the MyOcean2 and MyOcean Follow On projects in order to develop a true synergy at the regional level to serve the operational oceanography and climate change communities. The QCS involved the Regional Coordinators, responsible for the scientific assessment, the National Oceanographic Data Centers (NODCs) and the data providers, which, based on the outcome of the data quality assessment, checked and, where necessary, corrected anomalies in the original data. The QCS consists of four main phases: 1) data harvesting from the central CDI; 2) file and parameter aggregation; 3) quality check analysis at regional level; 4) analysis and correction of data anomalies. The approach is iterative to facilitate the upgrade of the SDN database content, and it also allows the versioning of data products with the release of new regional data collections at the end of each QCS loop. The SDN data collections and the QCS will be presented and the results summarized.
Zhao, Jian; Glueck, Michael; Breslav, Simon; Chevalier, Fanny; Khan, Azam
2017-01-01
User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate insights between analysts. We present annotation graphs, a dynamic graph visualization that enables meta-analysis of data based on user-authored annotations. The annotation graph topology encodes annotation semantics, which describe the content of and relations between data selections, comments, and tags. We present a mixed-initiative approach to graph layout that integrates an analyst's manual manipulations with an automatic method based on similarity inferred from the annotation semantics. Various visual graph layout styles reveal different perspectives on the annotation semantics. Annotation graphs are implemented within C8, a system that supports authoring annotations during exploratory analysis of a dataset. We apply principles of Exploratory Sequential Data Analysis (ESDA) in designing C8, and further link these to an existing task typology in the visualization literature. We develop and evaluate the system through an iterative user-centered design process with three experts, situated in the domain of analyzing HCI experiment data. The results suggest that annotation graphs are effective as a method of visually extending user-authored annotations to data meta-analysis for discovery and organization of ideas.
NASA Astrophysics Data System (ADS)
Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.; Ku, L.-P.; Lazarus, E.; Brooks, A.; Zarnstorff, M. C.; Boozer, A. H.; Fu, G.-Y.; Neilson, G. H.
2003-10-01
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schlüter currents, diamagnetic currents and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157) which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment (Reiman et al 2001 Phys. Plasma 8 2083).
Threatt, Anthony L; Merino, Jessica; Brooks, Johnell O; Healy, Stan; Truesdail, Constance; Manganelli, Joseph; Walker, Ian; Green, Keith Evan
2017-04-01
This article presents the results of an exploratory study in which 14 healthcare subject matter experts (H-SMEs) in addition to four research and design subject matter experts (RD-SMEs) at a regional rehabilitation hospital engaged in a series of complementary, participatory activities in order to design an assistive robotic table (ART). As designers, human factor experts, and healthcare professionals continue to work to integrate assistive human-robot technologies in healthcare, it is imperative to understand how the technology affects patient care from clinicians' perspectives. Fourteen clinical H-SMEs rated a subset of conceptual ART design ideas; participated in the iterative design process of ART; and evaluated a final cardboard prototype, the rehabilitation hospital's current over-the-bed table (OBT), an ART built with true materials, and two therapy surface prototypes. Four RD-SMEs conducted a heuristic evaluation on the ART built with true materials. Data were analyzed by frequency and content analysis. The results include a design and prototype for the next generation ART and a pneumatically controlled therapy surface, a broadened list of specifications for the future design and implementation of assistive robotic furniture, and final observations. When compared to the rehabilitation hospital's current OBT, the developed ART in this study was successful. Designing novel features is dependent upon ensuring patient safety. The inclusion of clinicians in the participatory iterative design and evaluation process and the use of personas provided a broadened list of specifications for the successful implementation of assistive robotic furniture.
Is universal health coverage the practical expression of the right to health care?
Ooms, Gorik; Latif, Laila A; Waris, Attiya; Brolan, Claire E; Hammonds, Rachel; Friedman, Eric A; Mulumba, Moses; Forman, Lisa
2014-02-24
The present Millennium Development Goals are set to expire in 2015 and their next iteration is now being discussed within the international community. With regard to health, the World Health Organization proposes universal health coverage as a 'single overarching health goal' for the next iteration of the Millennium Development Goals. The present Millennium Development Goals have been criticised for being 'duplicative' or even 'competing alternatives' to international human rights law. The question then arises: if universal health coverage were indeed to become the single overarching health goal, replacing the present health-related Millennium Development Goals, would that be more consistent with the right to health? The World Health Organization seems to have anticipated the question, as it labels universal health coverage as "by definition, a practical expression of the concern for health equity and the right to health". Rather than waiting for the negotiations to unfold, we thought it would be useful to verify this contention, using a comparative normative analysis. We found that, to be a practical expression of the right to health, at least one element is missing in present authoritative definitions of universal health coverage: a straightforward confirmation that international assistance is essential, not optional. But universal health coverage is a 'work in progress'. A recent proposal by the United Nations Sustainable Development Solutions Network proposed universal health coverage with a set of targets, including a target for international assistance, which would turn universal health coverage into a practical expression of the right to health care.
Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K
2015-04-15
The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective, and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (e.g., relevant and likely to be effective) for the specific mechanism of change that each is intended to target and for the intended target population. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions: defining key intervention targets; delineating intervention activity-target pairings; identifying experts; using a survey tool to gather expert ratings of each activity's relevance to its intended target, its likely effectiveness in achieving that target, and its appropriateness for the intended audience; and using the quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries rated each of 15 intervention activity-target pairings. Based on quantitative indices, content validity was excellent for relevance and good for likely effectiveness and age-appropriateness. Two intervention activities had item-level indicators that suggested the need for further review and potential revision by the development team. This project demonstrated that assessment of content validity can be straightforward and feasible to implement, and that its results provide useful information for ongoing development and iterations of new eHealth interventions, complementing other sources of information (e.g., user feedback, effectiveness evaluations). This approach can be utilized at one or more points during the development process to guide ongoing optimization of eHealth interventions.
NASA Astrophysics Data System (ADS)
Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.
2016-06-01
A multiple solution of linear algebraic systems with dense matrices by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori criterion for recomputing, based on the change in the arithmetic mean of the solution times observed during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, an acceleration of up to 1.6 times compared to the approach without recomputing is obtained.
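The recomputing criterion described, rebuilding the preconditioner when the current solve time exceeds the running arithmetic mean of previous solves, can be sketched as follows. A Jacobi preconditioner and a synthetic, slowly drifting dense system stand in for the paper's matrices; all sizes and the drift model are illustrative assumptions:

```python
import time
import numpy as np
from scipy.sparse.linalg import bicgstab, LinearOperator

rng = np.random.default_rng(0)
n = 200
base = np.eye(n) * n + rng.standard_normal((n, n))  # diagonally dominant base system

def make_preconditioner(A):
    # Jacobi (diagonal) preconditioner as a simple stand-in
    d = 1.0 / np.diag(A)
    return LinearOperator((n, n), matvec=lambda v: d * v)

M = make_preconditioner(base)
times = []
for k in range(20):
    A = base + 0.01 * k * rng.standard_normal((n, n))  # slowly drifting sequence
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x, info = bicgstab(A, b, M=M)
    dt = time.perf_counter() - t0
    # rebuild the preconditioner only when this solve was slower than
    # the arithmetic mean of the previous solve times
    if times and dt > np.mean(times):
        M = make_preconditioner(A)
    times.append(dt)
```

The point of the criterion is that a stale preconditioner is kept as long as it still yields fast solves, so the rebuild cost is paid only when the matrix has drifted enough to slow convergence.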
NASA Astrophysics Data System (ADS)
Mazon, D.; Liegeard, C.; Jardin, A.; Barnsley, R.; Walsh, M.; O'Mullane, M.; Sirinelli, A.; Dorchies, F.
2016-11-01
Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.
NASA Astrophysics Data System (ADS)
Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.
2011-07-01
In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
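The trial-to-trial error convergence discussed above can be illustrated with the simplest member of the family: a P-type ILC update on a discrete linear plant, where the same finite-duration trial is repeated and the input is corrected from the shifted tracking error. The plant matrices, learning gain, and trial length below are illustrative assumptions, not the gantry robot model or the LMI-based design of the article:

```python
import numpy as np

# Minimal P-type iterative learning control sketch on a discrete SISO plant
# x_{t+1} = A x_t + B u_t, y_t = C x_t, repeated over many identical trials.
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 0.0]])

T = 50
ref = np.sin(np.linspace(0.0, np.pi, T))   # reference trajectory for one trial

def run_trial(u):
    x = np.zeros((2, 1))
    y = np.zeros(T)
    for t in range(T):
        y[t] = (C @ x).item()
        x = A @ x + B * u[t]
    return y

u = np.zeros(T)
gamma = 0.8                                 # learning gain; |1 - gamma*C@B| < 1
for trial in range(30):
    e = ref - run_trial(u)
    u[:-1] += gamma * e[1:]                 # u_{k+1}(t) = u_k(t) + gamma*e_k(t+1)

final_error = np.abs(ref - run_trial(u)).max()
```

With C@B = 1 the trial-to-trial error contracts geometrically; the 2D/LMI machinery of the article additionally shapes the along-the-trial transient behavior, which this plain P-type law does not control.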
Numerical solution of Euler's equation by perturbed functionals
NASA Technical Reports Server (NTRS)
Dey, S. K.
1985-01-01
A perturbed functional iteration has been developed to solve nonlinear systems. At each iteration level, it adds unique perturbation parameters to the nonlinear Gauss-Seidel iterates, which enhances the convergence properties; as convergence is approached, these parameters are damped out. Local linearization along the diagonal has been used to compute these parameters. The method requires no computation of the Jacobian and no factorization of matrices. The analysis of convergence depends on properties of certain contraction-type mappings known as D-mappings. In this article, the application of this method to solve an implicit finite difference approximation of Euler's equation is studied. Some representative results for the well-known shock tube problem and compressible flows in a nozzle are given.
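The nonlinear Gauss-Seidel core of such schemes, with the linearization taken only along the diagonal (one-dimensional Newton per component, no full Jacobian), can be sketched as below. The perturbation parameters and their damping, the distinctive part of Dey's method, are omitted, and the small test system is an illustrative assumption:

```python
import numpy as np

# Nonlinear Gauss-Seidel sweep with diagonal linearization: update each x_i
# from F_i(x) = 0 using only dF_i/dx_i, i.e. a scalar Newton step per component.
def F(x):
    return np.array([3.0 * x[0] - np.cos(x[1]) - 0.5,
                     x[1] - 0.25 * np.sin(x[0]) - 0.1])

def dF_diag(x):
    # partial F_i / partial x_i only; no off-diagonal Jacobian entries needed
    return np.array([3.0, 1.0])

x = np.zeros(2)
for sweep in range(50):
    for i in range(2):
        x[i] -= F(x)[i] / dF_diag(x)[i]
    if np.linalg.norm(F(x)) < 1e-12:
        break
```

Because the system is diagonally dominant this plain sweep already contracts; the perturbation parameters in the article accelerate exactly this kind of iteration and are then damped out near the solution.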
Drawing Dynamical and Parameters Planes of Iterative Families and Methods
Chicharro, Francisco I.
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide excellent schemes (or dreadful ones). PMID:24376386
2014-10-01
Nonlinear and non-stationary signal analysis is important and difficult. The method described aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs).
NASA Astrophysics Data System (ADS)
Domnisoru, L.; Modiga, A.; Gasparotti, C.
2016-08-01
At the ship's design stage, the first step of the hull structural assessment is based on the longitudinal strength analysis, with head wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D-hull offset lines, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical study case we have considered a large LPG liquefied petroleum gas carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
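The innermost of the three interlinked equilibrium cycles, the floating condition, amounts to balancing ship weight against buoyancy. A minimal sketch for a box barge on still water follows; the dimensions and weight are hypothetical, and a real hull would integrate the 3D offset lines instead of using L*B*T:

```python
# Floating-equilibrium sketch for a box barge: find the draft T at which
# buoyancy rho*g*L*B*T equals the ship weight W, by bisection on the residual.
rho, g = 1025.0, 9.81         # seawater density (kg/m^3), gravity (m/s^2)
L, B = 220.0, 36.0            # assumed LPG-carrier-like length and beam (m)
W = 9.0e8                     # assumed ship weight (N)

def residual(T):
    # positive when the hull displaces more than the weight requires
    return rho * g * L * B * T - W

lo, hi = 0.0, 30.0            # bracketing drafts (m)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) < 0.0:
        lo = mid
    else:
        hi = mid
draft = 0.5 * (lo + hi)
```

In the full procedure this balance is solved together with the pitch and roll trim conditions on a wavy free surface, which is why the three cycles must be iterated jointly.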
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage saving for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver for its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.
Cai, Jia; Tang, Yi
2018-02-01
Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. Its kernel generalization, namely kernel CCA, was proposed to describe the nonlinear relationship between two variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it also suffers from the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results show the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
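The randomized Kaczmarz building block used by the algorithm can be sketched on a plain consistent linear system: rows are sampled with probability proportional to their squared norms, and the iterate is projected onto the hyperplane of the sampled row. The system sizes and iteration count below are illustrative assumptions:

```python
import numpy as np

# Randomized Kaczmarz sketch for a consistent system A x = b:
# sample row i with probability ||a_i||^2 / ||A||_F^2, then project the
# current iterate onto the hyperplane {x : a_i^T x = b_i}.
rng = np.random.default_rng(1)
m, n = 300, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                      # consistent right-hand side

row_norms = np.einsum('ij,ij->i', A, A)
probs = row_norms / row_norms.sum()

x = np.zeros(n)
for _ in range(20000):
    i = rng.choice(m, p=probs)
    x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
```

The expected error contracts by a factor governed by the scaled condition number of A per step, which is exactly the quantity the paper's convergence analysis is built on.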
Cook-Cottone, Catherine; Lemish, Erga; Guyker, Wendy
2017-11-01
This study focused on the perspectives of school personnel affiliated with the Encinitas Union School District in California following a lawsuit arguing that their yoga-based program included religion and therefore was unsuitable for implementation in public schools and was unconstitutional. Participants (N = 32) were interviewed using a semistructured interview, and data were analyzed according to Interpretative Phenomenological Analysis. Five super-ordinate themes (including sub-themes) were identified in an iterative process, including: participants' perspectives on the roots of yoga and the type of yoga taught in their district; the process of introducing a yoga-in-the-schools program in light of this contention (including challenges and obstacles, and how these were met); perspectives on the lawsuit and how the process unfolded; effects of the lawsuit on school climate and beyond; and perspectives on yoga as, and as not, religious. The study attempts to shed light on the impact of an ongoing lawsuit on a school district at the time of implementation of a program for students' well being.
Choosing order of operations to accelerate strip structure analysis in parameter range
NASA Astrophysics Data System (ADS)
Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.
2018-05-01
The paper considers the issue of using iteration methods in solving the sequence of linear algebraic systems obtained in quasistatic analysis of strip structures with the method of moments. Using the analysis of 4 strip structures, the authors have proved that additional acceleration (up to 2.21 times) of the iterative process can be obtained during the process of solving linear systems repeatedly by means of choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate the process of computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is quite simple, universal and could be used not only for strip structure analysis but also for a wide range of computational problems.
NASA Technical Reports Server (NTRS)
Smith, D. R.
1982-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a Barnes-type scheme for the analysis of surface meteorological data. Modifications are introduced to the original version in order to increase its flexibility and to permit greater ease of use. The code was rewritten for an interactive computer environment. Furthermore, a multiple-iteration technique suggested by Barnes was implemented for greater accuracy. PROAM was subjected to a series of experiments in order to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution in order to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple-iteration technique increases the accuracy of the analysis. Furthermore, the tests verify appropriate values for the analysis parameters in resolving meso-beta-scale phenomena.
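The successive-correction idea behind a Barnes-type scheme (a first weighted-average pass, then passes that re-spread the residuals at the observation sites with a tightened weight) can be sketched as follows. This is a minimal 1-D illustration with Gaussian weights; the function names and parameter values are illustrative, not PROAM's.

```python
import math

def barnes_analysis(obs, grid, kappa=1.0, gamma=0.3, passes=3):
    """Barnes-style successive-correction analysis in 1-D (illustrative).

    obs  : list of (x, value) observations
    grid : list of x positions at which the analysis is wanted
    Each correction pass re-spreads the residuals at the observation
    sites with a tightened Gaussian weight (kappa scaled by gamma)."""
    def w(dx, k):
        return math.exp(-dx * dx / k)

    def spread(points, x, k):
        den = sum(w(x - xo, k) for xo, _ in points)
        return sum(w(x - xo, k) * v for xo, v in points) / den

    g_grid = [spread(obs, x, kappa) for x in grid]          # first pass
    g_obs = [spread(obs, xo, kappa) for xo, _ in obs]
    k = kappa
    for _ in range(passes - 1):
        k *= gamma                                          # tighter response
        resid = [(xo, v - gi) for (xo, v), gi in zip(obs, g_obs)]
        g_grid = [g + spread(resid, x, k) for g, x in zip(g_grid, grid)]
        g_obs = [g + spread(resid, xo, k) for g, (xo, _) in zip(g_obs, obs)]
    return g_grid
```

Consistent with the abstract's finding, additional correction passes pull the analysis closer to the observations while the first pass alone stays smoother.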
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, L.; Burton, A.; Lu, H.X.
Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantages and disadvantages. Conventional NMO methods are relatively inexpensive but require simplifying assumptions about the geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least-squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for the quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toth, Alex; Kelley, C. T.; Slattery, Stuart R
A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single-physics applications. This solution approach is appealing due to its simplicity of implementation and the ability to leverage existing software packages to accurately solve single-physics applications. However, there are several drawbacks in the convergence behavior of this method; namely, slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and faster converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
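The contrast between Picard iteration and Anderson acceleration can be sketched on a toy scalar fixed-point problem (not the authors' neutronics model). The Anderson update below follows the standard formulation: keep a short history of residuals f = G(x) - x, and mix the recent G-values with least-squares coefficients; with depth m = 1 this reduces to a secant-like step.

```python
import numpy as np

def picard(G, x0, iters=50):
    """Plain fixed-point (Picard) iteration: x <- G(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = G(x)
    return x

def anderson(G, x0, iters=50, m=1):
    """Anderson acceleration of the fixed-point iteration x <- G(x).

    Keeps a short history of iterates and residuals f = G(x) - x and
    takes the combination that minimizes the linearized residual."""
    x = np.asarray(x0, dtype=float)
    X, GX = [], []
    for _ in range(iters):
        g = np.asarray(G(x), dtype=float)
        X.append(x.copy())
        GX.append(g.copy())
        mk = min(m, len(X) - 1)
        if mk == 0:
            x = GX[-1]                     # first step is plain Picard
            continue
        f = GX[-1] - X[-1]
        # columns: differences of successive residuals and of G-values
        dF = np.column_stack([(GX[-j] - X[-j]) - (GX[-j - 1] - X[-j - 1])
                              for j in range(1, mk + 1)])
        dG = np.column_stack([GX[-j] - GX[-j - 1] for j in range(1, mk + 1)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = GX[-1] - dG @ gamma
    return x
```

On G(x) = cos(x), whose Picard iteration converges only linearly, the accelerated iterate reaches the fixed point far sooner for the same number of G evaluations, mirroring the robustness/speed advantage the abstract describes.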
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problems just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
NASA Astrophysics Data System (ADS)
Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin
2016-09-01
Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the removal material amount during each iterative process could help to improve material removal accuracy. Removal function correcting principle can effectively compensate removal function deviation between actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with a long machining time, so a small amount of removal material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which will also generate machining errors and suppress the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur planar is performed, which shows that, in similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.
Survey on the Performance of Source Localization Algorithms.
Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G
2017-11-18
The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
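The iterative hyperbolic least-squares idea can be sketched as Gauss-Newton on the TDoA range-difference residuals. This is a minimal illustration; the sensor layout, function names, and iteration count are assumptions for the sketch, not the configurations compared in the paper.

```python
import numpy as np

def tdoa_gauss_newton(sensors, tdoa, x0, iters=20):
    """Gauss-Newton solution of the hyperbolic (TDoA) equations.

    sensors : (n, 2) array of sensor positions
    tdoa    : (n-1,) measured range differences relative to sensor 0
    x0      : initial guess for the emitter position"""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)      # distances to sensors
        r = (d[1:] - d[0]) - tdoa                    # hyperbolic residuals
        # Jacobian of the residuals w.r.t. the emitter position
        J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x
```

With noise-free range differences and a reasonable starting point, the iteration recovers the emitter position; with sampling noise, as the paper analyzes, the minimizer instead balances the residuals in the least-squares sense.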
Cognitive search model and a new query paradigm
NASA Astrophysics Data System (ADS)
Xu, Zhonghui
2001-06-01
This paper proposes a cognitive model in which people begin to search pictures by using semantic content and find the right picture by judging whether its visual content is a proper visualization of the semantics desired. It is essential that human search is not just a process of matching computation on visual features but rather a process of visualization of the known semantic content. For people to search electronic images the way they manually do in the model, we suggest that querying be a semantic-driven process like design. A query-by-design paradigm is proposed in the sense that what you design is what you find. Unlike query-by-example, query-by-design allows users to specify the semantic content through an iterative and incremental interaction process so that a retrieval can start with association and identification of the given semantic content and get refined as further visual cues become available. An experimental image retrieval system, Kuafu, has been under development using the query-by-design paradigm, and an iconic language is adopted.
Polarimetric Thomson scattering for high Te fusion plasmas
NASA Astrophysics Data System (ADS)
Giudicotti, L.
2017-11-01
Polarimetric Thomson scattering (TS) is a technique for the analysis of TS spectra in which the electron temperature Te is determined from the depolarization of the scattered radiation, a relativistic effect noticeable only in very hot (Te >= 10 keV) fusion plasmas. It has been proposed as a complementary technique to supplement the conventional spectral analysis in the ITER CPTS (Core Plasma Thomson Scattering) system for measurements in high-Te, low-ne plasma conditions. In this paper we review the characteristics of the depolarized TS radiation with special emphasis on the conditions of the ITER CPTS system, and we describe a possible implementation of this diagnostic method suitable to significantly improve the performance of the conventional TS spectral analysis in the high-Te range.
Discrete Fourier transform (DFT) analysis for applications using iterative transform methods
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2012-01-01
According to various embodiments, a method is provided for determining aberration data for an optical system. The method comprises collecting a data signal, and generating a pre-transformation algorithm. The data is pre-transformed by multiplying the data with the pre-transformation algorithm. A discrete Fourier transform of the pre-transformed data is performed in an iterative loop. The method further comprises back-transforming the data to generate aberration data.
Numerical Grid Generation and Potential Airfoil Analysis and Design
1988-01-01
Jacobi, Gauss-Seidel, SOR and ADI iterative methods are discussed. In the Jacobi method, each new value of a function is computed entirely from the old values of the preceding iteration, adding the inhomogeneous (boundary condition) term. In the Gauss-Seidel method, values already computed during the current sweep are used as soon as they become available. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. Successive over-relaxation (SOR) accelerates the Gauss-Seidel iteration by over-relaxing each update.
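The Jacobi/Gauss-Seidel contrast described above can be sketched for a diagonally dominant system (illustrative NumPy code, not from the report): Jacobi builds every new entry from the previous iterate, while Gauss-Seidel reuses entries already updated in the current sweep.

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Jacobi: every new entry is computed from the previous iterate."""
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)
    R = A - np.diagflat(D)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel: entries are updated in place, so each row uses the
    values already computed during the current sweep."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x
```

For a diagonally dominant A both iterations converge to the solution of Ax = b, matching the sufficient condition quoted in the text.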
Heaton, Brenda; Gebel, Christina; Crawford, Andrew; Barker, Judith C; Henshaw, Michelle; Garcia, Raul I; Riedy, Christine; Wimsatt, Maureen A
2018-05-24
We conducted a qualitative analysis to evaluate the acceptability of using storytelling as a way to communicate oral health messages regarding early childhood caries (ECC) prevention in the American Indian and Alaska Native (AIAN) population. A traditional story was developed and pilot tested among AIAN mothers residing in 3 tribal locations in northern California. Evaluations of the story content and acceptability followed a multistep process consisting of initial feedback from 4 key informants, a focus group of 7 AIAN mothers, and feedback from the Community Advisory Board. Upon story approval, 9 additional focus group sessions (N = 53 participants) were held with AIAN mothers following an oral telling of the story. Participants reported that the story was culturally appropriate and used relatable characters. Messages about oral health were considered to be valuable. Concerns arose about the oral-only delivery of the story, story content, length, story messages that conflicted with normative community values, and the intent to target audiences. Feedback by focus group participants raised some doubts about the relevance and frequency of storytelling in AIAN communities today. AIAN communities value the need for oral health messaging for community members. However, the acceptability of storytelling as a method for the messaging raises concerns, because the influence of modern technology and digital communications may weaken the acceptability of the oral tradition. Careful attention must be made to the delivery mode, content, and targeting with continual iterative feedback from community members to make these messages engaging, appropriate, relatable, and inclusive.
Near-Infrared Spectroscopy Assay of Key Quality-Indicative Ingredients of Tongkang Tablets.
Pan, Wenjie; Ma, Jinfang; Xiao, Xue; Huang, Zhengwei; Zhou, Huanbin; Ge, Fahuan; Pan, Xin
2017-04-01
The objective of this paper is to develop an easy and fast near-infrared spectroscopy (NIRS) assay for the four key quality-indicative active ingredients of Tongkang tablets by comparing the true content of the active ingredients measured by high performance liquid chromatography (HPLC) and the NIRS data. The HPLC values for the active ingredients content of Cimicifuga glycoside, calycosin glucoside, 5-O-methylvisamminol and hesperidin in Tongkang tablets were set as reference values. The NIRS raw spectra of Tongkang tablets were processed using first-order convolution method. The iterative optimization method was chosen to optimize the band for Cimicifuga glycoside and 5-O-methylvisamminol, and correlation coefficient method was used to determine the optimal band of calycosin glucoside and hesperidin. A near-infrared quantitative calibration model was established for each quality-indicative ingredient by partial least-squares method on the basis of the contents detected by HPLC and the obtained NIRS spectra. The correlation coefficient (R²) values of the four models of Cimicifuga glycoside, calycosin glucoside, 5-O-methylvisamminol and hesperidin were 0.9025, 0.8582, 0.9250, and 0.9325, respectively. It was demonstrated that the accuracy of the validation values was approximately 90% by comparison of the predicted results from NIRS models and the HPLC true values, which suggested that the NIRS assay was successfully established and validated. It was expected that the quantitative analysis models of the four indicative ingredients could be used to rapidly perform quality control in industrial production of Tongkang tablets.
NASA Astrophysics Data System (ADS)
Sato, S.; Takatsu, H.; Maki, K.; Yamada, K.; Mori, S.; Iida, H.; Santoro, R. T.
1997-09-01
Gamma-ray exposure dose rates at the ITER site boundary were estimated for the cases of removal of a failed activated Toroidal Field (TF) coil from the torus and removal of a failed activated TF coil together with a sector of the activated Vacuum Vessel (VV). Skyshine analyses were performed using the two-dimensional SN radiation transport code, DOT3.5. The exposure gamma-ray dose rates on the ground at the site boundary (presently assumed to be 1 km from the ITER building), were calculated to be 1.1 and 84 μSv/year for removal of the TF coil without and with a VV sector, respectively. The dose rate level for the latter case is close to the tentative radiation limit of 100 μSv/year, so an additional ~14 cm of concrete is required in the ITER building roof to satisfy the criterion for a safety factor of ten for the site boundary dose rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prindle, N.H.; Mendenhall, F.T.; Trauth, K.
1996-05-01
The Systems Prioritization Method (SPM) is a decision-aiding tool developed by Sandia National Laboratories (SNL). SPM provides an analytical basis for supporting programmatic decisions for the Waste Isolation Pilot Plant (WIPP) to meet selected portions of the applicable US EPA long-term performance regulations. The first iteration of SPM (SPM-1), the prototype for SPM, was completed in 1994. It served as a benchmark and a test bed for developing the tools needed for the second iteration of SPM (SPM-2). SPM-2, completed in 1995, is intended for programmatic decision making. This is Volume II of the three-volume final report of the second iteration of the SPM. It describes the technical input and model implementation for SPM-2, and presents the SPM-2 technical baseline and the activities, activity outcomes, outcome probabilities, and the input parameters for SPM-2 analysis.
Thermal analysis of the in-vessel components of the ITER plasma-position reflectometry.
Quental, P B; Policarpo, H; Luís, R; Varela, P
2016-11-01
The ITER plasma position reflectometry system measures the edge electron density profile of the plasma, providing real-time supplementary contribution to the magnetic measurements of the plasma-wall distance. Some of the system components will be in direct sight of the plasma and therefore subject to plasma and stray radiation, which may cause excessive temperatures and stresses. In this work, thermal finite element analysis of the antenna and adjacent waveguides is conducted with ANSYS V17 (ANSYS® Academic Research, Release 17.0, 2016). Results allow the identification of critical temperature points, and solutions are proposed to improve the thermal behavior of the system.
Thermal analysis of the in-vessel components of the ITER plasma-position reflectometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quental, P. B., E-mail: pquental@ipfn.tecnico.ulisboa.pt; Policarpo, H.; Luís, R.
The ITER plasma position reflectometry system measures the edge electron density profile of the plasma, providing real-time supplementary contribution to the magnetic measurements of the plasma-wall distance. Some of the system components will be in direct sight of the plasma and therefore subject to plasma and stray radiation, which may cause excessive temperatures and stresses. In this work, thermal finite element analysis of the antenna and adjacent waveguides is conducted with ANSYS V17 (ANSYS® Academic Research, Release 17.0, 2016). Results allow the identification of critical temperature points, and solutions are proposed to improve the thermal behavior of the system.
Deductive Evaluation: Formal Code Analysis With Low User Burden
NASA Technical Reports Server (NTRS)
Di Vito, Ben L.
2016-01-01
We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.
Conceptual design of ACB-CP for ITER cryogenic system
NASA Astrophysics Data System (ADS)
Jiang, Yongcheng; Xiong, Lianyou; Peng, Nan; Tang, Jiancheng; Liu, Liqiang; Zhang, Liang
2012-06-01
ACB-CP (Auxiliary Cold Box for Cryopumps) is used to supply the cryopump system with the necessary cryogen in the ITER (International Thermonuclear Experimental Reactor) cryogenic distribution system. The conceptual design of the ACB-CP comprises thermo-hydraulic analysis, 3D structure design and strength checking. Through the thermo-hydraulic analysis, the main specifications of process valves, pressure safety valves, pipes and heat exchangers can be determined. During the 3D structure design process, vacuum requirements, adiabatic requirements, assembly constraints and maintenance requirements have been considered in arranging the pipes, valves and other components. Strength checking has been performed to verify that the 3D design meets the strength requirements for the ACB-CP.
Sorting Five Human Tumor Types Reveals Specific Biomarkers and Background Classification Genes.
Roche, Kimberly E; Weinstein, Marvin; Dunwoodie, Leland J; Poehlman, William L; Feltus, Frank A
2018-05-25
We applied two state-of-the-art, knowledge independent data-mining methods - Dynamic Quantum Clustering (DQC) and t-Distributed Stochastic Neighbor Embedding (t-SNE) - to data from The Cancer Genome Atlas (TCGA). We showed that the RNA expression patterns for a mixture of 2,016 samples from five tumor types can sort the tumors into groups enriched for relevant annotations including tumor type, gender, tumor stage, and ethnicity. DQC feature selection analysis discovered 48 core biomarker transcripts that clustered tumors by tumor type. When these transcripts were removed, the geometry of tumor relationships changed, but it was still possible to classify the tumors using the RNA expression profiles of the remaining transcripts. We continued to remove the top biomarkers for several iterations and performed cluster analysis. Even though the most informative transcripts were removed from the cluster analysis, the sorting ability of remaining transcripts remained strong after each iteration. Further, in some iterations we detected a repeating pattern of biological function that wasn't detectable with the core biomarker transcripts present. This suggests the existence of a "background classification" potential in which the pattern of gene expression after continued removal of "biomarker" transcripts could still classify tumors in agreement with the tumor type.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
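The benefit of the incremental (delta or correction) form can be illustrated with classic iterative refinement: the correction dx is solved from the residual of the full system using a cheap approximate solver, here simply a single-precision solve. This is a generic sketch of the correction-form idea, not the authors' spatially-split approximate-factorization algorithm.

```python
import numpy as np

def incremental_solve(A, b, iters=10):
    """Correction (delta) form: repeatedly solve for the update dx from
    the residual of the full system, using a cheap approximate solver
    (a single-precision copy of A stands in for the approximate
    factorization)."""
    A32 = A.astype(np.float32)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        r = b - A @ x                          # residual in full precision
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x = x + dx.astype(np.float64)
    return x
```

Because only the residual is fed to the approximate solver, errors in the inner solve are corrected on later sweeps, and the iterate converges to the full-precision solution; solving the standard (non-incremental) form with the same approximate operator would stall at single-precision accuracy.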
A Study of Morrison's Iterative Noise Removal Method. Final Report M. S. Thesis
NASA Technical Reports Server (NTRS)
Ioup, G. E.; Wright, K. A. R.
1985-01-01
Morrison's iterative noise removal method is studied by characterizing its effect upon systems of differing noise level and response function. The nature of data acquired from a linear shift invariant instrument is discussed so as to define the relationship between the input signal, the instrument response function, and the output signal. Fourier analysis is introduced, along with several pertinent theorems, as a tool to more thorough understanding of the nature of and difficulties with deconvolution. In relation to such difficulties the necessity of a noise removal process is discussed. Morrison's iterative noise removal method and the restrictions upon its application are developed. The nature of permissible response functions is discussed, as is the choice of the response functions used.
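Morrison's exact update is not reproduced in the abstract; as a hedged illustration of the same family of response-function-based iterations, the classic Van Cittert scheme f ← f + (g − h∗f) is sketched below. It converges for Fourier components where the response's transfer function H satisfies |1 − H| < 1, which connects to the abstract's point that only certain response functions are permissible.

```python
import numpy as np

def van_cittert(g, h, iters=50):
    """Van Cittert iteration f <- f + (g - h*f), with circular
    convolution done via the FFT.

    g : measured (blurred) signal
    h : instrument response, sampled with its peak at index 0."""
    H = np.fft.rfft(h)
    f = g.copy()
    for _ in range(iters):
        conv = np.fft.irfft(np.fft.rfft(f) * H, n=len(f))
        f = f + (g - conv)
    return f
```

Each Fourier component of the error decays as (1 − H)^k, so components the response passes strongly are recovered quickly while heavily attenuated ones converge slowly, the same spectral behavior that makes a noise-removal step necessary before deconvolution.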
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazon, D., E-mail: Didier.Mazon@cea.fr; Jardin, A.; Liegeard, C.
2016-11-15
Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plugmore » geometry.« less
NASA Technical Reports Server (NTRS)
Winget, J. M.; Hughes, T. J. R.
1985-01-01
The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of two types of nonlinearities considered is related to a material temperature dependence which is frequently needed to accurately model behavior over the range of temperature of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
Coherent Microwave Scattering Model of Marsh Grass
NASA Astrophysics Data System (ADS)
Duan, Xueyang; Jones, Cathleen E.
2017-12-01
In this work, we developed an electromagnetic scattering model to analyze radar scattering from tall-grass-covered lands such as wetlands and marshes. The model adopts the generalized iterative extended boundary condition method (GIEBCM) algorithm, previously developed for buried cylindrical media such as vegetation roots, to simulate the scattering from the grass layer. The major challenge of applying GIEBCM to tall grass is the extremely time-consuming iteration among the large number of short subcylinders building up the grass. To overcome this issue, we extended the GIEBCM to multilevel GIEBCM, or M-GIEBCM, in which we first use GIEBCM to calculate a T matrix (transition matrix) database of "straws" with various lengths, thicknesses, orientations, curvatures, and dielectric properties; we then construct the grass with a group of straws from the database and apply GIEBCM again to calculate the T matrix of the overall grass scene. The grass T matrix is transferred to S matrix (scattering matrix) and combined with the ground S matrix, which is computed using the stabilized extended boundary condition method, to obtain the total scattering. In this article, we will demonstrate the capability of the model by simulating scattering from scenes with different grass densities, different grass structures, different grass water contents, and different ground moisture contents. This model will help with radar experiment design and image interpretation for marshland and wetland observations.
Rueda, Oscar M; Diaz-Uriarte, Ramon
2007-10-16
Yu et al. (BMC Bioinformatics 2007, 8:145) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogeneous Hidden Markov Model approach. Our approach uses Markov chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We reran the analysis in Yu et al. using appropriate settings for both the Markov chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we added a new analysis targeted specifically at the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov chain Monte Carlo methods require a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identifying edges).
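The iteration-count point can be illustrated with a minimal sketch, assuming nothing about the authors' HMM: a Metropolis sampler for a standard normal target, where a severely short chain gives unreliable summaries while a long chain recovers the target variance.

```python
import numpy as np

# Toy Metropolis sampler for a N(0,1) target. Chain lengths and the
# proposal step size are arbitrary; the point is only that summaries
# from a too-short MCMC run are not representative.

def metropolis(n_iter, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, chain = 0.0, []
    for _ in range(n_iter):
        prop = x + rng.normal(0.0, step)
        # accept with probability min(1, pi(prop)/pi(x)) for pi = N(0,1)
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        chain.append(x)
    return np.array(chain)

short = metropolis(200)              # severely insufficient run
long_ = metropolis(100_000)          # adequate run
print(round(long_.var(), 2))         # close to the true variance 1.0
```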
GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.
Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua
2018-06-19
Multiple-marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra-high dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named GWASinlps (iterative nonlocal prior based selection for GWAS), that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy that considers hierarchical screening, based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide an empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as the phenotype. An R package implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.
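A rough sketch of the generic screen-and-select idea (not the actual GWASinlps algorithm, which uses nonlocal priors and hierarchical screening; all dimensions, effect sizes, and thresholds below are invented): screen predictors by marginal association with the current residual, select the strongest, refit, and repeat.

```python
import numpy as np

# Toy iterative screen-and-select on simulated data: 1000 predictors
# standing in for SNPs, 3 of which carry true effects. Each iteration
# screens by |correlation with the residual| and selects the top hit.

rng = np.random.default_rng(1)
n, p = 200, 1000
X = rng.standard_normal((n, p))              # genotype stand-in
beta = np.zeros(p); beta[[10, 50, 300]] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(n)

selected, resid = [], y.copy()
for _ in range(5):                           # fixed iteration budget
    score = np.abs(X.T @ resid) / n          # marginal association screen
    score[selected] = 0.0                    # do not re-select
    best = int(np.argmax(score))
    if score[best] < 0.3:                    # stopping threshold (arbitrary)
        break
    selected.append(best)
    coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    resid = y - X[:, selected] @ coef        # update residual
print(sorted(selected))                      # causal indices come out first
```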
NASA Astrophysics Data System (ADS)
Goossens, Bart; Aelterman, Jan; Luong, Hiêp; Pižurica, Aleksandra; Philips, Wilfried
2011-09-01
The shearlet transform is a recent sibling in the family of geometric image representations that provides a traditional multiresolution analysis combined with a multidirectional analysis. In this paper, we present a fast DFT-based analysis and synthesis scheme for the 2D discrete shearlet transform. Our scheme conforms to the continuous shearlet theory to a high extent, provides perfect numerical reconstruction (up to floating point rounding errors) in a non-iterative scheme, and is highly suitable for parallel implementation (e.g., FPGA, GPU). We show that our discrete shearlet representation is also a tight frame and that the redundancy factor of the transform is around 2.6, independent of the number of analysis directions. Experimental denoising results indicate that the transform performs as well as or better than several related multiresolution transforms, while having a significantly lower redundancy factor.
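The tight-frame property is what makes non-iterative perfect reconstruction possible. A hedged 1D toy of the mechanism, with invented filters (the actual transform uses many directional 2D shearlet filters):

```python
import numpy as np

# Two DFT-domain filters whose squared magnitudes sum to one form a
# tight frame: synthesizing with the conjugate filters reconstructs
# the signal exactly, with no iteration.

N = 256
w = np.linspace(0.0, np.pi / 2, N)
H = [np.cos(w), np.sin(w)]                 # |H0|^2 + |H1|^2 = 1 exactly

rng = np.random.default_rng(0)
x = rng.standard_normal(N)

coeffs = [np.fft.ifft(np.fft.fft(x) * h) for h in H]        # analysis
x_rec = np.fft.ifft(sum(np.fft.fft(c) * np.conj(h)          # synthesis
                        for c, h in zip(coeffs, H)))
print(np.allclose(x_rec, x))               # perfect reconstruction
```

The redundancy factor of this toy pair is 2 (two full-size subbands); the paper's 2D construction achieves about 2.6 regardless of the number of directions.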
NASA Astrophysics Data System (ADS)
Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.
2003-06-01
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands are guaranteed to exist. Magnetic islands break the smooth topology of nested flux surfaces, and chaotic field lines result when magnetic islands overlap. An analogous situation occurs in 1½-dimensional Hamiltonian systems, where resonant perturbations cause singularities in the transformation to action-angle coordinates and destroy integrability. The suppression of magnetic islands is a critical issue for stellarator design, particularly for small-aspect-ratio devices. Techniques for 'healing' vacuum fields and fixed-boundary plasma equilibria have been developed, but what is ultimately required is a procedure for designing stellarators such that the self-consistent plasma equilibrium currents and the coil currents combine to produce an integrable magnetic field, and such a procedure is presented here for the first time. Magnetic islands in free-boundary, full-pressure, full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [A. H. Reiman and H. S. Greenside, Comput. Phys. Commun. 43:157, 1986], which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment [G. H. Neilson et al., Phys. Plasmas 7:1911, 2000].
First Operation with the JET ITER-Like Wall
NASA Astrophysics Data System (ADS)
Neu, Rudolf
2012-10-01
To consolidate ITER design choices and prepare for its operation, JET has implemented ITER's plasma facing materials, namely Be at the main wall and W in the divertor. In addition, protection systems, diagnostics, and the vertical stability control were upgraded, and the heating capability of the neutral beams was increased to over 30 MW. First results confirm the expected benefits and the limitations of all-metal plasma facing components (PFCs), but also yield understanding of operational issues directly relating to ITER. H retention is lower by at least a factor of 10 in all operational scenarios compared to that with C PFCs. The lower C content (~ factor of 10) has led to much lower radiation during the plasma burn-through phase, eliminating breakdown failures. Similarly, the intrinsic radiation observed during disruptions is very low, leading to high power loads and to a slow current quench. Massive gas injection using a D2/Ar mixture restores levels of radiation and vessel forces similar to those of mitigated disruptions with the C wall. Dedicated L-H transition experiments indicate a power threshold reduced by 30%, a distinct minimum density, and a pronounced shape dependence. The L-mode density limit was found to be up to 30% higher than for C, allowing stable detached divertor operation over a larger density range. Stable H-modes as well as the hybrid scenario could only be re-established when using gas puff levels of a few 10^21 e/s. On average the confinement is lower with the new PFCs, but nevertheless H factors around 1 (H-mode) and 1.2 (at βN ~ 3, hybrids) have been achieved with W concentrations well below the maximum acceptable level (<10^-5).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tresemer, K. R.
2015-07-01
ITER is an international project under construction in France that will demonstrate nuclear fusion at a power-plant-relevant scale. The Toroidal Interferometer and Polarimeter (TIP) diagnostic will be used to measure the plasma electron line density along 5 laser-beam chords. This line-averaged density measurement will be input to the ITER feedback-control system. The TIP is considered the primary diagnostic for these measurements, which are needed for basic ITER machine control. Therefore, system reliability and accuracy are critical elements of TIP's design. There are two major challenges to the reliability of the TIP system: first, the survivability and performance of in-vessel optics, and second, maintaining optical alignment over long optical paths and large vessel movements. Both of these issues depend greatly on minimizing the overall distortion due to neutron and gamma heating of the Corner Cube Retroreflectors (CCRs). These are small optical mirrors embedded in five first-wall locations around the vacuum vessel, corresponding to certain plasma tangency radii. During the development of the design and location of these CCRs, several iterations of neutronics analyses were performed to determine and minimize the total distortion due to nuclear heating of the CCRs. The CCR corresponding to TIP Channel 2 was chosen for analysis as a good middle-of-the-road case, being an average distance from the plasma (of the five channels) and having moderate neutron shielding from its blanket shield housing. Results show that Channel 2 meets the requirements of the TIP diagnostic, but only barely. These results suggest other CCRs might be at risk of exceeding thermal deformation limits due to nuclear heating.
NASA Astrophysics Data System (ADS)
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a prior period supplied by the user. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after each iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or a choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
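The autocorrelation-based period estimate can be sketched as follows (a hedged toy, not the authors' implementation; the envelope here is a crude rectify-and-smooth approximation and the signal is simulated with a known fault period):

```python
import numpy as np

# Estimate the fault period from the autocorrelation of the envelope
# signal instead of requiring a user-supplied prior period. The
# autocorrelation of a periodic impulse train peaks at the period.

period = 100                                # true fault period [samples]
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(5_000)
x[::period] += 1.0                          # periodic fault impulses

env = np.convolve(np.abs(x), np.ones(5) / 5, mode="same")   # crude envelope
env -= env.mean()
ac = np.correlate(env, env, mode="full")[len(env) - 1:]     # autocorrelation
lag = 20 + int(np.argmax(ac[20:len(env) // 2]))             # skip the lag-0 peak
print(lag)                                  # estimated iterative period, ~100
```

In IMCKD this estimate seeds the deconvolution and is refreshed after each iterative step, so the iterative period drifts toward the true fault period.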
Chang, Corissa P; Barker, Judith C; Hoeft, Kristin S; Guerra, Claudia; Chung, Lisa H; Burke, Nancy J
2018-01-01
This study's purpose was to explore how the content and format of children's oral health instruction in the dental clinic are perceived by parents and might affect parents' knowledge and behaviors. Thirty low-income Mexican immigrant parents of children age five years and under were recruited from dental clinics in 2015 to 2016. In-person qualitative interviews in Spanish about their children's and their own experiences of dental care and home oral hygiene practices were conducted, digitally recorded, translated, and transcribed. Data analysis involved iteratively reading the text data and developing and refining codes to find common themes. Twenty-five of the 30 parents recalled receiving oral hygiene instruction, and 18 recalled receiving nutrition instruction; these parents were included in the analyses. The format and effectiveness of instruction varied. More engaging educational approaches were recalled and described in more detail than less engaging ones. As a result of oral hygiene and nutritional instruction, most parents reported changing their children's home oral hygiene behaviors; half aimed to reduce purchasing sugary foods and drinks. Most parents recalled receiving oral hygiene and nutrition instruction as part of their child's dental visit and reported incorporating the instruction and recommendations they received into their children's home routine.
Robertson, Eden G; Wakefield, Claire E; Cohn, Richard J; O'Brien, Tracey; Ziegler, David S; Fardell, Joanna E
2018-05-04
The internet is increasingly being used to disseminate health information. Given the complexity of pediatric oncology clinical trials, we developed Delta, a Web-based decision aid to support families deciding whether or not to enroll their child with cancer in a clinical trial. This paper details the Agile development process and the user testing results of Delta. Development was iterative and involved 5 main stages: a requirements analysis, planning, design, development, and user testing. For user testing, we conducted 13 eye-tracking analyses and think-aloud interviews with health care professionals (n=6) and parents (n=7). Results suggested that there was minimal rereading of content and a high level of engagement with content. However, there were some navigational problems. Participants reported high acceptability (12/13) and high usability of the website (8/13). Delta demonstrates the utility of Agile in the development of a Web-based decision aid for health purposes. Our study provides a clear step-by-step guide to developing a Web-based psychosocial tool within the health setting. ©Eden G Robertson, Claire E Wakefield, Richard J Cohn, Tracey O'Brien, David S Ziegler, Joanna E Fardell. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 04.05.2018.
Inferring the Presence of Reverse Proxies Through Timing Analysis
2015-06-01
Figure 3.2: The three different instances of timing measurement configurations. Figure 3.3: Permutation of a web request iteration… Their data showed that they could detect at least 6 bits of entropy between unlike devices and that it was enough to determine that they are in fact… …depending on the permutation being executed so that every iteration was conducted under the same distance.
Nonlinear Analysis of Cavitating Propellers in Nonuniform Flow
1992-10-16
…Helmholtz more than a century ago [4]. The method was later extended to treat curved bodies at zero cavitation number by Levi-Civita [4]… [63] M.P. Tulin. Steady two-dimensional cavity flows about slender bodies. Technical Report 834, DTMB, May 1953… …the iterative solution for two-dimensional flows is remarkably fast and the accuracy of the first-iteration solution is sufficient for a wide range of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gingold, E; Dave, J
2014-06-01
Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50%, and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4), and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast-to-noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.
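The CNR figure of merit used above can be sketched as follows, with synthetic pixel values standing in for the phantom ROI measurements:

```python
import numpy as np

# CNR = |mean(target ROI) - mean(background ROI)| / std(background ROI).
# The pixel values below are synthetic stand-ins for a 6 HU contrast
# target; real measurements come from phantom ROIs.

rng = np.random.default_rng(0)
background = rng.normal(40.0, 5.0, 1_000)       # HU values, noise sd 5
target = rng.normal(46.0, 5.0, 1_000)           # 6 HU contrast target

cnr = abs(target.mean() - background.mean()) / background.std(ddof=1)
print(round(cnr, 1))                             # roughly 6/5 = 1.2
```

Iterative reconstruction mainly shrinks the noise term in the denominator: halving the background standard deviation at the same contrast doubles the CNR, which is the mechanism behind the 145-367% improvements reported above.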
Hybrid cloud and cluster computing paradigms for life science applications
2010-01-01
Background Clouds and MapReduce have shown themselves to be broadly useful approaches to scientific computing, especially for parallel, data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce has poor performance on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open-source Iterative MapReduce system, Twister. Results Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open-source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. Conclusions The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. Methods We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce, and Twister in these different environments. PMID:21210982
Progress of IRSN R&D on ITER Safety Assessment
NASA Astrophysics Data System (ADS)
Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.
2012-08-01
The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the French "Autorité de Sûreté Nucléaire", is analysing the safety of the ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D programme in 2007 to support this safety assessment process. Priority has been given to four technical issues, and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for the risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of the DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixture explosions; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influential factors in detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks were used in 2011 for the analysis of the ITER safety file. In the near future, this global R&D programme may be reoriented to account for the feedback from the latter analysis or for new knowledge.
Hybrid cloud and cluster computing paradigms for life science applications.
Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey
2010-12-21
Clouds and MapReduce have shown themselves to be broadly useful approaches to scientific computing, especially for parallel, data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce has poor performance on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open-source Iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open-source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce, and Twister in these different environments.
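The iterative structure that strains plain MapReduce can be seen in a toy k-means loop, phrased as repeated map (assign) and reduce (re-center) passes over the same data. This is an illustration of the pattern only, not Twister's API; the data and cluster count are invented.

```python
import numpy as np

# k-means requires the same map/reduce pair to run repeatedly over the
# full dataset. Plain MapReduce re-reads the data every iteration;
# an iterative runtime like Twister keeps it cached between iterations.

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.5, (50, 2)),    # cluster near (0, 0)
                    rng.normal(5, 0.5, (50, 2))])   # cluster near (5, 5)
centers = points[[0, 50]].copy()                    # one seed per cluster

for _ in range(10):                      # the iterative structure
    # "map": assign each point to its nearest center
    labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
    # "reduce": recompute each center from its assigned points
    centers = np.array([points[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(centers[:, 0]).round(1))   # near the true means 0 and 5
```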
Convergence of Defect-Correction and Multigrid Iterations for Inviscid Flows
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Convergence of multigrid and defect-correction iterations is comprehensively studied within different incompressible and compressible inviscid regimes on high-density grids. Good smoothing properties of the defect-correction relaxation have been shown using both a modified Fourier analysis and a more general idealized-coarse-grid analysis. Single-grid defect correction alone exhibits some slowly converging iterations on grids of medium density. The convergence is especially slow for near-sonic flows and for very low compressible Mach numbers. Additionally, the fast asymptotic convergence seen on medium-density grids deteriorates on high-density grids, where certain downstream-boundary modes are very slowly damped. The multigrid scheme accelerates convergence of the slow defect-correction iterations to the extent determined by the coarse-grid correction. The two-level asymptotic convergence rates are stable and significantly below one in most of the regimes, but slow convergence is noted for near-sonic and very-low-Mach compressible flows. The multigrid solver has been applied to the NACA 0012 airfoil and to different flow regimes, such as near-tangency and stagnation. Certain convergence difficulties have been encountered within stagnation regions. Nonetheless, for the airfoil flow, with a sharp trailing edge, residuals converged quickly for a subcritical flow on a sequence of grids. For supercritical flow, residuals converged more slowly on some intermediate grids than on the finest grid or the two coarsest grids.
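The defect-correction iteration itself can be sketched generically: the residual of the target operator A is corrected by solving with a simpler driver operator M. The matrices below are small synthetic stand-ins, not the flow discretizations studied in the paper.

```python
import numpy as np

# Defect correction: u <- u + M^{-1}(f - A u), where M is cheap to
# solve (e.g. a lower-order discretization) and A is the target
# (e.g. higher-order) operator. Convergence is governed by the
# spectral radius of I - M^{-1} A.

rng = np.random.default_rng(0)
n = 50
M = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))     # easy-to-solve driver
A = M + 0.3 * rng.standard_normal((n, n)) / n  # target operator, close to M
f = rng.standard_normal(n)

u = np.zeros(n)
for _ in range(30):
    u = u + np.linalg.solve(M, f - A @ u)      # defect-correction step
print(float(np.linalg.norm(f - A @ u)))        # residual driven to ~0
```

When M is a poor approximation of A in some modes (as for the downstream-boundary modes mentioned above), those modes converge slowly, which is exactly what coarse-grid correction in multigrid is meant to accelerate.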
NASA Astrophysics Data System (ADS)
Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.
2017-12-01
The University of California, San Diego has developed an iterative remote-sensing, time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely sensed observations (in this case IPS observations) and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we show examples of this analysis using the ENLIL 3-D MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly known 3-D MHD variables (i.e., density, temperature) and parameters (gamma) by fitting heliospheric remotely sensed data between the region near the solar surface and in-situ measurements near Earth.
Design Performance of Front Steering-Type Electron Cyclotron Launcher for ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, K.; Imai, T.; Kobayashi, N.
2005-01-15
The performance of a front steering (FS)-type electron cyclotron launcher designed for the International Thermonuclear Experimental Reactor (ITER) is evaluated with a thermal, electromagnetic, and nuclear analysis of the components; a mechanical test of a spiral tube for the steering mirror; and a rotational test of bearings. The launcher consists of a front shield and a launcher plug in which three movable optic mirrors to steer the incident multimegawatt radio-frequency beam power, waveguide components, nuclear shields, and vacuum windows are installed. The windows are located behind a closure plate to isolate the transmission lines from the activated environment (the vacuum vessel). The waveguide lines of the launcher are doglegged to reduce the direct neutron streaming toward the vacuum windows and other components. The maximum stresses on the critical components, such as the steering mirror, its cooling tube, and the front shield, are less than their allowable stresses. It was also identified that the stress on the launcher resulting from the electromagnetic force caused by a plasma disruption was slightly larger than the criterion, and a modification of the launcher plug structure was necessary. The nuclear analysis result shows that the neutron shield capability of the launcher satisfies the shield criteria of ITER. It is concluded that the design of the FS launcher is generally suitable for application to ITER.
NASA Astrophysics Data System (ADS)
He, Jianbin; Yu, Simin; Cai, Jianping
2016-12-01
The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate calculations of the Lyapunov exponents are obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other causes, the results can overflow numerically, become unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation result will appear as an error message of NaN or Inf; (2) if the error message of NaN or Inf does not appear, then with increasing iterations all Lyapunov exponents will get close to the largest Lyapunov exponent, which leads to inaccurate calculation results; (3) from the viewpoint of numerical calculation, obviously, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms, via QR orthogonal decomposition and SVD orthogonal decomposition approaches, so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
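The QR-based approach can be sketched on the Hénon map, a standard test case (not necessarily one of the paper's examples). Instead of multiplying Jacobians directly, which overflows or underflows and collapses all exponents onto the largest one, the product is re-orthogonalized with a QR decomposition at every step and the logs of the diagonal of R are accumulated.

```python
import numpy as np

# QR-based Lyapunov exponents for the Henon map (a = 1.4, b = 0.3).
# Re-orthogonalizing at each step keeps the tangent vectors separated,
# avoiding both overflow and the collapse onto the largest exponent.

a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)
logs = np.zeros(2)
n_iter = 100_000

for _ in range(200):                       # discard the transient
    x, y = 1.0 - a * x * x + y, b * x

for _ in range(n_iter):
    J = np.array([[-2.0 * a * x, 1.0],     # Jacobian of the Henon map
                  [b, 0.0]])
    Q, R = np.linalg.qr(J @ Q)             # re-orthogonalize each step
    logs += np.log(np.abs(np.diag(R)))
    x, y = 1.0 - a * x * x + y, b * x

lyap = logs / n_iter
print(lyap.round(3))                       # largest exponent ~0.42
```

A useful sanity check: the exponents must sum to ln|det J| = ln(b) per step, since the map contracts areas by a factor b everywhere.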
Katsuki, Takeo; Mackey, Tim Ken; Cuomo, Raphael
2015-12-16
Youth and adolescent non-medical use of prescription medications (NUPM) has become a national epidemic. However, little is known about the association between promotion of NUPM behavior and access via the popular social media microblogging site Twitter, which is currently used by a third of all teens. In order to better assess NUPM behavior online, this study conducts surveillance and analysis of Twitter data to characterize the frequency of NUPM-related tweets and also identifies illegal access to drugs of abuse via online pharmacies. Tweets were collected over a 2-week period from April 1-14, 2015, by applying NUPM keyword filters for both generic/chemical and street names associated with drugs of abuse using the Twitter public streaming application programming interface. Tweets were then analyzed for relevance to NUPM and for whether they promoted illegal online access to prescription drugs, using a protocol of content coding and supervised machine learning. A total of 2,417,662 tweets were collected and analyzed for this study. Tweets filtered for generic drug names comprised 232,108 tweets, including 22,174 unique associated uniform resource locators (URLs), and tweets filtered for street names comprised 2,185,554 tweets (376,304 unique URLs). Applying an iterative process of manual content coding and supervised machine learning, 81.72% of the generic and 12.28% of the street NUPM datasets, respectively, were predicted as having content relevant to NUPM. By examining hyperlinks associated with NUPM-relevant content in the generic Twitter dataset, we discovered that 75.72% of the tweets with URLs included a hyperlink to an online marketing affiliate that directly linked to an illicit online pharmacy advertising the sale of Valium without a prescription. This study examined the association between Twitter content, NUPM behavior promotion, and online access to drugs using a broad set of prescription drug keywords.
Initial results are concerning, as our study found over 45,000 tweets that directly promoted NUPM by providing a URL that actively marketed the illegal online sale of prescription drugs of abuse. Additional research is needed to further establish the link between Twitter content and NUPM, as well as to help inform future technology-based tools, online health promotion activities, and public policy to combat NUPM online.
Cuomo, Raphael
2015-01-01
Background Youth and adolescent non-medical use of prescription medications (NUPM) has become a national epidemic. However, little is known about the association between promotion of NUPM behavior and access via the popular social media microblogging site, Twitter, which is currently used by a third of all teens. Objective In order to better assess NUPM behavior online, this study conducts surveillance and analysis of Twitter data to characterize the frequency of NUPM-related tweets and also identifies illegal access to drugs of abuse via online pharmacies. Methods Tweets were collected over a 2-week period from April 1-14, 2015, by applying NUPM keyword filters for both generic/chemical and street names associated with drugs of abuse using the Twitter public streaming application programming interface. Tweets were then analyzed for relevance to NUPM and whether they promoted illegal online access to prescription drugs using a protocol of content coding and supervised machine learning. Results A total of 2,417,662 tweets were collected and analyzed for this study. Tweets filtered for generic drug names comprised 232,108 tweets, including 22,174 unique associated uniform resource locators (URLs), and 2,185,554 tweets (376,304 unique URLs) filtered for street names. Applying an iterative process of manual content coding and supervised machine learning, 81.72% of the generic and 12.28% of the street NUPM datasets were predicted as having content relevant to NUPM, respectively. By examining hyperlinks associated with NUPM-relevant content for the generic Twitter dataset, we discovered that 75.72% of the tweets with URLs included a hyperlink to an online marketing affiliate that directly linked to an illicit online pharmacy advertising the sale of Valium without a prescription. Conclusions This study examined the association between Twitter content, NUPM behavior promotion, and online access to drugs using a broad set of prescription drug keywords. 
Initial results are concerning, as our study found over 45,000 tweets that directly promoted NUPM by providing a URL that actively marketed the illegal online sale of prescription drugs of abuse. Additional research is needed to further establish the link between Twitter content and NUPM, as well as to help inform future technology-based tools, online health promotion activities, and public policy to combat NUPM online. PMID:26677966
Design Features of the Neutral Particle Diagnostic System for the ITER Tokamak
NASA Astrophysics Data System (ADS)
Petrov, S. Ya.; Afanasyev, V. I.; Melnik, A. D.; Mironov, M. I.; Navolotsky, A. S.; Nesenevich, V. G.; Petrov, M. P.; Chernyshev, F. V.; Kedrov, I. V.; Kuzmin, E. G.; Lyublin, B. V.; Kozlovski, S. S.; Mokeev, A. N.
2017-12-01
The control of the deuterium-tritium (DT) fuel isotopic ratio has to ensure the best performance of the ITER thermonuclear fusion reactor. The diagnostic system described in this paper allows the measurement of this ratio by analyzing the hydrogen isotope fluxes (performing neutral particle analysis (NPA)). The development and supply of the NPA diagnostics for ITER was delegated to the Russian Federation. The diagnostics is being developed at the Ioffe Institute. The system consists of two analyzers, viz., LENPA (Low Energy Neutral Particle Analyzer) with 10-200 keV energy range and HENPA (High Energy Neutral Particle Analyzer) with 0.1-4.0 MeV energy range. Simultaneous operation of both analyzers in different energy ranges enables researchers to measure the DT fuel ratio both in the central burning plasma (thermonuclear burn zone) and at the edge. When developing the diagnostic complex, it was necessary to account for the impact of several factors: high levels of neutron and gamma radiation, the direct vacuum connection to the ITER vessel, implying high tritium containment, strict requirements on reliability of all units and mechanisms, and the limited space available for accommodation of the diagnostic hardware at the ITER tokamak. The paper describes the design of the diagnostic complex and the engineering solutions that make it possible to conduct measurements under tokamak reactor conditions. The proposed engineering solutions provide a safe—with respect to thermal and mechanical loads—common vacuum channel for hydrogen isotope atoms to pass to the analyzers; ensure efficient shielding of the analyzers from the ITER stray magnetic field (up to 1 kG); provide the remote control of the NPA diagnostic complex, in particular, connection/disconnection of the NPA vacuum beamline from the ITER vessel; meet the ITER radiation safety requirements; and ensure measurements of the fuel isotopic ratio under high levels of neutron and gamma radiation.
Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro
2012-10-15
There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and ICal/RFC 2445 style re-currence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an Ical/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator:: Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
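The Perl source itself is not reproduced in the abstract; as a rough, hypothetical illustration of the pattern the modules implement (a constructor describing a series of values, with subsequent queries returning values from that series), here is a minimal Python analogue of a DateTime-style iterator. The class name and the fixed day-stride stand in for the richer RFC 2445 recurrence descriptions the Perl module parses:

```python
from datetime import date, timedelta

class DateIterator:
    """Minimal iterator over a date series: a start, an end, and a step in
    days (a crude stand-in for an RFC 2445 recurrence description)."""
    def __init__(self, start, end, step_days=1):
        self.current = start
        self.end = end
        self.step = timedelta(days=step_days)

    def __iter__(self):
        return self

    def __next__(self):
        if self.current > self.end:
            raise StopIteration  # series is exhausted
        value = self.current
        self.current += self.step
        return value

# A weekly recurrence over two weeks yields three dates
days = list(DateIterator(date(2009, 1, 5), date(2009, 1, 19), step_days=7))
```

The nested-iteration idea of Iterator::Hash corresponds to composing several such iterators and enumerating all permutations of their values.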
Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity
NASA Technical Reports Server (NTRS)
Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan
1992-01-01
The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
Discrete Fourier Transform Analysis in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2009-01-01
Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations, and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations when compared to the usual DFT counterpart. The software decreases the run time of the DFT in certain applications such as phase retrieval that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations versus Nlog(N) operations for the equivalent Fast Fourier Transform (FFT) calculation.
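The geometric-manifold formulation is not spelled out in the abstract; for reference, the baseline O(N^2) DFT that such strategies aim to beat follows directly from its definition, and is the operation that phase-retrieval loops call repeatedly:

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform from the definition
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N): O(N^2) operations,
    versus O(N log N) for the FFT mentioned in the abstract."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A single complex exponential concentrates its energy in one DFT bin
N = 8
signal = [cmath.exp(2j * cmath.pi * 3 * n / N) for n in range(N)]
spectrum = dft(signal)
```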
Numerical solution of quadratic matrix equations for free vibration analysis of structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
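The combined Sturm sequence procedure of the paper is not given in the abstract; as a minimal sketch of the inverse-iteration ingredient alone, the following pure-Python example applies shifted inverse iteration to a 2x2 symmetric matrix (the explicit 2x2 inverse replaces the factorized solves a real implementation would use):

```python
def inverse_iteration_2x2(A, shift, steps=50):
    """Inverse iteration: repeatedly apply (A - shift*I)^-1 and normalize;
    the vector converges to the eigenvector whose eigenvalue lies closest
    to the shift.  A is a 2x2 symmetric matrix given as nested lists."""
    a = A[0][0] - shift
    b = A[0][1]
    c = A[1][0]
    d = A[1][1] - shift
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # explicit 2x2 inverse
    v = [1.0, 0.0]
    for _ in range(steps):
        v = [inv[0][0] * v[0] + inv[0][1] * v[1],
             inv[1][0] * v[0] + inv[1][1] * v[1]]
        norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
        v = [v[0] / norm, v[1] / norm]
    # Rayleigh quotient recovers the eigenvalue estimate
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    lam = Av[0] * v[0] + Av[1] * v[1]
    return lam, v

# Eigenvalues of [[2,1],[1,2]] are 1 and 3; a shift of 0.9 targets 1
lam, vec = inverse_iteration_2x2([[2.0, 1.0], [1.0, 2.0]], shift=0.9)
```

In the paper's setting the Sturm sequence count supplies the shifts that isolate "a few required eigenvalues" before inverse iteration refines each one.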
SPECIAL TOPIC: ITER L mode confinement database
NASA Astrophysics Data System (ADS)
Kaye, S. M.; Greenwald, M.; Stroth, U.; Kardaun, O.; Kus, A.; Schissel, D.; DeBoo, J.; Bracco, G.; Thomsen, K.; Cordey, J. G.; Miura, Y.; Matsuda, T.; Tamai, H.; Takizuda, T.; Hirayama, T.; Kikuchi, H.; Naito, O.; Chudnovskij, A.; Ongena, J.; Hoang, G.
1997-09-01
This special topic describes the contents of an L mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR and Tore Supra. The database consists of a total of 2938 entries, 1881 of which are in the L phase while 922 are ohmically heated only (ohmic). Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning and configuration. The special topic presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER. The L mode thermal confinement time scaling, determined from a subset of 1312 entries for which the τE,th are provided, is τE,th = 0.023 Ip^0.96 BT^0.03 R^1.83 (R/a)^0.06 κ^0.64 ne^0.40 Meff^0.20 P^-0.73, with τE,th in seconds, Ip in megamperes, BT in teslas, R in metres, R/a and κ dimensionless, and ne in units of 10^19 m^-3.
Modelling Feedback in Virtual Patients: An Iterative Approach.
Stathakarou, Natalia; Kononowicz, Andrzej A; Henningsohn, Lars; McGrath, Cormac
2018-01-01
Virtual Patients (VPs) offer learners the opportunity to practice clinical reasoning skills and have recently been integrated in Massive Open Online Courses (MOOCs). Feedback is a central part of a branched VP, allowing the learner to reflect on the consequences of their decisions and actions. However, there is insufficient guidance on how to design feedback models within VPs and especially in the context of their application in MOOCs. In this paper, we share our experiences from building a feedback model for a bladder cancer VP in a Urology MOOC, following an iterative process in three steps. Our results demonstrate how we can systematize the process of improving the quality of VP components by the application of known literature frameworks and extend them with a feedback module. We illustrate the design and re-design process and exemplify with content from our VP. Our results can act as a starting point for discussions on modelling feedback in VPs and invite future research on the topic.
Full waveform inversion of combined towed streamer and limited OBS seismic data: a theoretical study
NASA Astrophysics Data System (ADS)
Yang, Huachen; Zhang, Jianzhong
2018-06-01
In marine seismic oil exploration, full waveform inversion (FWI) of towed-streamer data is used to reconstruct velocity models. However, the FWI of towed-streamer data easily converges to a local minimum solution due to the lack of low-frequency content. In this paper, we propose a new FWI technique using towed-streamer data, its integrated data sets and limited OBS data. Both integrated towed-streamer seismic data and OBS data have low-frequency components. Therefore, at early iterations in the new FWI technique, the OBS data combined with the integrated towed-streamer data sets reconstruct an appropriate background model. The towed-streamer seismic data then play a major role in later iterations, improving the resolution of the model. The new FWI technique is tested on numerical examples. The results show that when starting models are not accurate enough, the models inverted using the new FWI technique are superior to those inverted using conventional FWI.
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1991-01-01
Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
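As a schematic illustration only (not the authors' inspector/executor code), the inspector phase's run-time wavefront computation might look like the following sketch, where each loop iteration is assigned to the earliest wavefront compatible with its data dependences:

```python
def compute_wavefronts(reads, writes):
    """Inspector-phase sketch: assign each loop iteration a wavefront level,
    one past the latest earlier iteration it depends on through a shared
    memory location.  reads[i]/writes[i] list the locations iteration i
    reads/writes (known only at run time in the target applications)."""
    last_writer = {}   # location -> wavefront of its most recent write
    last_reader = {}   # location -> wavefront of its most recent read
    wavefronts = []
    for i in range(len(reads)):
        level = 0
        for loc in reads[i]:            # true (read-after-write) dependence
            if loc in last_writer:
                level = max(level, last_writer[loc] + 1)
        for loc in writes[i]:           # output and anti dependences
            if loc in last_writer:
                level = max(level, last_writer[loc] + 1)
            if loc in last_reader:
                level = max(level, last_reader[loc] + 1)
        for loc in writes[i]:
            last_writer[loc] = level
        for loc in reads[i]:
            last_reader[loc] = level
        wavefronts.append(level)
    return wavefronts

# Iterations 0 and 1 touch disjoint data (same wavefront, parallelizable);
# iteration 2 reads what iteration 0 wrote, so it must wait one level.
levels = compute_wavefronts(reads=[[0], [1], [10]], writes=[[10], [11], [12]])
```

The executor would then run all iterations of each wavefront concurrently, in wavefront order.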
NASA Astrophysics Data System (ADS)
Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian
2015-12-01
Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves together cycles of enactment, core practices in science education and culturally relevant pedagogies. The theoretical foundation draws upon situated learning theory and communities of practice. Using video analysis by PSTs and course artifacts, the authors studied how the iterative process of these cycles guided PSTs development as teachers of elementary science. Findings demonstrate how PSTs were drawing on resources to inform practice, purposefully noticing their practice, renegotiating their roles in teaching, and reconsidering "professional blindness" through cultural practice.
Improvement of tritium accountancy technology for ITER fuel cycle safety enhancement
NASA Astrophysics Data System (ADS)
O'hira, S.; Hayashi, T.; Nakamura, H.; Kobayashi, K.; Tadokoro, T.; Nakamura, H.; Itoh, T.; Yamanishi, T.; Kawamura, Y.; Iwai, Y.; Arita, T.; Maruyama, T.; Kakuta, T.; Konishi, S.; Enoeda, M.; Yamada, M.; Suzuki, T.; Nishi, M.; Nagashima, T.; Ohta, M.
2000-03-01
In order to improve the safe handling and control of tritium for the ITER fuel cycle, effective in situ tritium accounting methods have been developed at the Tritium Process Laboratory in the Japan Atomic Energy Research Institute under one of the ITER-EDA R&D tasks. The remote and multilocation analysis of process gases by an application of laser Raman spectroscopy developed and tested could provide a measurement of hydrogen isotope gases with a detection limit of 0.3 kPa and analytical periods of 120 s. An in situ tritium inventory measurement by application of a 'self-assaying' storage bed with 25 g tritium capacity could provide a measurement with the required detection limit of less than 1% and a design proof of a bed with 100 g tritium capacity.
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step initial crude images are analyzed for multiple cytological features, statistical analysis is performed and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy"-an automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel fully automated method was validated by conducting a re-analysis of results from a high-content screening campaign involving 33 992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system as well as 40 new hits, 14.9% of the total, originally false negatives. Ninety-six percent of true negatives were properly recognized too. A web-based access to the database, with customizable data retrieval and visualization tools, facilitates the posterior analysis of annotated cytological features which allows identification of additional phenotypic profiles; thus, further analysis of original crude images is not required.
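The screening pipeline itself is proprietary to the study; as a hedged, minimal sketch of the naïve Bayes component alone, the following example trains a Bernoulli naïve Bayes classifier with Laplace smoothing on toy feature sets (the feature names are invented stand-ins for the chemical and phenotypic descriptors the system integrates):

```python
from collections import defaultdict
import math

def train_naive_bayes(samples):
    """Count label frequencies and per-label feature frequencies.
    samples: list of (feature_set, label) pairs."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for feats, label in samples:
        label_counts[label] += 1
        for f in feats:
            feat_counts[label][f] += 1
            vocab.add(f)
    return label_counts, feat_counts, vocab

def classify(model, feats):
    """Pick the label maximizing the Bernoulli naive Bayes log-posterior;
    both present and absent features contribute, with Laplace smoothing."""
    label_counts, feat_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)                       # log prior
        for f in vocab:
            p = (feat_counts[label][f] + 1) / (n + 2)  # smoothed P(f|label)
            lp += math.log(p if f in feats else 1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_naive_bayes([
    ({"ring", "inhibits"}, "hit"),
    ({"ring", "inhibits", "soluble"}, "hit"),
    ({"linear", "inactive"}, "miss"),
    ({"linear"}, "miss"),
])
```

In the paper's workflow, a classifier of this general kind re-scores the images initially flagged as "fuzzy" in an iterative feedback loop.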
Hierarchical image feature extraction by an irregular pyramid of polygonal partitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skurikhin, Alexei N
2008-01-01
We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregular sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on the top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of details of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
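A minimal sketch of the MST-based grouping idea (Kruskal's algorithm with a dissimilarity cutoff; the scalar edge weights here are simplified stand-ins for the spectral/textural dissimilarities and edge constraints the framework actually uses):

```python
def agglomerate(n, edges, max_dissimilarity):
    """Group n partitions by building a minimum spanning forest (Kruskal),
    refusing to merge across edges whose dissimilarity exceeds the cutoff;
    the resulting connected components are the agglomerated segments.
    edges: list of (dissimilarity, node_a, node_b) tuples."""
    parent = list(range(n))

    def find(a):                       # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for w, a, b in sorted(edges):      # process lightest edges first
        if w > max_dissimilarity:
            break                      # remaining edges are even heavier
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb            # merge the two segments
    return [find(i) for i in range(n)]

# Partitions 0-1-2 are mutually similar; partition 3 sits across a heavy edge
labels = agglomerate(4, [(0.1, 0, 1), (0.2, 1, 2), (0.9, 2, 3)],
                     max_dissimilarity=0.5)
```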
Is universal health coverage the practical expression of the right to health care?
2014-01-01
The present Millennium Development Goals are set to expire in 2015 and their next iteration is now being discussed within the international community. With regards to health, the World Health Organization proposes universal health coverage as a ‘single overarching health goal’ for the next iteration of the Millennium Development Goals. The present Millennium Development Goals have been criticised for being ‘duplicative’ or even ‘competing alternatives’ to international human rights law. The question then arises, if universal health coverage would indeed become the single overarching health goal, replacing the present health-related Millennium Development Goals, would that be more consistent with the right to health? The World Health Organization seems to have anticipated the question, as it labels universal health coverage as “by definition, a practical expression of the concern for health equity and the right to health”. Rather than waiting for the negotiations to unfold, we thought it would be useful to verify this contention, using a comparative normative analysis. We found that – to be a practical expression of the right to health – at least one element is missing in present authoritative definitions of universal health coverage: a straightforward confirmation that international assistance is essential, not optional. But universal health coverage is a ‘work in progress’. A recent proposal by the United Nations Sustainable Development Solutions Network proposed universal health coverage with a set of targets, including a target for international assistance, which would turn universal health coverage into a practical expression of the right to health care. PMID:24559232
NASA Technical Reports Server (NTRS)
1972-01-01
The QL module of the Performance Analysis and Design Synthesis (PADS) computer program is described. Execution of this module is initiated when and if subroutine PADSI calls subroutine GROPE. Subroutine GROPE controls the high level logical flow of the QL module. The purpose of the module is to determine a trajectory that satisfies the necessary variational conditions for optimal performance. The module achieves this by solving a nonlinear multi-point boundary value problem. The numerical method employed is described. It is an iterative technique that converges quadratically when it does converge. The three basic steps of the module are: (1) initialization, (2) iteration, and (3) culmination. For Volume 1 see N73-13199.
Analysis of the ITER low field side reflectometer transmission line system.
Hanson, G R; Wilgen, J B; Bigelow, T S; Diem, S J; Biewer, T M
2010-10-01
A critical issue in the design of the ITER low field side reflectometer is the transmission line (TL) system. A TL connects each launcher to a diagnostic instrument. Each TL will typically consist of ∼42 m of corrugated waveguide and up to ten miter bends. Important issues for the performance of the TL system are mode conversion and reflections. Minimizing these effects is critical to minimizing standing waves and phase errors. The performance of the TL system is analyzed and recommendations are given.
Nonlinear dynamic modeling of rotor system supported by angular contact ball bearings
NASA Astrophysics Data System (ADS)
Wang, Hong; Han, Qinkai; Zhou, Daning
2017-02-01
In current bearing dynamic models, the displacement coordinate relations are usually utilized to approximately obtain the contact deformations between the rolling element and raceways, and the nonlinear restoring forces of the rolling bearing are then calculated accordingly. Although the calculation efficiency is relatively high, the accuracy is lower because the contact deformations should properly be obtained through iterative analysis. Thus, an improved nonlinear dynamic model is presented in this paper. Considering the preload condition, surface waviness, Hertz contact and elastohydrodynamic lubrication, the load distribution analysis is solved iteratively to more accurately obtain the contact deformations and angles between the rolling balls and raceways. The bearing restoring forces are then obtained through iteratively solving the load distribution equations at every time step. Dynamic tests upon a typical rotor system supported by two angular contact ball bearings are conducted to verify the model. Through comparisons, the differences between the nonlinear dynamic model and current models are also pointed out. The effects of axial preload, rotor eccentricity and inner/outer waviness amplitudes on the dynamic response are discussed in detail.
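The paper's full model is far richer, but the core of an iterative load-distribution solve can be sketched as follows: under a purely radial load and ideal geometry (no preload, clearance, waviness or lubrication effects, all of which the paper includes), the inner-ring displacement must make the Hertzian ball forces balance the applied load. All constants here are illustrative, not taken from the paper:

```python
import math

def ring_displacement(K, F, n_balls=8, tol=1e-10):
    """Solve the static load-distribution equation for an inner-ring radial
    displacement x: the radial components of the Hertzian ball forces
    K * d_j**1.5, with d_j = x*cos(theta_j) and only compressed balls
    (d_j > 0) carrying load, must balance the applied load F.
    Solved here by bisection as a stand-in for the per-time-step
    iterative solve the dynamic model performs."""
    angles = [2 * math.pi * j / n_balls for j in range(n_balls)]

    def residual(x):
        total = 0.0
        for th in angles:
            d = x * math.cos(th)
            if d > 0:                  # only compressed balls carry load
                total += K * d ** 1.5 * math.cos(th)
        return total - F

    lo, hi = 0.0, 1.0
    while residual(hi) < 0:            # grow the bracket if needed
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```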
Blacker, Teddy D.
1994-01-01
An automatic quadrilateral surface discretization method and apparatus is provided for automatically discretizing a geometric region without decomposing the region. The automated quadrilateral surface discretization method and apparatus automatically generates a mesh of all quadrilateral elements which is particularly useful in finite element analysis. The generated mesh of all quadrilateral elements is boundary sensitive, orientation insensitive and has few irregular nodes on the boundary. A permanent boundary of the geometric region is input and rows are iteratively layered toward the interior of the geometric region. Also, an exterior permanent boundary and an interior permanent boundary for a geometric region may be input and the rows are iteratively layered inward from the exterior boundary in a first counter clockwise direction while the rows are iteratively layered from the interior permanent boundary toward the exterior of the region in a second clockwise direction. As a result, a high quality mesh for an arbitrary geometry may be generated with a technique that is robust and fast for complex geometric regions and extreme mesh gradations.
Influential Observations in Principal Factor Analysis.
ERIC Educational Resources Information Center
Tanaka, Yutaka; Odaka, Yoshimasa
1989-01-01
A method is proposed for detecting influential observations in iterative principal factor analysis. Theoretical influence functions are derived for two components of the common variance decomposition. The major mathematical tool is the influence function derived by Tanaka (1988). (SLD)
Performance of ITER as a burning plasma experiment
NASA Astrophysics Data System (ADS)
Shimada, M.; Mukhovatov, V.; Federici, G.; Gribov, Y.; Kukushkin, A.; Murakami, Y.; Polevoi, A.; Pustovitov, V.; Sengoku, S.; Sugihara, M.
2004-02-01
Recent performance analysis has improved confidence in achieving Q (= fusion power/auxiliary heating power) ≥ 10 in inductive operation in ITER. Performance analysis based on empirical scalings shows the feasibility of achieving Q ≥ 10 in inductive operation, particularly with improved modelling of helium exhaust. Analysis has also indicated the possibility that ITER can potentially demonstrate Q ~ 50, enabling studies of self-heated plasmas. Theory-based core modelling indicates the need for a high pedestal temperature (3.2-5.3 keV) to achieve Q ≥ 10, which is in the range of projections with presently available pedestal scalings. Pellet injection from the high-field side would be useful in enhancing Q and reducing edge localized mode (ELM) heat load in high plasma current operation. If the ELM heat load is not acceptable, it could be made tolerable by further tilting the target plate. Steady state operation scenarios at Q = 5 have been developed with modest requirements on confinement improvement and beta (HH98(y,2) ≥ 1.3 and βN ≥ 2.6). Stabilization of the resistive wall modes (RWMs), required in such regimes, is feasible with the present saddle coils and power supplies with double-wall structures taken into account. Recent analysis shows a potential of high power steady state operation with a fusion power of 0.7 GW at Q ~ 8. Achievement of the required βN ~ 3.6 by RWM stabilization is a possibility. Further analysis is also needed on reduction of the divertor target heat load.
Polynomial elimination theory and non-linear stability analysis for the Euler equations
NASA Technical Reports Server (NTRS)
Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.
1986-01-01
Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
NASA Astrophysics Data System (ADS)
Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA
2018-03-01
The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgy quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in the copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. It also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point depending on a simple count of its intentionally loaded load components or gages: the more load components or gages a data point intentionally loads, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input. Machine calibration data of a six-component force balance is used to illustrate the benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper, as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
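A weighted least squares fit of this general kind can be sketched in a few lines. The two-term model, the noise level, and the rule for assigning weights below are hypothetical; the PRESS residuals use the standard leverage formula in the weight-scaled space, not anything balance-specific from the paper.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Minimize sum_i w_i * (y_i - X_i @ b)**2 by scaling rows with sqrt(w)."""
    Xw = np.sqrt(w)[:, None] * X
    yw = np.sqrt(w) * y
    b, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    # PRESS (leave-one-out) residuals: e_i / (1 - h_ii), with leverages h_ii
    h = np.einsum("ij,ij->i", Xw @ np.linalg.pinv(Xw.T @ Xw), Xw)
    press = (yw - Xw @ b) / (1.0 - h)
    return b, press

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])   # intercept + one "load" term
y = X @ np.array([1.0, 2.0]) + 0.01 * rng.normal(size=50)
w = np.where(np.abs(X[:, 1]) > 1.0, 0.2, 1.0)             # illustrative down-weighting rule
b, press = weighted_least_squares(X, y, w)
```

The scaling trick (multiplying each row by the square root of its weight and running an ordinary fit) is the standard way to reduce weighted least squares to an unweighted solver.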
Data Integration Tool: Permafrost Data Debugging
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.
2017-12-01
We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage these data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets (https://github.com/PermaData/DIT). Each widget performs a specific operation, such as reading, multiplying by a constant, sorting, plotting, or writing data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Taking ideas from visual programming found in the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, DIT was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
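The widget idea described above amounts to ordinary function composition: a workflow is an ordered list of small operations applied to the data. The widget names and data below are illustrative placeholders, not DIT's actual API.

```python
# Minimal sketch of a widget-style workflow manager.
def drop_missing(rows):
    return [r for r in rows if r is not None]

def scale(factor):
    return lambda rows: [r * factor for r in rows]

def sort_rows(rows):
    return sorted(rows)

def run_workflow(data, widgets):
    for widget in widgets:            # user-chosen order, replayable for reproducibility
        data = widget(data)
    return data

cleaned = run_workflow([3.0, None, 1.0, 2.0], [drop_missing, scale(10.0), sort_rows])
# cleaned == [10.0, 20.0, 30.0]
```

Saving the `widgets` list itself is what makes a workflow reproducible: re-running it on the same input replays the identical sequence of operations.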
Application of Temperature Sensitivities During Iterative Strain-Gage Balance Calibration Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
A new method is discussed that may be used to correct wind tunnel strain-gage balance load predictions for the influence of residual temperature effects at the location of the strain-gages. The method was designed for the iterative analysis technique that is used in the aerospace testing community to predict balance loads from strain-gage outputs during a wind tunnel test. The new method implicitly applies temperature corrections to the gage outputs during the load iteration process. Therefore, it can use uncorrected gage outputs directly as input for the load calculations. The new method is applied in several steps. First, balance calibration data is analyzed in the usual manner assuming that the balance temperature was kept constant during the calibration. Then, the temperature difference relative to the calibration temperature is introduced as a new independent variable for each strain-gage output. Therefore, sensors must exist near the strain-gages so that the required temperature differences can be measured during the wind tunnel test. In addition, the format of the regression coefficient matrix needs to be extended so that it can support the new independent variables. In the next step, the extended regression coefficient matrix of the original calibration data is modified by using the manufacturer specified temperature sensitivity of each strain-gage as the regression coefficient of the corresponding temperature difference variable. Finally, the modified regression coefficient matrix is converted to a data reduction matrix that the iterative analysis technique needs for the calculation of balance loads. Original calibration data and modified check load data of NASA's MC60D balance are used to illustrate the new method.
Thakur, Shalabh; Guttman, David S
2016-06-30
Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems after the necessary external programs are installed.
DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements such as the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. As illustrated by numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
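The Kalman-filter building block underneath methods like this can be sketched for a scalar parameter. The code below is a plain stochastic EnKF analysis step with a direct observation, not the paper's polynomial-chaos-accelerated variant; the prior, observation value, and error variance are all made up.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_err_var, rng):
    """One stochastic EnKF analysis step for a scalar state (generic sketch)."""
    Hx = obs_op(ensemble)                                  # predicted observations
    cov_xh = ((ensemble - ensemble.mean()) * (Hx - Hx.mean())).mean()
    var_hh = ((Hx - Hx.mean()) ** 2).mean()
    gain = cov_xh / (var_hh + obs_err_var)                 # ensemble Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), ensemble.shape)
    return ensemble + gain * (perturbed_obs - Hx)

rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, 500)      # prior ensemble of a log-permeability-like scalar
posterior = enkf_update(prior, obs=1.0, obs_op=lambda x: x, obs_err_var=0.25, rng=rng)
```

With a direct observation the update should pull the ensemble mean toward the observation and shrink its spread, which is easy to verify numerically.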
Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki
2018-05-01
We aimed to evaluate the image quality performance of coronary CT angiography (CTA) under the different settings of forward-projected model-based iterative reconstruction solutions (FIRST).Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and 2 model-based iterative reconstructions including FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image-quality scores were obtained for noise and structure visibility.In the objective image analysis, FIRST-body produced the significantly highest contrast-to-noise ratio. Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although the image noise score was inferior to that of FIRST-body.In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but the structure visibility with FIRST-CS was superior to FIRST-body.
Iterated reaction graphs: simulating complex Maillard reaction pathways.
Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W
2001-01-01
This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
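The soup-and-reaction-base loop can be caricatured in a few lines. The molecules and reactions below are illustrative placeholders (not the actual Maillard reaction base), and, as in the paper, rate kinetics are treated as firing probabilities.

```python
import random

# Toy iterated-reaction-graph loop: the soup holds molecule names, the
# reaction base maps reactant sets to product sets with a firing probability.
reactions = [
    ({"glucose", "glycine"}, {"amadori"}, 0.8),
    ({"amadori"}, {"volatileA", "volatileB"}, 0.5),
]

def iterate_soup(soup, reactions, steps, seed=0):
    rng = random.Random(seed)
    graph = []                        # arcs of the reaction graph: (reactants, products)
    for _ in range(steps):
        for reactants, products, p in reactions:
            if reactants <= soup and rng.random() < p:
                graph.append((reactants, products))
                soup |= products      # products are fed back to the soup
    return soup, graph

soup, graph = iterate_soup({"glucose", "glycine"}, reactions, steps=20)
```

The `graph` list is the reaction graph itself: molecules are nodes and each recorded firing is an arc, which is what would be compared against GC/MS product lists in the paper's validation step.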
Cyclic Game Dynamics Driven by Iterated Reasoning
Frey, Seth; Goldstone, Robert L.
2013-01-01
Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning (what you think I think you think) will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems. PMID:23441191
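The hopping dynamic can be caricatured in a few lines: if players best-respond to a naive forecast of the last round with a fixed depth of iterated reasoning, the whole group circulates around the choice cycle instead of settling. The modulus, starting choices, and reasoning depth below are illustrative, not the experimental parameters.

```python
# Toy Mod Game: players pick integers mod m and are rewarded for being
# exactly one step ahead of others; k is the assumed reasoning depth.
m, k = 24, 2

def next_round(choices):
    anchor = round(sum(choices) / len(choices))   # naive forecast: last round's mean
    return [(anchor + k) % m for _ in choices]

rounds = [[0, 3, 5, 7]]
for _ in range(10):
    rounds.append(next_round(rounds[-1]))
```

After one round of best responses the group coincides and then advances by `k` positions per round, producing the kind of persistent cycle that no fixed-point solution concept predicts.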
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To address the issue that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merit of genetic arithmetic with the advantage of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid conversion as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows:
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
• This text proposes the model operator and the observed operator as the fusion scheme of RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
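The genetic-optimization core of such a scheme can be sketched independently of the imaging details. The objective below is a made-up stand-in for a weighted sum of fusion-quality indices (with a known optimum so the result can be checked), and the population size, mutation scale, and generation count are arbitrary choices, not GSDA's.

```python
import numpy as np

# Minimal genetic-algorithm sketch: maximize a weighted objective over
# two fusion parameters (objective and parameters are illustrative).
rng = np.random.default_rng(2)

def objective(params):
    # stand-in for weighted evaluation indices; optimum at (0.3, 0.7)
    return -((params[:, 0] - 0.3) ** 2 + (params[:, 1] - 0.7) ** 2)

pop = rng.uniform(0.0, 1.0, (40, 2))
for _ in range(60):
    scores = objective(pop)
    parents = pop[np.argsort(scores)[-20:]]               # selection: keep best half
    children = parents[rng.integers(0, 20, 40)]           # reproduction with replacement
    pop = children + rng.normal(0.0, 0.02, children.shape)  # Gaussian mutation
best = pop[np.argmax(objective(pop))]
```

Selection, reproduction, and mutation are the three operators any GA variant shares; a real fusion objective would score the fused image rather than a synthetic quadratic.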
A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters
NASA Technical Reports Server (NTRS)
Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani
2013-01-01
This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.
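The quantities at stake (beam current and divergence angle integrated from a radial current-density sweep) can be illustrated generically. The Gaussian profile, sweep distance, and origin choice below are hypothetical; the point of iterative pathfinding is precisely that the origin is determined from the plume's evolution rather than fixed by hand as here.

```python
import numpy as np

# Integrate a near-field current-density sweep j(r) into a total beam
# current and a charge-weighted divergence angle (generic sketch).
r = np.linspace(0.0, 0.2, 400)            # radial positions [m]
dr = r[1] - r[0]
j = np.exp(-(r / 0.05) ** 2)              # hypothetical axial current density [A/m^2]
z = 0.1                                   # axial distance of the probe sweep [m]

I_beam = 2.0 * np.pi * np.sum(j * r) * dr                   # total current
theta = np.arctan2(r, z)                                    # angle from the assumed origin
cos_mean = 2.0 * np.pi * np.sum(j * np.cos(theta) * r) * dr / I_beam
divergence_deg = np.degrees(np.arccos(cos_mean))
```

Shifting the assumed origin (`z` here) visibly changes `divergence_deg`, which is why an arbitrary origin choice makes the traditional results hard to interpret.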
Effect of Geometrical Imperfection on Buckling Failure of ITER VVPSS Tank
NASA Astrophysics Data System (ADS)
Jha, Saroj Kumar; Gupta, Girish Kumar; Pandey, Manish Kumar; Bhattacharya, Avik; Jogi, Gaurav; Bhardwaj, Anil Kumar
2017-04-01
The ‘Vacuum Vessel Pressure Suppression System’ (VVPSS) is part of the ITER machine and is designed to protect the ITER Vacuum Vessel and its connected systems from an over-pressure situation. It comprises a partially evacuated stainless steel tank, approximately 46 m long, 6 m in diameter and 30 mm thick, which is to hold approximately 675 tonnes of water at room temperature to condense the steam resulting from an adverse water leakage into the Vacuum Vessel chamber. For any vacuum vessel, geometrical imperfection has a significant effect on buckling failure and structural integrity. The major geometrical imperfection in the VVPSS tank depends on form tolerances. To study the effect of geometrical imperfection on buckling failure of the VVPSS tank, finite element analysis (FEA) has been performed in line with ASME Section VIII Division 2 Part 5 [1], the ‘design by analysis’ method. Linear buckling analysis has been performed to obtain the buckled shape and displacement. Geometrical imperfection due to form tolerance is incorporated in the FEA model of the VVPSS tank by scaling the resulting buckled shape by a factor of 60. This buckled-shape model is used as the input geometry for plastic collapse and buckling failure assessment. Plastic collapse and buckling failure of the VVPSS tank have been assessed using the elastic-plastic analysis method. This analysis has been performed for different values of form tolerance. The results show that displacement and the load proportionality factor (LPF) vary inversely with form tolerance: for higher values of form tolerance, the LPF reduces significantly while displacement becomes large.
Atmospheric Precorrected Differential Absorption technique to retrieve columnar water vapor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlaepfer, D.; Itten, K.I.; Borel, C.C.
1998-09-01
Differential absorption techniques are suitable for retrieving the total column water vapor content from imaging spectroscopy data. A technique called Atmospheric Precorrected Differential Absorption (APDA) is derived directly from simplified radiative transfer equations. It combines a partial atmospheric correction with a differential absorption technique. The atmospheric path radiance term is iteratively corrected during the retrieval of water vapor, which improves the results especially over low background albedos. The error of the method for various ground reflectance spectra is below 7% for most of the spectra. The channel combinations for two test cases are then defined using a quantitative procedure which is based on MODTRAN simulations and the image itself. An error analysis indicates that the influence of aerosols and channel calibration is minimal. The APDA technique is then applied to two AVIRIS images acquired in 1991 and 1995. The accuracy of the measured water vapor columns is within a range of ±5% compared to ground truth radiosonde data.
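The structure of such a retrieval (a path-radiance-corrected band ratio inverted through a fitted relation) can be sketched as a forward-model/retrieval round trip. Every coefficient and radiance value below is hypothetical, and the exponential-in-sqrt relation is a common parameterization, not necessarily APDA's exact fit.

```python
import numpy as np

a, b = 0.02, 0.7                  # stand-ins for coefficients fitted to MODTRAN-style runs

def apda_ratio(L_band, L_win1, L_win2, L_path, w1=0.5, w2=0.5):
    """Path-radiance-corrected absorption-band radiance over the radiance
    interpolated from two window channels."""
    return (L_band - L_path) / (w1 * (L_win1 - L_path) + w2 * (L_win2 - L_path))

def retrieve_pw(R):
    """Invert R = exp(-(a + b * sqrt(PW))) for precipitable water PW."""
    return ((-np.log(R) - a) / b) ** 2

# forward-model a scene with PW = 2.0 cm, then retrieve it
L_path, L_win1, L_win2 = 0.4, 5.0, 5.2
R_true = np.exp(-(a + b * np.sqrt(2.0)))
L_band = L_path + R_true * (0.5 * (L_win1 - L_path) + 0.5 * (L_win2 - L_path))
pw = retrieve_pw(apda_ratio(L_band, L_win1, L_win2, L_path))
```

Subtracting `L_path` from both numerator and denominator is the "precorrection" step; omitting it biases the ratio most strongly when the surface term is small, i.e., over dark backgrounds.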
Green, Ruth H; Evans, Val; MacLeod, Sheona; Barratt, Jonathan
2018-02-01
Major changes in the design and delivery of clinical academic training in the United Kingdom have occurred, yet there has been little exploration of the perceptions of integrated clinical academic trainees or educators. We obtained the views of a range of key stakeholders involved in clinical academic training in the East Midlands. A qualitative study with inductive iterative thematic content analysis of findings from trainee surveys and facilitated focus groups. The East Midlands School of Clinical Academic Training. Integrated clinical academic trainees, and clinical and academic educators involved in clinical academic training. The experience, opinions and beliefs of key stakeholders about barriers and enablers in the delivery of clinical academic training. We identified key themes, many shared by both trainees and educators. These highlighted issues in the systems and processes of the integrated academic pathways, career pathways, supervision and support, the assessment process, and the balance between clinical and academic training. Our findings help inform the future development of integrated academic training programmes.
Phadraig, Caoimhin Mac Giolla; Griffiths, Colin; McCallion, Philip; McCarron, Mary; Nunn, June
2017-01-01
A better understanding of how communication-based behaviour supports are applied with adults with intellectual disabilities may reduce reliance on restrictive practices such as holding, sedation and anaesthesia in dentistry. In this study, we explore how communication is used by dentists who provide treatment for adults with intellectual disabilities. A descriptive qualitative study, adopting synchronous online focus groups, was undertaken with six expert dentists in Ireland. Members were contacted again in pairs or individually for further data collection, analysed using thematic content analysis. Two relevant categories emerged from the data, relating to the selection and application of communication-based behaviour support for adults with intellectual disabilities. Decision-making processes were explored. Building on these categories, a co-regulating process of communication emerged as the means by which dentists iteratively apply and adapt communicative strategies. This exploration revealed rationalist and intuitive decision-making. Implications for education, practice and research are identified.
Brown, Ottilia; Goliath, Veonna; van Rooyen, Dalena R M; Aldous, Colleen; Marais, Leonard Charles
2017-01-01
Communicating the diagnosis of cancer in cross-cultural clinical settings is a complex task. This qualitative research article describes the content and process of informing Zulu patients in South Africa of the diagnosis of cancer, using osteosarcoma as the index diagnosis. We used a descriptive research design with census sampling and focus group interviews. We used an iterative thematic data analysis process and Guba's model of trustworthiness to ensure scientific rigor. Our results reinforced the use of well-accepted strategies for communicating the diagnosis of cancer. In addition, new strategies emerged which may be useful in other cross-cultural settings. These strategies included using the stages of cancer to explain the disease and its progression and instilling hope using a multidisciplinary team care model. We identified several patients, professionals, and organizational factors that complicate cross-cultural communication. We conclude by recommending the development of protocols for communication in these cross-cultural clinical settings.
The HIV Prison Paradox: Agency and HIV-Positive Women's Experiences in Jail and Prison in Alabama.
Sprague, Courtenay; Scanlon, Michael L; Radhakrishnan, Bharathi; Pantalone, David W
2017-08-01
Incarcerated women face significant barriers to achieve continuous HIV care. We employed a descriptive, exploratory design using qualitative methods and the theoretical construct of agency to investigate participants' self-reported experiences accessing HIV services in jail, in prison, and post-release in two Alabama cities. During January 2014, we conducted in-depth interviews with 25 formerly incarcerated HIV-positive women. Two researchers completed independent coding, producing preliminary codes from transcripts using content analysis. Themes were developed iteratively, verified, and refined. They encompassed (a) special rules for HIV-positive women: isolation, segregation, insults, food rationing, and forced disclosure; (b) absence of counseling following initial HIV diagnosis; and (c) HIV treatment impediments: delays, interruption, and denial. Participants deployed agentic strategies of accommodation, resistance, and care-seeking to navigate the social world of prison and HIV services. Findings illuminate the "HIV prison paradox": the chief opportunities that remain unexploited to engage and re-engage justice-involved women in the HIV care continuum.
Kramer, Jessica M
2015-01-01
Prior to undertaking randomized controlled trials, pilot research should ensure that an intervention's active ingredients are operationalized in manuals or protocols. This study identified the strategies facilitators reported using during the implementation of a problem-solving self-advocacy intervention, Project "Teens making Environment and Activity Modifications" (TEAM), with transition-age youth with developmental disabilities, and evaluated the alignment of those strategies with the intervention's hypothesized mechanisms of change. An iterative process was used to conduct a content analysis of 106 field notes completed by six facilitators. Facilitators used 19 strategies. Findings suggest that facilitators used strategies simultaneously to ensure universal design for learning, maximize relevance for individual trainees, and maintain a safe and encouraging environment. Facilitators can individualize Project TEAM in a way that operationalizes the mechanisms of change underlying Project TEAM. The quality of the intervention may improve by explicitly incorporating these strategies into the intervention protocol. The strategies may also be applicable to therapists implementing interventions informed by similar theoretical propositions.
Gillespie, Alex; Reader, Tom W
2016-01-01
Background Letters of complaint written by patients and their advocates reporting poor healthcare experiences represent an under-used data source. The lack of a method for extracting reliable data from these heterogeneous letters hinders their use for monitoring and learning. To address this gap, we report on the development and reliability testing of the Healthcare Complaints Analysis Tool (HCAT). Methods HCAT was developed from a taxonomy of healthcare complaints reported in a previously published systematic review. It introduces the novel idea that complaints should be analysed in terms of severity. Recruiting three groups of educated lay participants (n=58, n=58, n=55), we refined the taxonomy through three iterations of discriminant content validity testing. We then supplemented this refined taxonomy with explicit coding procedures for seven problem categories (each with four levels of severity), stage of care and harm. These combined elements were further refined through iterative coding of a UK national sample of healthcare complaints (n=25, n=80, n=137, n=839). To assess reliability and accuracy for the resultant tool, 14 educated lay participants coded a referent sample of 125 healthcare complaints. Results The seven HCAT problem categories (quality, safety, environment, institutional processes, listening, communication, and respect and patient rights) were found to be conceptually distinct. On average, raters identified 1.94 problems (SD=0.26) per complaint letter. Coders exhibited substantial reliability in identifying problems at four levels of severity; moderate and substantial reliability in identifying stages of care (except for ‘discharge/transfer’, which was only fairly reliable); and substantial reliability in identifying overall harm. Conclusions HCAT is not only the first reliable tool for coding complaints; it is also the first tool to measure the severity of complaints.
It facilitates service monitoring and organisational learning and it enables future research examining whether healthcare complaints are a leading indicator of poor service outcomes. HCAT is freely available to download and use. PMID:26740496
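Inter-rater reliability of the kind reported for HCAT coders is commonly quantified with Cohen's kappa. The function below is the standard two-rater kappa; the category labels are HCAT-like but the example codes are made up, and the paper's exact statistic is not specified here.

```python
from collections import Counter

def cohens_kappa(codes1, codes2):
    """Cohen's kappa for two raters assigning one category per item."""
    n = len(codes1)
    p_obs = sum(a == b for a, b in zip(codes1, codes2)) / n        # observed agreement
    c1, c2 = Counter(codes1), Counter(codes2)
    p_exp = sum(c1[k] * c2.get(k, 0) for k in c1) / n ** 2         # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

r1 = ["quality", "safety", "respect", "quality"]
r2 = ["quality", "safety", "respect", "quality"]
k_perfect = cohens_kappa(r1, r2)                                   # full agreement -> 1.0
k_partial = cohens_kappa(["a", "a", "b", "b"], ["a", "b", "b", "b"])
```

Kappa corrects raw agreement for the agreement expected by chance, which is why it, rather than simple percent agreement, underlies descriptors like "moderate" and "substantial" reliability.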
Harper, Angela F; Leuthaeuser, Janelle B; Babbitt, Patricia C; Morris, John H; Ferrin, Thomas E; Poole, Leslie B; Fetrow, Jacquelyn S
2017-02-01
Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially; MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method's novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily.
The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences.
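The self-identification criterion can be illustrated with a toy analogue: a candidate cluster is accepted only if a search seeded by the cluster's own profile retrieves exactly its members and nothing else. The feature sets, Jaccard similarity, and threshold below are made-up stand-ins for MISST's iteratively refined HMM searches of GenBank.

```python
# Toy self-identification rule over made-up sequence feature sets.
def search(profile, database, threshold=0.5):
    """Return all entries whose features are similar enough to the profile."""
    return {name for name, feats in database.items()
            if len(profile & feats) / len(profile | feats) >= threshold}

def self_identifies(members, database):
    """Accept a cluster only if its shared profile retrieves exactly itself."""
    profile = set.intersection(*(database[m] for m in members))
    return search(profile, database) == set(members)

db = {
    "prx1_a": {"CP", "resolving_cys", "motif1"},
    "prx1_b": {"CP", "resolving_cys", "motif1", "extra"},
    "prx6_a": {"CP", "motif2"},
}
accepted = self_identifies({"prx1_a", "prx1_b"}, db)   # coherent group
rejected = self_identifies({"prx1_a", "prx6_a"}, db)   # mixed group fails
```

Mixing families dilutes the shared profile until the search no longer returns the group itself, which is the divisive half of the agglomerative/divisive behavior described above.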
Babbitt, Patricia C.; Ferrin, Thomas E.
2017-01-01
Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially—MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method’s novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. 
The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences. PMID:28187133
Gambler Risk Perception: A Mental Model and Grounded Theory Analysis.
Spurrier, Michael; Blaszczynski, Alexander; Rhodes, Paul
2015-09-01
Few studies have investigated how gamblers perceive risk or the role of risk perception in disordered gambling. The purpose of the current study therefore was to obtain data on lay gamblers' beliefs about these variables and their effects on decision-making, behaviour, and the aetiology of disordered gambling. Fifteen regular lay gamblers (non-problem/low-risk, moderate-risk, and problem gamblers) completed a semi-structured interview following mental models and grounded theory methodologies. Gambler interview data were compared to an expert 'map' of risk perception to identify comparative gaps or differences associated with harmful or safe gambling. Systematic overlapping processes of data gathering and analysis were used to iteratively extend, saturate, test for exceptions, and verify concepts and themes emerging from the data. The preliminary findings suggested that gambler accounts supported the presence of the expert conceptual constructs, and to some degree the role of risk perception in protecting against or increasing vulnerability to harm and disordered gambling. Gambler accounts of causality, meaning, motivation, and strategy were highly idiosyncratic and often contained content inconsistent with measures of disordered gambling. Disordered gambling appears heavily influenced by relative underestimation of risk and overvaluation of gambling, based on explicit and implicit analysis, and on deliberate, innate, contextual, and learned processing evaluations and biases.
Low speed airfoil design and analysis
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1979-01-01
A low speed airfoil design and analysis program was developed which contains several unique features. In the design mode, the velocity distribution is specified not for one but for many different angles of attack. Several iteration options are included which allow the trailing edge angle to be specified while other parameters are iterated. For airfoil analysis, a panel method is available which uses third-order panels having parabolic vorticity distributions. The flow condition is satisfied at the end points of the panels. Both sharp and blunt trailing edges can be analyzed. The integral boundary layer method, with its laminar separation bubble analog, empirical transition criterion, and precise turbulent boundary layer equations, compares very favorably with other methods, both integral and finite difference. Comparisons with experiment for several airfoils over a very wide Reynolds number range are discussed. Applications to high lift airfoil design are also demonstrated.
Toward semantic-based retrieval of visual information: a model-based approach
NASA Astrophysics Data System (ADS)
Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman
2002-07-01
This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated in VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since sparse sample data otherwise yields unstable frequency estimates of visual cues. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
Mitra, Monika; Smith, Lauren D; Smeltzer, Suzanne C; Long-Bellil, Linda M; Sammet Moring, Nechama; Iezzoni, Lisa I
2017-07-01
Women with physical disabilities are known to experience disparities in maternity care access and quality and communication gaps with maternity care providers; however, there is little research exploring the maternity care experiences of women with physical disabilities from the perspective of their health care practitioners. This study explored health care practitioners' experiences and needs around providing perinatal care to women with physical disabilities in order to identify potential drivers of these disparities. We conducted semi-structured telephone interviews with 14 health care practitioners in the United States who provide maternity care to women with physical disabilities, identified through affiliation with disability-related organizations, publications, and snowball sampling. Descriptive coding and content analysis techniques were used to develop an iterative code book related to barriers to caring for this population. Public health theory regarding levels of barriers was applied to generate broad barrier categories, which were then analyzed using content analysis. Participant-reported barriers to providing optimal maternity care to women with physical disabilities were grouped into four levels: practitioner level (e.g., unwillingness to provide care), clinical practice level (e.g., lack of accessible office equipment such as adjustable exam tables), system level (e.g., time limits, reimbursement policies), and barriers relating to lack of scientific evidence (e.g., lack of disability-specific clinical data). Participants endorsed barriers to providing optimal maternity care to women with physical disabilities. Our findings highlight the need for maternity care practice guidelines for women with physical disabilities and for training and education regarding the maternity care needs of this population. Copyright © 2016 Elsevier Inc. All rights reserved.
2017-01-01
Chemical standardization, along with morphological and DNA analysis ensures the authenticity and advances the integrity evaluation of botanical preparations. Achievement of a more comprehensive, metabolomic standardization requires simultaneous quantitation of multiple marker compounds. Employing quantitative 1H NMR (qHNMR), this study determined the total isoflavone content (TIfCo; 34.5–36.5% w/w) via multimarker standardization and assessed the stability of a 10-year-old isoflavone-enriched red clover extract (RCE). Eleven markers (nine isoflavones, two flavonols) were targeted simultaneously, and outcomes were compared with LC-based standardization. Two advanced quantitative measures in qHNMR were applied to derive quantities from complex and/or overlapping resonances: a quantum mechanical (QM) method (QM-qHNMR) that employs 1H iterative full spin analysis, and a non-QM method that uses linear peak fitting algorithms (PF-qHNMR). A 10 min UHPLC-UV method provided auxiliary orthogonal quantitation. This is the first systematic evaluation of QM and non-QM deconvolution as qHNMR quantitation measures. It demonstrates that QM-qHNMR can account successfully for the complexity of 1H NMR spectra of individual analytes and how QM-qHNMR can be built for mixtures such as botanical extracts. The contents of the main bioactive markers were in good agreement with earlier HPLC-UV results, demonstrating the chemical stability of the RCE. QM-qHNMR advances chemical standardization by its inherent QM accuracy and the use of universal calibrants, avoiding the impractical need for identical reference materials. PMID:28067513
Analysis of the sensitivity of soils to the leaching of agricultural pesticides in Ohio
Schalk, C.W.
1998-01-01
Pesticides have not been found frequently in the ground waters of Ohio even though large amounts of agricultural pesticides are applied to fields in Ohio every year. State regulators, including representatives from the Ohio Environmental Protection Agency and the Departments of Agriculture, Health, and Natural Resources, are striving to keep the presence of pesticides in ground water to a minimum. A proposed pesticide management plan for the State aims to protect Ohio's ground water by assessing pesticide-leaching potential using geographic information system (GIS) technology and invoking a monitoring plan that targets aquifers deemed most likely to be vulnerable to pesticide leaching. The U.S. Geological Survey, in cooperation with the Ohio Department of Agriculture, assessed the sensitivity of mapped soil units in Ohio to pesticide leaching. A soils data base (STATSGO) compiled by the U.S. Department of Agriculture was used iteratively to classify soil units from high to low sensitivity on the basis of soil permeability, clay content, and organic-matter content. Although this analysis did not target aquifers directly, the results can be used as a first estimate of the areas most likely to be subject to pesticide contamination from normal agricultural practices. High-sensitivity soil units were found in lakefront areas and former lakefront beach ridges, buried valleys in several river basins, and parts of central and south-central Ohio. Medium-high-sensitivity soil units were found in other river basins, along Lake Erie in north-central Ohio, and in many of the upland areas of the Muskingum River Basin. Low-sensitivity map units dominated the northwestern quadrant of Ohio.
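The abstract describes ranking soil units from high to low sensitivity based on permeability, clay content, and organic-matter content. The sketch below illustrates that kind of attribute-based classification; the numeric thresholds and scoring scheme are hypothetical illustrations, not the actual USGS/STATSGO criteria.

```python
def leaching_sensitivity(permeability_in_hr, clay_pct, organic_matter_pct):
    """Rank a soil map unit's pesticide-leaching sensitivity: higher
    permeability, lower clay content, and lower organic-matter content
    all favor leaching. Thresholds below are illustrative only."""
    score = 0
    score += 2 if permeability_in_hr > 6.0 else (1 if permeability_in_hr > 2.0 else 0)
    score += 2 if clay_pct < 10 else (1 if clay_pct < 25 else 0)
    score += 2 if organic_matter_pct < 1.0 else (1 if organic_matter_pct < 3.0 else 0)
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium-high"
    if score >= 2:
        return "medium"
    return "low"
```

A GIS workflow would apply such a function to every STATSGO map unit and render the resulting classes as a sensitivity map.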
Smith, Sherilyn; Kogan, Jennifer R; Berman, Norman B; Dell, Michael S; Brock, Douglas M; Robins, Lynne S
2016-01-01
The ability to create a concise summary statement can be assessed as a marker for clinical reasoning. The authors describe the development and preliminary validation of a rubric to assess such summary statements. Between November 2011 and June 2014, four researchers independently coded 50 summary statements randomly selected from a large database of medical students' summary statements in virtual patient cases to each create an assessment rubric. Through an iterative process, they created a consensus assessment rubric and applied it to 60 additional summary statements. Cronbach alpha calculations determined the internal consistency of the rubric components, intraclass correlation coefficient (ICC) calculations determined the interrater agreement, and Spearman rank-order correlations determined the correlations between rubric components. Researchers' comments describing their individual rating approaches were analyzed using content analysis. The final rubric included five components: factual accuracy, appropriate narrowing of the differential diagnosis, transformation of information, use of semantic qualifiers, and a global rating. Internal consistency was acceptable (Cronbach alpha 0.771). Interrater reliability for the entire rubric was acceptable (ICC 0.891; 95% confidence interval 0.859-0.917). Spearman calculations revealed a range of correlations across cases. Content analysis of the researchers' comments indicated differences in their application of the assessment rubric. This rubric has potential as a tool for feedback and assessment. Opportunities for future study include establishing interrater reliability with other raters and on different cases, designing training for raters to use the tool, and assessing how feedback using this rubric affects students' clinical reasoning skills.
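The internal-consistency statistic reported above (Cronbach alpha 0.771 across the five rubric components) has a standard closed form. A minimal sketch of its computation, with synthetic scores in place of the study's data:

```python
import numpy as np

def cronbach_alpha(x):
    """Cronbach's alpha for an (n_statements x k_components) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    x = np.asarray(x, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)        # variance of each rubric component
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the summed score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent components yield alpha = 1.0
base = np.arange(10, dtype=float)
alpha_consistent = cronbach_alpha(np.column_stack([base, base, base]))
```

Values near 1 indicate that the components measure a common construct; values near or below 0 indicate that they do not.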
Strategies for improving family engagement during family-centered rounds.
Kelly, Michelle M; Xie, Anping; Carayon, Pascale; DuBenske, Lori L; Ehlenbach, Mary L; Cox, Elizabeth D
2013-04-01
Family-centered rounds (FCR) are recommended as standard practice in the pediatric inpatient setting; however, limited data exist on best practices promoting family engagement during rounds. To identify strategies to enhance family engagement during FCR using a recognized systems engineering approach. In this qualitative study, stimulated recall interviews using video-recorded rounding sessions were conducted with participants representing the various stakeholders on rounds (15 parents/children and 22 healthcare team [HCT] members) from 4 inpatient services at a children's hospital in Wisconsin. On video review, participants were asked to provide strategies that would increase family engagement on FCR. Qualitative content analysis of interview transcripts was performed in an iterative process. We identified 21 categories of strategies corresponding to 2 themes related to the structure and process of FCR. Strategies related to the structure of FCR were associated with all five recognized work system elements: people (HCT composition), tasks (HCT roles), organization (scheduling of rounds and HCT training), environment (location of rounds and HCT positioning), and tools and technologies (computer use). Strategies related to the FCR process were associated with three rounding phases: before (HCT and family preparation), during (eg, introductions, presentation content, communication style), and after (follow-up) FCR. We identified a range of strategies to enhance family engagement during FCR. These strategies both confirm prior work on the importance of the content and style of communication on rounds and highlight other factors within the hospital work system, like scheduling and computer use, which may affect family engagement in care. Copyright © 2013 Society of Hospital Medicine.
Using Storytelling to Address Oral Health Knowledge in American Indian and Alaska Native Communities
Gebel, Christina; Crawford, Andrew; Barker, Judith C.; Henshaw, Michelle; Garcia, Raul I.; Riedy, Christine; Wimsatt, Maureen A.
2018-01-01
Introduction We conducted a qualitative analysis to evaluate the acceptability of using storytelling as a way to communicate oral health messages regarding early childhood caries (ECC) prevention in the American Indian and Alaska Native (AIAN) population. Methods A traditional story was developed and pilot tested among AIAN mothers residing in 3 tribal locations in northern California. Evaluation of the story content and acceptability followed a multistep process consisting of initial feedback from 4 key informants, a focus group of 7 AIAN mothers, and feedback from the Community Advisory Board. Upon story approval, 9 additional focus group sessions (N = 53 participants) were held with AIAN mothers following an oral telling of the story. Results Participants reported that the story was culturally appropriate and used relatable characters. Messages about oral health were considered to be valuable. Concerns arose about the oral-only delivery of the story, the story content and length, story messages that conflicted with normative community values, and the intended target audiences. Feedback from focus group participants raised some doubts about the relevance and frequency of storytelling in AIAN communities today. Conclusion AIAN communities value the need for oral health messaging for community members. However, the acceptability of storytelling as a method for this messaging raises concerns, because the influence of modern technology and digital communications may weaken the acceptability of the oral tradition. Careful attention must be paid to the delivery mode, content, and targeting, with continual iterative feedback from community members to make these messages engaging, appropriate, relatable, and inclusive. PMID:29806581
Linear MHD stability analysis of post-disruption plasmas in ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aleynikova, K., E-mail: ksenia.aleynikova@gmail.com; Huijsmans, G. T. A.; Aleynikov, P.
2016-05-15
Most of the plasma current can be replaced by a runaway electron (RE) current during plasma disruptions in ITER. In this case the post-disruption plasma current profile is likely to be more peaked than the pre-disruption profile. The MHD activity of such a plasma will affect runaway electron generation and confinement and the dynamics of the plasma position evolution (vertical displacement event), limiting the timeframe for runaway electron and disruption mitigation. In the present paper, we evaluate the influence of the possible RE seed current parameters on the onset of MHD instabilities. By varying the RE seed current profile, we search for the subsequent plasma evolutions with the highest and the lowest MHD activity. This information can be applied to the development of a desirable ITER disruption scenario.
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
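The iteration the paper analyzes can be written as x_{k+1} = x_k + (J^T J + a_k I)^{-1} (J^T (y - F(x_k)) + a_k (x_0 - x_k)) with a geometrically decaying regularization a_k. A minimal numerical sketch on a toy nonlinear problem follows; the fixed iteration count and decay factor are illustrative stand-ins for the paper's heuristic stopping rule, which is not reproduced here.

```python
import numpy as np

def irgn(F, J, y, x0, alpha0=1.0, q=0.5, iters=30):
    """Iteratively regularized Gauss-Newton sketch:
    x_{k+1} = x_k + (J^T J + a_k I)^{-1} (J^T (y - F(x_k)) + a_k (x0 - x_k)),
    with a_k = alpha0 * q**k. A heuristic or discrepancy-based stopping
    rule is replaced by a fixed iteration count for simplicity."""
    x = x0.copy()
    for k in range(iters):
        a = alpha0 * q**k
        Jk = J(x)
        A = Jk.T @ Jk + a * np.eye(x.size)
        b = Jk.T @ (y - F(x)) + a * (x0 - x)
        x = x + np.linalg.solve(A, b)
    return x

# Toy nonlinear problem: fit y = c * exp(r * t) for parameters (c, r).
t = np.linspace(0.0, 1.0, 20)
F = lambda x: x[0] * np.exp(x[1] * t)
J = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
y = F(np.array([2.0, -1.0]))               # noise-free synthetic data
x_hat = irgn(F, J, y, x0=np.array([1.0, -0.5]))
```

With noisy data the iteration must be stopped early, which is exactly where the noise-level-free heuristic rule studied in the paper matters.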
Virtual fringe projection system with nonparallel illumination based on iteration
NASA Astrophysics Data System (ADS)
Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian
2017-06-01
Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy combining an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food and malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction preprocessing with principal component analysis (PCA) and with DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignments of unknown banned additives using the information in relative intensity differences. The results demonstrate that SERS spectroscopy combined with ICSF baseline correction and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety.
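The clip-and-refit loop at the heart of ICSF-style baseline correction can be sketched as follows: fit a smooth curve, clip the signal down to the fit so peaks stop pulling it up, and refit until stable. As an assumption to keep the sketch dependency-free, a cubic polynomial stands in for the cubic spline basis, and the "spectrum" is synthetic.

```python
import numpy as np

def iterative_baseline(x, y, iters=50):
    """Iterative baseline estimation in the spirit of ICSF: peaks are
    progressively excluded by clipping the working signal to the current
    smooth fit. A cubic polynomial is used here in place of a spline."""
    work = y.copy()
    for _ in range(iters):
        coef = np.polyfit(x, work, 3)
        fit = np.polyval(coef, x)
        work = np.minimum(work, fit)   # suppress points above the baseline fit
    return fit

x = np.linspace(0.0, 10.0, 400)
baseline = 0.5 + 0.05 * x                              # slowly varying background
peak = 3.0 * np.exp(-0.5 * ((x - 5.0) / 0.15) ** 2)    # narrow Raman-like band
y = baseline + peak
corrected = y - iterative_baseline(x, y)
```

After correction the narrow band survives nearly intact while the sloping background is removed, which is what makes the subsequent PCA/DPLS comparison of relative intensities meaningful.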
de Vries-Erich, Joy; Reuchlin, Kirsten; de Maaijer, Paul; van de Ridder, J M Monica
2017-03-01
Patient care and patient safety can be compromised by a lack of interprofessional collaboration and communication between healthcare providers. Interprofessional education (IPE) should therefore start during medical training and not be postponed until after graduation. This case study explored the current situation in the Dutch context, interviewing experts within medical education and pioneers of successful best practices to learn more about their experiences with IPE. Data analysis started while new data were still being collected, resulting in an iterative, constant comparative process. Using a strengths, weaknesses, opportunities, and threats (SWOT) analysis framework, we identified barriers and facilitators such as the lack of a collective professional language, insufficient time or budget, stakeholders' resistance, and hierarchy. Opportunities and strengths identified were developing a collective vision, more attention to patient safety, and the commitment of teachers. The facilitators and barriers relate to the organisational level of IPE and to educational content and practice. In particular, communication, cohesiveness, and support are influenced by these facilitators. An adequate identification of the SWOT elements in the current situation could prove beneficial for a successful implementation of IPE within the healthcare educational system.
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. Perturbations such as the non-spherical gravity of Earth and third-body perturbations due to the Sun and Moon are included in the analytical design process. In the design process, the design is first obtained using the iterative patched conic technique without the perturbations and then modified to include them. The modification is based on (i) backward analytical propagation, including the perturbations, of the state vector obtained at the sphere of influence from the iterative patched conic technique, and (ii) quantification of the deviations in the orbital elements at the periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend upon numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. Design analysis using the proposed technique provides realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
Recent advances in quantitative high throughput and high content data analysis.
Moutsatsos, Ioannis K; Parker, Christian N
2016-01-01
High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as recent advances in the analysis of cell-by-cell data sets derived from imaging or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality control monitoring and interpretation of the results, will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations, and the best features in massive data sets will be required. Ease of use will be important for these tools, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.
Fast algorithm for spectral mixture analysis of imaging spectrometer data
NASA Astrophysics Data System (ADS)
Schouten, Theo E.; Klein Gebbinck, Maurice S.; Liu, Z. K.; Chen, Shaowei
1996-12-01
Imaging spectrometers acquire images in many narrow spectral bands but have limited spatial resolution. Spectral mixture analysis (SMA) is used to determine the fractions of the ground cover categories (the endmembers) present in each pixel. In this paper a new iterative SMA method is presented and tested using a 30-band MAIS image. The time needed for each iteration is independent of the number of bands; thus the method can be used for spectrometers with a large number of bands. Further, a new method based on K-means clustering for obtaining endmembers from image data is described and compared with existing methods. Using the developed methods, the available MAIS image was analyzed using 2 to 6 endmembers.
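The core SMA computation is a per-pixel least-squares estimate of endmember fractions under a sum-to-one constraint. The abstract's specific iterative scheme is not detailed, so the sketch below shows a generic linear-unmixing version, imposing the constraint softly with a heavily weighted row of ones.

```python
import numpy as np

def unmix(E, pixel, w=1e3):
    """Estimate endmember fractions f minimizing ||E f - pixel|| subject
    (softly) to sum(f) = 1: append a heavily weighted all-ones row to the
    endmember matrix E (bands x endmembers) and solve by least squares."""
    A = np.vstack([E, w * np.ones(E.shape[1])])
    b = np.append(pixel, w)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Two synthetic 5-band endmember spectra (columns of E)
E = np.array([[0.1, 0.9],
              [0.2, 0.8],
              [0.3, 0.7],
              [0.4, 0.6],
              [0.5, 0.5]])
pixel = 0.3 * E[:, 0] + 0.7 * E[:, 1]   # a 30/70 mixed pixel
f = unmix(E, pixel)                      # recovers the fractions
```

Note the per-pixel cost grows with the number of bands here; the paper's contribution is precisely an iteration whose cost per step does not.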
NASA Astrophysics Data System (ADS)
Yamada, Y.; Shimokawa, T.; Shinomoto, S.; Yano, T.; Gouda, N.
2009-09-01
For the purpose of determining the celestial coordinates of stellar positions, consecutive observational images are laid overlapping each other, using stars that belong to multiple plates as tie points. In the analysis, one has to estimate not only the coordinates of individual plates, but also the possible expansion and distortion of each frame. This problem reduces to a least-squares fit that can in principle be solved by a huge matrix inversion, which is, however, impracticable. Here, we propose using Kalman filtering to perform the least-squares fit and implement a practical iterative algorithm. We also estimate the errors associated with this iterative method and suggest a design of overlapping plates that minimizes the error.
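The key idea, processing observations sequentially so the huge batch normal-matrix inversion is never formed, can be illustrated with a scalar-measurement Kalman filter for a generic linear least-squares problem. The plate-overlap structure of the real problem is abstracted away here; a random design matrix stands in for it.

```python
import numpy as np

def kalman_lsq(H, z, sigma0=1e6):
    """Sequential (Kalman-filter) solution of the linear least-squares
    problem z = H x + noise, one observation row at a time. A very
    diffuse prior (variance sigma0) makes the result effectively match
    the batch least-squares solution."""
    n = H.shape[1]
    x = np.zeros(n)                      # state estimate
    P = sigma0 * np.eye(n)               # estimate covariance
    for h, zi in zip(H, z):
        S = h @ P @ h + 1.0              # innovation variance (R = 1 assumed)
        K = P @ h / S                    # Kalman gain
        x = x + K * (zi - h @ x)         # measurement update
        P = P - np.outer(K, h @ P)       # covariance update
    return x

rng = np.random.default_rng(0)
H = rng.normal(size=(200, 4))            # stand-in for the plate design matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0])
z = H @ x_true                           # noise-free observations
x_hat = kalman_lsq(H, z)
```

Each update costs O(n^2) in the state dimension, independent of the total number of observations already absorbed, which is what makes the approach practical for very large overlapping-plate fits.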
Observation and analysis of pellet material ∇B drift on MAST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzotti, L.; Baylor, Larry R; Kochi, F.
2010-01-01
Pellet material deposited in a tokamak plasma experiences a drift towards the low field side of the torus induced by the magnetic field gradient. Plasma fuelling in ITER relies on the beneficial effect of this drift to increase the pellet deposition depth and fuelling efficiency. It is therefore important to analyse this phenomenon in present machines to improve the understanding of the ∇B-induced drift and the accuracy of the predictions for ITER. This paper presents a detailed analysis of pellet material drift in MAST pellet injection experiments based on the unique diagnostic capabilities available on this machine and compares the observations with predictions of state-of-the-art ablation and deposition codes.
Page layout analysis and classification for complex scanned documents
NASA Astrophysics Data System (ADS)
Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan
2011-09-01
A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. First, a text detection module based on wavelet analysis and run-length encoding (RLE) is employed. Local and global energy maps in the high-frequency bands of the wavelet domain are generated and used as initial text maps; further analysis using RLE yields a final text map. The second module detects image/photo and pictorial regions in the input document. A block-based classifier using basis vector projections identifies photo candidate regions, and a final photo map is then obtained by a probabilistic model based on Markov random field (MRF) maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs very effectively on a variety of simple and complex scanned document types from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content-based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.
NASA Astrophysics Data System (ADS)
Wilkerson, Michelle Hoda; Andrews, Chelsea; Shaban, Yara; Laina, Vasiliki; Gravel, Brian E.
2016-02-01
This paper explores the role that technology can play in engaging pre-service teachers with the iterative, "messy" nature of model-based inquiry. Over the course of 5 weeks, 11 pre-service teachers worked in groups to construct models of diffusion using a computational animation and simulation toolkit, and designed lesson plans for the toolkit. Content analyses of group discussions and lesson plans document attention to content, representation, revision, and evaluation as interwoven aspects of modeling over the course of the workshop. When animating, only content and representation were heavily represented in group discussions. When simulating, all four aspects were represented to different extents across groups. Those differences corresponded with different planned uses for the technology during lessons: to teach modeling, to engage learners with one another's ideas, or to reveal student ideas. We identify specific ways in which technology served an important role in eliciting teachers' knowledge and goals related to scientific modeling in the classroom.
NASA Astrophysics Data System (ADS)
Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.
2015-06-01
Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest, insufficient sample density at important features, or both. A new adaptive sampling technique is presented that directs sample collection in proportion to local information content, adequately capturing short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy based on signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined overall uncertainty level while improving local accuracy for important features. The potential of the proposed method has been further demonstrated with Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary-layer characterisation of a flat plate.
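A minimal sketch of curvature-driven adaptive sampling in one dimension, assuming a simple second-difference curvature proxy and interval bisection (the actual method also weighs space-filling and experimental uncertainty, which are omitted here):

```python
import numpy as np

def adaptive_sample(f, a, b, n_init=5, n_total=25):
    """Iteratively place 1-D samples where the estimated local curvature
    of the sampled signal is largest: find the interior node with the
    largest second-difference magnitude and bisect its wider side."""
    x = list(np.linspace(a, b, n_init))
    while len(x) < n_total:
        x.sort()
        y = [f(xi) for xi in x]
        # second differences as a cheap curvature proxy per interior node
        curv = [abs(y[i - 1] - 2 * y[i] + y[i + 1]) for i in range(1, len(x) - 1)]
        k = 1 + int(np.argmax(curv))           # most curved interior node
        left, right = x[k] - x[k - 1], x[k + 1] - x[k]
        x.append(x[k] - left / 2 if left > right else x[k] + right / 2)
    return np.array(sorted(x))

# sharp Gaussian bump: samples should cluster around x = 0.5
xs = adaptive_sample(lambda t: np.exp(-50 * (t - 0.5) ** 2), 0.0, 1.0)
```

The sample density ends up highest near the bump, mimicking the paper's goal of matching sampling density to local information content.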
X-ray crystal spectrometer upgrade for ITER-like wall experiments at JET
NASA Astrophysics Data System (ADS)
Shumack, A. E.; Rzadkiewicz, J.; Chernyshova, M.; Jakubowska, K.; Scholz, M.; Byszuk, A.; Cieszewski, R.; Czarski, T.; Dominik, W.; Karpinski, L.; Kasprowicz, G.; Pozniak, K.; Wojenski, A.; Zabolotny, W.; Conway, N. J.; Dalley, S.; Figueiredo, J.; Nakano, T.; Tyrrell, S.; Zastrow, K.-D.; Zoita, V.
2014-11-01
The high resolution X-Ray crystal spectrometer at the JET tokamak has been upgraded with the main goal of measuring the tungsten impurity concentration. This is important for understanding impurity accumulation in the plasma after installation of the JET ITER-like wall (main chamber: Be, divertor: W). This contribution provides details of the upgraded spectrometer with a focus on the aspects important for spectral analysis and plasma parameter calculation. In particular, we describe the determination of the spectrometer sensitivity: important for impurity concentration determination.
Estimation of carbon fibre composites as ITER divertor armour
NASA Astrophysics Data System (ADS)
Pestchanyi, S.; Safronov, V.; Landman, I.
2004-08-01
Exposure of the carbon fibre composites (CFC) NB31 and NS31 to multiple plasma pulses has been performed at the plasma guns MK-200UG and QSPA. Numerical simulation of the same CFCs under heat loads typical of ITER type I ELMs has been carried out using the code PEGASUS-3D. Comparative analysis of the numerical and experimental results allowed the erosion mechanism of CFC to be understood on the basis of the simulation results. A modification of the CFC structure has been proposed in order to decrease the armour erosion rate.
Spectral Analysis for Weighted Iterated Triangulations of Graphs
NASA Astrophysics Data System (ADS)
Chen, Yufei; Dai, Meifeng; Wang, Xiaoqian; Sun, Yu; Su, Weiyi
Much information about the structural properties and dynamical aspects of a network is measured by the eigenvalues of its normalized Laplacian matrix. In this paper, we aim to present a first study on the spectra of the normalized Laplacian of weighted iterated triangulations of graphs. We analytically obtain all the eigenvalues, as well as their multiplicities from two successive generations. As an example of application of these results, we then derive closed-form expressions for their multiplicative Kirchhoff index, Kemeny’s constant and number of weighted spanning trees.
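For a given graph, the quantities named above follow directly from the normalized Laplacian spectrum: by standard results, the multiplicative (degree-)Kirchhoff index is 2m Σ_{i≥2} 1/λ_i and Kemeny's constant is Σ_{i≥2} 1/λ_i, with m the number of edges and λ_i the nonzero eigenvalues. A numerical sketch using these generic formulas (the paper's closed-form expressions for weighted iterated triangulations are not reproduced):

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2} for adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(L))

def degree_kirchhoff_and_kemeny(A):
    lam = normalized_laplacian_spectrum(A)
    m = A.sum() / 2                      # number of edges
    inv_nonzero = 1.0 / lam[1:]          # skip the single zero eigenvalue
    return 2 * m * inv_nonzero.sum(), inv_nonzero.sum()

# triangle K3: normalized Laplacian eigenvalues are 0, 3/2, 3/2
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
kf, kemeny = degree_kirchhoff_and_kemeny(A)
```

For K3 this gives a degree-Kirchhoff index of 8 and a Kemeny constant of 4/3, consistent with the pairwise effective resistance 2/3.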
Gaussian-Beam/Physical-Optics Design Of Beam Waveguide
NASA Technical Reports Server (NTRS)
Veruttipong, Watt; Chen, Jacqueline C.; Bathker, Dan A.
1993-01-01
In this iterative method of designing a wideband beam-waveguide feed for a paraboloidal-reflector antenna, a Gaussian-beam approximation is alternated with a more nearly exact physical-optics analysis of diffraction. The design includes curved and straight reflectors guiding radiation from the feed horn to the subreflector. For the iterative design calculations, curved mirrors are mathematically modeled as thin lenses. Each distance Li is the combined length of two straight-line segments intersecting at one of the flat mirrors. The method is useful for designing beam-waveguide reflectors or mirrors required to have diameters less than approximately 30 wavelengths at one or more intended operating frequencies.
Network structures between strategies in iterated prisoners' dilemma games
NASA Astrophysics Data System (ADS)
Kim, Young Jin; Roh, Myungkyoon; Son, Seung-Woo
2014-02-01
We use replicator dynamics to study an iterated prisoners' dilemma game with memory. In this study, we investigate the characteristics of all 32 possible strategies with a single-step memory by observing the results when each strategy encounters another one. Based on these results, we define similarity measures between the 32 strategies and perform a network analysis of the relationship between the strategies by constructing a strategies network. Interestingly, we find that a win-lose circulation, like rock-paper-scissors, exists between strategies and that the circulation results from one unusual strategy.
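The 32 single-step-memory strategies can be enumerated as a first move plus a response to each of the four possible previous-round outcomes. A minimal sketch of the pairwise-encounter computation, assuming the conventional payoff values T=5, R=3, P=1, S=0 (the paper's exact payoffs and similarity measure are not reproduced here):

```python
from itertools import product

# Payoff to the first player for (my move, opponent move); C = 1, D = 0
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}

def play(s1, s2, rounds=50):
    """s = (first_move, {(my_prev, opp_prev): next_move}); deterministic
    iterated prisoner's dilemma; returns average payoff per round."""
    m1, m2 = s1[0], s2[0]
    p1 = p2 = 0
    for _ in range(rounds):
        p1 += PAYOFF[(m1, m2)]
        p2 += PAYOFF[(m2, m1)]
        m1, m2 = s1[1][(m1, m2)], s2[1][(m2, m1)]
    return p1 / rounds, p2 / rounds

# all 32 single-step-memory strategies: first move + response to each of
# the four possible previous-round outcomes (CC, CD, DC, DD)
outcomes = [(1, 1), (1, 0), (0, 1), (0, 0)]
strategies = [(first, dict(zip(outcomes, resp)))
              for first in (1, 0)
              for resp in product((1, 0), repeat=4)]

tit_for_tat = (1, {(1, 1): 1, (1, 0): 0, (0, 1): 1, (0, 0): 0})
all_defect = (0, {o: 0 for o in outcomes})
```

Playing every strategy against every other yields the 32x32 payoff table from which similarity measures and the strategies network could then be built.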
Coleman, S; Nixon, J; Keen, J; Muir, D; Wilson, L; McGinnis, E; Stubbs, N; Dealey, C; Nelson, E A
2016-11-16
Variation in development methods of Pressure Ulcer Risk Assessment Instruments has led to inconsistent inclusion of risk factors and concerns about content validity. A new evidence-based Risk Assessment Instrument, the Pressure Ulcer Risk Primary Or Secondary Evaluation Tool (PURPOSE-T), was developed as part of a National Institute for Health Research (NIHR) funded Pressure Ulcer Research Programme (PURPOSE: RP-PG-0407-10056). This paper reports the pre-test phase to assess and improve PURPOSE-T's acceptability and usability and to confirm its content validity. A descriptive study incorporating cognitive pre-testing methods and integration of service user views was undertaken over 3 cycles comprising PURPOSE-T training, a focus group and one-to-one think-aloud interviews. Clinical nurses from 2 acute and 2 community NHS Trusts were grouped according to job role. Focus group participants used 3 vignettes to complete PURPOSE-T assessments and then participated in the focus group. Think-aloud participants were interviewed during their completion of PURPOSE-T. After each pre-test cycle, analysis was undertaken and adjustments/improvements were made to PURPOSE-T in an iterative process. This incorporated the use of descriptive statistics for data completeness and decision rule compliance, and directed content analysis for interview and focus group data. Data were collected April 2012-June 2012. Thirty-four nurses participated in 3 pre-test cycles. Data from 3 focus groups and 12 think-aloud interviews, incorporating 101 PURPOSE-T assessments, led to changes to improve instrument content and design, flow and format, decision support and item-specific wording. Acceptability and usability were demonstrated by improved data completion and appropriate risk pathway allocation. The pre-test also confirmed content validity with clinical nurses. The pre-test was an important step in the development of the preliminary PURPOSE-T, and the methods used may have wider instrument development application.
PURPOSE-T proposes a new approach to pressure ulcer risk assessment, incorporating a screening stage, the inclusion of skin status to distinguish between those who require primary prevention and those who require secondary prevention/treatment, and the use of colour to support pathway allocation and decision making. Further clinical evaluation is planned to assess the reliability and validity of PURPOSE-T and its impact on care processes and patient outcomes.
2016-03-01
Game rules described valid moves, allowing a player to generate a memory graph performing improved C program verification. Subject terms: Formal Verification, Static Analysis, Abstract Interpretation, Pointer Analysis, Fixpoint Iteration.
NASA Astrophysics Data System (ADS)
Sudevan, Vipin; Aluri, Pavan K.; Yadav, Sarvesh Kumar; Saha, Rajib; Souradeep, Tarun
2017-06-01
We report an improved technique for diffuse foreground minimization in Cosmic Microwave Background (CMB) maps using a new multiphase iterative harmonic-space internal-linear-combination (HILC) approach. Our method nullifies a foreground leakage that was present in the usual iterative HILC method. In phase 1 of the multiphase technique, we obtain an initial cleaned map using the single-iteration HILC approach over the desired portion of the sky. In phase 2, we obtain a final CMB map using the iterative HILC approach; however, to nullify the leakage, during each iteration some of the regions of the sky that are not being cleaned in the current iteration are replaced by the corresponding cleaned portions of the phase 1 map. We bring all input frequency maps to a common and maximum possible beam and pixel resolution at the beginning of the analysis, which significantly reduces data redundancy, memory usage, and computational cost, and avoids, during the HILC weight calculation, the deconvolution of partial-sky harmonic coefficients by the azimuthally symmetric beam and pixel window functions, which in a strict mathematical sense are not well defined. Using WMAP 9-year and Planck 2015 frequency maps, we obtain foreground-cleaned CMB maps and a CMB angular power spectrum for the multipole range 2 ≤ ℓ ≤ 2500. Our power spectrum matches the published Planck results, with some differences in different multipole ranges. We validate our method by performing Monte Carlo simulations. Finally, we show that the weights for HILC foreground minimization have the intrinsic characteristic that they also tend to produce a statistically isotropic CMB map.
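The core ILC weight calculation minimizes the variance of the combined map subject to unit response to the (frequency-independent) CMB. A pixel-space toy sketch with simulated channels (the paper's multiphase harmonic-space machinery is not reproduced; channel count, foreground scalings and noise levels are illustrative assumptions):

```python
import numpy as np

def ilc_weights(cov):
    """Internal-linear-combination weights minimizing the variance of
    w @ maps subject to sum(w) = 1, i.e. unit response to the CMB,
    which is frequency-independent in thermodynamic units."""
    e = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, e)
    return w / (e @ w)

rng = np.random.default_rng(0)
n = 100_000
cmb = rng.normal(0, 1.0, n)                     # common signal
fg = rng.normal(0, 1.0, n)                      # foreground template
maps = np.array([cmb + a * fg + rng.normal(0, 0.05, n)
                 for a in (0.5, 1.0, 2.0)])     # three "frequency channels"
w = ilc_weights(np.cov(maps))
cleaned = w @ maps                              # foreground-suppressed map
```

Because the foreground scales differently across channels while the CMB does not, the variance-minimizing weights nearly null the foreground while preserving the common signal.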
NASA Astrophysics Data System (ADS)
Zhong, XiaoXu; Liao, ShiJun
2018-01-01
Analytic approximations of the von Kármán plate equations in integral form for a circular plate under external uniform pressure of arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed, for either a given external uniform pressure Q or a given central deflection, respectively. Both are valid for uniform pressure of arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure of arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the von Kármán plate equations in differential form is just a special case of the HAM for the equations in integral form considered in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.
Tanaka, Yuji; Yamashita, Takako; Nagoshi, Masayasu
2017-04-01
Hydrocarbon contamination introduced during point, line and map analyses in field emission electron probe microanalysis (FE-EPMA) was investigated to enable reliable quantitative analysis of trace amounts of carbon in steels. The increment of contamination on pure iron in point analysis is proportional to the number of iterations of beam irradiation, but not to the accumulated irradiation time. A combination of a longer dwell time and a single measurement with a liquid nitrogen (LN2) trap as an anti-contamination device (ACD) is sufficient for quantitative point analysis. However, in line and map analyses, contamination increases with irradiation time in addition to the number of iterations, even though the LN2 trap and a plasma cleaner are used as ACDs. Thus, a shorter dwell time and a single measurement are preferred for line and map analyses, although it is difficult to eliminate the influence of contamination. While ring-like contamination around the irradiation point grows during electron-beam irradiation, contamination at the irradiation point increases during the blanking time after irradiation. This can explain the increment of contamination in iterative point analysis as well as in line and map analyses. Among the ACDs tested in this study, specimen heating at 373 K has a significant contamination-inhibition effect. This technique makes it possible to obtain line and map analysis data with minimum influence of contamination. The above-mentioned FE-EPMA data are presented and discussed in terms of the contamination-formation mechanisms and the preferable experimental conditions for the quantification of trace carbon in steels.
NASA Technical Reports Server (NTRS)
Ables, Brett
2014-01-01
Multi-stage launch vehicles with solid rocket motors (SRMs) face design optimization challenges, especially when the mission scope changes frequently. Significant performance benefits can be realized if the solid rocket motors are optimized to the changing requirements. While SRMs represent a fixed performance at launch, rapid design iterations enable flexibility at design time, yielding significant performance gains. The streamlining and integration of SRM design and analysis can be achieved with improved analysis tools. While powerful and versatile, the Solid Performance Program (SPP) is not conducive to rapid design iteration. Performing a design iteration with SPP and a trajectory solver is a labor intensive process. To enable a better workflow, SPP, the Program to Optimize Simulated Trajectories (POST), and the interfaces between them have been improved and automated, and a graphical user interface (GUI) has been developed. The GUI enables real-time visual feedback of grain and nozzle design inputs, enforces parameter dependencies, removes redundancies, and simplifies manipulation of SPP and POST's numerous options. Automating the analysis also simplifies batch analyses and trade studies. Finally, the GUI provides post-processing, visualization, and comparison of results. Wrapping legacy high-fidelity analysis codes with modern software provides the improved interface necessary to enable rapid coupled SRM ballistics and vehicle trajectory analysis. Low cost trade studies demonstrate the sensitivities of flight performance metrics to propulsion characteristics. Incorporating high fidelity analysis from SPP into vehicle design reduces performance margins and improves reliability. By flying an SRM designed with the same assumptions as the rest of the vehicle, accurate comparisons can be made between competing architectures. 
In summary, this flexible workflow is a critical component to designing a versatile launch vehicle model that can accommodate a volatile mission scope.
Tan, T J; Lau, Kenneth K; Jackson, Dana; Ardley, Nicholas; Borasu, Adina
2017-04-01
The purpose of this study was to assess the efficacy of model-based iterative reconstruction (MBIR), statistical iterative reconstruction (SIR), and filtered back projection (FBP) image reconstruction algorithms in the delineation of ureters and overall image quality on non-enhanced computed tomography of the renal tracts (NECT-KUB). This was a prospective study of 40 adult patients who underwent NECT-KUB for investigation of ureteric colic. Images were reconstructed using FBP, SIR, and MBIR techniques and individually and randomly assessed by two blinded radiologists. Parameters measured were overall image quality, presence of ureteric calculus, presence of hydronephrosis or hydroureters, image quality of each ureteric segment, total length of ureters unable to be visualized, attenuation values of image noise, and retroperitoneal fat content for each patient. There were no diagnostic discrepancies between image reconstruction modalities for urolithiasis. Overall image quality and the image quality of each ureteric segment were superior using MBIR (67.5 % rated as 'Good to Excellent' vs. 25 % in SIR and 2.5 % in FBP). The lengths of non-visualized ureteric segments were shortest using MBIR (55.0 % measured less than 5 cm vs. 33.8 % in SIR and 10 % in FBP). MBIR reduced overall image noise by up to 49.36 % relative to SIR and 71.02 % relative to FBP. The MBIR technique improves overall image quality and visualization of ureters over FBP and SIR.
Laboratory-based validation of the baseline sensors of the ITER diagnostic residual gas analyzer
NASA Astrophysics Data System (ADS)
Klepper, C. C.; Biewer, T. M.; Marcus, C.; Andrew, P.; Gardner, W. L.; Graves, V. B.; Hughes, S.
2017-10-01
The divertor-specific ITER Diagnostic Residual Gas Analyzer (DRGA) will provide essential information relating to DT fusion plasma performance. This includes pulse-resolving measurements of the fuel isotopic mix reaching the pumping ducts, as well as the concentration of the helium generated as the ash of the fusion reaction. In the present baseline design, the cluster of sensors attached to this diagnostic's differentially pumped analysis chamber assembly includes a radiation-compatible version of a commercial quadrupole mass spectrometer, as well as an optical gas analyzer using a plasma-based light excitation source. This paper reports on a laboratory study intended to validate the performance of this sensor cluster, with emphasis on the detection limit of the isotopic measurement. This validation study was carried out in a laboratory set-up that closely prototyped the analysis chamber assembly configuration of the baseline design. This includes an ITER-specific placement of the optical gas measurement downstream from the first turbine of the chamber's turbo-molecular pump to provide sufficient light emission while preserving the gas dynamics conditions that allow for ~1 s response time from the sensor cluster [1].
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method appears to be a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination, followed by the least squares iterative regression method, was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the series analyzed, in decreasing order of significance: from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
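The idea of iteratively extracting sine functions by least squares in decreasing order of significance can be sketched as a greedy fit-and-subtract over a trial frequency grid (the authors' Scilab implementation and principal-component preprocessing are not reproduced; signal and grid below are illustrative):

```python
import numpy as np

def extract_sines(t, y, freqs, n_components=2):
    """Greedy least-squares extraction: at each step, fit A*sin + B*cos at
    every trial frequency, keep the fit that most reduces the residual
    sum of squares, subtract it, and repeat on the residual."""
    resid = y.astype(float).copy()
    found = []
    for _ in range(n_components):
        best = None
        for f in freqs:
            X = np.column_stack([np.sin(2 * np.pi * f * t),
                                 np.cos(2 * np.pi * f * t)])
            coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
            rss = np.sum((resid - X @ coef) ** 2)
            if best is None or rss < best[0]:
                best = (rss, f, coef, X)
        _, f, coef, X = best
        found.append((f, float(np.hypot(*coef))))   # (frequency, amplitude)
        resid = resid - X @ coef
    return found, resid

t = np.arange(0.0, 200.0, 1.0)
y = 3.0 * np.sin(2 * np.pi * 0.05 * t) + 1.5 * np.sin(2 * np.pi * 0.01 * t + 0.7)
freqs = np.arange(0.0025, 0.1001, 0.0025)
components, resid = extract_sines(t, y, freqs)
```

The components come out ordered by significance: the 3.0-amplitude sine at f = 0.05 first, then the 1.5-amplitude sine at f = 0.01, leaving a near-zero residual.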
CAD-Based Shielding Analysis for ITER Port Diagnostics
NASA Astrophysics Data System (ADS)
Serikov, Arkady; Fischer, Ulrich; Anthoine, David; Bertalot, Luciano; De Bock, Maartin; O'Connor, Richard; Juarez, Rafael; Krasilnikov, Vitaly
2017-09-01
Radiation shielding analysis conducted in support of the design development of the contemporary diagnostic systems integrated inside the ITER ports relies on the use of CAD models. This paper presents CAD-based MCNP Monte Carlo radiation transport and activation analyses for the Diagnostic Upper and Equatorial Port Plugs (UPP #3 and EPP #8, #17). The creation of the complicated 3D MCNP models of the diagnostic systems was substantially accelerated by application of the CAD-to-MCNP converter programs MCAM and McCad. High-performance computing resources of the Helios supercomputer allowed the MCNP parallel transport calculations to be sped up with the MPI/OpenMP interface. The shielding solutions found could be universal, reducing port R&D costs. The shield block behind the Tritium and Deposit Monitor (TDM) optical box was added to study its influence on the Shut-Down Dose Rate (SDDR) in the Port Interspace (PI) of EPP#17. The influence of neutron streaming along the Lost Alpha Monitor (LAM) on the neutron energy spectra calculated in the Tangential Neutron Spectrometer (TNS) of EPP#8 was also studied. For the UPP#3 with Charge eXchange Recombination Spectroscopy (CXRS-core), the analysis revealed excessive neutron streaming along the CXRS shutter, which should be prevented in a further design iteration.
Unsteady flow model for circulation-control airfoils
NASA Technical Reports Server (NTRS)
Rao, B. M.
1979-01-01
An analysis and a numerical lifting-surface method are developed for predicting the unsteady airloads on two-dimensional circulation-control airfoils in incompressible flow. The analysis and the computer program are validated by correlating the computed unsteady airloads with test data and also with other theoretical solutions. Additionally, a mathematical model for predicting the bending-torsion flutter of a two-dimensional airfoil (a reference section of a wing or rotor blade) and a computer program using an iterative scheme are developed. The flutter program has a provision for using either the CC airfoil airloads program or the Theodorsen hard-flap solution to compute the unsteady lift and moment used in the flutter equations. The adopted mathematical model and iterative scheme are used to perform a flutter analysis of a typical CC rotor blade reference section. The program appears to work well within the basic assumption of incompressible flow.
NASA Technical Reports Server (NTRS)
Yan, Jerry C.
1987-01-01
In concurrent systems, a major responsibility of the resource management system is to decide how the application program is to be mapped onto the multi-processor. Instead of using abstract program and machine models, a generate-and-test framework known as 'post-game analysis', based on data gathered during program execution, is proposed. Each iteration consists of (1) (a simulation of) an execution of the program; (2) analysis of the data gathered; and (3) the proposal of a new mapping that would have a smaller execution time. Heuristics are applied to predict execution-time changes in response to small perturbations of the current mapping. An initial experiment was carried out using simple strategies on 'pipeline-like' applications. The results obtained from four simple strategies demonstrated that for this kind of application, even simple strategies can produce acceptable speed-up with a small number of iterations.
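A toy sketch of the generate-and-test loop, assuming a deliberately simple cost model in which "execution time" is the maximum per-processor load (task names, costs and the perturbation heuristic below are illustrative, not the paper's strategies):

```python
import random

def makespan(mapping, cost):
    """Toy model: per-processor load is the sum of its task costs; the
    'execution time' is the maximum load (communication ignored)."""
    load = {}
    for task, proc in mapping.items():
        load[proc] = load.get(proc, 0) + cost[task]
    return max(load.values())

def post_game(cost, n_procs, iters=200, seed=1):
    rng = random.Random(seed)
    tasks = list(cost)
    mapping = {t: rng.randrange(n_procs) for t in tasks}   # initial mapping
    best = makespan(mapping, cost)
    for _ in range(iters):
        # "post-game": inspect the last run, move one task off the
        # busiest processor, and keep the change only if it is faster
        loads = [sum(cost[t] for t in tasks if mapping[t] == p)
                 for p in range(n_procs)]
        busiest = loads.index(max(loads))
        t = rng.choice([t for t in tasks if mapping[t] == busiest])
        trial = dict(mapping)
        trial[t] = loads.index(min(loads))                 # least loaded
        if makespan(trial, cost) < best:
            mapping, best = trial, makespan(trial, cost)
    return mapping, best

cost = {f"stage{i}": c for i, c in enumerate([5, 3, 8, 2, 7, 4, 6, 1])}
mapping, best = post_game(cost, n_procs=4)
```

Each pass mirrors the framework's cycle: execute (here, evaluate the model), analyze the gathered data (per-processor loads), and propose a perturbed mapping.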
Beam ion acceleration by ICRH in JET discharges
NASA Astrophysics Data System (ADS)
Budny, R. V.; Gorelenkova, M.; Bertelli, N.; JET Collaboration
2015-11-01
The ion Monte-Carlo orbit integrator NUBEAM, used in TRANSP, has been enhanced to include an "RF-kick" operator to simulate the interaction of RF fields and fast ions. The RF quasi-linear operator (localized in space) uses a second R-Z orbit integrator. We apply this to the analysis of recent JET discharges using ICRH with the ITER-like first wall. For an example high-performance Hybrid discharge, for which standard TRANSP analysis simulated a DD neutron emission rate below measurements, re-analysis using the RF-kick operator results in increased beam parallel and perpendicular energy densities (~40% and ~15%, respectively) and increased beam-thermal neutron emission (~35%), bringing the total rate closer to the measurement. Checks of the numerics, comparisons with measurements, and ITER implications will be presented. Supported in part by the US DoE contract DE-AC02-09CH11466 and by EUROfusion No 633053.
Zhao, Ming; Li, Yu; Peng, Leilei
2014-01-01
We report a fast non-iterative lifetime data analysis method for the Fourier multiplexed frequency-sweeping confocal FLIM (Fm-FLIM) system [Opt. Express 22, 10221 (2014)]. The new method, named the R-method, allows fast multi-channel lifetime image analysis in the system's FPGA data processing board. Experimental tests proved that the performance of the R-method is equivalent to that of single-exponential iterative fitting, and its sensitivity is well suited for time-lapse FLIM-FRET imaging of live cells, for example cyclic adenosine monophosphate (cAMP) level imaging with GFP-Epac-mCherry sensors. With the R-method and its FPGA implementation, multi-channel lifetime images can now be generated in real time on the multi-channel frequency-sweeping FLIM system, and live readout of FRET sensors can be performed during time-lapse imaging.
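For context, the standard single-frequency (phasor) lifetime estimate that frequency-domain FLIM methods build on can be sketched as follows: for a single-exponential decay excited periodically at angular frequency ω, the first-harmonic phase φ satisfies tan φ = ωτ. This is the textbook relation, not the paper's R-method; the repetition rate and lifetime below are illustrative.

```python
import numpy as np

def lifetime_from_phase(signal, period_s):
    """Single-frequency phasor estimate: project the periodic decay onto
    its first harmonic and recover tau from tan(phi) = w * tau."""
    n = len(signal)
    w = 2 * np.pi / period_s
    t = np.arange(n) * (period_s / n)
    g = np.sum(signal * np.cos(w * t)) / np.sum(signal)
    s = np.sum(signal * np.sin(w * t)) / np.sum(signal)
    phi = np.arctan2(s, g)
    return np.tan(phi) / w

# periodic single-exponential decay sampled over one repetition period
tau_true = 2.5e-9            # 2.5 ns lifetime
period = 25.0e-9             # 40 MHz repetition
n = 256
t = np.arange(n) * (period / n)
decay = np.exp(-t / tau_true)
tau_est = lifetime_from_phase(decay, period)
```

Because the projection is a fixed linear operation per pixel, estimates of this kind are well suited to non-iterative, hardware (FPGA) implementation.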
IonGAP: integrative bacterial genome analysis for Ion Torrent sequence data.
Baez-Ortega, Adrian; Lorenzo-Diaz, Fabian; Hernandez, Mariano; Gonzalez-Vila, Carlos Ignacio; Roda-Garcia, Jose Luis; Colebrook, Marcos; Flores, Carlos
2015-09-01
We introduce IonGAP, a publicly available Web platform designed for the analysis of whole bacterial genomes using Ion Torrent sequence data. Besides assembly, it integrates a variety of comparative genomics, annotation and bacterial classification routines, based on the widely used FASTQ, BAM and SRA file formats. Benchmarking with different datasets evidenced that IonGAP is a fast, powerful and simple-to-use bioinformatics tool. By releasing this platform, we aim to translate low-cost bacterial genome analysis for microbiological prevention and control in healthcare, agroalimentary and pharmaceutical industry applications. IonGAP is hosted by the ITER's Teide-HPC supercomputer and is freely available on the Web for non-commercial use at http://iongap.hpc.iter.es. Contact: mcolesan@ull.edu.es or cflores@ull.edu.es. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Peng, Heng; Liu, Yinghua; Chen, Haofeng
2018-05-01
In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions in which the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers are updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and the accuracy of the proposed algorithm.
Using formal methods for content validation of medical procedure documents.
Cota, Érika; Ribeiro, Leila; Bezerra, Jonas Santos; Costa, Andrei; da Silva, Rosiana Estefane; Cota, Gláucia
2017-08-01
We propose the use of a formal approach to support content validation of a standard operating procedure (SOP) for a therapeutic intervention. Such an approach provides a useful tool to identify ambiguities, omissions and inconsistencies, and improves the applicability and efficacy of documents in health settings. We apply and evaluate a methodology originally proposed for the verification of software specification documents on a specific SOP. The verification methodology uses the graph formalism to model the document. Semi-automatic analysis identifies possible problems in the model and in the original document. The verification is an iterative process that identifies possible faults in the original text that should be revised by its authors and/or specialists. The proposed method was able to identify 23 possible issues in the original document (ambiguities, omissions, redundant information, and inaccuracies, among others). The formal verification process aided the specialists to consider a wider range of usage scenarios and to identify which instructions form the kernel of the proposed SOP and which ones represent additional or required knowledge that is mandatory for the correct application of the medical document. By using the proposed verification process, a simpler and yet more complete SOP could be produced. As a consequence, during the validation process the experts received a more mature document and could focus on the technical aspects of the procedure itself.
Validation of a method for assessing resident physicians' quality improvement proposals.
Leenstra, James L; Beckman, Thomas J; Reed, Darcy A; Mundell, William C; Thomas, Kris G; Krajicek, Bryan J; Cha, Stephen S; Kolars, Joseph C; McDonald, Furman S
2007-09-01
Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement. Valid approaches to assess QI proposals are lacking. We developed an instrument for assessing resident QI proposals, the Quality Improvement Proposal Assessment Tool (QIPAT-7), and determined its validity and reliability. QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised. Seven raters used the instrument to assess 45 resident QI proposals. Principal factor analysis was used to explore the dimensionality of instrument scores. Cronbach's alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively. QIPAT-7 items comprised a single factor (eigenvalue = 3.4), suggesting a single assessment dimension. Interrater reliability for each item (range 0.79 to 0.93) and internal consistency reliability among the items (Cronbach's alpha = 0.87) were high. This method for assessing resident physician QI proposals is supported by content and internal structure validity evidence. QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs. Correlations should also be made between assessment scores and criteria for QI proposal success such as implementation of QI proposals, resident scholarly productivity, and improved patient outcomes.
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
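The sensitivity-matrix iteration can be sketched with a toy model. The two-parameter `pressures` function and the finite-difference Jacobian below are illustrative assumptions (the paper derives the partial derivatives analytically from the baseline panel equations rather than by differencing):

```python
import numpy as np

# Assumed stand-in for the panel-method analysis: a smooth nonlinear map
# from geometry parameters to surface pressures.
def pressures(g):
    return np.array([g[0] + 0.1 * g[1] ** 2, g[1] + 0.05 * g[0] * g[1]])

def design_iteration(target, g, tol=1e-10, max_iter=50):
    """Drive the pressure residual to zero with a first-order sensitivity
    matrix recomputed each cycle, in the spirit of the paper's scheme."""
    for _ in range(max_iter):
        r = target - pressures(g)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-6                      # finite-difference sensitivity matrix
        J = np.column_stack([
            (pressures(g + eps * e) - pressures(g)) / eps for e in np.eye(len(g))
        ])
        g = g + np.linalg.solve(J, r)   # geometry perturbation for this cycle
    return g

target = np.array([1.0, 2.0])
g = design_iteration(target, np.zeros(2))
print(np.allclose(pressures(g), target, atol=1e-6))  # → True
```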
A novel dynamical community detection algorithm based on weighting scheme
NASA Astrophysics Data System (ADS)
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates an iterative process on the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stopping time of the iteration, we utilize a new stability measure defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
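A hedged sketch of the general idea (not the paper's DI algorithm): membership vectors iterated by a weighted random walk on the network, so that nodes in the same community develop similar membership profiles. The two-clique network and the number of iterations are assumptions for illustration.

```python
import numpy as np

# Two cliques {0,1,2} and {3,4,5} joined by one weak link.
W = np.zeros((6, 6))
for grp in [(0, 1, 2), (3, 4, 5)]:
    for i in grp:
        for j in grp:
            if i != j:
                W[i, j] = 1.0
W[2, 3] = W[3, 2] = 0.1

P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
M = np.eye(6)                           # initial membership: each node its own community
for _ in range(5):                      # a few iterations keep the communities distinct
    M = P @ M                           # spread membership along weighted links

# Same-community membership vectors overlap far more than cross-community ones.
print(M[0] @ M[1] > M[0] @ M[4])  # → True
```

Stopping after few iterations matters: iterating to the stationary distribution would erase the community signal, which is why the paper estimates an optimal stopping time from a stability measure.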
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
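The Taylor-estimate-plus-iteration idea can be illustrated on a small assumed model (diagonal stiffness matrices chosen for transparency, not taken from the paper): the first-order estimate seeds a refinement cycle that reuses the already-factorized baseline operator.

```python
import numpy as np

n = 30
K0 = np.diag(np.arange(1.0, n + 1))                  # baseline stiffness (analyzed design)
dK = np.diag(0.04 * np.sin(np.arange(1.0, n + 1)))   # sensitivity to one design variable
f = np.ones(n)
d = 5.0                                              # sizeable design change
K = K0 + d * dK                                      # modified design

u0 = np.linalg.solve(K0, f)                          # baseline solution
du = np.linalg.solve(K0, -dK @ u0)                   # first-order sensitivity vector
u_taylor = u0 + d * du                               # Taylor series estimate

# One refinement cycle: correct the residual using the baseline operator only.
r = f - K @ u_taylor
u_refined = u_taylor + np.linalg.solve(K0, r)

u_exact = np.linalg.solve(K, f)
err_taylor = np.linalg.norm(u_taylor - u_exact)
err_refined = np.linalg.norm(u_refined - u_exact)
print(err_refined < err_taylor)  # → True
```

A single cycle improves the estimate because the iteration error contracts by roughly the relative size of the design change, echoing the abstract's claim about one iteration cycle.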
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
Eighteen-month-olds' memory for short movies of simple stories.
Kingo, Osman S; Krøjgaard, Peter
2015-04-01
This study investigated twenty-four 18-month-olds' memory for dynamic visual stimuli. During the first visit participants saw one of two brief movies (30 seconds) with a simple storyline displayed in four iterations. After 2 weeks, memory was tested in the visual paired comparison paradigm in which the familiar and the novel movie were contrasted simultaneously and displayed in two iterations for a total of 60 seconds. Eye-tracking revealed that participants fixated the familiar movie significantly more than the novel movie, thus indicating memory for the familiar movie. Furthermore, time-dependent analysis of the data revealed that individual differences in the looking patterns for the first and second iteration of the movies were related to individual differences in productive vocabulary. We suggest that infants' vocabulary may be indicative of their ability to understand and remember the storyline of the movies, thereby affecting their subsequent memory. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1990-01-01
Run time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform execution-time preprocessing, and executors, which are transformed versions of the source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
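The inspector/executor split can be sketched as follows. The dependence rule (flow and output dependences only; anti-dependences are ignored for brevity) and the toy loop are illustrative assumptions, not the paper's implementation:

```python
def inspector(reads, writes, n):
    """Assign each iteration to a wavefront using run-time access patterns."""
    last_writer = {}                  # memory location -> last iteration writing it
    wavefront = [0] * n
    for i in range(n):
        # An iteration must follow any iteration whose write it reads or overwrites.
        deps = [last_writer[loc] for loc in reads[i] + writes[i] if loc in last_writer]
        wavefront[i] = 1 + max((wavefront[d] for d in deps), default=-1)
        for loc in writes[i]:
            last_writer[loc] = i
    return wavefront

def executor(wavefront, body, n):
    """Run iterations wavefront by wavefront; within a front they are independent."""
    for w in range(max(wavefront) + 1):
        for i in range(n):            # concurrently on a real machine
            if wavefront[i] == w:
                body(i)

# Loop with indirect accesses, a[idx[i]] += a[src[i]]: dependences are unknown
# until idx and src become available at run time.
idx, src = [0, 1, 1, 2], [3, 3, 0, 0]
a = [1.0, 1.0, 1.0, 1.0]
fronts = inspector([[s] for s in src], [[d] for d in idx], 4)
executor(fronts, lambda i: a.__setitem__(idx[i], a[idx[i]] + a[src[i]]), 4)
print(fronts, a)  # → [0, 0, 1, 1] [2.0, 4.0, 3.0, 1.0]
```

Because the wavefront computation depends only on `idx` and `src`, its cost is amortized whenever the loop is re-executed with the same dependency structure, as the abstract notes.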
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
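The reweighting idea can be sketched in a scalar (non-block) setting. This is an assumption-level illustration, not the irMxNE implementation (which uses block coordinate descent and active sets): each sweep solves a weighted convex l1 surrogate by plain ISTA, then tightens the weights according to a sqrt(|x|)-type penalty, which reduces the amplitude bias of a single convex pass.

```python
import numpy as np

def ista(A, b, lam, w, x0, n_iter=500):
    """Weighted-l1 surrogate solved by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)  # soft threshold
    return x

def iterative_reweighting(A, b, lam, n_reweight=5, eps=1e-6):
    """Each sweep solves a convex surrogate, then reweights from the iterate."""
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    for _ in range(n_reweight):
        x = ista(A, b, lam, w, x)                 # warm-started surrogate solve
        w = 0.5 / np.sqrt(np.abs(x) + eps)        # majorizer of the sqrt-type penalty
    return x

rng = np.random.default_rng(42)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[5, 30, 61]] = [3.0, -2.0, 4.0]
b = A @ x_true
x_hat = iterative_reweighting(A, b, lam=0.1)
print(sorted(np.argsort(np.abs(x_hat))[-3:].tolist()))  # indices of the 3 largest coefficients
```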
Invariants, Attractors and Bifurcation in Two Dimensional Maps with Polynomial Interaction
NASA Astrophysics Data System (ADS)
Hacinliyan, Avadis Simon; Aybar, Orhan Ozgur; Aybar, Ilknur Kusbeyzi
This work will present an extended discrete-time analysis on maps and their generalizations, including iteration, in order to better understand the resulting enrichment of the bifurcation properties. The standard concepts of stability analysis and bifurcation theory for maps will be used. Both iterated maps and flows are used as models for chaotic behavior. It is well known that when flows are converted to maps by discretization, the equilibrium points remain the same but a richer bifurcation scheme is observed. For example, the logistic map has very simple behavior as a differential equation, but as a map it exhibits fold and period-doubling bifurcations. A way to gain information about the global structure of the state space of a dynamical system is to investigate the invariant manifolds of saddle equilibrium points. Studying the intersections of the stable and unstable manifolds is essential for understanding the structure of a dynamical system. It has been known that the Lotka-Volterra map, and systems that can be reduced to it or its generalizations in special cases involving local and polynomial interactions, admit invariant manifolds. Bifurcation analysis of this map and its higher iterates can be done to understand the global structure of the system and the artifacts of the discretization by comparing with the corresponding results from the differential equation on which they are based.
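The logistic example above is easy to reproduce numerically: as a differential equation x' = r x (1 - x) the model only has simple equilibria, but the iterated map period-doubles as r grows. A minimal sketch (parameter values chosen for illustration):

```python
def logistic(x, r):
    return r * x * (1 - x)

def attractor_size(r, n_transient=1000, n_sample=64):
    """Iterate the logistic map past its transient, count distinct orbit points."""
    x = 0.5
    for _ in range(n_transient):
        x = logistic(x, r)
    orbit = set()
    for _ in range(n_sample):
        x = logistic(x, r)
        orbit.add(round(x, 6))      # collapse floating-point noise
    return len(orbit)

# Fixed point at r=2.9, period-2 at r=3.3, period-4 at r=3.5.
print(attractor_size(2.9), attractor_size(3.3), attractor_size(3.5))  # → 1 2 4
```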
Group iterative methods for the solution of two-dimensional time-fractional diffusion equation
NASA Astrophysics Data System (ADS)
Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.
2016-06-01
A variety of problems in science and engineering may be described by fractional partial differential equations (FPDEs) involving space and/or time fractional derivatives. The difference between time fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations. However, their application to the time fractional diffusion counterpart has yet to be investigated. In this paper, we present a preliminary study on the formulation and analysis of new explicit group iterative methods for solving a two-dimensional time fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formulas. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration number. At the request of all authors of the paper an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1 and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.
Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario
To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom. Scans acquired at several dose levels and kVps were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise-ratios and contrast-to-noise-ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts improving CT number accuracy by 50%, 60%, and 63% (P < 0.05) and reducing noise by 1%, 62%, and 85% (P < 0.001) whereas improving signal-to-noise-ratios by 27%, 47%, and 46% (P < 0.001) and contrast-to-noise-ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in the CT imaging of a total hip arthroplasty phantom.
Iterative Methods to Solve Linear RF Fields in Hot Plasma
NASA Astrophysics Data System (ADS)
Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo
2014-10-01
Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full-wave modeling of RF fields in hot plasma with 3D nonuniformities is largely prohibitive, as the memory demands of a direct solver place a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full-wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.
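The value of a good initial guess can be shown on a toy system. The diagonally dominant operators below are assumptions standing in for the discretized wave equation (and Jacobi stands in for whatever Krylov method a production solver would use): solving a cheaper "cold" model directly and using it to warm-start the iteration on the perturbed "hot" system reduces the iteration count.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=10000):
    """Plain Jacobi iteration; returns the solution and the iteration count."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = x0.copy()
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

n = 100
# "Cold" operator cheap enough to solve directly; "hot" operator is a perturbation.
A_cold = 2.0 * np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)
A_hot = A_cold + 0.01 * np.eye(n)
b = np.ones(n)

x_cold = np.linalg.solve(A_cold, b)              # direct solve of the cheap model
x_zero, iters_zero = jacobi(A_hot, b, np.zeros(n))
x_warm, iters_warm = jacobi(A_hot, b, x_cold)    # cold solution as initial guess
print(iters_warm < iters_zero)  # → True
```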
NASA Technical Reports Server (NTRS)
Mukherjee, Rinku; Gopalarathnam, Ashok; Kim, Sung Wan
2003-01-01
An iterative decambering approach for the post-stall prediction of wings using known section data as inputs is presented. The method can currently be used for incompressible flow and can be extended to compressible subsonic flow using Mach number correction schemes. A detailed discussion of past work on this topic is presented first. Next, an overview of the decambering approach is presented and is illustrated by applying the approach to the prediction of the two-dimensional C(sub l) and C(sub m) curves for an airfoil. The implementation of the approach for iterative decambering of wing sections is then discussed. A novel feature of the current effort is the use of a multidimensional Newton iteration for taking into consideration the coupling between the different sections of the wing. The approach lends itself to implementation in a variety of finite-wing analysis methods such as lifting-line theory, discrete-vortex Weissinger's method, and vortex lattice codes. Results are presented for a rectangular wing for angles of attack from 0 to 25 deg. The results are compared for both increasing and decreasing directions of the angle of attack, and they show that a hysteresis loop can be predicted for post-stall angles of attack.
A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics
NASA Astrophysics Data System (ADS)
Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger
2017-09-01
Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. ITER organization hence recommends the use of MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation, ranging from 10¹⁴ down to 10⁸ n·cm⁻²·s⁻¹. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.
Strongly Coupled Fluid-Body Dynamics in the Immersed Boundary Projection Method
NASA Astrophysics Data System (ADS)
Wang, Chengjie; Eldredge, Jeff D.
2014-11-01
A computational algorithm is developed to simulate dynamically coupled interaction between fluid and rigid bodies. The basic computational framework is built upon a multi-domain immersed boundary method library, whirl, developed in previous work. In this library, the Navier-Stokes equations for incompressible flow are solved on a uniform Cartesian grid by the vorticity-based immersed boundary projection method of Colonius and Taira. A solver for the dynamics of rigid-body systems is also included. The fluid and rigid-body solvers are strongly coupled with an iterative approach based on the block Gauss-Seidel method. Interfacial force, with its intimate connection with the Lagrange multipliers used in the fluid solver, is used as the primary iteration variable. Relaxation, developed from a stability analysis of the iterative scheme, is used to achieve convergence in only 2-4 iterations per time step. Several two- and three-dimensional numerical tests are conducted to validate and demonstrate the method, including flapping of flexible wings, self-excited oscillations of a system of linked plates, and three-dimensional propulsion of a flexible fluked tail. This work has been supported by AFOSR, under Award FA9550-11-1-0098.
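The role of relaxation in such a partitioned coupling can be shown on a toy one-degree-of-freedom model (an assumed stand-in for the paper's solvers, not its implementation): the "fluid" returns the interface force induced by a body acceleration, the "body" returns its acceleration under that force, and the coupled solution is the fixed point of the block Gauss-Seidel sweep.

```python
added_mass, body_mass, f_ext = 2.0, 1.0, 3.0

def fluid_force(a):                  # interface force from the fluid "solver"
    return -added_mass * a

def body_accel(f):                   # rigid-body dynamics under external + interface force
    return (f_ext + f) / body_mass

def coupled_solve(omega, tol=1e-12, max_iter=200):
    f = 0.0
    for k in range(max_iter):
        a = body_accel(f)
        f_new = fluid_force(a)
        if abs(f_new - f) < tol:
            return a, k + 1
        f += omega * (f_new - f)     # under-relaxation on the interface force iterate
    return a, max_iter

# The unrelaxed sweep diverges here because the added mass exceeds the body mass.
# A linear stability analysis of the sweep gives omega = body_mass/(body_mass + added_mass),
# and the iteration then converges in 2 sweeps, in the spirit of the paper's 2-4.
a, iters = coupled_solve(omega=body_mass / (body_mass + added_mass))
print(round(a, 9), iters)  # → 1.0 2
```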
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
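A GIST-style sketch for one penalty with a closed-form proximal operator, the l0 penalty, whose prox is a hard threshold. The problem data, regularization weight, and line-search constant are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def gist_l0(A, b, lam, n_iter=100, sigma=1e-4):
    """Proximal gradient with BB-initialized step and a monotone line search."""
    def smooth(x):                         # least-squares fit term and its gradient
        r = A @ x - b
        return 0.5 * r @ r, A.T @ r
    def objective(x):
        return smooth(x)[0] + lam * np.count_nonzero(x)

    x = np.zeros(A.shape[1])
    _, g = smooth(x)
    t = 1.0                                # inverse step size
    for _ in range(n_iter):
        while True:                        # enlarge t until sufficient decrease holds
            z = x - g / t
            x_new = np.where(z * z > 2.0 * lam / t, z, 0.0)   # prox of (lam/t)*||.||_0
            if objective(x_new) <= objective(x) - sigma * t / 2 * np.sum((x_new - x) ** 2):
                break
            t *= 2.0
        _, g_new = smooth(x_new)
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
        t = max((s @ y) / (s @ s), 1e-8) if s @ s > 0 else t  # Barzilai-Borwein rule
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[4, 17, 42]] = [2.0, -3.0, 2.5]
b = A @ x_true
x_hat = gist_l0(A, b, lam=0.5)
print(0.5 * np.sum((A @ x_hat - b) ** 2) < 0.5 * np.sum(b ** 2))  # → True
```

The sufficient-decrease test keeps the objective monotone even though the penalty is non-convex; the BB initialization is what lets the line search accept a step after few doublings.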
Convergence of an iterative procedure for large-scale static analysis of structural components
NASA Technical Reports Server (NTRS)
Austin, F.; Ojalvo, I. U.
1976-01-01
The paper proves convergence of an iterative procedure for calculating the deflections of built-up component structures which can be represented as consisting of a dominant, relatively stiff primary structure and a less stiff secondary structure, which may be composed of one or more substructures that are not connected to one another but are all connected to the primary structure. The iteration consists in estimating the deformation of the primary structure in the absence of the secondary structure on the assumption that all mechanical loads are applied directly to the primary structure. The j-th iterate primary structure deflections at the interface are imposed on the secondary structure, and the boundary loads required to produce these deflections are computed. The cycle is completed by applying the interface reaction to the primary structure and computing its updated deflections. It is shown that the mathematical condition for convergence of this procedure is that the maximum eigenvalue of the equation relating primary-structure deflection to imposed secondary-structure deflection be less than unity, which is shown to correspond with the physical requirement that the secondary structure be more flexible at the interface boundary.
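The convergence condition can be illustrated on an assumed two-degree-of-freedom model (not the paper's structures): the alternating primary/secondary update acts as a fixed-point iteration u_{k+1} = G u_k + c, which converges exactly when the spectral radius of G is below one, i.e. when the secondary structure is flexible enough at the interface.

```python
import numpy as np

def coupled_iteration(G, c, n_iter=100):
    """Repeated primary/secondary exchange reduced to u <- G u + c."""
    u = np.zeros_like(c)
    for _ in range(n_iter):
        u = G @ u + c
    return u

c = np.array([1.0, 0.5])

G_flexible = np.array([[0.0, 0.3], [0.2, 0.0]])  # flexible secondary: rho(G) ~ 0.24
u = coupled_iteration(G_flexible, c)
print(np.allclose(u, np.linalg.solve(np.eye(2) - G_flexible, c)))  # → True

G_stiff = np.array([[0.0, 1.5], [1.1, 0.0]])     # stiff secondary: rho(G) ~ 1.28
print(np.linalg.norm(coupled_iteration(G_stiff, c, n_iter=50)) > 1e3)  # → True (diverges)
```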
Function Invariant and Parameter Scale-Free Transformation Methods
ERIC Educational Resources Information Center
Bentler, P. M.; Wingard, Joseph A.
1977-01-01
A scale-invariant simple structure function of previously studied function components for principal component analysis and factor analysis is defined. First and second partial derivatives are obtained, and Newton-Raphson iterations are utilized. The resulting solutions are locally optimal and subjectively pleasing. (Author/JKS)
Analysis of Online Composite Mirror Descent Algorithm.
Lei, Yunwen; Zhou, Ding-Xuan
2017-03-01
We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
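In the special case of the Euclidean mirror map, composite mirror descent reduces to proximal online gradient descent, which is easy to sketch. The loss stream, regularizer, and step sizes below are assumptions for illustration, not the paper's general setting:

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the l1 regularizer tau*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def online_composite_md(grads, lam, dim):
    """One composite step per revealed gradient, with polynomially decaying eta_t."""
    x = np.zeros(dim)
    for t, grad in enumerate(grads, start=1):
        eta = 1.0 / np.sqrt(t)
        x = soft_threshold(x - eta * grad(x), lam * eta)   # prox after the mirror step
    return x                                               # the last iterate

# Stream of identical losses f_t(x) = 0.5*||x - target||^2; the regularized
# minimizer is the soft-thresholded target, which the last iterate approaches.
target = np.array([1.0, -0.2, 0.05])
lam = 0.1
grads = [(lambda x: x - target)] * 200
x_last = online_composite_md(grads, lam, 3)
print(np.round(x_last, 3))
```

Returning the last iterate, rather than an average, matches the flavour of the paper's last-iterate convergence results.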
[Training in iterative hypothesis testing as part of psychiatric education. A randomized study].
Lampen-Imkamp, S; Alte, C; Sipos, V; Kordon, A; Hohagen, F; Schweiger, U; Kahl, K G
2012-01-01
The improvement of medical education is at the center of efforts to reform the studies of medicine. Furthermore, an excellent teaching program for students is a quality feature of medical universities. Besides teaching of disease-specific contents, the acquisition of interpersonal and decision-making skills is important. However, the cognitive style of senior physicians leading to a diagnosis cannot easily be taught. Therefore, the following study aimed at examining whether specific training in iterative hypothesis testing (IHT) may improve the correctness of the diagnostic process. Seventy-one medical students in their 9th-11th terms were randomized to medical teaching as usual or to IHT training for 4 weeks. The intervention group received specific training according to the method of IHT. All students were examined by a multiple choice (MC) exam and additionally by simulated patients (SP). The SPs were instructed to represent either a patient with depression and comorbid anxiety and substance use disorder (SP1) or to represent a patient with depression, obsessive-compulsive disorder and acute suicidal tendencies (SP2). All students identified the diagnosis of major depression in the SPs, but IHT-trained students recognized more diagnostic criteria. Furthermore, IHT-trained students recognized acute suicide tendencies in SP2 more often and identified more comorbid psychiatric disorders. The results of the MC exam were comparable in both groups. An analysis of the satisfaction with the different training programs revealed that the IHT training received a better appraisal. Our results point to the role of IHT in teaching diagnostic skills. However, the results of the MC exam were not influenced by IHT training. Furthermore, our results show that students are in need of training in practical clinical skills.
Defining competency-based evaluation objectives in family medicine
Lawrence, Kathrine; Allen, Tim; Brailovsky, Carlos; Crichton, Tom; Bethune, Cheri; Donoff, Michel; Laughlin, Tom; Wetmore, Stephen; Carpentier, Marie-Pierre; Visser, Shaun
2011-01-01
Abstract Objective To develop key features for priority topics previously identified by the College of Family Physicians of Canada that, together with skill dimensions and phases of the clinical encounter, broadly describe competence in family medicine. Design Modified nominal group methodology, which was used to develop key features for each priority topic through an iterative process. Setting The College of Family Physicians of Canada. Participants An expert group of 7 family physicians and 1 educational consultant, all of whom had experience in assessing competence in family medicine. Group members represented the Canadian family medicine context with respect to region, sex, language, community type, and experience. Methods The group used a modified Delphi process to derive a detailed operational definition of competence, using multiple iterations until consensus was achieved for the items under discussion. The group met 3 to 4 times a year from 2000 to 2007. Main findings The group analyzed 99 topics and generated 773 key features. There were 2 to 20 (average 7.8) key features per topic; 63% of the key features focused on the diagnostic phase of the clinical encounter. Conclusion This project expands previous descriptions of the process of generating key features for assessment, and removes this process from the context of written examinations. A key-features analysis of topics focuses on higher-order cognitive processes of clinical competence. The project did not define all the skill dimensions of competence to the same degree, but it clearly identified those requiring further definition. This work generates part of a discipline-specific, competency-based definition of family medicine for assessment purposes. It limits the domain for assessment purposes, which is an advantage for the teaching and assessment of learners. A validation study on the content of this work would ensure that it truly reflects competence in family medicine. PMID:21998245
Trafton, Jodie; Martins, Susana; Michel, Martha; Lewis, Eleanor; Wang, Dan; Combs, Ann; Scates, Naquell; Tu, Samson; Goldstein, Mary K
2010-04-01
To develop and evaluate a clinical decision support system (CDSS) named Assessment and Treatment in Healthcare: Evidenced-Based Automation (ATHENA)-Opioid Therapy, which encourages safe and effective use of opioid therapy for chronic, noncancer pain. CDSS development and iterative evaluation using the analysis, design, development, implementation, and evaluation process including simulation-based and in-clinic assessments of usability for providers followed by targeted system revisions. Volunteers provided detailed feedback to guide improvements in the graphical user interface, and content and design changes to increase clinical usefulness, understandability, clinical workflow fit, and ease of completing guideline recommended practices. Revisions based on feedback increased CDSS usability ratings over time. Practice concerns outside the scope of the CDSS were also identified. Usability testing optimized the CDSS to better address barriers such as lack of provider education, confusion in dosing calculations and titration schedules, access to relevant patient information, provider discontinuity, documentation, and access to validated assessment tools. It also highlighted barriers to good clinical practice that are difficult to address with CDSS technology in its current conceptualization. For example, clinicians indicated that constraints on time and competing priorities in primary care, discomfort in patient-provider communications, and lack of evidence to guide opioid prescribing decisions impeded their ability to provide effective, guideline-adherent pain management. Iterative testing was essential for designing a highly usable and acceptable CDSS; however, identified barriers may limit the impact of the ATHENA-Opioid Therapy system and other CDSS on clinical practices and outcomes unless CDSS are paired with parallel initiatives to address these issues.
Wilbur, Kerry; Mousa Bacha, Rasha; Abdelaziz, Somaia
2017-01-01
Objectives To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. Methods This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Results Document analysis found all programs’ ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar. Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Conclusions Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences. PMID:28315858
NASA Astrophysics Data System (ADS)
Habtezion, S.
2015-12-01
Fostering Earth Observation Regional Networks - Integrative and iterative approaches to capacity building. As part of the Global Observation of Forest and Land Cover Dynamics (GOFC-GOLD) project partnership effort to promote use of earth observations in advancing scientific knowledge, START works to bridge capacity needs related to earth observations (EOs) and their applications in the developing world. GOFC-GOLD regional networks, fostered through the support of regional and thematic workshops, have been successful in (1) enabling scientists from developing countries and from the US to collaborate on key GOFC-GOLD and Land Cover and Land Use Change (LCLUC) issues, including NASA Global Data Set validation, and (2) training young developing-country scientists in key skills in EOs data management and analysis. Members of the regional networks are also engaged and re-engaged in other EOs programs (e.g. the visiting scientists program and the data initiative fellowship programs at the USGS EROS Center and Boston University), which has helped strengthen these networks. The presentation draws from these experiences in advocating for integrative and iterative approaches to capacity building through the lens of the GOFC-GOLD partnership effort. Specifically, this presentation describes the role of the GOFC-GOLD partnership in nurturing organic networks of scientists and EOs practitioners in Asia, Africa, Eastern Europe and Latin America.
In situ measurements of fuel retention by laser induced desorption spectroscopy in TEXTOR
NASA Astrophysics Data System (ADS)
Zlobinski, M.; Philipps, V.; Schweer, B.; Huber, A.; Stoschus, H.; Brezinsek, S.; Samm, U.; TEXTOR Team
2011-12-01
In future fusion devices such as ITER tritium retention due to tritium co-deposition in mixed material layers can be a serious safety problem. Laser induced desorption spectroscopy (LIDS) can measure the hydrogen content of hydrogenic carbon layers locally on plasma-facing components, while hydrogen is used as a tritium substitute. For several years, this method has been applied in the TEXTOR tokamak in situ during plasma operation to monitor the hydrogen content in space and time. This work shows the LIDS signal reproducibility and studies the effects of different plasma conditions, desorption distances from the plasma and different laser energies using a dedicated sample with constant hydrogen amount. Also the LIDS signal evaluation procedure is described in detail and the detection limits for different conditions in the TEXTOR tokamak are estimated.
The Effect of Iteration on the Design Performance of Primary School Children
ERIC Educational Resources Information Center
Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.
2015-01-01
Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is, however, scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…
Land, P E; Haigh, J D
1997-12-20
In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Angström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type, and to spectral effects actually caused by the water content being attributed to the aerosol. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to the aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot) rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 sea-viewing wide field-of-view sensor data, although the retrieval of relative humidity was not successful.
Extending helium partial pressure measurement technology to JET DTE2 and ITER.
Klepper, C C; Biewer, T M; Kruezi, U; Vartanian, S; Douai, D; Hillis, D L; Marcus, C
2016-11-01
The detection limit for helium (He) partial pressure monitoring via the Penning discharge optical emission diagnostic, mainly used for tokamak divertor effluent gas analysis, is shown here to be possible for He concentrations down to 0.1% in predominantly deuterium effluents. This result from a dedicated laboratory study means that the technique can now be extended to intrinsic (non-injected) He produced as fusion reaction ash in deuterium-tritium experiments. The paper also examines threshold ionization mass spectroscopy as a potential backup to the optical technique, but finds that further development is needed to attain plasma pulse-relevant response times. Both these studies are presented in the context of continuing development of plasma pulse-resolving residual gas analysis for the upcoming JET deuterium-tritium campaign (DTE2) and for ITER.
NASA Astrophysics Data System (ADS)
Yarmohammadi, M.; Javadi, S.; Babolian, E.
2018-04-01
In this study, a new spectral iterative method (SIM) based on fractional interpolation is presented for solving nonlinear fractional differential equations (FDEs) involving the Caputo derivative. This method is equipped with a pre-algorithm to find the singularity index of the solution of the problem. This pre-algorithm gives us a real parameter as the index of the fractional interpolation basis, for which the SIM achieves the highest order of convergence. In comparison with some recent results about error estimates for fractional approximations, a more accurate convergence rate has been attained. We have also proposed the order of convergence for the fractional interpolation error under the L2-norm. Finally, a general error analysis of the SIM has been considered. The numerical results clearly demonstrate the capability of the proposed method.
Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
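The preconditioning idea described above (reuse the factorization of the unperturbed stiffness to iterate on the perturbed system, never forming derivatives of the perturbed operator) can be sketched on a generic linear system; the matrix sizes and perturbation magnitude below are illustrative, not taken from the paper.

```python
import numpy as np

# Stationary iteration for (K0 + dK) u = f that reuses the factorization
# of the unperturbed stiffness K0:  u <- K0^{-1} (f - dK u).
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
K0 = A @ A.T + n * np.eye(n)                 # SPD unperturbed stiffness
dK = 0.05 * (A + A.T)                        # small symmetric perturbation
f = rng.standard_normal(n)

L = np.linalg.cholesky(K0)                   # factorize K0 once

def K0_solve(b):
    """Forward/back substitution with the stored factor of K0."""
    return np.linalg.solve(L.T, np.linalg.solve(L, b))

u = K0_solve(f)                              # unperturbed solution as start
for _ in range(100):
    u = K0_solve(f - dK @ u)                 # preconditioned fixed point

exact = np.linalg.solve(K0 + dK, f)
print(np.linalg.norm(u - exact))             # tiny: the iteration converged
```

The iteration contracts at a rate governed by the norm of K0^{-1} dK, so it converges quickly whenever the perturbation is small relative to the unperturbed stiffness, which is the regime the probabilistic perturbation method targets.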
Fine‐resolution conservation planning with limited climate‐change information
Shah, Payal; Mallory, Mindy L.; Ando, Amy W.; Guntenspergen, Glenn R.
2017-01-01
Climate‐change induced uncertainties in future spatial patterns of conservation‐related outcomes make it difficult to implement standard conservation‐planning paradigms. A recent study translates Markowitz's risk‐diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate‐change scenarios for carrying out fine‐resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk‐return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate‐change information and full climate‐change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate‐change forecasts such that the best possible risk‐return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate‐change information could be reduced by 17% relative to other iterative approaches.
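The Markowitz-style diversification the study translates to conservation has a simple closed form for the minimum-variance allocation; the covariance numbers below are invented purely for illustration of the mechanics.

```python
import numpy as np

# Minimum-variance weights w = S^{-1} 1 / (1^T S^{-1} 1): allocate a budget
# across subregions so the variance w^T S w of the uncertain conservation
# return is minimized, subject to the weights summing to one.
S = np.array([[0.04, 0.01, 0.00],     # hypothetical covariance of outcomes
              [0.01, 0.09, 0.02],     # across three subregions under
              [0.00, 0.02, 0.16]])    # different climate scenarios
ones = np.ones(len(S))
w = np.linalg.solve(S, ones)
w /= ones @ w                          # normalize so weights sum to 1
print(w, w @ S @ w)                    # portfolio variance beats any single region
```

Putting everything in one subregion is itself a feasible portfolio, so the diversified variance is never worse than the best single region; the study's contribution is doing this at fine spatial resolution when only a few scenario forecasts are available to estimate S.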
NASA Astrophysics Data System (ADS)
Chew, J. V. L.; Sulaiman, J.
2017-09-01
Partial differential equations that describe nonlinear heat and mass transfer phenomena are difficult to solve. Where the exact solution is hard to obtain, a numerical procedure such as the finite difference method is needed to solve a particular partial differential equation. In terms of numerical procedure, a method can be considered efficient if it gives an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large, sparse nonlinear system. By using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations needed to reach converged solutions, the computation time, and the maximum absolute errors produced.
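The structure these solvers share (an outer Newton linearization whose linear systems are handled by inner relaxation sweeps rather than a direct factorization) can be illustrated on a small made-up nonlinear system. This is a plain Newton-Gauss-Seidel sketch of the pattern, not the 4NEGSOR scheme itself.

```python
import numpy as np

def newton_gauss_seidel(F, J, u0, newton_tol=1e-10, gs_tol=1e-12,
                        max_newton=50, max_gs=5000):
    """Outer Newton linearization; each system J(u) d = -F(u) is solved
    with inner Gauss-Seidel sweeps instead of a direct factorization."""
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r, np.inf) < newton_tol:
            break
        A, b = J(u), -r
        d = np.zeros_like(u)
        for _ in range(max_gs):
            d_prev = d.copy()
            for i in range(len(u)):               # one Gauss-Seidel sweep
                s = A[i] @ d - A[i, i] * d[i]     # off-diagonal contribution
                d[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(d - d_prev, np.inf) < gs_tol:
                break
        u += d
    return u

# Hypothetical nonlinear system with weak nearest-neighbour coupling,
# loosely mimicking an implicit discretization stencil.
n = 8
def F(u):
    c = np.zeros_like(u)
    c[1:] += u[:-1]
    c[:-1] += u[1:]
    return u**3 + 2.0 * u - 1.0 - 0.1 * c

def J(u):
    A = np.diag(3.0 * u**2 + 2.0)
    A += np.diag(-0.1 * np.ones(n - 1), 1) + np.diag(-0.1 * np.ones(n - 1), -1)
    return A

u = newton_gauss_seidel(F, J, np.zeros(n))
print(np.linalg.norm(F(u), np.inf))   # residual at the Newton tolerance
```

Variants like NSOR and 4NEGSOR replace the plain Gauss-Seidel sweep with over-relaxed and block (four-point) sweeps, which is where the iteration-count savings reported in the abstract come from.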
Time-dependent modeling of dust injection in semi-detached ITER divertor plasma
NASA Astrophysics Data System (ADS)
Smirnov, Roman; Krasheninnikov, Sergei
2017-10-01
At present, it is generally understood that dust related issues will play an important role in the operation of next step fusion devices, i.e. ITER, and in the development of future fusion reactors. Recent progress in research on dust in magnetic fusion devices has outlined several topics of particular concern: a) degradation of fusion plasma performance; b) impairment of in-vessel diagnostic instruments; and c) safety issues related to dust reactivity and tritium retention. In addition, observed dust events in fusion edge plasmas are highly irregular and require consideration of the temporal evolution of both the dust and the fusion plasma. In order to address the dust-related fusion performance issues, we have coupled the dust transport code DUSTT and the edge plasma transport code UEDGE in a time-dependent manner, allowing modeling of transient dust-induced phenomena in fusion edge plasmas. Using the coupled codes we simulate burst-like injection of tungsten dust into the ITER divertor plasma in the semi-detached regime, which is considered the preferred ITER divertor operational mode based on plasma and heat load control restrictions. An analysis of the transport of the dust and the dust-produced impurities, and of the dynamics of the ITER divertor and edge plasma in response to the dust injection, will be presented. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, under Award Number DE-FG02-06ER54852.
NASA Astrophysics Data System (ADS)
De Temmerman, G.; Hirai, T.; Pitts, R. A.
2018-04-01
The tungsten (W) material in the high heat flux regions of the ITER divertor will be exposed to high fluxes of low-energy particles (e.g. H, D, T, He, Ne and/or N). Combined with long-pulse operations, this implies fluences well in excess of the highest values reached in today’s tokamak experiments. Shaping of the individual monoblock top surface and tilting of the vertical targets for leading-edge protection lead to an increased surface heat flux, and thus increased surface temperature and a reduced margin to remain below the temperature at which recrystallization and grain growth begin. Significant morphology changes are known to occur on W after exposure to high fluences of low-energy particles, be it H or He. An analysis of the formation conditions of these morphology changes is made in relation to the conditions expected at the vertical targets during different phases of operations. It is concluded that both H and He-related effects can occur in ITER. In particular, the case of He-induced nanostructure (also known as ‘fuzz’) is reviewed. Fuzz formation appears possible over a limited region of the outer vertical target, the inner target being generally a net Be deposition area. A simple analysis of the fuzz growth rate including the effect of edge-localized modes (ELMs) and the reduced thermal conductivity of fuzz shows that the fuzz thickness is likely to be limited by the occurrence of annealing during ELM-induced thermal excursions. Not only the morphology, but the material mechanical and thermal properties can be modified by plasma exposure. A review of the existing literature is made, but the existing data are insufficient to conclude quantitatively on the importance and extent of these effects for ITER. As a consequence of the high surface temperatures in ITER, W recrystallization is an important effect to consider, since it leads to a decrease in material strength. An approach is proposed here to develop an operational budget for the W material, i.e. the time the divertor material can be operated at a given temperature before a significant fraction of the material is recrystallized. In general, while it is clear that significant surface damage can occur during ITER operations, the tolerable level of damage in terms of plasma operations currently remains unknown.
Development of a Multi-Behavioral mHealth App for Women Smokers.
Armin, Julie; Johnson, Thienne; Hingle, Melanie; Giacobbi, Peter; Gordon, Judith S
2017-02-01
This article describes the development of the See Me Smoke-Free™ (SMSF) mobile health application, which uses guided imagery to support women in smoking cessation, eating a healthy diet, and increasing physical activity. Focus group discussions, with member checks, were conducted to refine the intervention content and app user interface. Data related to the context of app deployment were collected via user testing sessions and internal quality control testing, which identified and addressed functionality issues, content problems, and bugs. Interactive app features include playback of guided imagery audio files, notification pop-ups, award-sharing on social media, a tracking calendar, content resources, and direct call to the local tobacco quitline. Focus groups helped design the user interface and identified several themes for incorporation into app content, including positivity, the rewards of smoking cessation, and the integrated benefits of maintaining a healthy lifestyle. User testing improved app functionality and usability on many Android phone models. Changes to the app content and function were made iteratively by the development team as a result of focus group and user testing. Despite extensive internal and user testing, unanticipated data collection and reporting issues emerged during deployment due not only to the variety of Android software and hardware but also to individual phone settings and use.
2009-09-01
SAS Statistical Analysis Software; SE Systems Engineering; SEP Systems Engineering Process; SHP Shaft Horsepower; SIGINT Signals Intelligence… management occurs (OSD 2002). The Systems Engineering Process (SEP), displayed in Figure 2, is a comprehensive, iterative and recursive problem…
Iteratively Developing an mHealth HIV Prevention Program for Sexual Minority Adolescent Men
Prescott, Tonya L.; Philips, Gregory L.; Bull, Sheana S.; Parsons, Jeffrey T.; Mustanski, Brian
2015-01-01
Five activities were implemented between November 2012 and June 2014 to develop an mHealth HIV prevention program for adolescent gay, bisexual, and queer men (AGBM): (1) focus groups to gather acceptability of the program components; (2) ongoing development of content; (3) Content Advisory Teams to confirm the tone, flow, and understandability of program content; (4) an internal team test to alpha test software functionality; and (5) a beta test to test the protocol and intervention messages. Findings suggest that AGBM preferred positive and friendly content that, at the same time, did not try to sound like a peer. They deemed the number of daily text messages (i.e., 8–15 per day) to be acceptable. The Text Buddy component was well received, but youth needed concrete direction about appropriate discussion topics. AGBM also found the self-safety assessment acceptable. Its feasible implementation in the beta test suggests that AGBM can actively self-determine their potential danger when participating in sexual health programs. Partnering with the target population in intervention development is critical to ensure that a salient final product and feasible protocol are created. PMID:26238038
Physics & Preservice Teachers Partnership Project (P4): An interdisciplinary peer learning tool
NASA Astrophysics Data System (ADS)
Simmonds, Paul J.; Wenner, Julianne A.
Physics graduate students (PGs) and teacher candidates (TCs) often graduate with specific weaknesses. PGs frequently lack training in teaching and effective communication. TCs are typically underprepared for teaching science, and physics in particular. In response to these challenges, we created P4. P4 is an innovative model for peer learning, creating interdisciplinary partnerships that help college physics instructors train their students in the "soft skills" prized in both academia and industry, while helping teacher educators infuse more content knowledge into science methods courses. In P4, PGs plan a lesson and deliver physics content to TCs. TCs then use this content to design and execute a 15-minute elementary science lesson. Framed by the concept of peer learning, we expected P4 would help PGs develop their teaching and communication skills, and TCs learn more physics. We studied the affordances and constraints of P4 to inform future iterations. Overall, P4 was successful, with both PGs and TCs reporting benefits. Affordances for PGs included the chance to plan and teach a class; TCs benefitted from working with experts to increase content knowledge. We will share the full findings and implications of our study, and outline next steps for P4.
User-Centered Iterative Design of a Collaborative Virtual Environment
2001-03-01
cognitive task analysis methods to study land navigators. This study was intended to validate the use of user-centered design methodologies for the design of...have explored the cognitive aspects of collaborative human wayfinding and design for collaborative virtual environments. Further investigation of design paradigms should include cognitive task analysis and behavioral task analysis.
State Share of Instruction Funding to Ohio Public Community Colleges: A Policy Analysis
ERIC Educational Resources Information Center
Johnson, Betsy
2012-01-01
This study investigated various state policies to determine their impact on the state share of instruction (SSI) funding to community colleges in the state of Ohio. To complete the policy analysis, the researcher utilized three policy analysis tools, defined by Gill and Saunders (2010) as iterative processes, intuition and judgment, and advice and…
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
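The driver pattern described above (an iterative method that treats the simulation as a black box, invokes instances of it, and collects the results) can be sketched generically. The stand-in model and the derivative-free coordinate search below are purely illustrative; they are not DAKOTA's actual interface or algorithms.

```python
from typing import Callable, List, Sequence

def simulation(params: Sequence[float]) -> float:
    """Stand-in for an external computational model (normally a separate
    executable launched with a parameter file)."""
    x, y = params
    return (x - 1.0)**2 + 10.0 * (y + 2.0)**2

def coordinate_search(model: Callable[[Sequence[float]], float],
                      x0: Sequence[float],
                      step: float = 1.0, shrink: float = 0.5,
                      tol: float = 1e-8):
    """Derivative-free iterative analysis driving the black-box model."""
    x: List[float] = list(x0)
    best = model(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                val = model(trial)        # invoke a model instance
                if val < best:
                    x, best, improved = trial, val, True
        if not improved:
            step *= shrink                # no progress: refine the step
    return x, best

x, fx = coordinate_search(simulation, [0.0, 0.0])
print(x, fx)                              # converges to [1.0, -2.0], 0.0
```

The key architectural point mirrored here is the separation of concerns: the iterative method only ever sees parameter vectors in and objective values out, so the same driver can wrap any simulation code.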
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-25
A novel strategy which combines an iterative cubic spline fitting baseline correction method with discriminant partial least squares qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining iterative cubic spline fitting (ICSF) baseline correction in spectral preprocessing with principal component analysis (PCA) and discriminant partial least squares (DPLS) classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory analysis methodology of DPLS classification can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
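The iterative baseline-correction idea (fit a smooth curve, clip the spectrum down to the fit so that peaks are progressively suppressed, refit) can be sketched as follows. A low-order polynomial stands in for the cubic spline to keep the sketch dependency-free, and the synthetic "spectrum" is made up.

```python
import numpy as np

def iterative_baseline(x, y, degree=5, n_iter=30):
    """Iteratively fit a smooth curve to the spectrum; points above the
    fit (the peaks) are clipped to it, so the curve relaxes onto the
    baseline while the peaks are excluded from the fit."""
    work = y.copy()
    for _ in range(n_iter):
        fit = np.polyval(np.polyfit(x, work, degree), x)
        work = np.minimum(work, fit)        # suppress peaks, keep baseline
    return np.polyval(np.polyfit(x, work, degree), x)

# Synthetic spectrum: slowly varying baseline plus two sharp peaks
x = np.linspace(0.0, 1.0, 500)
baseline = 2.0 + x + 0.5 * x**2
peaks = 5.0 * np.exp(-((x - 0.3) / 0.01)**2) + 3.0 * np.exp(-((x - 0.7) / 0.01)**2)
y = baseline + peaks
est = iterative_baseline(x, y)
print(np.max(np.abs(est - baseline)))       # small relative to peak heights
```

Subtracting the estimated baseline leaves the peak intensities, which is the relative-intensity information the DPLS classification step then operates on.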
New Parallel Algorithms for Structural Analysis and Design of Aerospace Structures
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1998-01-01
Subspace and Lanczos iterations have been developed, are well documented, and are widely accepted as efficient methods for obtaining the p lowest eigenpair solutions of large-scale, practical engineering problems. The focus of this paper is to incorporate recent developments in vectorized sparse technologies in conjunction with Subspace and Lanczos iterative algorithms for computational enhancements. Numerical performance, in terms of accuracy and efficiency of the proposed sparse strategies for the Subspace and Lanczos algorithms, is demonstrated by solving for the lowest frequencies and mode shapes of structural problems on IBM-R6000/590 and SunSparc 20 workstations.
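The basic subspace iteration referred to above can be sketched for a standard symmetric eigenproblem (the mass matrix is taken as the identity for brevity); the tridiagonal test matrix is illustrative only.

```python
import numpy as np

def subspace_iteration(K, p, n_iter=100):
    """Basic subspace (inverse) iteration for the p lowest eigenpairs of a
    symmetric positive definite K. A production code would factorize K
    once with a sparse solver and reuse the factor every iteration."""
    n = K.shape[0]
    rng = np.random.default_rng(1)
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(n_iter):
        X = np.linalg.solve(K, X)     # inverse iteration amplifies low modes
        X, _ = np.linalg.qr(X)        # re-orthonormalize the block
    H = X.T @ K @ X                   # Rayleigh-Ritz projection
    lam, Q = np.linalg.eigh(H)
    return lam, X @ Q                 # Ritz values and vectors, ascending

# Tridiagonal "stiffness" matrix as a stand-in for a structural model
n = 50
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, V = subspace_iteration(K, p=3)
exact = np.sort(np.linalg.eigvalsh(K))[:3]
print(np.max(np.abs(lam - exact)))
```

The repeated triangular solves against a fixed factorized K are exactly where the vectorized sparse technologies the paper discusses pay off.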
Integrable mappings and the notion of anticonfinement
NASA Astrophysics Data System (ADS)
Mase, T.; Willox, R.; Ramani, A.; Grammaticos, B.
2018-06-01
We examine the notion of anticonfinement and the role it has to play in the singularity analysis of discrete systems. A singularity is said to be anticonfined if singular values continue to arise indefinitely for the forward and backward iterations of a mapping, with only a finite number of iterates taking regular values in between. We show through several concrete examples that the behaviour of some anticonfined singularities is strongly related to the integrability properties of the discrete mappings in which they arise, and we explain how to use this information to decide on the integrability or non-integrability of the mapping.
Augmenting the one-shot framework by additional constraints
Bosse, Torsten
2016-05-12
The (multistep) one-shot method for design optimization problems has been successfully implemented for various applications. To this end, a slowly convergent primal fixed-point iteration of the state equation is augmented by an adjoint iteration and a corresponding preconditioned design update. In this paper we present a modification of the method that allows for additional equality constraints besides the usual state equation. Finally, a retardation analysis and the local convergence of the method in terms of necessary and sufficient conditions are given, which depend on key characteristics of the underlying problem and the quality of the utilized preconditioner.
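The coupled structure of the one-shot method (one primal fixed-point step, one adjoint step, and one preconditioned design update per iteration, rather than converging the state solve before each design step) can be illustrated on a scalar toy problem: minimize (u - 1)^2 + p^2 subject to the state fixed point u = 0.5u + p. The numbers are illustrative, not from the paper.

```python
# One-shot iteration on a toy problem: the state equation is never solved
# to convergence before the design moves; all three pieces advance together.
u, lam, p = 0.0, 0.0, 0.0
alpha = 0.05                              # design step / scalar preconditioner
for _ in range(2000):
    u = 0.5 * u + p                       # primal fixed-point step u = G(u, p)
    lam = 2.0 * (u - 1.0) + 0.5 * lam     # adjoint step of the same contraction
    p = p - alpha * (2.0 * p + lam)       # preconditioned design update
print(u, p)                               # -> 0.8, 0.4 (the true optimum)
```

At the coupled fixed point the adjoint satisfies lam = 4(u - 1), so the bracket 2p + lam equals the exact reduced gradient 10p - 4, which vanishes at p = 0.4 with u = 0.8; the retardation analysis mentioned in the abstract quantifies how the contraction rate of the primal map and the choice of preconditioner (here just alpha) limit the convergence of this coupled iteration.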
ICRH system performance during ITER-Like Wall operations at JET and the outlook for DT campaign
NASA Astrophysics Data System (ADS)
Monakhov, Igor; Blackman, Trevor; Dumortier, Pierre; Durodié, Frederic; Jacquet, Philippe; Lerche, Ernesto; Noble, Craig
2017-10-01
Performance of the JET ICRH system since installation of the metal ITER-Like Wall (ILW) has been assessed statistically. The data demonstrate a steady increase of the RF power coupled to plasmas over recent years, with the maximum pulse-average and peak values exceeding 6MW and 8MW respectively in 2016. Analysis and extrapolation of the power capabilities of conventional JET ICRH antennas are provided, and key performance-limiting factors are discussed. The RF plant operational frequency options are presented, highlighting the issues of efficient ICRH application within a foreseeable range of DT plasma scenarios.
Adapting an in-person patient-caregiver communication intervention to a tailored web-based format.
Zulman, Donna M; Schafenacker, Ann; Barr, Kathryn L C; Moore, Ian T; Fisher, Jake; McCurdy, Kathryn; Derry, Holly A; Saunders, Edward W; An, Lawrence C; Northouse, Laurel
2012-03-01
Interventions that target cancer patients and their caregivers have been shown to improve patient-caregiver communication, support, and emotional well-being. To adapt an in-person communication intervention for cancer patients and caregivers to a web-based format, and to examine the usability and acceptability of the web-based program among representative users. A tailored, interactive web-based communication program for cancer patients and their family caregivers was developed based on an existing in-person, nurse-delivered intervention. The development process involved: (1) building a multidisciplinary team of content and web design experts, (2) combining key components of the in-person intervention with the unique tailoring and interactive features of a web-based platform, and (3) conducting focus groups and usability testing to obtain feedback from representative program users at multiple time points. Four focus groups with 2-3 patient-caregiver pairs per group (n = 22 total participants) and two iterations of usability testing with four patient-caregiver pairs per session (n = 16 total participants) were conducted. Response to the program's structure, design, and content was favorable, even among users who were older or had limited computer and Internet experience. The program received high ratings for ease of use and overall usability (mean System Usability Score of 89.5 out of 100). Many elements of a nurse-delivered patient-caregiver intervention can be successfully adapted to a web-based format. A multidisciplinary design team and an iterative evaluation process with representative users were instrumental in the development of a usable and well-received web-based program. Copyright © 2011 John Wiley & Sons, Ltd.
Hochstenbach, Laura M J; Courtens, Annemie M; Zwakhalen, Sandra M G; Vermeulen, Joan; van Kleef, Maarten; de Witte, Luc P
2017-08-01
Co-creative methods, having an iterative character and including different perspectives, allow for the development of complex nursing interventions. Information about the development process is essential in providing justification for the ultimate intervention and crucial in interpreting the outcomes of subsequent evaluations. This paper describes a co-creative method directed towards the development of an eHealth intervention delivered by registered nurses to support self-management in outpatients with cancer pain. Intervention development was divided into three consecutive phases (exploration of context, specification of content, organisation of care). In each phase, researchers and technicians addressed five iterative steps: research, ideas, prototyping, evaluation, and documentation. Health professionals and patients were consulted during the research and evaluation steps. Collaboration of researchers, health professionals, patients and technicians was positive and valuable in optimising outcomes. The intervention includes a mobile application for patients and a web application for nurses. Patients are asked to monitor pain, adverse effects and medication intake, while being provided with graphical feedback, education and contact possibilities. Nurses monitor data, advise patients, and collaborate with the treating physician. Integration of patient self-management and professional care by means of eHealth keys into well-known barriers and seems promising for improving cancer pain follow-up. Nurses are able to make substantial contributions because of their expertise, their focus on daily living, and their bridging function between patients and health professionals in different care settings. Insights from the intervention development, as well as the intervention content, suggest applications in other patient groups and care settings. Copyright © 2017 Elsevier Inc. All rights reserved.
Zulman, Donna M.; Schafenacker, Ann; Barr, Kathryn L.C.; Moore, Ian T.; Fisher, Jake; McCurdy, Kathryn; Derry, Holly A.; Saunders, Edward W.; An, Lawrence C.; Northouse, Laurel
2011-01-01
Background Interventions that target cancer patients and their caregivers have been shown to improve communication, support, and emotional well-being. Objective To adapt an in-person communication intervention for cancer patients and caregivers to a web-based format, and to examine the usability and acceptability of the web-based program among representative users. Methods A tailored, interactive web-based communication program for cancer patients and their family caregivers was developed based on an existing in-person, nurse-delivered intervention. The development process involved: 1) building a multidisciplinary team of content and web design experts, 2) combining key components of the in-person intervention with the unique tailoring and interactive features of a web-based platform, and 3) conducting focus groups and usability testing to obtain feedback from representative program users at multiple time points. Results Four focus groups with 2 to 3 patient-caregiver pairs per group (n = 22 total participants) and two iterations of usability testing with 4 patient-caregiver pairs per session (n = 16 total participants) were conducted. Response to the program's structure, design, and content was favorable, even among users who were older or had limited computer and internet experience. The program received high ratings for ease of use and overall usability (mean System Usability Score of 89.5 out of 100). Conclusions Many elements of a nurse-delivered patient-caregiver intervention can be successfully adapted to a web-based format. A multidisciplinary design team and an iterative evaluation process with representative users were instrumental in the development of a usable and well-received web-based program. PMID:21830255
NASA Astrophysics Data System (ADS)
Schenone, D. J.; Igama, S.; Marash-Whitman, D.; Sloan, C.; Okansinski, A.; Moffet, A.; Grace, J. M.; Gentry, D.
2015-12-01
Experimental evolution of microorganisms in controlled microenvironments serves as a powerful tool for understanding the relationship between micro-scale microbial interactions and local- to global-scale environmental factors. In response to iterative and targeted environmental pressures, mutagenesis drives the emergence of novel phenotypes. Current methods to induce expression of these phenotypes require repetitive, time-intensive procedures and do not allow for continuous monitoring of conditions such as optical density, pH and temperature. To address this shortcoming, an Automated Dynamic Directed Evolution Chamber is being developed. It will initially produce Escherichia coli cells with an elevated UV-C resistance phenotype and will ultimately be adapted for different organisms and for studying other environmental effects. UV-C resistance and exposure form a useful phenotype-environment pair for examining this relationship. In order to build a baseline for the device's operational parameters, a UV-C assay was performed on six E. coli replicates with three exposure fluxes across seven iterations. The fluxes were a 0-second exposure (control), 6 seconds at 3.3 J/m2/s and 40 seconds at 0.5 J/m2/s. After each iteration the cells were regrown and tested for UV-C resistance. We sought to quantify the increase and variability of UV-C resistance among the different fluxes, and to observe changes in each replicate at each iteration in terms of variance. The 0 s control showed no significant increase in resistance, while the 6 s and 40 s fluxes showed increased resistance as the number of iterations increased. A one-million-fold increase in survivability was observed after seven iterations.
Through statistical analysis using Spearman's rank correlation, the 40 s exposure showed signs of more consistently increased resistance, but seven iterations were insufficient to demonstrate statistical significance; to test this further, our experiments will include more iterations. Furthermore, we plan to sequence all the replicates. As adaptation dynamics under intense UV exposure lead to a high rate of change, it would be useful to observe differences in tolerance-related and non-tolerance-related genes between the original and UV-resistant strains.
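A sketch of the rank-correlation test described above, using hypothetical survival fractions loosely mirroring the reported million-fold increase (the actual assay data are not reproduced in the abstract); `scipy.stats.spearmanr` returns the rank correlation and its p-value:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical survival fractions for one replicate across seven
# iterations of the 40 s / 0.5 J/m^2/s exposure regime.
iterations = np.arange(1, 8)
survival = np.array([1e-6, 5e-6, 4e-5, 3e-4, 2e-3, 1e-2, 1.0])

# Spearman's rho tests for a monotone trend of resistance with iteration.
rho, p_value = spearmanr(iterations, survival)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

With only n = 7 points, even a strong rho can carry a large p-value, which is why the authors note that more iterations are needed for significance.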
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jan; Ferrada, Juan J; Curd, Warren
During inductive plasma operation of ITER, fusion power will reach 500 MW with an energy multiplication factor of 10. The heat will be transferred by the Tokamak Cooling Water System (TCWS) to the environment using the secondary cooling system. Plasma operations are inherently safe even under the most severe postulated accident condition: a large in-vessel break that results in a loss-of-coolant accident. A functioning cooling water system is not required to ensure safe shutdown. Even though ITER is inherently safe, TCWS equipment (e.g., heat exchangers, piping, pressurizers) is classified as safety important. This is because the water is predicted to contain low levels of radionuclides (e.g., activated corrosion products, tritium) with activity levels high enough to require the design of components to be in accordance with French regulations for nuclear pressure equipment, i.e., the French Order dated 12 December 2005 (ESPN). ESPN has extended the practical application of the methodology established by the Pressure Equipment Directive (97/23/EC) to nuclear pressure equipment, under French Decree 99-1046 dated 13 December 1999, and the Order dated 21 December 1999 (ESP). ASME codes and supplementary analyses (e.g., Failure Modes and Effects Analysis) will be used to demonstrate that the TCWS equipment meets these essential safety requirements. TCWS is being designed to provide not only cooling, with a capacity of approximately 1 GW energy removal, but also elevated-temperature baking of the first-wall/blanket, vacuum vessel, and divertor. Additional TCWS functions include chemical control of water, draining and drying for maintenance, and facilitation of leak detection/localization. The TCWS interfaces with the majority of ITER systems, including the secondary cooling system. U.S. ITER is responsible for design, engineering, and procurement of the TCWS with industry support from an Engineering Services Organization (ESO) (AREVA Federal Services, with support from Northrop Grumman and OneCIS). The ITER International Organization (ITER-IO) is responsible for design oversight and equipment installation in Cadarache, France. TCWS equipment will be fabricated using ASME design codes with quality assurance and oversight by an Agreed Notified Body (approved by the French regulator) that will ensure regulatory compliance. This paper describes the TCWS design and how U.S. ITER and fabricators will use ASME codes to comply with EU Directives and French Orders and Decrees.
Comparison of different filter methods for data assimilation in the unsaturated zone
NASA Astrophysics Data System (ADS)
Lange, Natascha; Berkhahn, Simon; Erdal, Daniel; Neuweiler, Insa
2016-04-01
The unsaturated zone is an important compartment, which plays a role in the division of terrestrial water fluxes into surface runoff, groundwater recharge and evapotranspiration. For data assimilation in coupled systems it is therefore important to have a good representation of the unsaturated zone in the model. Flow processes in the unsaturated zone have all the typical features of flow in porous media: processes can have long memory, and as observations are scarce, hydraulic model parameters cannot be determined easily. However, they are important for the quality of model predictions. On top of that, the established flow models are highly non-linear. For these reasons, the use of the popular Ensemble Kalman Filter as a data assimilation method to estimate state and parameters in unsaturated zone models can be questioned. With respect to the long process memory in the subsurface, it has been suggested that iterative filters and smoothers may be more suitable for parameter estimation in unsaturated media. We test the performance of different iterative filters and smoothers for data assimilation with a focus on parameter updates in the unsaturated zone. In particular, we compare the Iterative Ensemble Kalman Filter and Smoother as introduced by Bocquet and Sakov (2013), as well as the Confirming Ensemble Kalman Filter and the modified Restart Ensemble Kalman Filter proposed by Song et al. (2014), to the original Ensemble Kalman Filter (Evensen, 2009). This is done with simple test cases generated numerically. We also consider test examples with a layering structure, as layering is often found in natural soils. We assume that observations are water contents, obtained from TDR probes or other observation methods sampling relatively small volumes. Particularly in larger data assimilation frameworks, a reasonable balance between computational effort and quality of results has to be found.
Therefore, we compare computational costs of the different methods as well as the quality of open loop model predictions and the estimated parameters. Bocquet, M. and P. Sakov, 2013: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlinear Processes in Geophysics 20(5): 803-818. Evensen, G., 2009: Data assimilation: The ensemble Kalman filter. Springer Science & Business Media. Song, X.H., L.S. Shi, M. Ye, J.Z. Yang and I.M. Navon, 2014: Numerical comparison of iterative ensemble Kalman filters for unsaturated flow inverse modeling. Vadose Zone Journal 13(2), 10.2136/vzj2013.05.0083.
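As background for the filters being compared, the analysis step of the original stochastic Ensemble Kalman Filter (Evensen, 2009) can be sketched as below. The state dimension, observation operator and error variance are illustrative assumptions, not values from the study; the state vector is imagined as water contents augmented with a parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(ensemble, obs, obs_op, obs_var):
    """One stochastic EnKF analysis step.

    ensemble : (n_state, n_members) array of forecast states
    obs      : (n_obs,) observation vector (e.g. TDR water contents)
    obs_op   : (n_obs, n_state) linear observation operator H
    obs_var  : scalar observation-error variance
    """
    n_state, n_members = ensemble.shape
    # Ensemble anomalies around the mean
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = obs_op @ A
    # Sample covariances P H^T and H P H^T
    PHt = A @ HA.T / (n_members - 1)
    HPHt = HA @ HA.T / (n_members - 1)
    R = obs_var * np.eye(len(obs))
    K = PHt @ np.linalg.inv(HPHt + R)          # Kalman gain
    # Perturbed observations, one realisation per member
    obs_pert = obs[:, None] + rng.normal(0.0, obs_var**0.5,
                                         (len(obs), n_members))
    return ensemble + K @ (obs_pert - obs_op @ ensemble)

# Toy example: 3 state variables, 50 members, one observation of the
# first variable (a hypothetical water content of 0.35).
ens = rng.normal(0.3, 0.05, (3, 50))
H = np.array([[1.0, 0.0, 0.0]])
analysis = enkf_analysis(ens, np.array([0.35]), H, 1e-4)
print(analysis.mean(axis=1))
```

Iterative variants such as those of Bocquet and Sakov (2013) or Song et al. (2014) repeat or restart this update to better handle the non-linearity of unsaturated flow models.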
Flood, Emuella; Silberg, Debra G; Romero, Beverly; Beusterien, Kathleen; Erder, M Haim; Cuffari, Carmen
2017-09-25
The purpose of this study is to develop patient-reported (PRO) and observer-reported (ObsRO) outcome measures of ulcerative colitis (UC) signs/symptoms in children aged 5-17 with mild/moderate UC. The daily ulcerative colitis signs and symptoms scale (DUCS) was developed in two phases. Phase I involved concept elicitation interviews with patients and healthcare providers, review of website posts and item generation. Phase II involved cognitive debriefing and assessment of usability and feasibility of the eDiaries. Participants were recruited from five US clinical sites, a research recruitment agency, and internet advertising. Thematic and content analysis was performed to identify concepts from Phase I. The Phase II cognitive debriefing interviews were analyzed iteratively to identify problems with clarity and relevance of eDiary content. The US Food and Drug Administration (FDA) also reviewed and provided feedback on the eDiaries. Phase I included 32 participants (22 remission; 10 active disease). Phase II included 38 participants (22 remission; 16 active disease). A core set of seven signs and symptoms emerged that were reported by at least 30% of the patients interviewed: abdominal pain, blood in stool, frequent stools, diarrhea, stool urgency, nighttime stools, and tiredness. Participant input influenced changes such as refinement of item wording, revision of graphics, and selection of response scales. Revisions suggested by FDA included simplifying the response scale and adding questions to capture symptoms during sleeping hours. The findings of instrument development suggest that the DUCS PRO and ObsRO eDiaries are content-valid instruments for capturing the daily signs and symptoms of pediatric patients with mild to moderate UC in a clinical trial setting.
Final Report on ITER Task Agreement 81-08
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard L. Moore
As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.
Analysis of Complex Intervention Effects in Time-Series Experiments.
ERIC Educational Resources Information Center
Bower, Cathleen
An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…
An Online Image Analysis Tool for Science Education
ERIC Educational Resources Information Center
Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.
2008-01-01
This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…
How do gut feelings feature in tutorial dialogues on diagnostic reasoning in GP traineeship?
Stolper, C F; Van de Wiel, M W J; Hendriks, R H M; Van Royen, P; Van Bokhoven, M A; Van der Weijden, T; Dinant, G J
2015-05-01
Diagnostic reasoning is considered to be based on the interaction between analytical and non-analytical cognitive processes. Gut feelings, a specific form of non-analytical reasoning, play a substantial role in diagnostic reasoning by general practitioners (GPs) and may activate analytical reasoning. In GP traineeships in the Netherlands, trainees mostly see patients alone but regularly consult with their supervisors to discuss patients and problems, receive feedback, and improve their competencies. In the present study, we examined the discussions of supervisors and their trainees about diagnostic reasoning in these so-called tutorial dialogues and how gut feelings feature in these discussions. Seventeen tutorial dialogues focusing on diagnostic reasoning were video-recorded and transcribed, and the protocols were analysed using a detailed bottom-up and iterative content analysis and coding procedure. The dialogues were segmented into quotes. Each quote received a content code and a participant code. The number of words per code was used as a unit of analysis to quantitatively compare the contributions to the dialogues made by supervisors and trainees, and the attention given to different topics. The dialogues were usually analytical reflections on a trainee's diagnostic reasoning. A hypothetico-deductive strategy was often used, by listing differential diagnoses and discussing what information guided the reasoning process and might confirm or exclude provisional hypotheses. Gut feelings were discussed in seven dialogues. They were used as a tool in diagnostic reasoning, inducing analytical reflection, sometimes on the entire diagnostic reasoning process. The emphasis in these tutorial dialogues was on analytical components of diagnostic reasoning. Discussing gut feelings in tutorial dialogues seems to be a good educational method to familiarize trainees with non-analytical reasoning.
Supervisors need specialised knowledge about these aspects of diagnostic reasoning and how to deal with them in medical education.
Ranney, Megan L.; Choo, Esther K.; Cunningham, Rebecca M.; Spirito, Anthony; Thorsen, Margaret; Mello, Michael J.; Morrow, Kathleen
2014-01-01
Purpose To elucidate key elements surrounding acceptability/feasibility, language, and structure of a text message-based preventive intervention for high-risk adolescent females. Methods We recruited high-risk 13- to 17-year-old females screening positive for past-year peer violence and depressive symptoms, during emergency department visits for any chief complaint. Participants completed semistructured interviews exploring preferences around text message preventive interventions. Interviews were conducted by trained interviewers, audio-recorded, and transcribed verbatim. A coding structure was iteratively developed using thematic and content analysis. Each transcript was double coded. NVivo 10 was used to facilitate analysis. Results Saturation was reached after 20 interviews (mean age 15.4; 55% white; 40% Hispanic; 85% with cell phone access). (1) Acceptability/feasibility themes: A text-message intervention was felt to support and enhance existing coping strategies. Participants had a few concerns about privacy and cost. Peer endorsement may increase uptake. (2) Language themes: Messages should be simple and positive. Tone should be conversational but not slang filled. (3) Structural themes: Messages may be automated but must be individually tailored on a daily basis. Both predetermined (automatic) and as-needed messages are requested. Dose and timing of content should be varied according to participants’ needs. Multimedia may be helpful but is not necessary. Conclusions High-risk adolescent females seeking emergency department care are enthusiastic about a text message-based preventive intervention. Incorporating thematic results on language and structure can inform development of future text messaging interventions for adolescent girls. Concerns about cost and privacy may be able to be addressed through the process of recruitment and introduction to the intervention. PMID:24559973
Edwards, M J; Jago, R; Sebire, S J; Kesten, J M; Pool, L; Thompson, J L
2015-01-01
Objectives The present study uses qualitative data to explore parental perceptions of how their young child's screen viewing and physical activity behaviours are influenced by their child's friends and siblings. Design Telephone interviews were conducted with parents of year 1 children (age 5–6 years). Interviews considered parental views on a variety of issues related to their child's screen viewing and physical activity behaviours, including the influence that their child's friends and siblings have over such behaviours. Interviews were transcribed verbatim and analysed using deductive content analysis. Data were organised using a categorisation matrix developed by the research team. Coding and theme generation was iterative and refined throughout. Data were entered into and coded within N-Vivo. Setting Parents were recruited through 57 primary schools located in Bristol and the surrounding area that took part in the B-ProAct1v study. Participants Fifty-three parents of children aged 5–6 years. Results Parents believe that their child's screen viewing and physical activity behaviours are influenced by their child's siblings and friends. Friends are considered to have a greater influence over the structured physical activities a child asks to participate in, whereas the influence of siblings is more strongly perceived over informal and spontaneous physical activities. In terms of screen viewing, parents suggest that their child's friends can heavily influence the content their child wishes to consume, however, siblings have a more direct and tangible influence over what a child watches. Conclusions Friends and siblings influence young children's physical activity and screen viewing behaviours. Child-focused physical activity and screen viewing interventions should consider the important influence that siblings and friends have over these behaviours. PMID:25976759
ITER Construction—Plant System Integration
NASA Astrophysics Data System (ADS)
Tada, E.; Matsuda, S.
2009-02-01
This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in kind by the member countries, integrated project management must be scoped in advance of the real work. This includes design, procurement, system assembly, testing, licensing and commissioning of ITER.
Establishing Reliability and Validity of the Criterion Referenced Exam of GeoloGy Standards EGGS
NASA Astrophysics Data System (ADS)
Guffey, S. K.; Slater, S. J.; Slater, T. F.; Schleigh, S.; Burrows, A. C.
2016-12-01
Discipline-based geoscience education researchers have considerable need for a criterion-referenced, easy-to-administer and -score conceptual diagnostic survey for undergraduates taking introductory science survey courses, so that faculty can better monitor the learning impacts of various interactive teaching approaches. To support ongoing education research across the geosciences, we are continuing to work rigorously and systematically to firmly establish the reliability and validity of the recently released Exam of GeoloGy Standards, EGGS. In educational testing, reliability refers to the consistency or stability of test scores, whereas validity refers to the accuracy of the inferences or interpretations one makes from test scores. Several types of reliability measures are being applied to the iterative refinement of the EGGS survey, including test-retest, alternate-form, split-half, internal consistency, and interrater reliability measures. EGGS rates strongly on most measures of reliability. For one, Cronbach's alpha provides a quantitative index of the extent to which students answer items consistently throughout the test, based on inter-item correlations. Traditional item analysis methods, including item difficulty and item discrimination indices, further quantify the degree to which a particular item reliably assesses students. Validity, on the other hand, is perhaps best described by the word accuracy. For example, content validity is the extent to which a measurement reflects the specific intended domain of the content, stemming from judgments of people who are either experts in the testing of that particular content area or are content experts. 
Perhaps more importantly, face validity is a judgment of how well an instrument reflects the science "at face value": the extent to which a test appears to measure the targeted scientific domain as viewed by laypersons, examinees, test users, the public, and other invested stakeholders.
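A minimal sketch of the reliability statistics named above (Cronbach's alpha, item difficulty, item discrimination), run on simulated 0/1 responses since the EGGS item data themselves are not included in the abstract:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_examinees, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def item_stats(binary):
    """Classical item difficulty (proportion correct) and discrimination
    (corrected item-total correlation) for 0/1 scored responses."""
    binary = np.asarray(binary, dtype=float)
    difficulty = binary.mean(axis=0)
    total = binary.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(binary[:, j], total - binary[:, j])[0, 1]
        for j in range(binary.shape[1])
    ])
    return difficulty, discrimination

# Simulated responses: 200 students, 5 items, scores driven by a
# latent ability plus noise (purely illustrative data).
rng = np.random.default_rng(1)
ability = rng.normal(size=200)
responses = (ability[:, None] + rng.normal(size=(200, 5)) > 0).astype(int)

alpha = cronbach_alpha(responses)
p, d = item_stats(responses)
print(alpha, p.round(2), d.round(2))
```

Higher alpha indicates more consistent responding across items; discrimination near zero or negative flags items that do not separate stronger from weaker examinees.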
A computer program for the design and analysis of low-speed airfoils, supplement
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1980-01-01
Three new options were incorporated into an existing computer program for the design and analysis of low speed airfoils. These options permit the analysis of airfoils having variable chord (variable geometry), a boundary layer displacement iteration, and the analysis of the effect of single roughness elements. All three options are described in detail and are included in the FORTRAN IV computer program.
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.
2014-08-21
In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. Results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy including first-mirror protection and cleaning techniques, reflectometry, refractometry, and tritium retention measurements, are discussed.
Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K
2014-12-01
An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). 
Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.
Analysis of the ITER central solenoid insert (CSI) coil stability tests
NASA Astrophysics Data System (ADS)
Savoldi, L.; Bonifetto, R.; Breschi, M.; Isono, T.; Martovetsky, N.; Ozeki, H.; Zanino, R.
2017-07-01
At the end of the test campaign of the ITER Central Solenoid Insert (CSI) coil in 2015, after 16,000 electromagnetic (EM) cycles, some tests were devoted to the study of the conductor stability, through the measurement of the Minimum Quench Energy (MQE). The tests were performed by means of an inductive heater (IH), located in the high-field region of the CSI and wrapped around the conductor. The calorimetric calibration of the IH is presented here, aimed at assessing the energy deposited in the conductor for different values of the IH electrical operating conditions. The MQE of the conductor of the ITER CS module 3L can be estimated as ∼200 J ± 20%, deposited on the whole conductor on a length of ∼10 cm (the IH length) in ∼40 ms, at current and magnetic field conditions relevant for the ITER CS operation. The repartition of the energy deposited in the conductor under the IH is computed to be ∼10% in the cable and 90% in the jacket by means of a 3D Finite Elements EM model. It is shown how this repartition implies that the bundle (cable + helium) heat capacity is fully available for stability on the time scale of the tested disturbances. This repartition is used in input to the thermal-hydraulic analysis performed with the 4C code, to assess the capability of the model to accurately reproduce the stability threshold of the conductor. The MQE computed by the code for this disturbance is in good agreement with the measured value, with an underestimation within 15% of the experimental value.
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a long-standing difficulty in seismic data processing. The traditional technology for multiple attenuation is based on minimizing the output energy of the seismic signal, a criterion built on second-order statistics that cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. To solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filter that yields a more accurate matching result. Finally, we apply an improved FastICA algorithm, based on the maximum non-Gaussianity criterion of the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that the prediction of the multiples requires no a priori information while achieving a better separation result. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data, in which the primaries and multiples are non-orthogonal. The experiments show that three to four iterations are sufficient to obtain accurate multiple predictions. Using our matching method and FastICA adaptive multiple subtraction, we can effectively preserve the primary energy in the seismic records while suppressing the free-surface multiples, especially the multiples related to the middle and deep sections.
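The maximum non-Gaussianity criterion that drives the FastICA step above can be illustrated on a toy two-component separation. The sketch below is a generic textbook FastICA (tanh contrast, symmetric decorrelation); the signals, mixing matrix, and iteration count are invented for illustration and are not the paper's field data or matching filter:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
primary = np.sign(np.sin(2 * np.pi * 5 * t))   # non-Gaussian stand-in for primaries
multiple = np.sin(2 * np.pi * 13 * t)          # stand-in for matched multiples
S = np.vstack([primary, multiple])
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # invented mixing matrix
X = A @ S                                      # "recorded" mixtures

# whiten the mixtures (zero mean, identity covariance)
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc

# FastICA fixed-point iteration maximizing non-Gaussianity (tanh contrast)
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)            # symmetric decorrelation
    W = U @ Vt
S_est = W @ Z                                  # components, up to sign/permutation
```

ICA recovers the components only up to sign, scale, and permutation, which is why an amplitude/phase matching step such as the one the paper describes is still needed before subtraction.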
NASA Astrophysics Data System (ADS)
Molde, H.; Zwick, D.; Muskulus, M.
2014-12-01
Support structures for offshore wind turbines contribute a large part of the total project cost, and a cost saving of a few percent would have considerable impact. At present, support structures are designed with simplified methods, e.g., spreadsheet analysis, before more detailed load calculations are performed. Due to the large number of load cases, only a few semimanual design iterations are typically executed. Computer-assisted optimization algorithms could help to further explore design limits and avoid unnecessary conservatism. In this study the simultaneous perturbation stochastic approximation method developed by Spall in the 1990s was assessed with respect to its suitability for support structure optimization. The method depends on a few parameters and an objective function that need to be chosen carefully. In each iteration the structure is evaluated by time-domain analyses, and joint fatigue lifetimes and ultimate strength utilization are computed from stress concentration factors. A pseudo-gradient is determined from only two analysis runs and the design is adjusted in the direction that improves it the most. The algorithm is able to generate considerably improved designs, compared to other methods, in a few hundred iterations, which is demonstrated for the NOWITECH 10 MW reference turbine.
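The pseudo-gradient at the heart of SPSA needs only two objective evaluations per iteration, regardless of the number of design variables. A minimal sketch, with a toy quadratic standing in for the expensive time-domain fatigue analysis and gain-sequence constants chosen for illustration:

```python
import numpy as np

def spsa_minimize(f, x0, a=0.1, c=0.1, alpha=0.602, gamma=0.101, n_iter=500, seed=0):
    """Simultaneous perturbation stochastic approximation (Spall).
    Each iteration estimates a pseudo-gradient from only TWO evaluations
    of the objective, however many design variables there are."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** alpha                            # decaying step size
        ck = c / k ** gamma                            # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # Bernoulli +/-1 perturbation
        ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck * delta)
        x = x - ak * ghat
    return x

# toy objective standing in for an expensive structural analysis
x_opt = spsa_minimize(lambda x: np.sum((x - 3.0) ** 2), np.zeros(5))
```

The gain-sequence exponents 0.602 and 0.101 are the values commonly recommended in Spall's guidelines; in practice `a` and `c` must be tuned to the noise level of the underlying analyses.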
A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.
Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John
2016-09-08
The purpose of this study was to evaluate performance of the third generation of model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparisons of those images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved a more isotropic noise behavior with less image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user selectable options and in general improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low-dose levels. © 2016 The Authors.
Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2015-01-01
An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five component semi-span balance is used to illustrate the application of the improved approach.
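The structure of such a regression model, a gage output fitted against load plus first- and second-order temperature-difference terms, can be sketched with synthetic data. All coefficients, ranges, and the single-load simplification below are invented for illustration; a full balance calibration model would also carry load cross-terms:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
load = rng.uniform(-1, 1, n)        # applied calibration load (normalized)
dT = rng.uniform(-20, 20, n)        # balance temperature minus reference temperature, K

# synthetic gage output with first- and second-order temperature effects
# (true coefficients 5.0, 0.03, 0.002 are invented)
out = 5.0 * load + 0.03 * dT + 0.002 * dT ** 2 + rng.normal(0, 0.01, n)

# regression model: output ~ intercept + load + dT + dT^2
X = np.column_stack([np.ones(n), load, dT, dT ** 2])
coef, *_ = np.linalg.lstsq(X, out, rcond=None)
```

Using the temperature *difference* rather than the absolute temperature keeps the intercept tied to the primary calibration temperature, which is what makes a tare load iteration at that temperature possible.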
NASA Astrophysics Data System (ADS)
Chen, Lei; Liu, Xiang; Lian, Youyun; Cai, Laizhong
2015-09-01
The hypervapotron (HV), as an enhanced heat transfer technique, will be used for ITER divertor components in the dome region as well as the enhanced heat flux first wall panels. W-Cu brazing technology has been developed at SWIP (Southwestern Institute of Physics), and one W/CuCrZr/316LN component of 450 mm×52 mm×166 mm with HV cooling channels will be fabricated for high heat flux (HHF) tests. Beforehand, an analysis was carried out to optimize the structure of the divertor component elements. ANSYS-CFX was used for the CFD analysis and ABAQUS was adopted for the thermal-mechanical calculations. The commercial code FE-SAFE was adopted to compute the fatigue life of the component. The tile size, the thickness of the tungsten tiles and the slit width among the tungsten tiles were optimized, and the HHF performance under International Thermonuclear Experimental Reactor (ITER) loading conditions was simulated. A brand-new tokamak, HL-2M, with an advanced divertor configuration is under construction at SWIP, where ITER-like flat-tile divertor components are adopted. This optimized design is expected to supply valuable data for the HL-2M tokamak. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2011GB110001 and 2011GB110004)
Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.
Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B
2016-01-01
We investigated the effects of low-dose multidetector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood-based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.
McDougal, Sarah J; Sullivan, Patrick S; Stekler, Joanne D; Stephenson, Rob
2015-01-01
Background Gay, bisexual, and other men who have sex with men (MSM) account for a disproportionate burden of new HIV infections in the United States. Mobile technology presents an opportunity for innovative interventions for HIV prevention. Some HIV prevention apps currently exist; however, it is challenging to encourage users to download these apps and use them regularly. An iterative research process that centers on the community’s needs and preferences may increase the uptake, adherence, and ultimate effectiveness of mobile apps for HIV prevention. Objective The aim of this paper is to provide a case study to illustrate how an iterative community approach to a mobile HIV prevention app can lead to changes in app content to appropriately address the needs and the desires of the target community. Methods In this three-phase study, we conducted focus group discussions (FGDs) with MSM and HIV testing counselors in Atlanta, Seattle, and US rural regions to learn preferences for building a mobile HIV prevention app. We used data from these groups to build a beta version of the app and theater tested it in additional FGDs. A thematic data analysis examined how this approach addressed preferences and concerns expressed by the participants. Results Participants were more willing to use the app during theater testing than during the first phase of FGDs. Many concerns that were identified in phase one (eg, disagreements about reminders for HIV testing, concerns about app privacy) were considered in building the beta version. Participants perceived these features as strengths during theater testing. However, some disagreements were still present, especially regarding the tone and language of the app. Conclusions These findings highlight the benefits of using an interactive and community-driven process to collect data on app preferences when building a mobile HIV prevention app.
Through this process, we learned how to be inclusive of the larger MSM population without marginalizing some app users. Though some issues in phase one were able to be addressed, disagreements still occurred in theater testing. If the app is going to address a large and diverse risk group, we cannot include niche functionality that may offend some of the target population. PMID:27227136
Blade design and analysis using a modified Euler solver
NASA Technical Reports Server (NTRS)
Leonard, O.; Vandenbraembussche, R. A.
1991-01-01
An iterative method for blade design based on an Euler solver, described in an earlier paper, is used to design compressor and turbine blades providing shock-free transonic flows. The method converges rapidly and indicates how sensitive the flow is to small modifications of the blade geometry, a sensitivity that the classical iterative use of analysis methods might not be able to resolve. The relationship between the required Mach number distribution and the resulting geometry is discussed. Examples show how geometrical constraints imposed upon the blade shape can be respected by using free geometrical parameters or by relaxing the required Mach number distribution. The same code is used both for the design of the required geometry and for the off-design calculations. Examples illustrate the difficulty of designing blade shapes with optimal performance outside the design point as well.
Formulation for Simultaneous Aerodynamic Analysis and Design Optimization
NASA Technical Reports Server (NTRS)
Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.
1993-01-01
An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
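The adjoint-variable idea, in which one extra linear solve replaces a sensitivity solve per design variable, can be shown on a toy linear state equation. The matrices, the load function, and the objective below are invented for illustration and are unrelated to the paper's nozzle-flow problems:

```python
import numpy as np

# Toy constrained problem: the state u solves R(u, a) = K u - f(a) = 0,
# and the objective is J = 0.5 * u^T u.  The adjoint method delivers the
# full gradient dJ/da at the cost of ONE extra linear solve, independent
# of the number of design variables in a.
K = np.array([[4.0, 1.0], [1.0, 3.0]])

def f(a):                           # load vector depending on design variables
    return np.array([a[0] + 2 * a[1], a[1]])

a = np.array([1.0, 2.0])
u = np.linalg.solve(K, f(a))        # state (flow) solve

# adjoint solve: K^T lam = dJ/du = u
lam = np.linalg.solve(K.T, u)

# since du/da = K^{-1} df/da, the gradient is dJ/da = lam^T df/da
dfda = np.array([[1.0, 2.0],        # df/da, column j = partial f / partial a_j
                 [0.0, 1.0]])
grad = lam @ dfda
```

A finite-difference check on each design variable confirms the adjoint gradient while requiring two extra state solves per variable, which is exactly the cost the adjoint formulation avoids.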
A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.
Liang, Faming; Kim, Jinsu; Song, Qifan
2016-01-01
Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, typically requiring a large number of iterations and a complete scan of the full dataset at each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for taming powerful MCMC methods for big data analysis: the full-data log-likelihood is replaced by a Monte Carlo average of the log-likelihoods calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset across iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining it with reversible jump MCMC and simulated annealing, respectively.
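The core substitution BMH makes, a Monte Carlo average of bootstrap-sample log-likelihoods in place of the full-data log-likelihood inside a Metropolis-Hastings sampler, can be sketched for a Gaussian mean. The sample sizes, proposal scale, and the sequential evaluation of the surrogate terms below are illustrative simplifications; the paper evaluates the bootstrap terms in parallel:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=10_000)       # "big" dataset with known sd = 1

k, m = 20, 500                                  # number of bootstrap samples, subsample size
boots = [rng.choice(data, size=m, replace=True) for _ in range(k)]

def approx_loglik(theta):
    # BMH-style surrogate: average the rescaled subsample log-likelihoods;
    # in the real algorithm these k terms are computed in parallel
    scale = len(data) / m
    return np.mean([scale * np.sum(-0.5 * (b - theta) ** 2) for b in boots])

# plain random-walk Metropolis-Hastings on theta with a flat prior
theta, chain = 0.0, []
for _ in range(3000):
    prop = theta + rng.normal(0, 0.05)
    if np.log(rng.random()) < approx_loglik(prop) - approx_loglik(theta):
        theta = prop
    chain.append(theta)
```

Because the surrogate only ever touches the k fixed subsamples, no iteration rescans the full dataset, which is the property that makes the scheme attractive for big data.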
NASA Astrophysics Data System (ADS)
Federici, Gianfranco; Raffray, A. René
1997-04-01
The transient thermal model RACLETTE (acronym of Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, Be, W, and CFCs for the divertor plates) and including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications on the heat removal performance and erosion damage of the various ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness.
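The kind of slow-transient surface heating analysed here can be mimicked with a minimal 1-D explicit heat-conduction sketch. The material constants, heat flux, and geometry below are order-of-magnitude stand-ins, not ITER design values, and the sketch omits the vaporisation, melting, and coolant-response physics that RACLETTE models:

```python
import numpy as np

# 1-D explicit heat conduction through an armour tile under a surface
# heat-flux transient (all constants are illustrative stand-ins)
L, n = 0.01, 50                        # 10 mm armour thickness, 50 nodes
k, rho, cp = 100.0, 1.8e3, 1.8e3       # W/(m K), kg/m^3, J/(kg K), CFC-like orders
q = 5e6                                # 5 MW/m^2 surface flux during the transient
dx = L / (n - 1)
alpha = k / (rho * cp)                 # thermal diffusivity
dt = 0.4 * dx * dx / alpha             # inside the explicit stability limit
T = np.full(n, 400.0)                  # initial temperature, K
T_cool = 400.0                         # coolant-side temperature, K

for _ in range(int(1.0 / dt)):         # 1 s transient
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q * dx / k         # imposed surface heat flux
    Tn[-1] = T_cool                    # coolant side held at constant temperature
    T = Tn
```

Even this crude sketch reproduces the qualitative behaviour the paper quantifies: the surface temperature rise during a slow transient is governed by the flux, the armour conductivity and heat capacity, and the armour thickness.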
Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Watson, Willie R.; Mani, Ramani
2007-01-01
A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.
Parallel solution of the symmetric tridiagonal eigenproblem. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-10-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speed up, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptions of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
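The bisection/inverse-iteration combination favoured above rests on the Sturm-sequence property of symmetric tridiagonal matrices. A serial sketch (the thesis distributes the eigenvalue searches across processors; the matrix below is an invented example):

```python
import numpy as np

def sturm_count(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are smaller than x (Sturm sequence)."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300                      # avoid division by zero
        if q < 0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    """Bisection for the k-th smallest eigenvalue, starting from Gershgorin bounds."""
    r = np.abs(e).max()
    lo, hi = d.min() - 2 * r, d.max() + 2 * r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def inverse_iteration(d, e, lam, n_iter=5, seed=0):
    """Eigenvector by inverse iteration with a slightly shifted eigenvalue."""
    n = len(d)
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    A = T - (lam + 1e-10) * np.eye(n)       # small shift keeps A nonsingular
    v = np.random.default_rng(seed).standard_normal(n)
    for _ in range(n_iter):
        v = np.linalg.solve(A, v)
        v /= np.linalg.norm(v)
    return v
```

Each eigenvalue search is independent, which is what makes this pairing "the fastest and most parallel" in the comparison above; the thesis's contribution includes the statistical treatment of the random starting vector used in the inverse iteration.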
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
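The iterative reconstruction family referred to above can be illustrated with a generic SIRT-style update on a toy linear system standing in for the projection geometry. This is a textbook sketch, not the authors' optimized implementation, and the system matrix below is random rather than a real projector:

```python
import numpy as np

def sirt(A, b, n_iter=2000):
    """Simultaneous iterative reconstruction technique (SIRT/SART-style):
    x <- x + C A^T R (b - A x), where R and C hold the inverse row and
    column sums of the non-negative system matrix A."""
    R = 1.0 / A.sum(axis=1)
    C = 1.0 / A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# toy consistent system standing in for a projection matrix and sinogram
rng = np.random.default_rng(0)
A = rng.random((80, 16))        # non-negative "projection" matrix
x_true = rng.random(16)         # "object"
b = A @ x_true                  # noise-free "measurements"
x_rec = sirt(A, b)
```

Iterative schemes of this type tolerate fewer and noisier measurements than filtered back-projection because each update redistributes the residual consistently across the whole system, which is the trade-off the paper exploits to cut acquisition time.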
Measured vs. Predicted Pedestal Pressure During RMP ELM Control in DIII-D
NASA Astrophysics Data System (ADS)
Zywicki, Bailey; Fenstermacher, Max; Groebner, Richard; Meneghini, Orso
2017-10-01
From database analysis of DIII-D plasmas with Resonant Magnetic Perturbations (RMPs) for ELM control, we will compare the experimental pedestal pressure (p_ped) to EPED code predictions and present the dependence of any p_ped differences from EPED on RMP parameters not included in the EPED model, e.g., RMP field strength and toroidal/poloidal spectrum. The EPED code, based on Peeling-Ballooning and Kinetic Ballooning instability constraints, will also be used by ITER to predict the H-mode p_ped without RMPs. ITER plans to use RMPs as an effective ELM control method. The need to control ELMs in ITER is of the utmost priority, as it directly correlates to the lifetime of the plasma facing components. An accurate means of determining the impact of RMP ELM control on the p_ped is needed, because the device fusion power is strongly dependent on p_ped. With this new collection of data, we aim to provide guidance to predictions of the ITER pedestal during RMP ELM control that can be incorporated in a future predictive code. Work supported in part by US DoE under the Science Undergraduate Laboratory Internship (SULI) program and under DE-FC02-04ER54698, and DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Brighenti, A.; Bonifetto, R.; Isono, T.; Kawano, K.; Russo, G.; Savoldi, L.; Zanino, R.
2017-12-01
The ITER Central Solenoid Model Coil (CSMC) is a superconducting magnet, layer-wound two-in-hand using Nb3Sn cable-in-conduit conductors (CICCs) with the central channel typical of ITER magnets, cooled with supercritical He (SHe) at ∼4.5 K and 0.5 MPa, operating for approximately 15 years at the National Institutes for Quantum and Radiological Science and Technology in Naka, Japan. The aim of this work is to give an overview of the issues related to the hydraulic performance of the three different CICCs used in the CSMC based on the extensive experimental database put together during the past 15 years. The measured hydraulic characteristics are compared for the different test campaigns and compared also to those coming from the tests of short conductor samples when available. It is shown that the hydraulic performance of the CSMC conductors did not change significantly in the sequence of test campaigns with more than 50 cycles up to 46 kA and 8 cooldown/warmup cycles from 300 K to 4.5 K. The capability of the correlations typically used to predict the friction factor of the SHe for the design and analysis of ITER-like CICCs is also shown.
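For reference, one commonly quoted bundle-region friction-factor correlation for cable-in-conduit conductors is the Katheder correlation, sketched below. The coefficients are quoted from memory as an illustration and should be checked against the original source before any design use:

```python
def katheder_friction(re, void_fraction=0.34):
    """Katheder-type friction factor for the bundle region of a CICC:
    f = (0.051 + 19.5 / Re^0.88) / void_fraction^0.72.
    Coefficients as commonly quoted in the CICC literature; verify against
    the original reference before design use.  The void fraction here is
    an illustrative value, not a CSMC conductor parameter."""
    return (0.051 + 19.5 / re ** 0.88) / void_fraction ** 0.72

# friction factor over a range of Reynolds numbers
fs = [katheder_friction(re) for re in (1e3, 1e4, 1e5)]
```

Comparing measured pressure-drop data against correlations of this form is essentially what the assessment described above does for the three CSMC conductor types.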
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
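The scheme's two ingredients, a growing number of histories per iteration and a relaxation factor that decays as samples accumulate, can be sketched as follows. The five-node "power shape", growth factor, and noise model below are invented stand-ins for a real Monte Carlo tally with thermal-hydraulic feedback:

```python
import numpy as np

rng = np.random.default_rng(0)
true_power = np.array([0.8, 1.1, 1.3, 1.1, 0.7])
true_power = true_power / true_power.sum()        # reference axial power shape

def mc_estimate(shape, n_hist):
    """Stand-in for a Monte Carlo power tally: the 'true' shape (which in a
    real coupled calculation would depend on thermal-hydraulic feedback)
    plus statistical noise shrinking as 1/sqrt(n_hist)."""
    noisy = shape + rng.normal(0, 1.0, size=shape.size) / np.sqrt(n_hist)
    return noisy / noisy.sum()

power = np.full(5, 1 / 5)          # flat initial guess
n_hist, total = 1000, 0
for _ in range(20):
    n_hist = int(n_hist * 1.3)     # growing number of histories per iteration
    total += n_hist
    alpha = n_hist / total         # relaxation factor decays as samples accumulate
    power = (1 - alpha) * power + alpha * mc_estimate(true_power, n_hist)
```

With this choice of relaxation factor, the relaxed estimate ends up weighting each iteration's tally by its share of the total histories, so no early, noisy iteration dominates the converged power distribution.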
Manganelli, Joe; Threatt, Anthony; Brooks, Johnell O; Healy, Stan; Merino, Jessica; Yanik, Paul; Walker, Ian; Green, Keith
2014-01-01
This article presents the results of a qualitative study that confirmed, classified, and prioritized user needs for the design of a more useful, usable, and actively assistive over-the-bed table. Manganelli et al. (2014) generated a list of 74 needs for use in developing an actively assistive over-the-bed table. This present study assesses the value and importance of those needs. Fourteen healthcare subject matter experts and eight research and design subject matter experts engaged in a participatory and iterative research and design process. A mixed methods qualitative approach used methodological triangulation to confirm the value of the findings and ratings to establish importance. Open and closed card sorts and a Delphi study were used. Data analysis methods included frequency analysis, content analysis, and a modified Kano analysis. A table demonstrating the needs that are of high importance to both groups of subject matter experts and classification of the design challenges each represents was produced. Through this process, the list of 74 needs was refined to the 37 most important need statements for both groups. Designing a more useful, usable, and actively assistive over-the-bed table is primarily about the ability to position it optimally with respect to the user for any task, as well as improving ease of use and usability. It is also important to make explicit and discuss the differences in priorities and perspectives demonstrated between research and design teams and their clients. © 2014 Vendome Group, LLC.
Dust measurements in tokamaks (invited).
Rudakov, D L; Yu, J H; Boedo, J A; Hollmann, E M; Krasheninnikov, S I; Moyer, R A; Muller, S H; Pigarov, A Yu; Rosenberg, M; Smirnov, R D; West, W P; Boivin, R L; Bray, B D; Brooks, N H; Hyatt, A W; Wong, C P C; Roquemore, A L; Skinner, C H; Solomon, W M; Ratynskaia, S; Fenstermacher, M E; Groth, M; Lasnier, C J; McLean, A G; Stangeby, P C
2008-10-01
Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, detecting dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by cameras, and spectroscopic diagnostics observe an increase in carbon line (CI, CII, C2 dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudakov, D. L.; Yu, J. H.; Boedo, J. A.
Evolutionary Capability Delivery of Coast Guard Manpower System
2014-06-01
Office; IID: iterative incremental development model; IT: information technology; MA: major accomplishment; MRA: manpower requirements analysis; MRD: manpower... CG will need to ensure that development is low risk. The CG uses Manpower Requirements Analyses (MRAs) to collect the necessary manpower data to... of users. The CG uses two business processes to manage human capital: Manpower Requirements Analysis (MRA) and Manpower Requirements
NASA Astrophysics Data System (ADS)
Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.
2013-12-01
Bearing failure is one of the most common causes of machine breakdowns and accidents. Therefore, the fault diagnosis of rolling element bearings is of great significance to the safe and efficient operation of machines, owing to its fault indication and accident prevention capability in engineering applications. Based on orthogonal projection theory, a novel method is proposed in this paper to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of squared envelope analysis and is named spectral auto-correlation analysis (SACA). Meanwhile, SACA is also a simplified form of cyclostationary analysis and can be carried out iteratively in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Comparing the results of SACA, traditional envelope analysis and squared envelope analysis, it is found that the SACA result is easier to interpret, owing to the more prominent harmonic amplitudes at the fault characteristic frequency, and that SACA with a proper number of iterations further enhances the fault features.
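As a rough illustration of the underlying idea only (not the authors' SACA implementation), the sketch below computes a squared-envelope spectrum via the analytic signal for a synthetic amplitude-modulated bearing-like signal; the 87 Hz fault frequency, 3 kHz carrier and all other parameters are invented.

```python
import numpy as np

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum via the analytic signal (Hilbert transform)."""
    n = len(x)
    # Build the analytic signal in the frequency domain:
    # keep DC, double positive frequencies, zero negative frequencies.
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    env2 = np.abs(analytic) ** 2        # squared envelope
    env2 -= env2.mean()                 # remove the DC component
    spec = np.abs(np.fft.rfft(env2)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Synthetic "fault" signal: a 3 kHz resonance amplitude-modulated at a
# hypothetical 87 Hz fault characteristic frequency.
fs, T = 20000, 2.0
t = np.arange(int(fs * T)) / fs
x = (1 + 0.8 * np.cos(2 * np.pi * 87 * t)) * np.cos(2 * np.pi * 3000 * t)
freqs, spec = squared_envelope_spectrum(x, fs)
peak = freqs[1:][np.argmax(spec[1:])]   # dominant envelope frequency (skip DC bin)
```

The modulation frequency, not the carrier, dominates the envelope spectrum, which is why envelope-based methods expose the fault characteristic frequency.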
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.
1995-01-01
Solving for the displacements of free-free coupled systems acted upon by static loads is commonly performed throughout the aerospace industry. Many times, these problems are solved using static analysis with inertia relief. This solution technique allows for a free-free static analysis by balancing the applied loads with inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus displacement-dependent loads. Solving for the final displacements of such systems is commonly performed using iterative solution techniques. Unfortunately, these techniques can be time-consuming and labor-intensive. Since the coupled system equations for free-free systems with displacement-dependent loads can be written in closed-form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. Using a MSC/NASTRAN DMAP Alter, displacement-dependent loads have been included in static analysis with inertia relief. Such an Alter has been used successfully to solve efficiently a common aerospace problem typically solved using an iterative technique.
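The closed-form idea can be shown on a toy stiffness system (this is not the MSC/NASTRAN DMAP Alter itself, and K, F0 and C below are made-up numbers): when the applied load is a base load F0 plus a displacement-dependent term C·u, the common fixed-point iteration and the direct solve of (K − C)u = F0 give the same answer.

```python
import numpy as np

# Toy 3-DOF constrained stiffness matrix and base load (illustrative only).
K = np.array([[ 4., -2.,  0.],
              [-2.,  4., -2.],
              [ 0., -2.,  4.]])
F0 = np.array([1., 0., 2.])
# Hypothetical displacement-dependent load: F = F0 + C @ u.
C = 0.3 * np.eye(3)

# Closed-form solution: move the displacement-dependent term to the left side.
u_closed = np.linalg.solve(K - C, F0)

# Iterative solution: repeatedly solve K u = F0 + C u_k until converged.
u = np.zeros(3)
for _ in range(200):
    u = np.linalg.solve(K, F0 + C @ u)
```

The iteration converges here because the displacement-dependent term is small relative to the stiffness; the closed-form solve avoids that convergence question entirely, which is the advantage the abstract describes.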
NASA Astrophysics Data System (ADS)
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
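The subset structure of OS-EM can be sketched on a small, made-up linear emission model (this is not the authors' SPECT reconstruction with 3D detector response, and RBI's rescaling of each subset update, which guarantees better convergence behavior, is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical toy system: 40 detector bins, 16 image pixels, all invented.
A = rng.random((40, 16)) + 0.05      # strictly positive system matrix
x_true = rng.random(16) + 0.1
y = A @ x_true                       # noise-free projection data

def os_em(A, y, n_subsets=4, n_iter=50):
    """Ordered-subsets EM: one multiplicative ML-EM update per projection subset."""
    x = np.ones(A.shape[1])          # positive initial estimate
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            # ML-EM update restricted to this subset's projections.
            x = x * (As.T @ (y[s] / (As @ x))) / As.sum(axis=0)
    return x

x_rec = os_em(A, y)
resid = np.linalg.norm(A @ x_rec - y) / np.linalg.norm(y)
```

Each pass over all subsets performs n_subsets multiplicative updates, which is the source of the acceleration over plain ML-EM noted in the abstract.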
Fine-resolution conservation planning with limited climate-change information.
Shah, Payal; Mallory, Mindy L; Ando, Amy W; Guntenspergen, Glenn R
2017-04-01
Climate-change induced uncertainties in future spatial patterns of conservation-related outcomes make it difficult to implement standard conservation-planning paradigms. A recent study translates Markowitz's risk-diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate-change scenarios for carrying out fine-resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk-return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate-change information and full climate-change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate-change forecasts such that the best possible risk-return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate-change information could be reduced by 17% relative to other iterative approaches. © 2016 Society for Conservation Biology.
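The Markowitz-style diversification the study builds on can be sketched as a minimum-variance allocation across subregions; all expected returns and covariances below are illustrative stand-ins, not data from the Prairie Pothole Region.

```python
import numpy as np

# Hypothetical expected conservation returns for three subregions under
# climate uncertainty, and their covariance (illustrative numbers only).
mu = np.array([0.08, 0.06, 0.10])
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.020, 0.004],
                [0.010, 0.004, 0.060]])

# Minimum-variance portfolio with weights summing to 1 has the closed form
# w ∝ cov^{-1} 1 (budget constraint only, short positions allowed).
ones = np.ones(len(mu))
w = np.linalg.solve(cov, ones)
w /= w.sum()

expected_return = w @ mu                 # portfolio-level expected outcome
risk = np.sqrt(w @ cov @ w)              # portfolio standard deviation
```

Diversifying across subregions yields a risk no larger than that of the least risky single subregion, which is the property the spatial portfolio approach exploits.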
Perception of competence in middle school physical education: instrument development and validation.
Scrabis-Fletcher, Kristin; Silverman, Stephen
2010-03-01
Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and the scores validated to measure POC in middle school PE. A multiphase design was used consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N=1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores of POC measurement in middle school PE.
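For reference, the RMSEA reported above is conventionally computed from the model chi-square, its degrees of freedom, and the sample size; a minimal helper implementing the standard formula (not the authors' software, and the example numbers are invented):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation.

    chi2: model chi-square statistic
    df:   its degrees of freedom
    n:    sample size
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# A model fitting no worse than expected by chance (chi2 <= df) gives RMSEA = 0;
# values below about .06 are usually read as good fit (Hu & Bentler, 1999).
```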
Application Agreement and Integration Services
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Hall, Brendan; Schweiker, Kevin
2013-01-01
Application agreement and integration services are required by distributed, fault-tolerant, safety-critical systems to assure required performance. An analysis of distributed and hierarchical agreement strategies is developed against the backdrop of observed agreement failures in fielded systems. The documented work was performed under NASA Task Order NNL10AB32T, Validation And Verification of Safety-Critical Integrated Distributed Systems Area 2. This document is intended to satisfy the requirements for deliverable 5.2.11 under Task 4.2.2.3. This report discusses the challenges of maintaining application agreement and integration services. A literature search is presented that documents previous work in the area of replica determinism. Sources of non-deterministic behavior are identified and examples are presented where system-level agreement failed to be achieved. We then explore how TTEthernet services can be extended to supply some interesting application agreement frameworks. This document assumes that the reader is familiar with the TTEthernet protocol. The reader is advised to read the TTEthernet protocol standard [1] before reading this document. This document does not reiterate the content of the standard.
On-line milk spectrometry: analysis of bovine milk composition
NASA Astrophysics Data System (ADS)
Spitzer, Kyle; Kuennemeyer, Rainer; Woolford, Murray; Claycomb, Rod
2005-04-01
We present partial least squares (PLS) regressions to predict the composition of raw, unhomogenised milk using visible to near-infrared spectroscopy. A total of 370 milk samples from individual quarters were collected and analysed on-line by two low-cost spectrometers in the wavelength ranges 380-1100 nm and 900-1700 nm. Samples were collected from 22 Friesian, 17 Jersey, 2 Ayrshire and 3 Friesian-Jersey crossbred cows over a period of 7 consecutive days. Transmission spectra were recorded in an inline flowcell through a 0.5 mm thick milk sample. PLS models, with wavelength selection performed using iterative PLS, were developed for fat, protein, lactose, and somatic cell content. The root mean square errors of prediction (and correlation coefficients) for the NIR and visible spectrometers, respectively, were 0.70% (0.93) and 0.91% (0.91) for fat, 0.65% (0.5) and 0.47% (0.79) for protein, 0.36% (0.49) and 0.45% (0.43) for lactose, and 0.50 (0.54) and 0.48 (0.51) for log10 somatic cells.
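A minimal single-response PLS (NIPALS) fit on synthetic data sketches the modelling approach only; the sample count, "wavelengths" and noise-free response below are fabricated, and the published models' iterative wavelength selection is omitted.

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 (NIPALS); returns regression coefficients B and intercept b0."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                     # weight: covariance direction with y
        if np.linalg.norm(w) < 1e-12:     # response already fully explained
            break
        w /= np.linalg.norm(w)
        t = Xc @ w                        # scores
        tt = t @ t
        p = Xc.T @ t / tt                 # X loadings
        q = (yc @ t) / tt                 # y loading
        Xc -= np.outer(t, p)              # deflate X and y
        yc -= q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, ym - Xm @ B

rng = np.random.default_rng(3)
X = rng.random((50, 10))                  # 50 "spectra" at 10 "wavelengths"
b_true = rng.normal(size=10)
y = X @ b_true                            # noise-free synthetic "fat content"
B, b0 = pls1(X, y, n_components=10)
pred = X @ B + b0
```

With as many components as wavelengths and noise-free data the fit is exact; in practice far fewer components are retained, which is what makes PLS usable on highly collinear spectra.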
Evans, Val; MacLeod, Sheona
2018-01-01
Objective Major changes in the design and delivery of clinical academic training in the United Kingdom have occurred, yet there has been little exploration of the perceptions of integrated clinical academic trainees or educators. We obtained the views of a range of key stakeholders involved in clinical academic training in the East Midlands. Design A qualitative study with inductive iterative thematic content analysis of findings from trainee surveys and facilitated focus groups. Setting The East Midlands School of Clinical Academic Training. Participants Integrated clinical academic trainees, and clinical and academic educators involved in clinical academic training. Main outcome measures The experience, opinions and beliefs of key stakeholders about barriers and enablers in the delivery of clinical academic training. Results We identified key themes, many shared by both trainees and educators. These highlighted issues in the systems and processes of the integrated academic pathways, career pathways, supervision and support, the assessment process, and the balance between clinical and academic training. Conclusions Our findings help inform the future development of integrated academic training programmes. PMID:29487745
Valentine, Sarah E; Borba, Christina P C; Dixon, Louise; Vaewsorn, Adin S; Guajardo, Julia Gallegos; Resick, Patricia A; Wiltsey Stirman, Shannon; Marques, Luana
2017-03-01
As part of a larger implementation trial for cognitive processing therapy (CPT) for posttraumatic stress disorder (PTSD) in a community health center, we used formative evaluation to assess relations between iterative cultural adaption (for Spanish-speaking clients) and implementation outcomes (appropriateness and acceptability) for CPT. Qualitative data for the current study were gathered through multiple sources (providers: N = 6; clients: N = 22), including CPT therapy sessions, provider fieldnotes, weekly consultation team meetings, and researcher fieldnotes. Findings from conventional and directed content analysis of the data informed refinements to the CPT manual. Data-driven refinements included adaptations related to cultural context (i.e., language, regional variation in wording), urban context (e.g., crime/violence), and literacy level. Qualitative findings suggest improved appropriateness and acceptability of CPT for Spanish-speaking clients. Our study reinforces the need for dual application of cultural adaptation and implementation science to address the PTSD treatment needs of Spanish-speaking clients. © 2016 Wiley Periodicals, Inc.
Han, Heeyoung; Papireddy, Muralidhar Reddy; Hingle, Susan T; Ferguson, Jacqueline Anne; Koschmann, Timothy; Sandstrom, Steve
2018-07-01
Individualized structured feedback is an integral part of a resident's learning in communication skills. However, it is not clear what feedback residents receive for their communication skills development in real patient care. We aimed to identify the most common feedback topics given to residents regarding communication skills during Internal Medicine residency training. We analyzed Resident Audio-recording Project feedback data from 2008 to 2013 using a content analysis approach. Using open coding and an iterative categorization process, we identified 15 emerging themes for both positive and negative feedback. The most recurrent feedback topics were Patient education, Thoroughness, Organization, Questioning strategy, and Management. The residents were guided to improve their communication skills regarding Patient education, Thoroughness, Management, and Holistic exploration of the patient's problem. Thoroughness and Communication intelligibility were newly identified themes that are rarely discussed in existing frameworks. Assessment rubrics serve as a lens through which we assess the adequacy of residents' communication skills. Rather than sticking to a specific rubric, we chose to let the rubric evolve through our experience.
Cultural adaptation of a supportive care needs measure for Hispanic men cancer survivors.
Martinez Tyson, Dinorah; Medina-Ramirez, Patricia; Vázquez-Otero, Coralia; Gwede, Clement K; Bobonis, Margarita; McMillan, Susan C
2018-01-01
Research with ethnic minority populations requires instrumentation that is culturally and linguistically relevant. The aim of this study was to translate and culturally adapt the Cancer Survivor Unmet Needs measure into Spanish. We describe the iterative, community-engaged consensus-building approaches used to adapt the instrument for Hispanic male cancer survivors. We used an exploratory sequential mixed method study design. Methods included translation and back-translation, focus groups with cancer survivors (n = 18) and providers (n = 5), use of cognitive interview techniques to evaluate the comprehension and acceptability of the adapted instrument with survivors (n = 12), ongoing input from the project's community advisory board, and preliminary psychometric analysis (n = 84). The process emphasized conceptual, content, semantic, and technical equivalence. Combining qualitative and quantitative approaches offered a rigorous, systematic, and contextual approach beyond translation alone and supports the cultural adaptation of this measure in a purposeful and relevant manner. Our findings highlight the importance of going beyond translation when adapting measures for cross-cultural populations and illustrate the importance of taking culture, literacy, and language into consideration.
Valentine, Sarah E.; Borba, Christina P. C.; Dixon, Louise; Vaewsorn, Adin S.; Guajardo, Julia Gallegos; Resick, Patricia A.; Wiltsey-Stirman, Shannon; Marques, Luana
2016-01-01
Structure and mechanical properties of improved cast stainless steels for nuclear applications
Kenik, Edward A.; Busby, Jeremy T.; Gussev, Maxim N.; ...
2016-10-27
Casting of stainless steels is a promising and cost-saving way of directly producing large and complex structures, such as shield modules or divertors for ITER. Here, a series of modified high-nitrogen cast steels has been developed and characterized. The steels, based on the cast equivalent of the 316 composition, have increased N (0.14-0.36%) and Mn (2-5.1%) content; copper was added to one of the heats. Mechanical tests were conducted with non-irradiated and neutron-irradiated specimens at 0.7 dpa. It was established that alloying with nitrogen significantly improves the yield stress of non-irradiated steels and the deformation hardening rate. Manganese tended to decrease yield stress, but increased radiation hardening. Furthermore, the role of copper on mechanical properties was negligibly small. Analysis of the structure was conducted using SEM-EDS, and the nature and compositions of the second phases and inclusions were analyzed in detail. We show that the modified steels, compared to the reference material, exhibit significantly reduced elemental inhomogeneity and second phase formation.
NASA Technical Reports Server (NTRS)
Hillger, D. W.; Vonder Haar, T. H.
1977-01-01
The ability to provide mesoscale temperature and moisture fields from operational satellite infrared sounding radiances over the United States is explored. High-resolution sounding information for mesoscale analysis and forecasting is shown to be obtainable in mostly clear areas. An iterative retrieval algorithm applied to NOAA-VTPR radiances uses a mean radiosonde sounding as a best initial-guess profile. Temperature soundings are then retrieved at a horizontal resolution of about 70 km, as is an indication of the precipitable water content of the vertical sounding columns. Derived temperature values may be biased in general by the initial-guess sounding or in certain areas by the cloud correction technique, but the resulting relative temperature changes across the field when not contaminated by clouds will be useful for mesoscale forecasting and models. The derived moisture, affected only by high clouds, proves to be reliable to within 0.5 cm of precipitable water and contains valuable horizontal information. Present-day applications from polar-orbiting satellites as well as possibilities from upcoming temperature and moisture sounders on geostationary satellites are noted.
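The retrieval step can be caricatured as an iterative linear inversion: starting from a first-guess profile, each pass nudges the profile to reduce the radiance residual. This Landweber-style sketch uses a made-up, well-conditioned weighting-function matrix and is not the NOAA-VTPR operational algorithm (real sounding kernels are broad and the problem is ill posed, which is why the first guess matters).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                     # channels == levels, for simplicity
# Hypothetical weighting-function matrix, made diagonally dominant so the
# toy problem is well conditioned.
A = np.eye(n) + 0.1 * rng.random((n, n))
x_true = 240.0 + 30.0 * rng.random(n)     # "true" temperature profile (K)
y = A @ x_true                            # noise-free synthetic radiances

x = np.full(n, 250.0)                     # first guess: mean-sounding profile
step = 1.0 / np.linalg.norm(A, 2) ** 2    # Landweber step size, 1/||A||^2
for _ in range(2000):
    x = x + step * A.T @ (y - A @ x)      # correct against the radiance residual
```

In the noise-free, well-conditioned toy the iteration recovers the profile; with real radiances the iteration is stopped early, leaving the retrieval partly biased toward the first guess, as the abstract cautions.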
An Expert Map of Gambling Risk Perception.
Spurrier, Michael; Blaszczynski, Alexander; Rhodes, Paul
2015-12-01
The purpose of the current study was to investigate the moderating or mediating role played by risk perception in decision-making, gambling behaviour, and disordered gambling aetiology. Eleven expert gambling clinicians and researchers completed a semi-structured interview derived from mental models and grounded theory methodologies. Expert interview data were used to construct a comprehensive expert mental model 'map' detailing risk-perception-related factors contributing to harmful or safe gambling. Systematic overlapping processes of data gathering and analysis were used to iteratively extend, saturate, test for exceptions, and verify concepts and emergent themes. Findings indicated that experts considered that idiosyncratic beliefs among gamblers result in overall underestimates of risk and loss, insufficient prioritization of needs, and inadequate planning and implementation of risk management strategies. Additional contextual factors influencing the use of risk information (reinforcement and learning; mental states, environmental cues, and ambivalence; and socio-cultural and biological variables) acted to shape risk perceptions and increase vulnerability to harm or disordered gambling. It was concluded that understanding the nature, extent and processes by which risk perception predisposes an individual to maintain gambling despite adverse consequences can guide the content of preventative educational responsible gambling campaigns.
O'Donnell, Matthew D
2011-05-01
The glass transition temperature (T(g)) of inorganic glasses is an important parameter that can be used to correlate with other glass properties, such as dissolution rate, which governs in vitro and in vivo bioactivity. Seven bioactive glass compositional series reported in the literature (77 compositions in total) were analysed here, with T(g) values obtained by a number of different methods: differential thermal analysis, differential scanning calorimetry and dilatometry. An iterative least-squares fitting method was used to correlate T(g) from thermal analysis of these compositions with the levels of individual oxide and fluoride components in the glasses. When all seven series were fitted, a reasonable correlation was found between calculated and experimental values (R(2)=0.89). When the two compositional series that were designed in weight percentages (the remaining five were designed in molar percentages) were removed from the model, an improved fit was achieved (R(2)=0.97). This study shows that T(g) for a wide range of compositions (e.g. SiO(2) content of 37.3-68.4 mol.%) can be predicted to reasonable accuracy, enabling processing parameters such as annealing, fibre-drawing and sintering temperatures to be estimated. Copyright © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
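The additive-model idea (T(g) ≈ Σ c_i · x_i, with x_i the component fractions) can be sketched with a least-squares fit; the compositions and T(g) values below are illustrative stand-ins, not the paper's dataset, and a plain linear solve replaces the authors' iterative fitting.

```python
import numpy as np

# Hypothetical compositions (mol.% of SiO2, Na2O, CaO, P2O5) and "measured"
# glass transition temperatures (°C); illustrative values only.
X = np.array([[46.1, 24.4, 26.9, 2.6],
              [49.5, 22.8, 25.1, 2.6],
              [53.0, 21.1, 23.3, 2.6],
              [55.3, 20.1, 22.0, 2.6],
              [60.0, 17.7, 19.7, 2.6]])
tg = np.array([538., 545., 552., 557., 565.])

# Additive model: Tg ≈ sum_i c_i * x_i; fit the per-component coefficients.
coef, *_ = np.linalg.lstsq(X, tg, rcond=None)
pred = X @ coef
r2 = 1 - ((tg - pred) ** 2).sum() / ((tg - tg.mean()) ** 2).sum()
```

Once fitted on measured series, such coefficients let T(g) be predicted for new compositions before melting them, which is the practical use the abstract points to.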
Wavelet-based analysis of transient electromagnetic wave propagation in photonic crystals.
Shifman, Yair; Leviatan, Yehuda
2004-03-01
Photonic crystals and optical bandgap structures, which facilitate high-precision control of electromagnetic-field propagation, are gaining ever-increasing attention in both scientific and commercial applications. One common photonic device is the distributed Bragg reflector (DBR), which exhibits high reflectivity at certain frequencies. Analysis of the transient interaction of an electromagnetic pulse with such a device can be formulated in terms of the time-domain volume integral equation and, in turn, solved numerically with the method of moments. Owing to the frequency-dependent reflectivity of such devices, the extent of field penetration into deep layers of the device will be different depending on the frequency content of the impinging pulse. We show how this phenomenon can be exploited to reduce the number of basis functions needed for the solution. To this end, we use spatiotemporal wavelet basis functions, which possess the multiresolution property in both spatial and temporal domains. To select the dominant functions in the solution, we use an iterative impedance matrix compression (IMC) procedure, which gradually constructs and solves a compressed version of the matrix equation until the desired degree of accuracy has been achieved. Results show that when the electromagnetic pulse is reflected, the transient IMC omits basis functions defined over the last layers of the DBR, as anticipated.
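The compression idea can be sketched in miniature: transform a dense, smooth interaction matrix into an orthonormal wavelet basis, discard entries below a threshold, and solve the sparser system. The toy matrix below is an invented stand-in for a MoM impedance matrix (the added identity keeps it well conditioned), and the fixed threshold replaces the paper's iterative, accuracy-driven IMC procedure.

```python
import numpy as np

def haar(n):
    """Orthonormal Haar transform matrix (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
x = np.linspace(0.0, 1.0, n)
# Toy dense, smooth "impedance" matrix and excitation vector (illustrative only).
Z = np.eye(n) + 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))
b = np.sin(2 * np.pi * x)

H = haar(n)
Zw = H @ Z @ H.T                              # matrix in the wavelet basis
thresh = 1e-5 * np.abs(Zw).max()
Zc = np.where(np.abs(Zw) > thresh, Zw, 0.0)   # compression: drop small entries
kept = np.count_nonzero(Zc) / n**2            # fraction of entries retained

y = H.T @ np.linalg.solve(Zc, H @ b)          # solve compressed system, map back
y_ref = np.linalg.solve(Z, b)
err = np.linalg.norm(y - y_ref) / np.linalg.norm(y_ref)
```

Smoothness concentrates the matrix content in a small fraction of wavelet coefficients, so the compressed solve tracks the full solve; the paper's IMC grows the retained set iteratively until a target accuracy is met.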
Pol, Sreymom; Fox-Lewis, Shivani; Neou, Leakhena; Parker, Michael; Kingori, Patricia; Turner, Claudia
2018-01-01
To explore Cambodian community members' understanding of and attitudes towards healthcare research. This qualitative study generated data from semi-structured interviews and focus group discussions. This study was conducted at a non-governmental paediatric hospital and in nearby villages in Siem Reap province, Cambodia. A total of ten semi-structured interviews and four focus group discussions were conducted, involving 27 participants. Iterative data collection and analysis were performed concurrently. Data were analysed by thematic content analysis, and the coding structure was developed using relevant literature. Participants did not have a clear understanding of which activities related to research as opposed to routine healthcare. Key attitudes towards research were responsibility and trust: personal (trust of the researcher directly) and institutional (trust of the institution as a whole). Villagers believe the village headman holds responsibility for community activities, while the village headman believes that this responsibility should be shared across all levels of the government system. It is essential for researchers to understand the structure of, and relationships within, the community they wish to work with in order to develop trust among community participants. This aids effective communication and understanding among all parties, enabling high-quality ethical research to be conducted.
Bass, Kristin M; Drits-Esser, Dina; Stark, Louisa A
2016-01-01
The credibility of conclusions made about the effectiveness of educational interventions depends greatly on the quality of the assessments used to measure learning gains. This essay, intended for faculty involved in small-scale projects, courses, or educational research, provides a step-by-step guide to the process of developing, scoring, and validating high-quality content knowledge assessments. We illustrate our discussion with examples from our assessments of high school students' understanding of concepts in cell biology and epigenetics. Throughout, we emphasize the iterative nature of the development process, the importance of creating instruments aligned to the learning goals of an intervention or curricula, and the importance of collaborating with other content and measurement specialists along the way. © 2016 K. M. Bass et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
1991-05-23
...rotational objects can be detected. Experimental demonstrations for these two methods have been performed. ...the signal-dependent nature of the joint transform filter. Unlike the ..., it is signal independent, a clear advantage in real-time implementation. ...spectral content of the target. A paper of this nature is published in the Optics and ...
NASA Astrophysics Data System (ADS)
Lasche, George; Coldwell, Robert; Metzger, Robert
2017-09-01
A new application (known as "VRF", or "Visual RobFit") for analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak-search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide and, also at each iteration, it adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks that are masked by larger, overlapping peaks that would not otherwise be possible. The application and method are briefly described and two examples are presented.
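The full-spectrum idea can be illustrated in miniature: fit template peak shapes across the whole spectral region rather than searching for peaks first, so a minor peak masked by a larger overlapping one is still recovered. In this reduced sketch only the amplitudes are free, making the fit linear; VRF itself adjusts many non-linear parameters (calibration, attenuation, width, efficiency, summing) simultaneously, and the energies, width and activities below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
e = np.linspace(650.0, 672.0, 441)          # energy axis (keV)
sigma = 1.2                                  # assumed detector width (keV)

def peak(center):
    """Gaussian peak template of unit amplitude at the given energy."""
    return np.exp(-0.5 * ((e - center) / sigma) ** 2)

# Synthetic spectrum: a weak hypothetical 659 keV line sitting under the
# shoulder of a strong 662 keV line, on a flat background, with Poisson noise.
true_counts = 40.0 * peak(659.0) + 900.0 * peak(662.0) + 5.0
counts = rng.poisson(true_counts).astype(float)

# Full-spectrum fit: design matrix of whole-spectrum shapes plus background.
A = np.column_stack([peak(659.0), peak(662.0), np.ones_like(e)])
amps, *_ = np.linalg.lstsq(A, counts, rcond=None)
weak_amplitude = amps[0]                     # recovered despite the overlap
```

A peak-search approach could miss the 659 keV line entirely, since it never forms a distinct local maximum; fitting the full shapes constrains it anyway.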
NASA Astrophysics Data System (ADS)
Lewandowska, Monika; Herzog, Robert; Malinowski, Leszek
2015-01-01
A heat slug propagation experiment in the final-design dual-channel ITER TF CICC was performed in the SULTAN test facility at EPFL-CRPP in Villigen PSI. We analyzed the data resulting from this experiment to determine the equivalent transverse heat transfer coefficient hBC between the bundle and the central channel of this cable. In the data analysis we used methods based on analytical solutions of the transient heat transfer problem in a dual-channel cable, similar to Renard et al. (2006) and Bottura et al. (2006). The experimental and other limits related to these methods are identified and possible modifications proposed. One result from our analysis is that the hBC values obtained with the different methods differ by up to a factor of 2. We have also observed that the uncertainties of hBC in both methods considered are much larger than those reported earlier.
NASA Technical Reports Server (NTRS)
Hallidy, William H. (Inventor); Chin, Robert C. (Inventor)
1999-01-01
The present invention is a system of chemometric analysis for the extraction of individual component fluorescence spectra and fluorescence lifetimes from a target mixture. It combines a processor with an apparatus for generating an excitation signal transmitted at a target mixture and an apparatus for detecting the signal emitted by the target mixture. The invention extracts the individual fluorescence spectrum and fluorescence lifetime measurements from the frequency and wavelength data acquired from the emitted signal. It uses an iterative solution that first requires the initialization of several decision variables and initial approximations of intermediate matrices. The iterative solution checks the decision variables for convergence to determine whether further approximations are necessary. Once the solution converges, the invention determines the reduced best-fit error before extracting the individual fluorescence lifetimes and fluorescence spectra from the emitted signal of the target mixture.
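One common way to realize such an iterative extraction, sketched here under assumptions that are not taken from the patent, is alternating refinement: for a single-exponential frequency-domain model each component k contributes S_k(lambda) / (1 + i*w*tau_k) to the signal at modulation frequency w, so with lifetimes held fixed the spectra follow from a linear least-squares solve, and the lifetimes are then refined, with a convergence check on the lifetimes playing the role of the decision variables.

```python
import numpy as np

w = 2 * np.pi * np.logspace(6, 8, 40)           # modulation freqs [rad/s]
tau_true = np.array([2e-9, 10e-9])              # component lifetimes [s]
S_true = np.array([[5, 4, 3, 1, 0],             # spectra over 5 wavelength
                   [0, 1, 3, 4, 5]], float)     # channels (invented values)

resp = lambda taus: 1.0 / (1.0 + 1j * np.outer(w, taus))
D = resp(tau_true) @ S_true                     # synthetic mixture data

taus = np.array([1e-9, 20e-9])                  # initial guesses
grid = np.linspace(1e-9, 20e-9, 191)            # lifetime search grid
for _ in range(50):
    new = taus.copy()
    for k in range(len(taus)):
        # Refine lifetime k by grid search; spectra re-solved each trial.
        errs = []
        for g in grid:
            trial = new.copy(); trial[k] = g
            St, *_ = np.linalg.lstsq(resp(trial), D, rcond=None)
            errs.append(np.linalg.norm(resp(trial) @ St - D))
        new[k] = grid[np.argmin(errs)]
    if np.array_equal(new, taus):               # convergence check
        break
    taus = new
print(np.sort(taus) * 1e9)                      # recovered lifetimes [ns]
```

With noiseless synthetic data this coordinate-wise refinement locks onto the true lifetimes; the final linear solve then yields the per-component spectra, analogous to the patent's extraction of spectra after convergence.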
A stepladder approach to a tokamak fusion power plant
NASA Astrophysics Data System (ADS)
Zohm, H.; Träuble, F.; Biel, W.; Fable, E.; Kemp, R.; Lux, H.; Siccinio, M.; Wenninger, R.
2017-08-01
We present an approach to consistently design a stepladder connecting ITER, DEMO and a fusion power plant (FPP), starting from an attractive FPP and then locating DEMO such that the main similarity parameters of the core scenario are held constant. The approach suggests how to use ITER so that DEMO can be extrapolated with maximum confidence, and a development path for plasma scenarios in ITER follows from it, moving from the low βN and q typical of the present Q = 10 scenario to the higher values needed for steady state. A numerical example, indicative of the feasibility of the approach, is given and backed up by more detailed 1.5-D calculations using the ASTRA code. We note that ideal MHD stability analysis of the DEMO operating point indicates that it lies between the no-wall and ideal-wall β-limits, which may require active stabilization. The DEMO design could also serve as a pulsed fallback solution should stationary operation turn out to be impossible.
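One of the similarity parameters tracked along such a stepladder is the normalized beta, βN = βt[%] · a[m] · B[T] / Ip[MA]. A minimal sketch, with roughly ITER-like illustrative numbers that are not taken from the paper:

```python
def beta_N(beta_t_percent, a_m, B_T, Ip_MA):
    # Normalized beta: toroidal beta in percent, scaled by minor radius,
    # toroidal field, and plasma current (the standard definition).
    return beta_t_percent * a_m * B_T / Ip_MA

# Illustrative ITER Q=10-like point: beta_t ~ 2.5%, a = 2.0 m,
# B = 5.3 T, Ip = 15 MA  (assumed round numbers).
print(round(beta_N(2.5, 2.0, 5.3, 15.0), 2))   # ~1.77
```

Moving toward steady state means raising this value (and q) while keeping the stepladder's similarity parameters matched between the devices.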
A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Krauthammer, Prof. Michael
2010-01-01
There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is then used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection-histogram-based text detection approach is well suited for text detection in biomedical images, and that iterative application of the algorithm boosts performance to an F score of 0.60. We provide a C++ implementation of our algorithm, freely available for academic use.
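The core of projection-histogram text detection can be sketched as a recursive XY-cut (a generic illustration of the technique, not the paper's pivoting algorithm or its C++ implementation): project ink counts onto one axis, split the image at empty gaps in the histogram, and recurse on the resulting strips along the other axis until no further cuts are possible.

```python
import numpy as np

def runs(mask):
    """Contiguous True runs in a 1-D boolean mask, as (start, stop) pairs."""
    out, start = [], None
    for i, f in enumerate(mask):
        if f and start is None:
            start = i
        if not f and start is not None:
            out.append((start, i)); start = None
    if start is not None:
        out.append((start, len(mask)))
    return out

def detect_text_boxes(img, r0=0, c0=0, depth=8):
    """XY-cut: alternately split a binary image at empty gaps in its
    row/column projection histograms; leaves are candidate text boxes."""
    rows = runs(img.sum(axis=1) > 0)     # row projection histogram
    cols = runs(img.sum(axis=0) > 0)     # column projection histogram
    if not rows:
        return []                        # blank region
    if depth == 0 or (len(rows) == 1 and len(cols) == 1):
        a, b = rows[0]; c, d = cols[0]
        return [(r0 + a, r0 + b, c0 + c, c0 + d)]
    boxes = []
    if len(rows) > 1:                    # cut along rows first
        for a, b in rows:
            boxes += detect_text_boxes(img[a:b, :], r0 + a, c0, depth - 1)
    else:                                # otherwise cut along columns
        for c, d in cols:
            boxes += detect_text_boxes(img[:, c:d], r0, c0 + c, depth - 1)
    return boxes

# Two "words" on one line plus a second line below.
img = np.zeros((9, 12), dtype=int)
img[1:3, 1:4] = 1     # word 1
img[1:3, 6:10] = 1    # word 2
img[5:7, 2:9] = 1     # line 2
print(sorted(detect_text_boxes(img)))
# [(1, 3, 1, 4), (1, 3, 6, 10), (5, 7, 2, 9)]
```

Real biomedical figures add noise, graphics, and rotated labels, which is where the paper's iterative refinements and evaluation against other detectors come in.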