Final Report on ITER Task Agreement 81-08
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard L. Moore
As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.
United States Research and Development effort on ITER magnet tasks
Martovetsky, Nicolai N.; Reierson, Wayne T.
2011-01-22
This study presents the status of research and development (R&D) magnet tasks that are being performed in support of the U.S. ITER Project Office (USIPO) commitment to provide a central solenoid assembly and toroidal field conductor for the ITER machine to be constructed in Cadarache, France. The following development tasks are presented: winding development, inlets and outlets development, internal and bus joints development and testing, insulation development and qualification, vacuum-pressure impregnation, bus supports, and intermodule structure and materials characterization.
SUMMARY REPORT-FY2006 ITER WORK ACCOMPLISHED
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martovetsky, N N
2006-04-11
Six parties (EU, Japan, Russia, US, Korea, China) will build ITER. The US proposed to deliver at least 4 out of 7 modules of the Central Solenoid. Phillip Michael (MIT) and I were tasked by DOE to assist ITER in the development of the ITER CS and other magnet systems. We work to help the Magnets and Structure Division headed by Neil Mitchell. During this visit I worked on selected items of the CS design and carried out other small tasks, such as a PF temperature margin assessment.
Final Report on ITER Task Agreement 81-10
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brad J. Merrill
An International Thermonuclear Experimental Reactor (ITER) Implementing Task Agreement (ITA) on Magnet Safety was established between the ITER International Organization (IO) and the Idaho National Laboratory (INL) Fusion Safety Program (FSP) during calendar year 2004. The objectives of this ITA were to add new capabilities to the MAGARC code and to use this updated version of MAGARC to analyze unmitigated superconductor quench events for both poloidal field (PF) and toroidal field (TF) coils of the ITER design. This report documents the completion of the work scope for this ITA. Based on the results obtained for this ITA, an unmitigated quench event in a larger ITER PF coil does not appear to be as severe an accident as in an ITER TF coil.
Reliability of functional MR imaging with word-generation tasks for mapping Broca's area.
Brannen, J H; Badie, B; Moritz, C H; Quigley, M; Meyerand, M E; Haughton, V M
2001-10-01
Functional MR (fMR) imaging of word generation has been used to map Broca's area in some patients selected for craniotomy. The purpose of this study was to measure the reliability, precision, and accuracy of word-generation tasks to identify Broca's area. The Brodmann areas activated during performance of word-generation tasks were tabulated in 34 consecutive patients referred for fMR imaging mapping of language areas. In patients performing two iterations of the letter word-generation tasks, test-retest reliability was quantified by using the concurrence ratio (CR), or the number of voxels activated by each iteration in proportion to the average number of voxels activated from both iterations of the task. Among patients who also underwent category or antonym word generation or both, the similarity of the activation from each task was assessed with the CR. In patients who underwent electrocortical stimulation (ECS) mapping of speech function during craniotomy while awake, the sites with speech function were compared with the locations of activation found during fMR imaging of word generation. In 31 of 34 patients, activation was identified in the inferior frontal gyri or middle frontal gyri or both in Brodmann areas 9, 44, 45, or 46, unilaterally or bilaterally, with one or more of the tasks. Activation was noted in the same gyri when the patient performed a second iteration of the letter word-generation task or second task. The CR for pixel precision in a single section averaged 49%. In patients who underwent craniotomy while awake, speech areas located with ECS coincided with areas of the brain activated during a word-generation task. fMR imaging with word-generation tasks produces technically satisfactory maps of Broca's area, which localize the area accurately and reliably.
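The concurrence ratio above lends itself to a compact computation. Below is a minimal sketch, assuming the CR is defined as the number of voxels active in both iterations divided by the average number of voxels active in each iteration (one common reading of the definition in the abstract); the arrays are toy stand-ins for thresholded activation maps.

```python
import numpy as np

def concurrence_ratio(mask1, mask2):
    """Concurrence ratio between two binary activation maps.

    CR = (voxels active in both iterations)
         / (average number of voxels active in each iteration)
    """
    mask1 = np.asarray(mask1, dtype=bool)
    mask2 = np.asarray(mask2, dtype=bool)
    overlap = np.logical_and(mask1, mask2).sum()
    mean_active = 0.5 * (mask1.sum() + mask2.sum())
    return overlap / mean_active if mean_active > 0 else 0.0

# Toy example: two 10x10x10 activation masks with partial overlap
rng = np.random.default_rng(0)
a = rng.random((10, 10, 10)) > 0.8
b = rng.random((10, 10, 10)) > 0.8
print(f"CR = {concurrence_ratio(a, b):.2f}")
```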
NASA Astrophysics Data System (ADS)
Raj, Prasoon; Angelone, Maurizio; Döring, Toralf; Eberhardt, Klaus; Fischer, Ulrich; Klix, Axel; Schwengner, Ronald
2018-01-01
Neutron and gamma flux measurements in designated positions in the test blanket modules (TBM) of ITER will be important tasks during ITER's campaigns. As part of the ongoing task on development of nuclear instrumentation for application in European ITER TBMs, experimental investigations on self-powered detectors (SPD) are undertaken. This paper reports the findings of neutron and photon irradiation tests performed with a test SPD in a flat sandwich-like geometry. Whereas both neutrons and gammas can be detected with appropriate optimization of the geometries, materials and sizes of the components, the present sandwich-like design is more sensitive to gammas than to 14 MeV neutrons. The range of SPD current signals achievable under TBM conditions is predicted based on the SPD sensitivities measured in this work.
Not so Complex: Iteration in the Complex Plane
ERIC Educational Resources Information Center
O'Dell, Robin S.
2014-01-01
The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
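A minimal sketch of the kind of linear iteration rule described above: multiplying by a complex number a = r·e^(iθ) scales a point by r and rotates it by θ, so repeated application of z ↦ a·z + b traces a spiral in the complex plane. The coefficients below are illustrative choices, not values from the article.

```python
# Linear iteration in the complex plane: z_{n+1} = a*z_n + b
import cmath

def iterate(a, b, z0, n):
    """Return the first n points of z_{k+1} = a*z_k + b."""
    points, z = [z0], z0
    for _ in range(n - 1):
        z = a * z + b
        points.append(z)
    return points

# Scale by 0.95 and rotate by 20 degrees each step: an inward spiral.
a = 0.95 * cmath.exp(1j * cmath.pi / 9)
for z in iterate(a, b=0j, z0=1 + 0j, n=5):
    print(f"{z.real:+.3f} {z.imag:+.3f}i")
```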
Meadmore, Katie L; Cai, Zhonglun; Tong, Daisy; Hughes, Ann-Marie; Freeman, Chris T; Rogers, Eric; Burridge, Jane H
2011-01-01
A novel system has been developed which combines robotic therapy with electrical stimulation (ES) for upper limb stroke rehabilitation. This technology, termed SAIL: Stimulation Assistance through Iterative Learning, employs advanced model-based iterative learning control (ILC) algorithms to precisely assist participants' completion of 3D tracking tasks with their impaired arm. Data are reported from a preliminary study with unimpaired participants, and also from a single hemiparetic stroke participant with reduced upper limb function who has used the system in a clinical trial. All participants completed tasks which involved moving their (impaired) arm to follow an image of a slowly moving sphere along a trajectory. Each participant's arm was supported by a robot and ES was applied to the triceps brachii and anterior deltoid muscles. During each task, the same tracking trajectory was repeated 6 times and ILC was used to compute the stimulation signals to be applied on the next iteration. Unimpaired participants took part in a single one-hour training session and the stroke participant undertook 18 one-hour treatment sessions composed of tracking tasks varying in length, orientation and speed. The results reported describe changes in tracking ability and demonstrate the feasibility of the SAIL system for upper limb rehabilitation. © 2011 IEEE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, Roger T.; Crump, Thomas Vu
This work provides a tool for improving the management of tasks associated with Agile projects. Agile projects are typically completed in an iterative manner, with many short-duration tasks performed as part of iterations generally referred to as sprints. The objective of this work is to create a single tool that enables sprint teams to manage all of their tasks in multiple sprints and automatically produce all standard sprint performance charts with minimum effort. The format of the printed work is designed to mimic a standard Kanban board. The work is developed as a single Excel file with worksheets capable of managing up to five concurrent sprints and up to one hundred tasks. It also includes a summary worksheet providing performance information from all active sprints. There are many commercial project management systems, typically designed with features desired by larger organizations with many resources managing multiple programs and projects. The audience for this work is the small organizations and Agile project teams desiring an inexpensive, simple, user-friendly task management tool. This work uses standard, readily available software, Excel, requiring minimum data entry and automatically creating summary charts and performance data. It is formatted to print out and resemble standard flip charts and provide the visuals associated with this type of work.
Regularization iteration imaging algorithm for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao
2018-03-01
The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, and the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced to accelerate convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
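The abstract names FISTA as the accelerator inside the split Bregman scheme. Below is a minimal, self-contained sketch of FISTA applied to a generic l1-regularized least-squares sub-problem of the kind such reconstructions solve; the sensitivity matrix, data, and regularization weight are toy stand-ins, not the paper's ECT model.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam, n_iter=200):
    """FISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Recover a sparse vector from a few noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[5, 30, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = fista(A, y, lam=0.1)
print("largest recovered entries:", np.argsort(-np.abs(x_hat))[:3])
```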
Liang, Jennifer J; Tsou, Ching-Huei; Devarakonda, Murthy V
2017-01-01
Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the "ground truth" dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate the potential inaccuracies arising from these cognitive challenges, we present an iterative vetting approach for creating the ground truth for complex NLP tasks. In this paper, we present the methodology and report on its use for an automated problem list generation task, its effect on ground truth quality and system accuracy, and lessons learned from the effort.
Active Learning with a Human in The Loop
2012-11-01
handwritten digits (LeCun et al. [1998]). In the red curve the model is built iteratively: at each iteration the twenty examples with the lowest...continuum. The most we can say about MUC annotation is that it's simple enough that other tasks are likely to impose a heavier load on the user for
McCulloch, Karen L.; Radomski, Mary V.; Finkelstein, Marsha; Cecchini, Amy S.; Davidson, Leslie F.; Heaton, Kristin J.; Smith, Laurel B.; Scherer, Matthew R.
2017-01-01
The Assessment of Military Multitasking Performance (AMMP) is a battery of functional dual-tasks and multitasks based on military activities that target known sensorimotor, cognitive, and exertional vulnerabilities after concussion/mild traumatic brain injury (mTBI). The AMMP was developed to help address known limitations in post-concussive return-to-duty assessment and decision making. Once validated, the AMMP is intended for use in combination with other metrics to inform duty-readiness decisions in Active Duty Service Members following concussion. This study used an iterative process of repeated interrater reliability testing and feasibility feedback to drive modifications to the 9 tasks of the original AMMP, which resulted in a final version of 6 tasks with metrics that demonstrated clinically acceptable ICCs of > 0.92 (range of 0.92–1.0) for the 3 dual tasks and > 0.87 (range 0.87–1.0) for the metrics of the 3 multitasks. Three metrics involved in recording subject errors across 2 tasks did not achieve the ICCs above 0.85 set a priori for multitasks (0.64) and above 0.90 set for dual-tasks (0.77 and 0.86) and were not used for further analysis. This iterative process involved 3 phases of testing with between 13 and 26 subjects, ages 18–42 years, tested in each phase from a combined cohort of healthy controls and Service Members with mTBI. Study findings support continued validation of this assessment tool to provide rehabilitation clinicians further return-to-duty assessment methods robust to ceiling effects with strong face validity to injured Warriors and their leaders. PMID:28056045
User-Centered Iterative Design of a Collaborative Virtual Environment
2001-03-01
cognitive task analysis methods to study land navigators. This study was intended to validate the use of user-centered design methodologies for the design of...have explored the cognitive aspects of collaborative human wayfinding and design for collaborative virtual environments. Further investigation of design paradigms should include cognitive task analysis and behavioral task analysis.
Iterative learning control with applications in energy generation, lasers and health care.
Rogers, E; Tutty, O R
2016-09-01
Many physical systems make repeated executions of the same finite time duration task. One example is a robot in a factory or warehouse whose task is to collect an object in sequence from a location, transfer it over a finite duration, place it at a specified location or on a moving conveyor and then return for the next one and so on. Iterative learning control was especially developed for systems with this mode of operation and this paper gives an overview of this control design method using relatively recent relevant applications in wind turbines, free-electron lasers and health care, as exemplars to demonstrate its applicability.
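A minimal sketch of the idea behind iterative learning control for such repeated finite-duration tasks: the input applied on trial k+1 is the previous trial's input corrected by the previous trial's tracking error. The first-order plant and learning gain below are illustrative assumptions, not models from the paper.

```python
import numpy as np

def simulate(u):
    """Toy first-order plant y[t+1] = 0.9*y[t] + u[t], starting from y[0] = 0."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = 0.9 * y[t] + u[t]
    return y[1:]

# Reference trajectory for the repeated finite-duration task
T = 50
ref = np.sin(np.linspace(0, np.pi, T))

u = np.zeros(T)
gamma = 0.5                     # learning gain
for trial in range(25):
    e = ref - simulate(u)       # tracking error on this trial
    u = u + gamma * e           # P-type ILC update: u_{k+1} = u_k + gamma*e_k
print(f"max |error| after learning: {np.max(np.abs(ref - simulate(u))):.2e}")
```

The error shrinks from trial to trial even though the plant model is never identified explicitly, which is the practical appeal of ILC for repetitive tasks.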
Training Effectiveness and Cost Iterative Technique (TECIT). Volume 2. Cost Effectiveness Analysis
1988-07-01
Moving Tank in a Field Exercise. The task cluster identified as tank commander's station/tank gunnery and the sub-task of firing an M250 grenade launcher...Firing Procedures, Task Number 171-126-1028. OBJECTIVE: Given an M1 tank with crew, loaded M250 grenade launcher, the commander's station powered up
ERIC Educational Resources Information Center
Lowrie, Tom; Diezmann, Carmel M.; Logan, Tracy
2012-01-01
Graphical tasks have become a prominent aspect of mathematics assessment. From a conceptual stance, the purpose of this study was to better understand the composition of graphical tasks commonly used to assess students' mathematics understandings. Through an iterative design, the investigation described the sense making of 11-12-year-olds as they…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samei, Ehsan, E-mail: samei@duke.edu; Richard, Samuel
2015-01-15
Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) to that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, the resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited with statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different levels for different tasks.
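A sketch of how a task-based detectability index can be assembled from sampled MTF, NPS, and task-function data. It uses the common non-prewhitening observer form d′² = [ΣW²·MTF²·ΔuΔv]² / Σ(W²·MTF²·NPS)·ΔuΔv; whether this exact observer model matches the paper's is an assumption here, and all numbers below are toy values.

```python
import numpy as np

def detectability_npw(W, MTF, NPS, du, dv):
    """Non-prewhitening detectability index from sampled 2D frequency data.

    d'^2 = (sum W^2 MTF^2 * du*dv)^2 / (sum W^2 MTF^2 NPS * du*dv)
    """
    num = (np.sum(W**2 * MTF**2) * du * dv) ** 2
    den = np.sum(W**2 * MTF**2 * NPS) * du * dv
    return np.sqrt(num / den)

# Toy example: low-frequency task function, Gaussian MTF, flat (white) NPS.
u = np.fft.fftfreq(256, d=0.5)          # cycles/mm for 0.5 mm sampling
U, V = np.meshgrid(u, u)
rho = np.hypot(U, V)
W = np.exp(-(rho * 1.5) ** 2)           # large-feature (low-frequency) task
MTF = np.exp(-(rho / 0.7) ** 2)
NPS = np.full_like(rho, 1e-3)           # toy units
d = detectability_npw(W, MTF, NPS, du=u[1] - u[0], dv=u[1] - u[0])
print(f"d' = {d:.1f}")
```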
Writing and compiling code into biochemistry.
Shea, Adam; Fett, Brian; Riedel, Marc D; Parhi, Keshab
2010-01-01
This paper presents a methodology for translating iterative arithmetic computation, specified as high-level programming constructs, into biochemical reactions. From an input/output specification, we generate biochemical reactions that produce output quantities of proteins as a function of input quantities, performing operations such as addition, subtraction, and scalar multiplication. Iterative constructs such as "while" loops and "for" loops are implemented by transferring quantities between protein types, based on a clocking mechanism. Synthesis is first performed at a conceptual level, in terms of abstract biochemical reactions - a task analogous to high-level program compilation. Then the results are mapped onto specific biochemical reactions selected from libraries - a task analogous to machine language compilation. We demonstrate our approach through the compilation of a variety of standard iterative functions: multiplication, exponentiation, discrete logarithms, raising to a power, and linear transforms on time series. The designs are validated through transient stochastic simulation of the chemical kinetics. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for the interactive analysis of patent information has been developed that addresses scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
An iterative learning control method with application for CNC machine tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D.I.; Kim, S.
1996-01-01
A proportional, integral, and derivative (PID) type iterative learning controller is proposed for precise tracking control of industrial robots and computer numerical control (CNC) machine tools performing repetitive tasks. The convergence of the output error by the proposed learning controller is guaranteed under a certain condition even when the system parameters are not known exactly and unknown external disturbances exist. As the proposed learning controller is repeatedly applied to the industrial robot or the CNC machine tool with the path-dependent repetitive task, the distance difference between the desired path and the actual tracked or machined path, which is one of the most significant factors in the evaluation of control performance, is progressively reduced. The experimental results demonstrate that the proposed learning controller can improve machining accuracy when the CNC machine tool performs repetitive machining tasks.
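A hedged sketch of a PID-type learning update of the kind described: the next trial's command is the current one corrected by proportional, integral (cumulative), and derivative terms of the current trial's error. The gains and the error profile below are illustrative, not the paper's tuned values.

```python
import numpy as np

def pid_ilc_update(u, e, kp=0.4, ki=0.05, kd=0.1):
    """PID-type ILC: correct next trial's input with proportional,
    integral (cumulative), and derivative terms of this trial's error."""
    de = np.diff(e, prepend=e[0])       # backward difference of the error
    ie = np.cumsum(e)                   # running sum of the error
    return u + kp * e + ki * ie + kd * de

# One learning step on a stored error profile from a repetitive pass
e_k = np.array([0.00, 0.05, 0.12, 0.08, 0.03, -0.02])
u_k = np.zeros_like(e_k)
print(np.round(pid_ilc_update(u_k, e_k), 3))
```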
Discuss Similarity Using Visual Intuition
ERIC Educational Resources Information Center
Cox, Dana C.; Lo, Jane-Jane
2012-01-01
The change in size from a smaller shape to a larger similar shape (or vice versa) is created through continuous proportional stretching or shrinking in every direction. Students cannot solve similarity tasks simply by iterating or partitioning a composed unit, strategies typically used on numerical proportional tasks. The transition to thinking…
What Physicians Reason about during Admission Case Review
ERIC Educational Resources Information Center
Juma, Salina; Goldszmidt, Mark
2017-01-01
Research suggests that physicians perform multiple reasoning tasks beyond diagnosis during patient review. However, these remain largely theoretical. The purpose of this study was to explore reasoning tasks in clinical practice during patient admission review. The authors used a constant comparative approach--an iterative and inductive process of…
Powell, Joanne L; Grossi, Davide; Corcoran, Rhiannon; Gobet, Fernand; García-Fiñana, Marta
2017-07-04
Chess involves the capacity to reason iteratively about potential intentional choices of an opponent and therefore involves high levels of explicit theory of mind [ToM] (i.e. ability to infer mental states of others) alongside clear, strategic rule-based decision-making. Functional magnetic resonance imaging was used on 12 healthy male novice chess players to identify cortical regions associated with chess, ToM and empathizing. The blood-oxygenation-level-dependent (BOLD) response for chess and empathizing tasks was extracted from each ToM region. Results showed neural overlap between ToM, chess and empathizing tasks in right-hemisphere temporo-parietal junction (TPJ) [BA40], left-hemisphere superior temporal gyrus [BA22] and posterior cingulate gyrus [BA23/31]. TPJ is suggested to underlie the capacity to reason iteratively about another's internal state in a range of tasks. Areas activated by ToM and empathy included right-hemisphere orbitofrontal cortex and bilateral middle temporal gyrus: areas that become active when there is need to inhibit one's own experience when considering the internal state of another and for visual evaluation of action rationality. Results support previous findings, that ToM recruits a neural network with each region sub-serving a supporting role depending on the nature of the task itself. In contrast, a network of cortical regions primarily located within right- and left-hemisphere medial-frontal and parietal cortex, outside the internal representational network, was selectively recruited during the chess task. We hypothesize that in our cohort of novice chess players the strategy was to employ an iterative thinking pattern which in part involved mentalizing processes and recruited core ToM-related regions. Copyright © 2017. Published by Elsevier Ltd.
Iteration: Unit Fraction Knowledge and the French Fry Tasks
ERIC Educational Resources Information Center
Tzur, Ron; Hunt, Jessica
2015-01-01
Often, students who solve fraction tasks respond in ways that indicate inadequate conceptual grounding of unit fractions. Many elementary school curricula use folding, partitioning, shading, and naming parts of various wholes to develop children's understanding of unit and then nonunit fractions (e.g., coloring three of four parts of a pizza and…
NASA Astrophysics Data System (ADS)
Saha, Gouranga Chandra
Very often a number of factors, especially time, space and money, deter many science educators from using inquiry-based, hands-on, laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed that are doable by students for whom they are designed/written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed by an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than planning and reasoning, their performances at the item level were very poor for some items. Possible reasons for the poor performances have been discussed and suggestions on how to remediate the deficiencies have been made. Empirical evidence for the validity and reliability of the instrument has been presented from both the classical and the modern validity criteria points of view. Limitations of the study have been identified. Finally, implications of the study and directions for further research have been discussed.
What physicians reason about during admission case review.
Juma, Salina; Goldszmidt, Mark
2017-08-01
Research suggests that physicians perform multiple reasoning tasks beyond diagnosis during patient review. However, these remain largely theoretical. The purpose of this study was to explore reasoning tasks in clinical practice during patient admission review. The authors used a constant comparative approach-an iterative and inductive process of coding and recoding-to analyze transcripts from 38 audio-recorded case reviews between junior trainees and their senior residents or attendings. Using a previous list of reasoning tasks, analysis focused on what tasks were performed, when they occurred, and how they related to the other tasks. All 24 tasks were observed in at least one review with a mean of 17.9 (Min = 15, Max = 22) distinct tasks per review. Two new tasks-assess illness severity and patient decision-making capacity-were identified, thus 26 tasks were examined. Three overarching tasks were identified-assess priorities, determine and refine the most likely diagnosis and establish and refine management plans-that occurred throughout all stages of the case review starting from patient identification and continuing through to assessment and plan. A fourth possible overarching task-reflection-was also identified but only observed in four instances across three cases. The other 22 tasks appeared to be context dependent serving to support, expand, and refine one or more overarching tasks. Tasks were non-sequential and the same supporting task could serve more than one overarching task. The authors conclude that these findings provide insight into the 'what' and 'when' of physician reasoning during case review that can be used to support professional development, clinical training and patient care. In particular, they draw attention to the iterative way in which each task is addressed during a case review and how this finding may challenge conventional ways of teaching and assessing clinical communication and reasoning. They also suggest that further research is needed to explore how physicians decide why a supporting task is required in a particular context.
Using Performance Tasks to Improve Quantitative Reasoning in an Introductory Mathematics Course
ERIC Educational Resources Information Center
Kruse, Gerald; Drews, David
2013-01-01
A full-cycle assessment of our efforts to improve quantitative reasoning in an introductory math course is described. Our initial iteration substituted more open-ended performance tasks for the active learning projects that had been used previously. Using a quasi-experimental design, we compared multiple sections of the same course and found non-significant…
Chen, Tinggui; Xiao, Renbin
2014-01-01
Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases of product cost and delays of development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to resolve. In this paper, the shortcomings existing in the WTM model are discussed, and a tearing approach as well as an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonableness and effectiveness.
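For readers unfamiliar with the work transformation matrix (WTM) model being extended here, the classic version treats design rework as a geometric series: if u_k is the work vector at iteration k and A the coupling matrix, then u_{k+1} = A·u_k, and the total work is (I − A)⁻¹·u_0 whenever the spectral radius of A is below one. A minimal numeric sketch, with an illustrative 3-task coupling matrix (not from the paper):

```python
import numpy as np

# Work transformation matrix A: entry A[i, j] is the fraction of task j's
# work that task i must redo in the next iteration (off-diagonal coupling).
A = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])
u0 = np.ones(3)                 # initial unit of work for each coupled task

rho = max(abs(np.linalg.eigvals(A)))
assert rho < 1, "iteration diverges unless the spectral radius is < 1"

# Total work = sum_k A^k u0 = (I - A)^{-1} u0 (geometric series of rework)
total = np.linalg.solve(np.eye(3) - A, u0)
print("spectral radius:", round(rho, 3))
print("total work per task:", np.round(total, 2))
```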
Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin
2015-10-21
For sensorimotor-rhythm-based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used in order to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work aims to investigate optimal channel selection in MI task paradigms with real-time feedback (two-class control and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI tasks experiment and from two-class control and four-class control experiments, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen was constructed based on the Relief algorithm, but was enhanced in two respects: a change of the target sample selection strategy and the adoption of iterative computation, and thus performs more robustly in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels that yielded the best classification accuracy was considered the optimal channel set. One-way ANOVA was employed to test the significance of performance improvement among using optimal channels, all the channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods by achieving average classification accuracies of 85.2, 94.1, and 83.2 % for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels were significantly different among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection. In addition, the results have shown that the numbers of optimal channels in the three different motor imagery BCI paradigms are distinct. From an MI task paradigm, to a two-class control paradigm, and to a four-class control paradigm, the number of channels required to optimize the classification accuracy increased. These findings may provide useful information to optimize EEG-based BCI systems, and further improve the performance of noninvasive BCIs.
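IterRelCen is described as an enhancement of the Relief algorithm. As background, here is a minimal sketch of basic two-class Relief weight estimation (not the enhanced IterRelCen itself); the data are synthetic and the Manhattan distance is an illustrative choice.

```python
import numpy as np

def relief(X, y, n_samples=100, rng=None):
    """Basic two-class Relief: reward features that separate a sample from
    its nearest miss and penalize ones that separate it from its nearest hit."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_samples):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                       # exclude the sample itself
        same, diff = y == y[i], y != y[i]
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(diff, dist, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_samples

# Feature 0 carries the class signal; features 1-2 are noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.standard_normal((200, 3))
X[:, 0] += 2.0 * y
print(np.round(relief(X, y, rng=rng), 2))   # weight of feature 0 dominates
```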
Recent Updates to the MELCOR 1.8.2 Code for ITER Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, Brad J
This report documents recent changes made to the MELCOR 1.8.2 computer code for application to the International Thermonuclear Experimental Reactor (ITER), as required by ITER Task Agreement ITA 81-18. There are four areas of change documented by this report. The first area is the addition to this code of a model for transporting HTO. The second area is the updating of the material oxidation correlations to match those specified in the ITER Safety Analysis Data List (SADL). The third area replaces a modification to an aerosol transport subroutine that specified the nominal aerosol density internally with one that now allows the user to specify this density through user input. The fourth area corrects an error that existed in an air condensation subroutine of previous versions of this modified MELCOR code. The appendices of this report contain FORTRAN listings of the coding for these modifications.
Spotting the difference in molecular dynamics simulations of biomolecules
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Kono, Hidetoshi
2016-08-01
Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
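For orientation, the core of such an analysis is the Fisher discriminant direction between the two sets of simulation frames; LDA-ITER adds an iterative refinement procedure on top, which is not reproduced here. A minimal two-class LDA sketch on synthetic "trajectories", with a small ridge term added for numerical stability (an assumption for illustration):

```python
import numpy as np

def lda_direction(X1, X2):
    """Fisher discriminant direction separating two sets of frames.

    w = Sw^{-1} (mu1 - mu2), where Sw is the pooled within-class scatter.
    """
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu2)
    return w / np.linalg.norm(w)

# Two toy "trajectories" (frames x coordinates) differing along one axis
rng = np.random.default_rng(2)
traj_a = rng.standard_normal((500, 6))
traj_b = rng.standard_normal((500, 6)); traj_b[:, 3] += 1.5
w = lda_direction(traj_a, traj_b)
print("dominant coordinate:", np.argmax(np.abs(w)))   # expect index 3
```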
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
D'Avolio, Leonard W; Nguyen, Thien M; Goryachev, Sergey; Fiore, Louis D
2011-01-01
Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. A 'learn by example' approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments. Precision was near or above 0.90 in all tasks. With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach for more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation. Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation is available for download.
NASA Astrophysics Data System (ADS)
Dohaney, J. A.; kennedy, B.; Brogt, E.; Gravley, D.; Wilson, T.; O'Steen, B.
2011-12-01
This qualitative study investigates behaviors and experiences of upper-year geosciences undergraduate students during an intensive role-play simulation, in which the students interpret geological data streams and manage a volcanic crisis event. We present the development of the simulation, its academic tasks, (group) role assignment strategies and planned facilitator interventions over three iterations. We aim to develop and balance an authentic, intensive and highly engaging capstone activity for volcanology and geo-hazard courses. Interview data were collected from academic and professional experts in the fields of Volcanology and Hazard Management (n=11) in order to characterize expertise in the field, characteristics of key roles in the simulation, and to validate the authenticity of tasks and scenarios. In each iteration, observations and student artifacts were collected (total student participants: 68) along with interviews (n=36) and semi-structured, open-ended questionnaires (n=26). Our analysis of these data indicates that increasing the structure (i.e. organization, role-specific tasks and responsibilities) lessens non-productive group dynamics, which allows for an increase in difficulty of academic tasks within the simulation without increasing the cognitive load on students. Under these conditions, students exhibit professional expert-like behaviours, in particular in the quality of decision-making, communication skills and task-efficiency. In addition to illustrating the value of using this simulation to teach geosciences concepts, this study has implications for many complex situated-learning activities.
The motional Stark effect diagnostic for ITER using a line-shift approach.
Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E
2008-10-01
The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.
Kim, Yong-Hwan; Kim, Junghoe; Lee, Jong-Hwan
2012-12-01
This study proposes an iterative dual-regression (DR) approach with sparse prior regularization to better estimate an individual's neuronal activation using the results of an independent component analysis (ICA) method applied to a temporally concatenated group of functional magnetic resonance imaging (fMRI) data (i.e., Tc-GICA method). An ordinary DR approach estimates the spatial patterns (SPs) of neuronal activation and corresponding time courses (TCs) specific to each individual's fMRI data with two steps involving least-squares (LS) solutions. Our proposed approach employs iterative LS solutions to refine both the individual SPs and TCs with an additional a priori assumption of sparseness in the SPs (i.e., minimally overlapping SPs) based on L1-norm minimization. To quantitatively evaluate the performance of this approach, semi-artificial fMRI data were created from resting-state fMRI data with the following considerations: (1) an artificially designed spatial layout of neuronal activation patterns with varying overlap sizes across subjects and (2) a BOLD time series (TS) with variable parameters such as onset time, duration, and maximum BOLD levels. To systematically control the spatial layout variability of neuronal activation patterns across the "subjects" (n=12), the degree of spatial overlap across all subjects was varied from a minimum of 1 voxel (i.e., 0.5-voxel cubic radius) to a maximum of 81 voxels (i.e., 2.5-voxel radius) across the task-related SPs with a size of 100 voxels for both the block-based and event-related task paradigms. In addition, several levels of maximum percentage BOLD intensity (i.e., 0.5, 1.0, 2.0, and 3.0%) were used for each degree of spatial overlap size. From the results, the estimated individual SPs of neuronal activation obtained from the proposed iterative DR approach with a sparse prior showed an enhanced true positive rate and reduced false positive rate compared to the ordinary DR approach. The estimated TCs of the task-related SPs from our proposed approach showed greater temporal correlation coefficients with a reference hemodynamic response function than those of the ordinary DR approach. Moreover, the efficacy of the proposed DR approach was also successfully demonstrated by the results of real fMRI data acquired from left-/right-hand clenching tasks in both block-based and event-related task paradigms. Copyright © 2012 Elsevier Inc. All rights reserved.
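A compact sketch of the general scheme described: alternate the two least-squares steps of dual regression and soft-threshold the subject spatial maps to impose the sparse prior. The matrix shapes, shrinkage level, and toy data are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterative_dual_regression(Y, S_group, lam=0.05, n_iter=10):
    """Y: subject data (time x voxels); S_group: group ICA maps (comp x voxels).

    Alternate the two least-squares steps of dual regression, applying an
    L1 soft-threshold to the subject spatial maps after each update.
    """
    S = S_group.copy()
    for _ in range(n_iter):
        T = Y @ np.linalg.pinv(S)            # step 1: maps -> time courses
        S = np.linalg.pinv(T) @ Y            # step 2: time courses -> maps
        S = soft(S, lam)                     # sparse prior on the maps
    return T, S

# Toy data: 2 components, 120 time points, 500 voxels
rng = np.random.default_rng(3)
S_true = soft(rng.standard_normal((2, 500)), 1.5)     # sparse maps
T_true = rng.standard_normal((120, 2))
Y = T_true @ S_true + 0.1 * rng.standard_normal((120, 500))
S_init = S_true + 0.05 * rng.standard_normal((2, 500))
T_hat, S_hat = iterative_dual_regression(Y, S_init)
print("map sparsity:", np.mean(S_hat == 0).round(2))
```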
Knobology in use: an experimental evaluation of ergonomics recommendations.
Overgård, Kjell Ivar; Fostervold, Knut Inge; Bjelland, Hans Vanhauwaert; Hoff, Thomas
2007-05-01
The scientific basis for ergonomics recommendations for controls has usually not been related to active goal-directed use. The present experiment tests how different knob sizes and torques affect operator performance. The task employed is to control a pointer by the use of a control knob, and is as such an experimentally defined goal-directed task relevant to machine systems in general. Duration of use, error associated with use (overshooting of the goal area) and movement reproduction were used as performance measures. Significant differences between knob sizes were found for movement reproduction. High torques led to less overshooting as opposed to low torques. The results from duration of use showed a tendency that the differences between knob sizes were reduced from the first iteration to the second iteration. The present results indicate that the ergonomically recommended ranges of knob sizes might differently affect operator performance.
Integrated prototyping environment for programmable automation
NASA Astrophysics Data System (ADS)
da Costa, Francis; Hwang, Vincent S. S.; Khosla, Pradeep K.; Lumia, Ronald
1992-11-01
We propose a rapid prototyping environment for robotic systems, based on tenets of modularity, reconfigurability and extendibility that may help build robot systems 'faster, better, and cheaper.' Given a task specification (e.g., repair brake assembly), the user browses through a library of building blocks that include both hardware and software components. Software advisors or critics recommend how blocks may be 'snapped' together to speedily construct alternative ways to satisfy task requirements. Mechanisms to allow 'swapping' competing modules for comparative test and evaluation studies are also included in the prototyping environment. After some iterations, a stable configuration or 'wiring diagram' emerges. This customized version of the general prototyping environment still contains all the hooks needed to incorporate future improvements in component technologies and to obviate unplanned obsolescence. The prototyping environment so described is relevant for both interactive robot programming (telerobotics) and iterative robot system development (prototyping).
Machine learning in motion control
NASA Technical Reports Server (NTRS)
Su, Renjeng; Kermiche, Noureddine
1989-01-01
The existing methodologies for robot programming originate primarily from robotic applications to manufacturing, where uncertainties of the robots and their task environment may be minimized by repeated off-line modeling and identification. In space applications of robots, however, a higher degree of automation is required for robot programming because of the desire to minimize human intervention. We discuss a new paradigm of robot programming which is based on the concept of machine learning. The goal is to let robots practice tasks by themselves and use the operational data to automatically improve their motion performance. The underlying mathematical problem is to solve the dynamical inverse problem by iterative methods. One of the key questions is how to ensure the convergence of the iterative process. There have been a few small steps taken into this important approach to robot programming. We give a representative result on the convergence problem.
A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam
In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC) framework, a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of work, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
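A toy analogue of the decomposition described: a single contraction (here an ordinary matrix multiply, the simplest tensor contraction) is split into tile-sized tasks that workers pull dynamically from a shared pool. This sketch uses Python threads for brevity and is not the DLTC library's implementation.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def contract_tile(args):
    """One unit task: a tile of the contraction C[i,j] = sum_k A[i,k]*B[k,j]."""
    A, B, i0, i1, j0, j1 = args
    return i0, i1, j0, j1, A[i0:i1, :] @ B[:, j0:j1]

def contract(A, B, tile=64, workers=4):
    """Decompose one contraction into tiles and let a pool of workers pull
    tasks dynamically, so fast workers pick up extra tiles (load balancing)."""
    m, n = A.shape[0], B.shape[1]
    tasks = [(A, B, i, min(i + tile, m), j, min(j + tile, n))
             for i in range(0, m, tile) for j in range(0, n, tile)]
    C = np.empty((m, n))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for i0, i1, j0, j1, block in pool.map(contract_tile, tasks):
            C[i0:i1, j0:j1] = block
    return C

A = np.random.default_rng(4).standard_normal((300, 200))
B = np.random.default_rng(5).standard_normal((200, 250))
assert np.allclose(contract(A, B), A @ B)
```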
Improvement of tritium accountancy technology for ITER fuel cycle safety enhancement
NASA Astrophysics Data System (ADS)
O'hira, S.; Hayashi, T.; Nakamura, H.; Kobayashi, K.; Tadokoro, T.; Nakamura, H.; Itoh, T.; Yamanishi, T.; Kawamura, Y.; Iwai, Y.; Arita, T.; Maruyama, T.; Kakuta, T.; Konishi, S.; Enoeda, M.; Yamada, M.; Suzuki, T.; Nishi, M.; Nagashima, T.; Ohta, M.
2000-03-01
In order to improve the safe handling and control of tritium for the ITER fuel cycle, effective in situ tritium accounting methods have been developed at the Tritium Process Laboratory in the Japan Atomic Energy Research Institute under one of the ITER-EDA R&D tasks. The remote and multilocation analysis of process gases by an application of laser Raman spectroscopy, developed and tested, could provide a measurement of hydrogen isotope gases with a detection limit of 0.3 kPa and analytical periods of 120 s. An in situ tritium inventory measurement by application of a 'self-assaying' storage bed with 25 g tritium capacity could provide a measurement with the required detection limit of less than 1% and a design proof of a bed with 100 g tritium capacity.
Design of the DEMO Fusion Reactor Following ITER.
Garabedian, Paul R; McFadden, Geoffrey B
2009-01-01
Runs of the NSTAB nonlinear stability code show there are many three-dimensional (3D) solutions of the advanced tokamak problem subject to axially symmetric boundary conditions. These numerical simulations based on mathematical equations in conservation form predict that the ITER international tokamak project will encounter persistent disruptions and edge localized mode (ELMS) crashes. Test particle runs of the TRAN transport code suggest that for quasineutrality to prevail in tokamaks a certain minimum level of 3D asymmetry of the magnetic spectrum is required which is comparable to that found in quasiaxially symmetric (QAS) stellarators. The computational theory suggests that a QAS stellarator with two field periods and proportions like those of ITER is a good candidate for a fusion reactor. For a demonstration reactor (DEMO) we seek an experiment that combines the best features of ITER, with a system of QAS coils providing external rotational transform, which is a measure of the poloidal field. We have discovered a configuration with unusually good quasisymmetry that is ideal for this task.
Iterative Importance Sampling Algorithms for Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
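A minimal sketch of the kind of iteration described: draw from a Gaussian proposal, compute self-normalized importance weights against the posterior, and refit the proposal mean and covariance from the weighted samples. The target density below is a toy stand-in for an expensive forward model, and the Gaussian normalization constant is omitted because it cancels in the self-normalized weights.

```python
import numpy as np

def iterate_proposal(log_post, mu, cov, n=5000, n_iter=8, rng=None):
    """Iteratively refit a Gaussian proposal to weighted posterior samples."""
    rng = rng or np.random.default_rng(0)
    for _ in range(n_iter):
        x = rng.multivariate_normal(mu, cov, size=n)
        diff = x - mu
        log_q = -0.5 * np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        logw = log_post(x) - log_q
        w = np.exp(logw - logw.max())
        w /= w.sum()                         # self-normalized weights
        mu = w @ x                           # weighted mean
        diff = x - mu
        cov = (w[:, None] * diff).T @ diff + 1e-9 * np.eye(len(mu))
    return mu, cov

# Target: a correlated 2D Gaussian posterior centered at [3, 3]
target_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(target_cov)
log_post = lambda x: -0.5 * np.einsum('ij,jk,ik->i', x - 3.0, prec, x - 3.0)
mu, cov = iterate_proposal(log_post, mu=np.zeros(2), cov=4 * np.eye(2))
print(np.round(mu, 2))   # should approach [3, 3]
```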
Advanced Inertial Technologies. Volume 3
1975-06-01
carried out under all of the technical tasks by means of publication of reports, presentation of papers, attendance at symposia, etc., this task is...sputter deposition by conventional RF sputter techniques. This choice was indicated by past experience on other programs showing that solid spherical...through R3 are source resistors for the op amp LPF and, as such, are inversely proportional to gain. Equation (4-4) must be solved by iteration
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
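The stopping-rule idea in the first task can be sketched as a simple control loop. Both `reconstruct_step` and `quality_observer` below are hypothetical stand-ins (the paper's observer is a trained CNN), so this shows only the control flow, not the authors' implementation.

```python
def reconstruct_with_observer(sinogram, reconstruct_step, quality_observer,
                              max_iters=200, patience=3):
    """Run iterative CT reconstruction, stopping when a learned observer
    says image quality has stopped improving, instead of using a fixed
    error threshold or iteration cap. Assumed stand-in signatures:
    reconstruct_step(image, sinogram) -> updated image
    quality_observer(image) -> scalar quality score (e.g. CNN output)
    """
    image = None
    best_score, stale = float("-inf"), 0
    for _ in range(max_iters):
        image = reconstruct_step(image, sinogram)
        score = quality_observer(image)
        if score > best_score:
            best_score, stale = score, 0
        else:
            stale += 1
            if stale >= patience:   # quality has plateaued: stop early
                break
    return image
```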
Pellet Injection in ITER with ∇B-induced Drift Effect using TASK/TR and HPI2 Codes
NASA Astrophysics Data System (ADS)
Kongkurd, R.; Wisitsorasak, A.
2017-09-01
The impact of pellet injection in the International Thermonuclear Experimental Reactor (ITER) is investigated using the integrated predictive modeling codes TASK/TR and HPI2. In the core, the plasma profiles are predicted by the TASK/TR code, in which the core transport models consist of a combination of the MMM95 anomalous transport model and NCLASS neoclassical transport. The pellet ablation in the plasma is described using the neutral gas shielding (NGS) model with inclusion of the ∇B-induced E×B drift of the ionized ablated pellet particles. It is found that high-field-side injection can deposit the pellet mass deeper than injection from the low-field side due to the advantage of the ∇B-induced drift. When pellets with a deuterium-tritium mixing ratio of unity are launched at a speed of 200 m/s, a radius of 3 mm, and a frequency of 2 Hz, the line average density and the plasma stored energy are increased by 80% and 25%, respectively. The pellet material is mostly deposited at a normalized minor radius of 0.5 from the edge.
Lunar lander conceptual design: Lunar base systems study task 2.2
NASA Technical Reports Server (NTRS)
1988-01-01
This study is a first look at the problem of building a lunar lander to support a small lunar surface base. One lander, which can land 25 metric tons, one way, or take a 6 metric ton crew capsule up and down is desired. A series of trade studies are used to narrow the choices and provide some general guidelines. Given a rough baseline, the systems are then reviewed. A conceptual design is then produced. The process was only carried through one iteration. Many more iterations are needed. Assumptions and groundrules are considered.
NASA Astrophysics Data System (ADS)
Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong
2012-10-01
Typical characteristics of remote sensing applications are concurrent tasks, such as those found in disaster rapid response. Existing approaches to composing a geographical information processing service chain search for an optimisation solution in what can be deemed a "selfish" way, which leads to conflicts amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model to analyse the competitive relationships between tasks is proposed. A best response function is used to assure each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying conflicts between tasks. Based on this, an iterative algorithm that converges to Nash equilibrium is presented, the aim being to provide good convergence and maximise the utilisation of all tasks under concurrent task conditions. Theoretical analyses and experiments showed that the newly proposed method, when compared to existing service composition methods, has better practical utility in all tasks.
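Best-response dynamics of the kind the abstract invokes can be sketched with a toy congestion game. The cost model below (a base cost plus a load penalty per competing task) is an invented stand-in for the paper's QoS model; a fixed point of the loop is a pure Nash equilibrium.

```python
def best_response_iteration(n_tasks, n_services, base_cost, load_penalty,
                            max_rounds=100):
    """Each task repeatedly switches to its best service given the others'
    current choices. When no task wants to deviate, the assignment is a
    Nash equilibrium of this singleton congestion game."""
    choice = [0] * n_tasks
    for _ in range(max_rounds):
        changed = False
        for t in range(n_tasks):
            load = [choice.count(s) for s in range(n_services)]
            load[choice[t]] -= 1  # exclude task t's own contribution
            costs = [base_cost[s] + load_penalty * load[s]
                     for s in range(n_services)]
            best = min(range(n_services), key=costs.__getitem__)
            if best != choice[t]:
                choice[t], changed = best, True
        if not changed:           # no profitable deviation remains
            break
    return choice

# five tasks choosing between two services
print(best_response_iteration(5, 2, base_cost=[1.0, 1.5], load_penalty=0.7))
```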
The presence of a perseverative iterative style in poor vs. good sleepers.
Barclay, N L; Gregory, A M
2010-03-01
Catastrophizing is present in worriers and poor sleepers. This study investigates whether poor sleepers possess a 'perseverative iterative style' which predisposes them to catastrophize any topic, regardless of content or affective valence, a style previously found to occur more commonly in worriers as compared to others. Poor (n=23) and good sleepers (n=37) were distinguished using the Pittsburgh Sleep Quality Index (PSQI), from a sample of adults in the general population. Participants were required to catastrophize 2 topics: worries about sleep, and a current personal worry; and to iterate the positive aspects of a hypothetical topic. Poor sleepers catastrophized/iterated more steps than good sleepers across these three interviews (F(1, 58)=7.35, p<.05). However, after controlling for anxiety and worry, this effect was reduced to non-significance for the 'sleep' and 'worry' topics, suggesting that anxiety may mediate some of the association between catastrophizing and sleep. There was still a tendency for poor sleepers to iterate more steps on the 'hypothetical' topic after controlling for anxiety and worry, which suggests that poor sleepers possess a cognitive style which may predispose them to continue iterating consecutive steps in open-ended tasks regardless of anxiety and worry. Future research should examine whether the presence of this cognitive style is significant in leading to or maintaining insomnia.
Singh, Ramandeep; Baby, Britty; Damodaran, Natesan; Srivastav, Vinkle; Suri, Ashish; Banerjee, Subhashis; Kumar, Subodh; Kalra, Prem; Prasad, Sanjiva; Paul, Kolin; Anand, Sneh; Kumar, Sanjeev; Dhiman, Varun; Ben-Israel, David; Kapoor, Kulwant Singh
2016-02-01
Box trainers are ideal simulators, given they are inexpensive, accessible, and use appropriate fidelity. Our objective was the development and validation of an open-source, partial task simulator that teaches the fundamental skills necessary for endonasal skull-base neuro-endoscopic surgery. We defined the Neuro-Endo-Trainer (NET) SkullBase-Task-GraspPickPlace with an activity area by analyzing the computed tomography scans of 15 adult patients with sellar, suprasellar, and parasellar tumors. Four groups of participants (Group E, n = 4: expert neuroendoscopists; Group N, n = 19: novice neurosurgeons; Group R, n = 11: neurosurgery residents with multiple iterations; and Group T, n = 27: neurosurgery residents with a single iteration) performed grasp, pick, and place tasks using NET and were graded on task completion time and skills assessment scale score. Group E had lower task completion times and greater skills assessment scale scores than both Groups N and R (P ≤ 0.03, 0.001). The performance of Groups N and R was found to be equivalent; in self-assessing neuro-endoscopic skill, the participants in these groups were found to have equally low pretraining scores (4/10) with significant improvement shown after NET simulation (6 and 7, respectively). Angled scopes resulted in decreased scores with tilted plates compared with straight plates (30°, P ≤ 0.04; 45°, P ≤ 0.001). With tilted plates, decreased scores were observed when we compared the 0° with the 45° endoscope (right, P ≤ 0.008; left, P ≤ 0.002). The NET, a face and construct valid open-source partial task neuroendoscopic trainer, was designed. Presimulation novice neurosurgeons and neurosurgical residents were described as having insufficient skills and preparation to practice neuro-endoscopy. Plate tilt and endoscope angle were shown to be important factors in participant performance. The NET was found to be a useful partial-task trainer for skill building in neuro-endoscopy. Copyright © 2016 Elsevier Inc. All rights reserved.
Understanding the Concepts of Proportion and Ratio Constructed by Two Grade Six Students.
ERIC Educational Resources Information Center
Singh, Parmjit
2000-01-01
Reports on a study designed to construct an understanding of two grade 6 students' proportional reasoning schemes. Finds that two mental operations, unitizing and iterating, play an important role in students' use of multiplicative thinking in proportion tasks. (Author/MM)
Liao, Yu-Kai; Tseng, Sheng-Hao
2014-01-01
Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and could be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at various SDSs were mutually referenced to complete one round of iteration and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties with various layer thickness and optical property settings. It is expected that this algorithm can work with photon transport models in frequency and time domain for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828
NASA Technical Reports Server (NTRS)
Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
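The abstract's premise, that a good preconditioner is what keeps the iteration count low, can be made concrete with a generic preconditioned conjugate gradient loop. This is textbook PCG, shown only to illustrate where the preconditioner enters; it is not the paper's frequency-dependent wavelet preconditioner.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-8, max_iters=50):
    """Solve A x = b for symmetric positive-definite A. M_inv applies
    the preconditioner (an approximation of A^{-1}); the better it
    approximates A^{-1}, the fewer iterations are needed."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# usage with a simple diagonal (Jacobi) preconditioner
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = preconditioned_cg(A, b, M_inv=lambda r: r / np.diag(A))
```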
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and therefore design closure issues continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer friendly.
Journalism as Model for Civic and Information Literacies
ERIC Educational Resources Information Center
Smirnov, Natalia; Saiyed, Gulnaz; Easterday, Matthew W.; Lam, Wan Shun Eva
2018-01-01
Journalism can serve as a generative disciplinary context for developing civic and information literacies needed to meaningfully participate in an increasingly networked and mediated public sphere. Using interviews with journalists, we developed a cognitive task analysis model, identifying an iterative sequence of production and domain-specific…
Gale, Maggie; Ball, Linden J
2012-04-01
Hypothesis-testing performance on Wason's (Quarterly Journal of Experimental Psychology 12:129-140, 1960) 2-4-6 task is typically poor, with only around 20% of participants announcing the to-be-discovered "ascending numbers" rule on their first attempt. Enhanced solution rates can, however, readily be observed with dual-goal (DG) task variants requiring the discovery of two complementary rules, one labeled "DAX" (the standard "ascending numbers" rule) and the other labeled "MED" ("any other number triples"). Two DG experiments are reported in which we manipulated the usefulness of a presented MED exemplar, where usefulness denotes cues that can establish a helpful "contrast class" that can stand in opposition to the presented 2-4-6 DAX exemplar. The usefulness of MED exemplars had a striking facilitatory effect on DAX rule discovery, which supports the importance of contrast-class information in hypothesis testing. A third experiment ruled out the possibility that the useful MED triple seeded the correct rule from the outset and obviated any need for hypothesis testing. We propose that an extension of Oaksford and Chater's (European Journal of Cognitive Psychology 6:149-169, 1994) iterative counterfactual model can neatly capture the mechanisms by which DG facilitation arises.
Supporting the Health and Wellness of Individuals with Psychiatric Disabilities
ERIC Educational Resources Information Center
Swarbrick, Margaret; Nemec, Patricia B.
2016-01-01
Purpose: Psychiatric rehabilitation is recognized as a field with specialized knowledge and skills required for practice. The certified psychiatric rehabilitation practitioner (CPRP) credential, an exam-based certification process, is based on a regularly updated job task analysis that, in its most recent iteration, identified the new core…
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
We study a distributed task planning model for multiple autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve for the optimal multi-AUV task planning scheme. In the uncertain marine environment, the rolling time domain control technique is used to realize a numerical optimization in a narrowed time range. Rolling time domain control is one of the better task planning techniques, which can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal and obtain an approximately optimal solution.
Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal
Modern hardware contains parallel execution resources that are well-suited for data parallelism (vector units) and for task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task- and data-parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently on multicores. Our framework allows us to define schedulers that can dynamically select between executing task blocks on vector units or multicores. We show that these schedulers are asymptotically optimal, and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel programs into task-block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.
NASA Astrophysics Data System (ADS)
Belokurov, V. P.; Belokurov, S. V.; Korablev, R. A.; Shtepa, A. A.
2018-05-01
The article deals with decision making on transport tasks over search iterations in the management of motor transport processes. An approach to selecting the best option for specific situations in the management of complex multi-criteria transport processes is suggested.
On Adaptation, Maximization, and Reinforcement Learning among Cognitive Strategies
ERIC Educational Resources Information Center
Erev, Ido; Barron, Greg
2005-01-01
Analysis of binary choice behavior in iterated tasks with immediate feedback reveals robust deviations from maximization that can be described as indications of 3 effects: (a) a payoff variability effect, in which high payoff variability seems to move choice behavior toward random choice; (b) underweighting of rare events, in which alternatives…
Using Formative Assessment to Support Complex Learning in Conditions of Social Adversity
ERIC Educational Resources Information Center
Crossouard, Barbara
2011-01-01
This article reports on research into formative assessment within a task design that produces multiple opportunities for teacher and pupil dialogue. It draws upon in-depth case studies conducted in schools in socially deprived areas of Scotland, using policy and documentary analysis, video-observation, and an iterative series of interviews with…
ERIC Educational Resources Information Center
Reali, Florencia; Griffiths, Thomas L.
2009-01-01
The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this…
Optimal Iterative Task Scheduling for Parallel Simulations.
1991-03-01
A Model for the Strategic Use of Metacognitive Reading Comprehension Strategies
ERIC Educational Resources Information Center
Gómez González, Juan David
2017-01-01
This paper describes an approach to developing intermediate level reading proficiency through a strategic and iterative use of a discreet set of tasks that combine some of the more common metacognitive theories and strategies that have been published in the past thirty years. The case for incorporating this composite approach into reading…
Estimating Standardized Linear Contrasts of Means with Desired Precision
ERIC Educational Resources Information Center
Bonett, Douglas G.
2009-01-01
L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…
User-oriented evaluation of a medical image retrieval system for radiologists.
Markonis, Dimitrios; Holzer, Markus; Baroz, Frederic; De Castaneda, Rafael Luis Ruiz; Boyer, Célia; Langs, Georg; Müller, Henning
2015-10-01
This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goal of the tests is to assess the usability of the system and to identify system and interface aspects that need improvement or would be useful additions. Another objective is to investigate the system's added value to radiology information retrieval. The study provides an insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. User tests with a working system for retrieving images from the biomedical literature were performed in an iterative manner, where each iteration had the participants perform radiology information-seeking tasks, after which the system, as well as the user study design itself, was refined. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates were recorded and feedback was collected through survey forms. In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 min in both cases. Users quickly felt comfortable with the novel techniques and tools (after 5 to 15 min), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities, while the user feedback helped identify the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and searching for articles using image examples. The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects. Radiologists quickly become familiar with the functionalities but have several comments on desired functionalities. The analysis of the results can potentially assist system refinement for future medical information retrieval systems. Moreover, the methodology presented, as well as the discussion on the limitations and challenges of such studies, can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive systems is still only rarely performed. Such interactive evaluations can be limited in effort if done iteratively and can give many insights for developing better systems. Copyright © 2015. Published by Elsevier Ireland Ltd.
NASA Astrophysics Data System (ADS)
Akiba, Masato; Matsui, Hideki; Takatsu, Hideyuki; Konishi, Satoshi
Technical issues regarding the fusion power plant that need to be developed in the period of ITER construction and operation, both with ITER and with other facilities that complement ITER, are described in this section. Three major fields are considered to be important in fusion technology. Section 4.1 summarizes the blanket study and ITER Test Blanket Module (TBM) development, which focuses its effort on the first generation power blanket to be installed in DEMO. ITER will be equipped with 6 TBMs, which are developed under each party's fusion program. In Japan, the solid breeder using water as a coolant is the primary candidate, and the He-cooled pebble bed is the alternative. Other liquid options such as LiPb, Li or molten salt are developed by other parties' initiatives. The Test Blanket Working Group (TBWG) is coordinating these efforts. Japanese universities are investigating advanced concepts and fundamental crosscutting technologies. Section 4.2 introduces material development and, particularly, the international irradiation facility, IFMIF. Reduced activation ferritic/martensitic steels are identified as promising candidates for the structural material of the first generation fusion blanket, while vanadium alloys and SiC/SiC composites are pursued as advanced options. The IFMIF is currently planning the next phase of joint activity, EVEDA (Engineering Validation and Engineering Design Activity), which encompasses construction. Material studies together with the ITER TBM will provide essential technical information for development of the fusion power plant. Other technical issues to be addressed regarding the first generation fusion power plant are summarized in section 4.3. Development of components for ITER made remarkable progress on the major essential technologies also necessary for future fusion plants; however, many still need further improvement toward a power plant. Such areas include the divertor, plasma heating/current drive, magnets, tritium, and remote handling. There remain many other technical issues for the power plant which require integrated efforts.
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Krauthammer, Prof. Michael
2010-01-01
There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use.
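A minimal sketch of iterative projection-histogram splitting follows: alternate row and column cuts on a binary ink mask, where each pass splits regions at empty rows or columns. It illustrates the general technique only; the paper's actual algorithm (implemented in C++) differs in its details.

```python
import numpy as np

def projection_segments(mask, axis, min_gap=2):
    """Split a binary ink mask into runs along one axis using its
    projection histogram (count of foreground pixels per row/column)."""
    profile = mask.sum(axis=axis)
    segments, start, gap = [], None, 0
    for i, v in enumerate(profile):
        if v > 0:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:                 # a wide-enough blank band ends the run
                segments.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:
        segments.append((start, len(profile)))
    return segments

def detect_text_boxes(mask, max_depth=3):
    """Recursively alternate horizontal and vertical projection cuts,
    mimicking the iterative application the abstract reports."""
    boxes = [(0, mask.shape[0], 0, mask.shape[1])]
    for depth in range(max_depth):
        axis = 1 if depth % 2 == 0 else 0      # split rows first, then columns
        new_boxes = []
        for r0, r1, c0, c1 in boxes:
            sub = mask[r0:r1, c0:c1]
            for s0, s1 in projection_segments(sub, axis):
                if axis == 1:
                    new_boxes.append((r0 + s0, r0 + s1, c0, c1))
                else:
                    new_boxes.append((r0, r1, c0 + s0, c0 + s1))
        boxes = new_boxes
    return boxes
```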
Investigation of iterative image reconstruction in low-dose breast CT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Yang, Kai; Boone, John M.; Han, Xiao; Sidky, Emil Y.; Pan, Xiaochuan
2014-06-01
There is interest in developing computed tomography (CT) dedicated to breast-cancer imaging. Because breast tissues are radiation-sensitive, the total radiation exposure in a breast-CT scan is kept low, often comparable to a typical two-view mammography exam, thus resulting in a challenging low-dose-data-reconstruction problem. In recent years, evidence has been found that suggests that iterative reconstruction may yield images of improved quality from low-dose data. In this work, based upon the constrained image total-variation minimization program and its numerical solver, i.e., the adaptive steepest descent-projection onto the convex set (ASD-POCS), we investigate and evaluate iterative image reconstructions from low-dose breast-CT data of patients, with a focus on identifying and determining key reconstruction parameters, devising surrogate utility metrics for characterizing reconstruction quality, and tailoring the program and ASD-POCS to the specific reconstruction task under consideration. The ASD-POCS reconstructions appear to outperform the corresponding clinical FDK reconstructions, in terms of subjective visualization and surrogate utility metrics.
Visual recognition and inference using dynamic overcomplete sparse learning.
Murray, Joseph F; Kreutz-Delgado, Kenneth
2007-09-01
We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
ERIC Educational Resources Information Center
de Klerk, Sebastiaan; Veldkamp, Bernard P.; Eggen, Theo J. H. M.
2018-01-01
The development of any assessment should be an iterative and careful process. Ideally, this process is guided by a well-defined framework (see for example Downing in: Downing and Haladyna (eds) "Handbook of test development," Lawrence Erlbaum Associates, Mahwah, 2006; Mislevy et al. in "On the roles of task model variables in…
ERIC Educational Resources Information Center
Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion
2016-01-01
This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks were iteratively developed to assess student understanding of an array of physical science concepts, including net force,…
FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)
2017-06-01
approaches. As a result, the following assumptions guided our efforts in developing modeling and descriptive metrics for evaluation purposes...Application Evaluation. Our analytic workflow for evaluation is to first provide descriptive statistics about applications across metrics (performance...distributions for evaluation purposes because the goal of evaluation is accurate description, not inference (e.g., prediction). Outliers depicted
Assessing Children's Understanding of Length Measurement: A Focus on Three Key Concepts
ERIC Educational Resources Information Center
Bush, Heidi
2009-01-01
In this article, the author presents three different tasks that can be used to assess students' understanding of the concept of length. Three important measurement concepts for students to understand are transitive reasoning, use of identical units, and iteration. In any teaching and learning process it is important to acknowledge students'…
Practicing Design Judgement through Intention-Focused Course Curricula
ERIC Educational Resources Information Center
Fernaeus, Ylva; Lundström, Anders
2015-01-01
This paper elaborates on how design judgement can be practiced in design education, as explored in several iterations of an advanced course in interaction design. The students were probed to address four separate design tasks based on distinct high-level intentions, i.e. to 1) take societal responsibility, 2) to generate profit, 3) to explore a…
The Translation of Cognitive Paradigms for Patient Research
Luck, Steven J.; Gold, James M.
2008-01-01
Many cognitive tasks have been developed by basic scientists to isolate and measure specific cognitive processes in healthy young adults, and these tasks have the potential to provide important information about cognitive dysfunction in psychiatric disorders, both in psychopathology research and in clinical trials. However, several practical and conceptual challenges arise in translating these tasks for patient research. Here we outline a paradigm development strategy—which involves iteratively testing modifications of the tasks in college students, in older healthy adults, and in patients—that we have used to successfully translate a large number of cognitive tasks for use in schizophrenia patients. This strategy makes it possible to make the tasks patient friendly while maintaining their cognitive precision. We also outline several measurement issues that arise in these tasks, including differences in baseline performance levels and speed-accuracy trade-offs, and we provide suggestions for addressing these issues. Finally, we present examples of 2 experiments, one of which exemplifies our recommendations regarding measurement issues and was a success and one of which was a painful but informative failure. PMID:18487226
Design & control of a 3D stroke rehabilitation platform.
Cai, Z; Tong, D; Meadmore, K L; Freeman, C T; Hughes, A M; Rogers, E; Burridge, J H
2011-01-01
An upper limb stroke rehabilitation system is developed which combines electrical stimulation with mechanical arm support, to assist patients performing 3D reaching tasks in a virtual reality environment. The Stimulation Assistance through Iterative Learning (SAIL) platform applies electrical stimulation to two muscles in the arm using model-based control schemes which learn from previous trials of the task. This results in accurate movement which maximises the therapeutic effect of treatment. The principal components of the system are described and experimental results confirm its efficacy for clinical use in upper limb stroke rehabilitation. © 2011 IEEE
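The trial-to-trial learning that SAIL exploits can be illustrated with the simplest iterative learning control update, u_{k+1}(t) = u_k(t) + L·e_k(t). The first-order-lag "plant" below is an invented stand-in for the stimulated arm's response, not the platform's model-based scheme.

```python
import numpy as np

def ilc_update(u, error, learning_gain=0.5):
    """One trial-to-trial update of a basic iterative learning controller:
    the stimulation signal is corrected by the previous trial's tracking
    error, so accuracy improves over repeated attempts at the same task."""
    return u + learning_gain * error

def plant(u, a=0.8):
    """Stand-in first-order lag for the stimulated arm's response."""
    y = np.zeros_like(u)
    for i in range(1, len(u)):
        y[i] = a * y[i - 1] + (1 - a) * u[i]
    return y

t = np.linspace(0.0, 1.0, 200)
reference = np.sin(np.pi * t)            # desired reaching trajectory
u = np.zeros_like(t)                     # stimulation starts at zero
for trial in range(20):
    error = reference - plant(u)
    u = ilc_update(u, error)             # learn from this trial's error
print(f"max tracking error after 20 trials: {np.abs(reference - plant(u)).max():.4f}")
```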
NASA Astrophysics Data System (ADS)
Kobayashi, K.; Isobe, K.; Iwai, Y.; Hayashi, T.; Shu, W.; Nakamura, H.; Kawamura, Y.; Yamada, M.; Suzuki, T.; Miura, H.; Uzawa, M.; Nishikawa, M.; Yamanishi, T.
2007-12-01
Confinement and the removal of tritium are key subjects for the safety of ITER. The ITER buildings are confinement barriers for tritium. In a hot cell, tritium is often released as vapour and is in contact with the inner walls. The inner walls of the ITER tritium plant building will also be exposed to tritium in an accident. The tritium released in the buildings is removed by the atmosphere detritiation systems (ADS), where the tritium is oxidized by catalysts and is removed as water. A special gas, SF6, is used in ITER and is expected to be released in an accident such as a fire. Although SF6 gas is a potential catalyst poison, the performance of ADS in the presence of SF6 has not yet been confirmed. Tritiated water is produced in the regeneration process of ADS and is subsequently processed by the ITER water detritiation system (WDS). One of the key components of the WDS is an electrolysis cell. To address the issues in global tritium confinement, a series of experimental studies have been carried out as an ITER R&D task: (1) tritium behaviour in concrete; (2) the effect of SF6 on the performance of ADS; and (3) tritium durability of the electrolysis cell of the ITER-WDS. (1) The tritiated water vapour penetrated up to 50 mm into the concrete from the surface in six months' exposure; the penetration rate of tritium in the concrete was thus appreciable. The isotope exchange capacity of the cement paste plays an important role in tritium trapping and penetration into concrete materials when concrete is exposed to tritiated water vapour. The effect of coating on the penetration rate needs to be evaluated quantitatively from actual tritium tests. (2) SF6 gas decreased the detritiation factor of ADS. Since the effect of SF6 depends closely on its concentration, the amount of SF6 released into the tritium handling area in an accident should be reduced by appropriate arrangement of components in the buildings. (3) The electrolysis cell of the ITER-WDS is expected to endure three years' operation under the ITER design conditions. Measuring the concentration of fluorine ions could be a promising technique for monitoring damage to the electrolysis cell.
NASA Astrophysics Data System (ADS)
Philipps, V.; Malaquias, A.; Hakola, A.; Karhunen, J.; Maddaluno, G.; Almaviva, S.; Caneve, L.; Colao, F.; Fortuna, E.; Gasior, P.; Kubkowska, M.; Czarnecka, A.; Laan, M.; Lissovski, A.; Paris, P.; van der Meiden, H. J.; Petersson, P.; Rubel, M.; Huber, A.; Zlobinski, M.; Schweer, B.; Gierse, N.; Xiao, Q.; Sergienko, G.
2013-09-01
Analysis and understanding of wall erosion, material transport and fuel retention are among the most important tasks for ITER and future devices, since these questions determine largely the lifetime and availability of the fusion reactor. These data are also of extreme value to improve the understanding and validate the models of the in vessel build-up of the T inventory in ITER and future D-T devices. So far, research in these areas is largely supported by post-mortem analysis of wall tiles. However, access to samples will be very much restricted in the next-generation devices (such as ITER, JT-60SA, W7-X, etc) with actively cooled plasma-facing components (PFC) and increasing duty cycle. This has motivated the development of methods to measure the deposition of material and retention of plasma fuel on the walls of fusion devices in situ, without removal of PFC samples. For this purpose, laser-based methods are the most promising candidates. Their feasibility has been assessed in a cooperative undertaking in various European associations under EFDA coordination. Different laser techniques have been explored both under laboratory and tokamak conditions with the emphasis to develop a conceptual design for a laser-based wall diagnostic which is integrated into an ITER port plug, aiming to characterize in situ relevant parts of the inner wall, the upper region of the inner divertor, part of the dome and the upper X-point region.
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space, layer by layer. An iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
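Measuring FWHM along a principal direction, as described above, reduces to locating the half-maximum crossings of a 1D profile. Here is a minimal sketch with linear interpolation at the crossings; it implements only the basic measurement, not the paper's full protocol.

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full-width-at-half-maximum of a 1D point-source profile, with
    linear interpolation at the half-maximum crossings. `spacing` is
    the pixel size, so the result is in physical units."""
    profile = np.asarray(profile, dtype=float)
    peak = profile.argmax()
    half = profile[peak] / 2.0
    # walk left and right from the peak to the half-maximum crossings
    left = peak
    while left > 0 and profile[left] > half:
        left -= 1
    right = peak
    while right < len(profile) - 1 and profile[right] > half:
        right += 1
    # interpolate the fractional crossing positions
    lx = left + (half - profile[left]) / (profile[left + 1] - profile[left])
    rx = right - 1 + (profile[right - 1] - half) / (profile[right - 1] - profile[right])
    return (rx - lx) * spacing

# a Gaussian profile with sigma = 2 px: expected FWHM ~ 2.355 * 2 = 4.71 px
x = np.arange(41)
print(fwhm(np.exp(-0.5 * ((x - 20) / 2.0) ** 2)))
```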
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover the fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
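The ISTA-with-generalized-thresholding structure can be sketched as below. The p-shrinkage operator used here (Chartrand's form, which reduces to soft thresholding at p = 1) is one common choice assumed for illustration; the paper's exact thresholding function may differ, and a practical CS-MRI solver would typically threshold wavelet coefficients rather than pixels.

```python
import numpy as np

def p_shrink(x, lam, p=0.5):
    """Generalized (p-)shrinkage applied to complex values; at p = 1
    this is ordinary soft thresholding."""
    mag = np.abs(x)
    with np.errstate(divide="ignore"):
        shrunk = np.maximum(mag - lam ** (2 - p) * mag ** (p - 1), 0)
    return shrunk * np.exp(1j * np.angle(x))

def ista_pthresh(y, mask, lam=0.02, p=0.5, n_iters=100):
    """ISTA-style recovery of an image assumed sparse in the image domain
    from undersampled k-space. y: k-space samples (zeros where unsampled),
    mask: boolean sampling pattern. The iteration alternates a k-space
    data-consistency gradient step with the sparsity-promoting threshold."""
    x = np.fft.ifft2(y)                           # zero-filled starting image
    for _ in range(n_iters):
        residual = mask * (np.fft.fft2(x) - y)    # data mismatch in k-space
        x = x - np.fft.ifft2(residual)            # gradient step (projection)
        x = p_shrink(x, lam, p)                   # generalized thresholding
    return x
```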
Myria: Scalable Analytics as a Service
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.; Whitaker, A.
2014-12-01
At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
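A process-centric pursuit of the kind described can be sketched as a fixed-depth unrolled ISTA network: the matrices W and S and the threshold become free parameters that could then be trained discriminatively. The initialization below merely reproduces a fixed number of ISTA steps for a given dictionary D; it is an architectural sketch in the spirit of learned pursuit, not the paper's trained model.

```python
import numpy as np

def soft(x, theta):
    """Elementwise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

class UnrolledISTA:
    """Fixed-complexity pursuit: each loop iteration plays the role of one
    network 'layer'. W, S and theta are initialized from the dictionary D
    so the untrained network computes exactly `depth` steps of ISTA."""
    def __init__(self, D, lam=0.1, depth=10):
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of D^T D
        self.W = D.T / L                     # learnable filter matrix
        self.S = np.eye(D.shape[1]) - D.T @ D / L   # learnable inhibition matrix
        self.theta = lam / L                 # learnable threshold
        self.depth = depth

    def encode(self, x):
        b = self.W @ x
        z = soft(b, self.theta)
        for _ in range(self.depth - 1):
            z = soft(b + self.S @ z, self.theta)
        return z

# usage: sparse-code a signal against a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
z = UnrolledISTA(D).encode(rng.standard_normal(20))
```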
Examining the response programming function of the Quiet Eye: Do tougher shots need a quieter eye?
Walters-Symons, Rosanna; Wilson, Mark; Klostermann, Andre; Vine, Samuel
2018-02-01
Support for the proposition that the Quiet Eye (QE) duration reflects a period of response programming (including task parameterisation) has come from research showing that an increase in task difficulty is associated with increases in QE duration. Here, we build on previous research by manipulating three elements of task difficulty that correspond with different parameters of golf-putting performance; force production, impact quality and target line. Longer QE durations were found for more complex iterations of the task and furthermore, more sensitive analyses of the QE duration suggest that the early QE proportion (prior to movement initiation) is closely related to force production and impact quality. However, these increases in QE do not seem functional in terms of supporting improved performance. Further research is needed to explore QE's relationship with performance under conditions of increased difficulty.
Eschler, Jordan; Meas, Perry Lin; Lozano, Paula; McClure, Jennifer B.; Ralston, James D.; Pratt, Wanda
2016-01-01
People with a chronic illness must manage a myriad of tasks to support their health. Online patient portals can provide vital information and support in managing health tasks through notification and reminder features. However, little is known about the efficacy of these features in managing health tasks via the portal. To elicit feedback about reminder and notification features in patient portals, we employed a patient-centered approach to design new features for managing health tasks within an existing portal tool. We tested three iteratively designed prototypes with 19 patients and caregivers. Our findings provide insights into users’ attitudes, behavior, and motivations in portal use. Design implications based on these insights include: (1) building on positive aspects of clinician relationships to enhance engagement with the portal; (2) using face-to-face visits to promote clinician collaboration in portal use; and (3) allowing customization of portal modules to support tasks based on user roles. PMID:28269850
Eschler, Jordan; Meas, Perry Lin; Lozano, Paula; McClure, Jennifer B; Ralston, James D; Pratt, Wanda
2016-01-01
People with a chronic illness must manage a myriad of tasks to support their health. Online patient portals can provide vital information and support in managing health tasks through notification and reminder features. However, little is known about the efficacy of these features in managing health tasks via the portal. To elicit feedback about reminder and notification features in patient portals, we employed a patient-centered approach to design new features for managing health tasks within an existing portal tool. We tested three iteratively designed prototypes with 19 patients and caregivers. Our findings provide insights into users' attitudes, behavior, and motivations in portal use. Design implications based on these insights include: (1) building on positive aspects of clinician relationships to enhance engagement with the portal; (2) using face-to-face visits to promote clinician collaboration in portal use; and (3) allowing customization of portal modules to support tasks based on user roles.
ERIC Educational Resources Information Center
Furtak, Erin Marie; Circi, Ruhan; Heredia, Sara C.
2018-01-01
This article describes a 4-year study of experienced high school biology teachers' participation in a five-step professional development experience in which they iteratively studied student ideas with the support of a set of learning progressions, designed formative assessment activities, practiced using those activities with their students,…
Is a Single-Bladed Knife Enough to Dissect Human Cognition? Commentary on Griffiths et al.
ERIC Educational Resources Information Center
Fu, Wai-Tat
2008-01-01
Griffiths, Christian, and Kalish (this issue) present an iterative-learning paradigm applying a Bayesian model to understand inductive biases in categorization. The authors argue that the paradigm is useful as an exploratory tool to understand inductive biases in situations where little is known about the task. It is argued that a theory developed…
Casanueva, Felipe F; Barkan, Ariel L; Buchfelder, Michael; Klibanski, Anne; Laws, Edward R; Loeffler, Jay S; Melmed, Shlomo; Mortini, Pietro; Wass, John; Giustina, Andrea
2017-10-01
With the goal of generating uniform criteria among centers dealing with pituitary tumors and of enhancing patient care, the Pituitary Society decided to develop criteria for Pituitary Tumors Centers of Excellence (PTCOE). To carry out that task, a group of ten experts served as a Task Force, and through two years of iterative work an initial draft was elaborated. This draft was discussed, modified and finally approved by the Board of Directors of the Pituitary Society. The document was presented and debated at a specific session of the Congress of the Pituitary Society, Orlando 2017, and suggestions were incorporated. Finally, the document was distributed to a large group of global experts who introduced further modifications, with final endorsement. After five years of iterative work, a document with the ideal criteria for a PTCOE is presented. Acknowledging that very few centers in the world, if any, likely fulfill the requirements presented here, the document may serve as a tool to guide improvements in the care delivered to patients with pituitary disorders. All these criteria must be accommodated to the health regulations and organization of a given country.
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation, and are assigned to processors by optimal static scheduling at compile time in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.
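As a toy illustration of compile-time static scheduling, the sketch below assigns a small task graph to processors with a simple list-scheduling heuristic; the task names, costs and heuristic are invented and far simpler than OSCAR's actual near-fine-grain scheduler:

import heapq

# Toy task graph: name -> (cost, successors); all values are illustrative.
tasks = {"a": (2, ["c"]), "b": (3, ["c", "d"]), "c": (1, ["e"]),
         "d": (2, ["e"]), "e": (1, [])}

def static_schedule(tasks, n_procs=2):
    # List scheduling: assign each ready task to the earliest-free processor.
    indeg = {t: 0 for t in tasks}
    for _, succs in tasks.values():
        for s in succs:
            indeg[s] += 1
    ready = [t for t, d in indeg.items() if d == 0]
    procs = [(0.0, p) for p in range(n_procs)]   # (time free, processor id)
    heapq.heapify(procs)
    finish, schedule = {}, []
    while ready:
        t = ready.pop(0)
        cost, succs = tasks[t]
        free_at, p = heapq.heappop(procs)
        # A task cannot start before all of its predecessors have finished.
        preds_done = [finish[u] for u, (_, ss) in tasks.items() if t in ss]
        start = max([free_at] + preds_done)
        finish[t] = start + cost
        schedule.append((t, p, start))
        heapq.heappush(procs, (finish[t], p))
        for s in succs:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return schedule

print(static_schedule(tasks))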
RAVE: Rapid Visualization Environment
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos
1994-01-01
Visualization is used in the process of analyzing large, multidimensional data sets. However, selecting and creating visualizations that are appropriate for the characteristics of a particular data set and that satisfy the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts often do not have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.
NASA Astrophysics Data System (ADS)
Ayalon, Michal; Watson, Anne; Lerman, Steve
2016-09-01
This study examines expressions of reasoning by some higher-achieving 11- to 18-year-old English students responding to a survey consisting of function tasks developed in collaboration with their teachers. We report on 70 students, 10 from each of English years 7-13. Iterative and comparative analysis identified capabilities and difficulties of students and suggested conjectures concerning links between the affordances of the tasks, the curriculum, and students' responses. The paper focuses on five of the survey tasks and highlights connections between informal and formal expressions of reasoning about variables in learning. We introduce the notion of `schooled' expressions of reasoning, neither formal nor informal, to emphasise the role of the formatting tools introduced in school that shape future understanding and reasoning.
Deutsch, Judith E
2009-01-01
Improving walking for individuals with musculoskeletal and neuromuscular conditions is an important aspect of rehabilitation. The capabilities of clinicians who address these rehabilitation issues could be augmented with innovations such as virtual reality gaming-based technologies. The chapter provides an overview of virtual reality gaming-based technologies currently being developed and tested to improve the motor and cognitive elements required for ambulation and mobility in different patient populations. Included as well is a detailed description of a single VR system, covering the rationale for its development and the iterative refinement of the system based on clinical science concepts. These concepts include: neural plasticity, part-task training, whole-task training, task-specific training, principles of exercise and motor learning, sensorimotor integration, and visual spatial processing.
Adaptive rehabilitation gaming system: on-line individualization of stroke rehabilitation.
Nirme, Jens; Duff, Armin; Verschure, Paul F M J
2011-01-01
The effects of stroke differ considerably in degree and symptoms for different patients. It has been shown that specific, individualized and varied therapy favors recovery. The Rehabilitation Gaming System (RGS) is a Virtual Reality (VR) based rehabilitation system designed following these principles. We have developed two algorithms to control the level of task difficulty that a user of the RGS is exposed to, as well as providing controlled variation in the therapy. In this paper, we compare the two algorithms by running numerical simulations and a study with healthy subjects. We show that both algorithms allow for individualization of the challenge level of the task. Further, the results reveal that the algorithm that iteratively learns a user model for each subject also allows a high variation of the task.
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.
2015-04-01
Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing the radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 down to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low-dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
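For readers unfamiliar with the observer model used here, the following sketch shows the basic mechanics of a channelized Hotelling observer on synthetic data; the random channels and Gaussian "images" are placeholders (practical CHOs use, e.g., Gabor or Laguerre-Gauss channels applied to reconstructed image patches):

import numpy as np

def cho_auc(imgs_signal, imgs_noise, channels):
    # Channelize, build the Hotelling template on a training half,
    # score the test half, and return the AUC of the test statistic.
    v_s = imgs_signal @ channels          # (N, n_channels) channel outputs
    v_n = imgs_noise @ channels
    half = len(v_s) // 2
    tr_s, te_s = v_s[:half], v_s[half:]
    tr_n, te_n = v_n[:half], v_n[half:]
    S = 0.5 * (np.cov(tr_s.T) + np.cov(tr_n.T))          # intra-class covariance
    w = np.linalg.solve(S, tr_s.mean(0) - tr_n.mean(0))  # Hotelling template
    t_s, t_n = te_s @ w, te_n @ w
    # AUC via the Wilcoxon-Mann-Whitney statistic.
    return np.mean(t_s[:, None] > t_n[None, :])

rng = np.random.default_rng(1)
channels = rng.standard_normal((64, 4))          # 4 invented channels
signal = rng.standard_normal((200, 64)) + 0.5    # weak-signal class
noise = rng.standard_normal((200, 64))           # noise-only class
print("AUC ~", cho_auc(signal, noise, channels))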
Neurocognitive systems related to real-world prospective memory.
Kalpouzos, Grégoria; Eriksson, Johan; Sjölie, Daniel; Molin, Jonas; Nyberg, Lars
2010-10-08
Prospective memory (PM) denotes the ability to remember to perform actions in the future. It has been argued that standard laboratory paradigms fail to capture core aspects of PM. We combined functional MRI, virtual reality, eye-tracking and verbal reports to explore the dynamic allocation of neurocognitive processes during a naturalistic PM task where individuals performed errands in a realistic model of their residential town. Based on eye movement data and verbal reports, we modeled PM as an iterative loop of five sustained and transient phases: intention maintenance before target detection (TD), TD, intention maintenance after TD, action, and switching, the latter representing the activation of a new intention in mind. The fMRI analyses revealed continuous engagement of a top-down fronto-parietal network throughout the entire task, likely subserving goal maintenance in mind. In addition, a shift was observed from a perceptual (occipital) system while searching for places to go, to a mnemonic (temporo-parietal, fronto-hippocampal) system for remembering what actions to perform after TD. Updating of the top-down fronto-parietal network occurred at both TD and switching, the latter likely also being characterized by frontopolar activity. Taken together, these findings show how brain systems interact in a complementary manner during real-world PM, and support a more complete model of PM that can be applied to naturalistic PM tasks, which we named the PROspective MEmory DYnamic (PROMEDY) model because of its dynamics of both multi-phase iteration and the interactions of distinct neurocognitive networks.
TTSA: An Effective Scheduling Approach for Delay Bounded Tasks in Hybrid Clouds.
Yuan, Haitao; Bi, Jing; Tan, Wei; Zhou, MengChu; Li, Bo Hu; Li, Jianqiang
2017-11-01
The economy of scale provided by the cloud attracts a growing number of organizations and industrial companies to deploy their applications in cloud data centers (CDCs) and to provide services to users around the world. The uncertainty of arriving tasks makes it a big challenge for a private CDC to cost-effectively schedule delay-bounded tasks without exceeding their delay bounds. Unlike previous studies, this paper takes into account the cost minimization problem for a private CDC in hybrid clouds, where the energy price of the private CDC and the execution price of public clouds both show temporal diversity. This paper then proposes a temporal task scheduling algorithm (TTSA) to effectively dispatch all arriving tasks to the private CDC and public clouds. In each iteration of TTSA, the cost minimization problem is modeled as a mixed integer linear program and solved by a hybrid simulated-annealing particle-swarm optimization. The experimental results demonstrate that, compared with existing methods, the optimal or suboptimal scheduling strategy produced by TTSA can efficiently increase the throughput and reduce the cost of the private CDC while meeting the delay bounds of all the tasks.
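TTSA itself solves a mixed integer linear program with a hybrid simulated-annealing particle-swarm method; the sketch below conveys only the flavor of the temporal dispatch decision, using a plain simulated-annealing loop over an invented cost model with time-varying private and public prices:

import math, random

random.seed(0)
T = 24                                            # scheduling slots (hours)
arrivals = [random.randint(0, 40) for _ in range(T)]        # arriving tasks
cap_private = 25                                            # private capacity
price_private = [0.05 + 0.03 * math.sin(t / 4) for t in range(T)]
price_public = [0.12 + 0.05 * math.cos(t / 6) for t in range(T)]

def cost(x):
    # x[t] = tasks executed privately in slot t; the rest go public.
    return sum(x[t] * price_private[t] + (arrivals[t] - x[t]) * price_public[t]
               for t in range(T))

def anneal(iters=20000, temp=1.0, cooling=0.9995):
    x = [min(arrivals[t], cap_private) for t in range(T)]
    best, best_cost = x[:], cost(x)
    cur_cost = best_cost
    for _ in range(iters):
        t = random.randrange(T)
        cand = x[:]
        cand[t] = random.randint(0, min(arrivals[t], cap_private))
        c = cost(cand)
        # Accept improvements always, worsenings with Boltzmann probability.
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / temp):
            x, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        temp *= cooling
    return best, best_cost

print("best dispatch cost:", anneal()[1])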
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength that controls overall smoothness and directional weights that permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP under a minimum-variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with a maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
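The detectability index used as the objective here is commonly computed from the local MTF and NPS in the spatial-frequency domain. One standard form, for a non-prewhitening observer under local stationarity assumptions (the paper's exact observer model is not restated here), is

d'^2 = \frac{\left[ \iint |W_{task}(f)|^2 \, \mathrm{MTF}^2(f) \, df \right]^2}{\iint |W_{task}(f)|^2 \, \mathrm{MTF}^2(f) \, \mathrm{NPS}(f) \, df},

where W_task is the Fourier-domain task function of the signal to be discriminated; a prewhitening variant instead integrates W_task^2 MTF^2 / NPS directly.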
A new pivoting and iterative text detection algorithm for biomedical images.
Xu, Songhua; Krauthammer, Michael
2010-12-01
There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F-score of 0.60. We provide a C++ implementation of our algorithm freely available for academic use. Copyright © 2010 Elsevier Inc. All rights reserved.
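A minimal sketch of the projection-histogram idea (in Python rather than the authors' C++, with invented thresholds and a toy binary image) is:

import numpy as np

def split_by_projection(binary, axis, min_gap=2):
    # Split a binary image into bands separated by empty projection bins.
    proj = binary.sum(axis=axis)
    bands, start, gap = [], None, 0
    for i, v in enumerate(proj):
        if v > 0:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:
                bands.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:
        bands.append((start, len(proj)))
    return bands

def detect_text_regions(binary, depth=3):
    # Iteratively pivot between row and column projections, splitting
    # the image into successively smaller candidate text regions.
    regions = [(0, binary.shape[0], 0, binary.shape[1])]
    for d in range(depth):
        axis = 1 if d % 2 == 0 else 0      # rows first, then columns
        new = []
        for r0, r1, c0, c1 in regions:
            sub = binary[r0:r1, c0:c1]
            for a, b in split_by_projection(sub, axis=axis):
                if axis == 1:
                    new.append((r0 + a, r0 + b, c0, c1))
                else:
                    new.append((r0, r1, c0 + a, c0 + b))
        regions = new
    return regions

img = np.zeros((20, 30), dtype=int)
img[2:5, 3:12] = 1      # a fake text line
img[8:11, 5:25] = 1     # another line
print(detect_text_regions(img))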
Nikazad, T; Davidi, R; Herman, G. T.
2013-01-01
We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than order-of-magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
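For orientation, a minimal unaccelerated block-iterative projection sweep of the general family studied, here a Cimmino-type step per block with an optional summable perturbation, might look as follows; the block partition, damping and perturbation schedule are illustrative:

import numpy as np

def block_iterative(A, b, n_blocks=4, iters=200, perturb=0.0):
    # Cycle through row blocks; each step averages the projections onto
    # the hyperplanes of the current block. A perturbation of summable
    # magnitude (~perturb/k^2) mimics the perturbation-resilience setting.
    m, n = A.shape
    x = np.zeros(n)
    blocks = np.array_split(np.arange(m), n_blocks)
    for k in range(1, iters + 1):
        for idx in blocks:
            Ab, bb = A[idx], b[idx]
            resid = bb - Ab @ x
            x = x + (Ab.T @ (resid / (Ab ** 2).sum(1))) / len(idx)
        x = x + perturb / k**2 * np.random.default_rng(k).standard_normal(n)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = A @ rng.standard_normal(10)        # consistent system
x = block_iterative(A, b, perturb=0.01)
print("residual:", np.linalg.norm(A @ x - b))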
Modeling human decision making behavior in supervisory control
NASA Technical Reports Server (NTRS)
Tulga, M. K.; Sheridan, T. B.
1977-01-01
An optimal decision control model was developed, based primarily on a dynamic programming algorithm which looks at all the available task possibilities, charts an optimal trajectory, commits itself to the first step (i.e., follows the optimal trajectory during the next time period), and then iterates the calculation. A Bayesian estimator was included which estimates the tasks that might occur in the immediate future and provides this information to the dynamic programming routine. Preliminary trials comparing the human subject's performance to that of the optimal model show a great similarity, but indicate that the human skips certain movements which require a quick change in strategy.
2014-01-01
Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
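The destruction/construction loop of an iterated greedy method can be sketched as follows for a simplified discrete berth allocation objective (the ship data, berth count and parameters are invented; the paper's exact DBAP formulation is richer):

import random

random.seed(1)
ships = [(random.uniform(0, 10), random.uniform(1, 4)) for _ in range(12)]
N_BERTHS = 3                     # ships are (arrival time, handling time)

def total_service_time(seqs):
    total = 0.0
    for seq in seqs:             # one service sequence per berth
        t = 0.0
        for i in seq:
            arrival, handling = ships[i]
            t = max(t, arrival) + handling
            total += t - arrival          # waiting + handling time
    return total

def greedy_insert(seqs, ship):
    # Insert a ship at the (berth, slot) of least cost increase.
    best = None
    for b in range(N_BERTHS):
        for pos in range(len(seqs[b]) + 1):
            cand = [s[:] for s in seqs]
            cand[b].insert(pos, ship)
            c = total_service_time(cand)
            if best is None or c < best[0]:
                best = (c, cand)
    return best[1]

def iterated_greedy(iters=200, d=3):
    seqs = [[] for _ in range(N_BERTHS)]
    for i in range(len(ships)):          # initial greedy construction
        seqs = greedy_insert(seqs, i)
    best, best_cost = seqs, total_service_time(seqs)
    for _ in range(iters):
        cand = [s[:] for s in best]
        removed = random.sample(range(len(ships)), d)    # destruction
        for b in range(N_BERTHS):
            cand[b] = [i for i in cand[b] if i not in removed]
        for i in removed:                                # construction
            cand = greedy_insert(cand, i)
        c = total_service_time(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

print("best total service time:", iterated_greedy()[1])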
Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
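A small numerical sketch of the iterative perturbation idea, reusing one factorization of the unperturbed stiffness K0 as the preconditioner for the perturbed system (sizes and the perturbation are illustrative, and SciPy is assumed to be available for the Cholesky routines):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
K0 = M @ M.T + n * np.eye(n)        # unperturbed stiffness (SPD)
dK = 0.05 * (M + M.T)               # illustrative small perturbation
f = rng.standard_normal(n)

c = cho_factor(K0)                  # factorize once, reuse every iteration
x = cho_solve(c, f)                 # unperturbed solution as starting point
for k in range(50):
    # Fixed-point iteration: (K0 + dK) x = f  =>  x = K0^{-1} (f - dK x)
    x_new = cho_solve(c, f - dK @ x)
    if np.linalg.norm(x_new - x) < 1e-12 * np.linalg.norm(x_new):
        break
    x = x_new

print("residual:", np.linalg.norm((K0 + dK) @ x - f))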
Deployment Optimization for Embedded Flight Avionics Systems
2011-11-01
…the iterations, the best solution(s) that evolved out from the group is output as the result. Although metaheuristic algorithms are powerful, they…that other design constraints are met, ScatterD uses metaheuristic algorithms to seed the bin-packing algorithm. In particular, metaheuristic…metaheuristic algorithms to search the design space, and then using bin-packing to allocate software tasks to processors, ScatterD can generate…
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1986-01-01
Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian elimination on parallel computers; (3) three-dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete Cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.
2005-12-01
…passive and active versions of each fiber designed under this task. Crystal Fibre shall provide characteristics of the fabricated fiber to include core…passive version of multicore fiber iteration 2. SUBJECT TERMS: EOARD, laser physics, fibre lasers, photonic crystal, multicore, fiber laser. INTRODUCTION: This report describes the photonic crystal fibers developed under agreement No. FA8655-05-A-3046. All…
Methods for design and evaluation of integrated hardware-software systems for concurrent computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) a sparse matrix iterative solver in PISCES Fortran; (5) an image processing application of PISCES; and (6) a formal model of concurrent computation being developed.
ERIC Educational Resources Information Center
Scott, G. W.
2017-01-01
This study involves evaluation of a novel iterative group-based learning task developed to enable students to actively engage with assessment and feedback in order to improve the quality of their written work. The students were all in the final semester of their final year of study and enrolled on either BSc Zoology or BSc Marine and Freshwater…
Automated Scheduling Via Artificial Intelligence
NASA Technical Reports Server (NTRS)
Biefeld, Eric W.; Cooper, Lynne P.
1991-01-01
Artificial-intelligence software that automates scheduling developed in Operations Mission Planner (OMP) research project. Software used in both generation of new schedules and modification of existing schedules in view of changes in tasks and/or available resources. Approach based on iterative refinement. Although project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, also applicable to such terrestrial problems as scheduling production in factory.
LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging
NASA Astrophysics Data System (ADS)
De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace
2006-03-01
In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make the images more appropriate for visual inspection and diagnosis by a physician. This results in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate what advantages a non-linear, edge-preserving postfilter could have on lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a data set created by CPU-intensive GATE Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catté filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated from a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found for using edge-preserving filtering techniques over standard linear Gaussian filtering.
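Of the filters compared, the Perona-Malik scheme is easy to sketch; the following minimal Python version uses the exponential conduction function (the parameters are illustrative, and the periodic boundary handling via np.roll is a simplification):

import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, lam=0.2):
    # Anisotropic diffusion: smooth within regions, preserve edges.
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dN = np.roll(u, -1, 0) - u       # differences to the four neighbours
        dS = np.roll(u, 1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u, 1, 1) - u
        # Conduction is small across strong gradients, i.e. at edges.
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        u += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u

rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[:, 16:] = 1.0        # step edge
filt = perona_malik(img + 0.1 * rng.standard_normal(img.shape))
print("edge contrast kept:", filt[:, 20:].mean() - filt[:, :12].mean())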
Pérez-Pérez, Martín; Glez-Peña, Daniel; Fdez-Riverola, Florentino; Lourenço, Anália
2015-02-01
Document annotation is a key task in the development of Text Mining methods and applications. High quality annotated corpora are invaluable, but their preparation requires a considerable amount of resources and time. Although the existing annotation tools offer good user interaction interfaces to domain experts, project management and quality control abilities are still limited. Therefore, the current work introduces Marky, a new Web-based document annotation tool equipped to manage multi-user and iterative projects, and to evaluate annotation quality throughout the project life cycle. At the core, Marky is a Web application based on the open source CakePHP framework. User interface relies on HTML5 and CSS3 technologies. Rangy library assists in browser-independent implementation of common DOM range and selection tasks, and Ajax and JQuery technologies are used to enhance user-system interaction. Marky grants solid management of inter- and intra-annotator work. Most notably, its annotation tracking system supports systematic and on-demand agreement analysis and annotation amendment. Each annotator may work over documents as usual, but all the annotations made are saved by the tracking system and may be further compared. So, the project administrator is able to evaluate annotation consistency among annotators and across rounds of annotation, while annotators are able to reject or amend subsets of annotations made in previous rounds. As a side effect, the tracking system minimises resource and time consumption. Marky is a novel environment for managing multi-user and iterative document annotation projects. Compared to other tools, Marky offers a similar visually intuitive annotation experience while providing unique means to minimise annotation effort and enforce annotation quality, and therefore corpus consistency. Marky is freely available for non-commercial use at http://sing.ei.uvigo.es/marky. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Meadmore, Katie L; Hughes, Ann-Marie; Freeman, Chris T; Cai, Zhonglun; Tong, Daisy; Burridge, Jane H; Rogers, Eric
2012-06-07
Novel stroke rehabilitation techniques that employ electrical stimulation (ES) and robotic technologies are effective in reducing upper limb impairments. ES is most effective when it is applied to support the patients' voluntary effort; however, current systems fail to fully exploit this connection. This study builds on previous work using advanced ES controllers, and aims to investigate the feasibility of Stimulation Assistance through Iterative Learning (SAIL), a novel upper limb stroke rehabilitation system which utilises robotic support, ES, and voluntary effort. Five hemiparetic, chronic stroke participants with impaired upper limb function attended 18, 1 hour intervention sessions. Participants completed virtual reality tracking tasks whereby they moved their impaired arm to follow a slowly moving sphere along a specified trajectory. To do this, the participants' arm was supported by a robot. ES, mediated by advanced iterative learning control (ILC) algorithms, was applied to the triceps and anterior deltoid muscles. Each movement was repeated 6 times and ILC adjusted the amount of stimulation applied on each trial to improve accuracy and maximise voluntary effort. Participants completed clinical assessments (Fugl-Meyer, Action Research Arm Test) at baseline and post-intervention, as well as unassisted tracking tasks at the beginning and end of each intervention session. Data were analysed using t-tests and linear regression. From baseline to post-intervention, Fugl-Meyer scores improved, assisted and unassisted tracking performance improved, and the amount of ES required to assist tracking reduced. The concept of minimising support from ES using ILC algorithms was demonstrated. The positive results are promising with respect to reducing upper limb impairments following stroke, however, a larger study is required to confirm this.
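The trial-to-trial logic of iterative learning control can be sketched with the simplest update law u_{k+1} = u_k + L e_k on a toy linear plant; this illustrates only the principle, not the SAIL controller or its stimulation model:

import numpy as np

# Toy linear plant as a lower-triangular impulse-response matrix: y = G u.
T = 50
g = 0.8 ** np.arange(T)                        # illustrative impulse response
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(T)]
              for i in range(T)])
r = np.sin(np.linspace(0, np.pi, T))           # reference trajectory

u, L_gain = np.zeros(T), 0.5                   # input signal, learning gain
for trial in range(10):
    y = G @ u                                  # execute one trial
    e = r - y                                  # trial tracking error
    u = u + L_gain * e                         # ILC update: u_{k+1} = u_k + L e_k
    print(f"trial {trial:2d}   ||e|| = {np.linalg.norm(e):.4f}")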
NASA Astrophysics Data System (ADS)
Zhang, M.; Zheng, G. Z.; Zheng, W.; Chen, Z.; Yuan, T.; Yang, C.
2016-04-01
Magnetic confinement nuclear fusion experiments require various real-time control applications, such as plasma control. ITER has designed the Fast Plant System Controller (FPSC) for this job, and has provided hardware and software standards and guidelines for building an FPSC. In order to develop various real-time FPSC applications efficiently, a flexible real-time software framework called the J-TEXT real-time framework (JRTF) was developed by the J-TEXT tokamak team. JRTF allows developers to implement different functions as independent and reusable modules called Application Blocks (ABs). AB developers only need to focus on implementing the control tasks or the algorithms; the timing, scheduling, data sharing and eventing are handled by the JRTF pipelines. JRTF provides great flexibility for developing ABs: unit tests against ABs can be developed easily, and ABs can even be used in non-JRTF applications. JRTF also provides interfaces allowing JRTF applications to be configured and monitored at runtime. JRTF is compatible with ITER standard FPSC hardware and the ITER CODAC (Control, Data Access and Communication) Core software. It can be configured and monitored using the Experimental Physics and Industrial Control System (EPICS). Moreover, JRTF can be ported to different platforms and be integrated with supervisory control software other than EPICS. The paper presents the design and implementation of JRTF as well as brief test results.
Progress of IRSN R&D on ITER Safety Assessment
NASA Astrophysics Data System (ADS)
Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.
2012-08-01
The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support to the French "Autorité de Sûreté Nucléaire", is analysing the safety of ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D program in 2007 to support this safety assessment process. Priority has been given to four technical issues and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixtures explosion; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influent factors on detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks have been used in 2011 for the analysis of the ITER safety file. In the near future, this R&D global programme may be reoriented to account for the feedback of the latter analysis or for new knowledge.
NASA Astrophysics Data System (ADS)
Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.
2017-09-01
Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for a field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA, and to increase the parallel processing ability and scalability of the system.
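The iteration being reduced here is the classic Gauss-Newton loop of LK-style registration. A minimal translation-only version (with Fourier-domain subpixel shifting and an invented periodic test pattern) is:

import numpy as np

def fourier_shift(img, dy, dx):
    # Circularly shift an image by a (possibly fractional) offset.
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.exp(-2j * np.pi * (ky * dy + kx * dx))))

def lk_translation(ref, mov, n_iter=50):
    # Gauss-Newton (Lucas-Kanade style) estimation of the translation
    # that aligns `mov` to `ref`.
    p = np.zeros(2)                                   # (dy, dx)
    for _ in range(n_iter):
        shifted = fourier_shift(mov, *p)
        gy, gx = np.gradient(shifted)
        J = np.stack([gy.ravel(), gx.ravel()], axis=1)
        dp, *_ = np.linalg.lstsq(J, (ref - shifted).ravel(), rcond=None)
        p -= dp                    # shifting by +p moves image content by +p
        if np.linalg.norm(dp) < 1e-8:
            break
    return p

y, x = np.mgrid[0:64, 0:64]
ref = np.sin(2 * np.pi * 3 * x / 64) + np.cos(2 * np.pi * 2 * y / 64)
mov = fourier_shift(ref, 1.7, -2.3)
print("recovered shift:", lk_translation(ref, mov))   # ~ (-1.7, +2.3)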
Initial results for a 170 GHz high power ITER waveguide component test stand
NASA Astrophysics Data System (ADS)
Bigelow, Timothy; Barker, Alan; Dukes, Carl; Killough, Stephen; Kaufman, Michael; White, John; Bell, Gary; Hanson, Greg; Rasmussen, Dave
2014-10-01
A high power microwave test stand is being set up at ORNL to enable prototype testing of 170 GHz CW waveguide components being developed for the ITER ECH system. The ITER ECH system will utilize 63.5 mm diameter evacuated corrugated waveguide and will have 24 runs, each more than 150 m long. A 170 GHz, 1 MW class gyrotron is being developed by Communications and Power Industries and is nearing completion. An HVDC power supply, water-cooling and control system has been partially tested in preparation for the arrival of the gyrotron. The power supply and water-cooling system are being designed to operate for >3600 second pulses to simulate the operating conditions planned for the ITER ECH system. The gyrotron Gaussian beam output has a single mirror for focusing into a 63.5 mm corrugated waveguide in the vertical plane. The output beam and mirror are enclosed in an evacuated duct with absorber for stray radiation. Beam alignment with the waveguide is a critical task, so a combination of mirror tilt adjustments and a bellows for offsets will be provided. Analysis of thermal patterns on thin witness plates will provide gyrotron mode purity and waveguide coupling efficiency data. Pre-prototype waveguide components and two dummy loads are available for initial operational testing of the gyrotron. ORNL is managed by UT-Battelle, LLC, for the U.S. Dept. of Energy under Contract DE-AC-05-00OR22725.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-controlled parameters were identified at interdiscipline interfaces to optimize structural system performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance discipline integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-controlled parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.
NASA Technical Reports Server (NTRS)
Lala, J. H.; Smith, T. B., III
1983-01-01
The software developed for the Fault-Tolerant Multiprocessor (FTMP) is described. The FTMP executive is a timer-interrupt driven dispatcher that schedules iterative tasks which run at 3.125, 12.5, and 25 Hz. Major tasks which run under the executive include system configuration control, flight control, and display. The flight control task includes autopilot and autoland functions for a jet transport aircraft. System displays include status displays of all hardware elements (processors, memories, I/O ports, buses), failure log displays showing transient and hard faults, and an autopilot display. All software is written in a higher-order language (AED, an ALGOL derivative). The executive is a fully distributed general-purpose executive which automatically balances the load among available processor triads. Provisions for graceful performance degradation under processing overload are an integral part of the scheduling algorithms.
A task-invariant cognitive reserve network.
Stern, Yaakov; Gazes, Yunglin; Razlighi, Qolomreza; Steffener, Jason; Habeck, Christian
2018-05-14
The concept of cognitive reserve (CR) can explain individual differences in susceptibility to cognitive or functional impairment in the presence of age- or disease-related brain changes. Epidemiologic evidence indicates that CR helps maintain performance in the face of pathology across multiple cognitive domains. We therefore tried to identify a single, "task-invariant" CR network that is active during the performance of many disparate tasks. In imaging data acquired from 255 individuals aged 20-80 while performing 12 different cognitive tasks, we used an iterative approach to derive a multivariate network that was expressed during the performance of all tasks, and whose degree of expression correlated with IQ, a proxy for CR. When applied to held-out data or forward-applied to fMRI data from an entirely different activation task, network expression correlated with IQ. Expression of the CR pattern accounted for additional variance in fluid reasoning performance over and above the influence of cortical thickness, and also moderated between cortical thickness and reasoning performance, consistent with the behavior of a CR network. The identification of a task-invariant CR network supports the idea that life experiences may result in brain processing differences that might provide reserve against age- or disease-related changes across multiple tasks. Copyright © 2018. Published by Elsevier Inc.
Designing for Temporal Awareness: The Role of Temporality in Time-Critical Medical Teamwork
Kusunoki, Diana S.; Sarcevic, Aleksandra
2016-01-01
This paper describes the role of temporal information in emergency medical teamwork and how time-based features can be designed to support the temporal awareness of clinicians in this fast-paced and dynamic environment. Engagement in iterative design activities with clinicians over the course of two years revealed a strong need for time-based features and mechanisms, including timestamps for tasks based on absolute time and automatic stopclocks measuring time by counting up since task performance. We describe in detail the aspects of temporal awareness central to clinicians’ awareness needs and then provide examples of how we addressed these needs through the design of a shared information display. As an outcome of this process, we define four types of time representation techniques to facilitate the design of time-based features: (1) timestamps based on absolute time, (2) timestamps relative to the process start time, (3) time since task performance, and (4) time until the next required task. PMID:27478880
Neal, Andrew; Kwantes, Peter J
2009-04-01
The aim of this article is to develop a formal model of conflict detection performance. Our model assumes that participants iteratively sample evidence regarding the state of the world and accumulate it over time. A decision is made when the evidence reaches a threshold that changes over time in response to the increasing urgency of the task. Two experiments were conducted to examine the effects of conflict geometry and timing on response proportions and response time. The model is able to predict the observed pattern of response times, including a nonmonotonic relationship between distance at point of closest approach and response time, as well as effects of angle of approach and relative velocity. The results demonstrate that evidence accumulation models provide a good account of performance on a conflict detection task. Evidence accumulation models are a form of dynamic signal detection theory, allowing for the analysis of response times as well as response proportions, and can be used for simulating human performance on dynamic decision tasks.
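A minimal simulation of such an evidence accumulation model, with a threshold that collapses over time to capture increasing urgency (all parameter values are invented), is:

import numpy as np

def simulate_trial(drift, threshold0, collapse, dt=0.01, noise=1.0,
                   max_t=10.0, rng=None):
    # Accumulate noisy evidence until it crosses a bound that decays
    # over time; return the response and the response time.
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        a = threshold0 * np.exp(-collapse * t)   # collapsing bound
        if x >= a:
            return "conflict", t
        if x <= -a:
            return "no conflict", t
    return "no response", max_t

rng = np.random.default_rng(0)
trials = [simulate_trial(drift=0.8, threshold0=2.0, collapse=0.3, rng=rng)
          for _ in range(500)]
rts = [t for resp, t in trials if resp == "conflict"]
print("P(conflict) =", len(rts) / len(trials), " mean RT =", np.mean(rts))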
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. Empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
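The core sifting loop of an iterative filtering decomposition can be sketched as follows; a moving-average kernel stands in for the LSEK low-pass filter the paper actually advocates, and all widths and counts are illustrative:

import numpy as np

def lowpass(sig, width):
    # Simple moving-average low-pass filter (the LSEK would supply a
    # more refined kernel with better time-frequency localization).
    return np.convolve(sig, np.ones(width) / width, mode="same")

def iterative_filtering(sig, width=21, n_components=3, n_sift=30):
    # Peel off oscillatory components: repeatedly subtract the local
    # mean (low-pass output) until a component stabilizes, then remove it.
    components, residual = [], sig.astype(float).copy()
    for _ in range(n_components):
        comp = residual.copy()
        for _ in range(n_sift):
            comp = comp - lowpass(comp, width)    # sifting iteration
        components.append(comp)
        residual = residual - comp
    return components, residual

t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 5 * t) + 0.1 * t
comps, res = iterative_filtering(sig)
for i, c in enumerate(comps):
    print(f"component {i}: rms = {c.std():.3f}")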
NASA Astrophysics Data System (ADS)
Korotkova, T. I.; Popova, V. I.
2017-11-01
A generalized mathematical model of decision making for planning and mode selection to provide the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. The effectiveness, reliability and safety of such a complex system are evaluated according to several indicators simultaneously, in particular pressure, flow and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. An agreed solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The optimal operating mode of the complex heat supply system is chosen on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision making includes, in particular, periodic adjustments, where necessary, guaranteeing optimal safety, reliability and efficiency of the system as a whole during operation. The degree of accuracy of the solution, for example the degree of deviation of the internal air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and improves the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at sources and pumping stations.
Parallel fast multipole boundary element method applied to computational homogenization
NASA Astrophysics Data System (ADS)
Ptaszny, Jacek
2018-01-01
In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for the 3D elasticity problem are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with an adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.
Sampling-Based Coverage Path Planning for Complex 3D Structures
2012-09-01
…one such task, in which a single robot must sweep its end effector over the entirety of a known workspace. For two-dimensional environments, optimal…structures. First, we introduce a new algorithm for planning feasible coverage paths. It is more computationally efficient in problems of complex geometry…iteratively shortens and smooths a feasible coverage path; robot configurations are adjusted without violating any coverage constraints. Third, we propose…
SimCenter Hawaii Technology Enabled Learning and Intervention Systems
2008-01-01
…manikin training in acquiring triage skills and self-efficacy. Phase II includes the development of the VR training scenarios, which includes iterative…Task A5. Skills acquisition relative to self-efficacy study: see Appendix F, Mass Casualty Triage Training using Human Patient Simulators Improves Speed and Accuracy of First…
Low-Cost, Net-Shape Ceramic Radial Turbine Program
1985-05-01
…processing iterations. Program management and materials characterization were conducted at Garrett Turbine Engine Company (GTEC); test bar and rotor…automotive gas turbine engine rotor development efforts at ACC. PREFACE: This is the final technical report of the Low-Cost, Net-Shape Ceramic…
Reverse engineering of integrated circuits
Chisholm, Gregory H.; Eckmann, Steven T.; Lain, Christopher M.; Veroff, Robert L.
2003-01-01
Software and a method therein to analyze circuits. The software comprises several tools, each of which perform particular functions in the Reverse Engineering process. The analyst, through a standard interface, directs each tool to the portion of the task to which it is most well suited, rendering previously intractable problems solvable. The tools are generally used iteratively to produce a successively more abstract picture of a circuit, about which incomplete a priori knowledge exists.
System Development and Evaluation Technology: State of the Art of Manned System Measurement
1985-02-01
considered " applicable to the assessment of training effectiveness. They include the classic *-: Solomon four - group design; iterative adaptation to...evaluate the performance of infantrymen using small arms weapons (Klein, 1969) were grouped into four areas for purposes of thisevauaton:accuracy...developed for i four naval ratings. This checklist was a detailed comprehensive checklist of the * tasks performed in that rating. For this study
Decision Analysis of the Benefits and Costs of Screening for Prostate Cancer
2014-08-01
…watchful waiting (WW) as experienced in the PIVOT study or active surveillance (AS), radical prostatectomy (RP), radiation therapy (IMRT), and brachytherapy…strategies for low-risk, clinically localized prostate cancer. In the initial iteration of this model, the strategies studied included active surveillance…with regard to modeling PSA kinetics. Task 1.4: Calibrate the model using data from published studies of the natural history of conservatively-treated…
NASA Astrophysics Data System (ADS)
Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji
Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve performance superior to Espresso and previous graph-based WSD methods, even though they have fewer parameters and are easy to calibrate.
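The regularized Laplacian kernel mentioned can be computed directly; a minimal sketch on an invented toy graph (the paper's graphs are instance-pattern co-occurrence graphs) is:

import numpy as np

def regularized_laplacian(A, beta=0.1):
    # Regularized Laplacian kernel R = (I + beta * L)^{-1}, with
    # L = D - A the combinatorial graph Laplacian. The kernel damps
    # long-range propagation and thereby mitigates the drift toward
    # dominant hubs seen in HITS-like power iteration.
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.inv(np.eye(len(A)) + beta * L)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # symmetric toy adjacency
R = regularized_laplacian(A, beta=0.2)
seed = np.array([1.0, 0.0, 0.0, 0.0])       # seed instance
print("relatedness to seed:", R @ seed)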
Virtual reality cataract surgery training: learning curves and concurrent validity.
Selvander, Madeleine; Åsman, Peter
2012-08-01
To investigate initial learning curves on a virtual reality (VR) eye surgery simulator and whether achieved skills are transferable between tasks. Thirty-five medical students were randomized to complete ten iterations on either the VR Capsulorhexis module (group A) or the Cataract navigation training module (group B) and then two iterations on the other module. Learning curves were compared between groups. The second Capsulorhexis video was saved and evaluated with the performance rating tool Objective Structured Assessment of Cataract Surgical Skill (OSACSS). The students' stereoacuity was examined. Both groups demonstrated significant improvements in performance over the 10 iterations: group A for all parameters analysed, including score (p < 0.0001), time (p < 0.0001) and corneal damage (p = 0.0003); group B for time (p < 0.0001) and corneal damage (p < 0.0001), but not for score (p = 0.752). Training on one module did not improve performance on the other. Capsulorhexis score correlated significantly with evaluation of the videos using the OSACSS performance rating tool. For stereoacuity < and ≥120 seconds of arc, the sum of both modules' second-iteration scores was 73.5 and 41.0, respectively (p = 0.062). An initial rapid improvement in performance on a simulator with repeated practice was shown. For capsulorhexis, 10 iterations with only simulator feedback are not enough to reach a plateau for overall score. Skills transfer between modules was not found, suggesting benefits from training on both modules. Stereoacuity may be of importance in the recruitment and training of new cataract surgeons. Additional studies are needed to investigate this further. Concurrent validity was found for the Capsulorhexis module. © 2010 The Authors. Acta Ophthalmologica © 2010 Acta Ophthalmologica Scandinavica Foundation.
How children perceive fractals: Hierarchical self-similarity and cognitive development
Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh
2014-01-01
The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted for by second graders' impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: while the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884
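The recursive/iterative distinction drawn here can be made concrete with nested lists: a recursive rule spawns a new level of embedding, while an iterative rule inserts items within an existing level. A toy rendering (not the study's visual stimuli):

def recursive_step(tree):
    # Recursive rule: every leaf spawns a new, deeper level.
    if isinstance(tree, list):
        return [recursive_step(t) for t in tree]
    return [tree, tree]          # embed the leaf one level down

def iterative_step(tree):
    # Iterative rule: add items within existing levels, no new depth.
    if isinstance(tree, list) and all(not isinstance(t, list) for t in tree):
        return tree + [tree[-1]]          # insert a sibling at this level
    return [iterative_step(t) if isinstance(t, list) else t for t in tree]

def depth(tree):
    return 1 + max((depth(t) for t in tree if isinstance(t, list)), default=0)

h = [["a", "a"], ["a", "a"]]
print(depth(recursive_step(h)), depth(h))   # recursion deepens: 3 vs 2
print(depth(iterative_step(h)))             # iteration keeps depth: 2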
Multistep-Ahead Air Passengers Traffic Prediction with Hybrid ARIMA-SVMs Models
Ming, Wei; Xiong, Tao
2014-01-01
Hybrid ARIMA-SVMs prediction models have been established recently, taking advantage of the respective strengths of ARIMA and SVMs models in linear and nonlinear modeling. Building upon such hybrid models, this study extends them to multistep-ahead prediction of air passenger traffic using the two most common multistep-ahead prediction strategies, namely the iterated strategy and the direct strategy. Additionally, the effectiveness of data preprocessing approaches, such as deseasonalization and detrending, is investigated for both strategies. Real data sets comprising four selected airlines' monthly series were collected to assess the effectiveness of the proposed approach. Empirical results demonstrate that the direct strategy performs better than the iterated one for long-term prediction, while the iterated strategy performs better for short-term prediction. Furthermore, both deseasonalization and detrending can significantly improve the prediction accuracy under both strategies, indicating the necessity of data preprocessing. As such, this study serves as a reference for planners in the air transportation industry on how to tackle multistep-ahead prediction tasks with either prediction strategy. PMID:24723814
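The two strategies differ only in how models and inputs are arranged. A minimal sketch (synthetic AR(2) data and ordinary least squares standing in for the paper's ARIMA-SVM stage; all values hypothetical) contrasting them:

import numpy as np

rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(2, 300):                      # synthetic AR(2) series
    y[t] = 0.6 * y[t-1] - 0.3 * y[t-2] + rng.normal(scale=0.1)

p, H = 2, 5                                  # lag order, forecast horizon
X = np.column_stack([y[p-1:-1], y[p-2:-2]])  # rows: (y[t-1], y[t-2])

def fit(X, target):                          # ordinary least squares
    return np.linalg.lstsq(X, target, rcond=None)[0]

# Iterated strategy: one 1-step model, predictions fed back as inputs.
w = fit(X[:-H], y[p:-H])
hist = list(y[-H-p:-H])
iterated = []
for _ in range(H):
    yhat = w @ [hist[-1], hist[-2]]
    iterated.append(yhat)
    hist.append(yhat)

# Direct strategy: a separate h-step model for each horizon h.
direct = [fit(X[:-H], y[p+h-1:len(y)-H+h-1]) @ X[-H] for h in range(1, H+1)]
print("iterated:", np.round(iterated, 3))
print("direct:  ", np.round(direct, 3))

The iterated strategy reuses a single model but compounds its errors across the horizon; the direct strategy avoids error feedback at the cost of fitting H models, which matches the short-term/long-term trade-off reported in the abstract.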
Scoping studies of shielding to reduce the shutdown dose rates in the ITER ports
NASA Astrophysics Data System (ADS)
Juárez, R.; Guirao, J.; Pampin, R.; Loughlin, M.; Polunovskiy, E.; Le Tonqueze, Y.; Bertalot, L.; Kolsek, A.; Ogando, F.; Udintsev, V.; Walsh, M.
2018-07-01
The planned in situ maintenance tasks in the ITER port interspace are fundamental to ensure the operation of equipment to control, evaluate and optimize the plasma performance during the entire facility lifetime. They are subject to a limit on shutdown dose rates (SDDR) of 100 µSv h‑1 after 10⁶ s of cooling time, which, together with the application of ALARA, is nowadays a design driver for the port plugs. Three conceptual shielding proposals outside the ITER ports are studied in this work to support the achievement of this objective. Considered one by one, they offer reductions ranging from 25% to 50%, which are rather significant. This paper shows that, by combining these shields, SDDR as low as 57 µSv h‑1 can be achieved with a local approach considering only radiation from one port (no cross-talk from neighbouring ports). The locally evaluated SDDR are well below the limit, which is an essential prerequisite for achieving 100 µSv h‑1 in a global analysis including all contributions. Further studies will have to deal with a realistic port plug design and the cross-talk from neighbouring ports.
A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images
Xu, Songhua; Krauthammer, Michael
2010-01-01
There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, with an F score of 0.60, and performs better than comparable approaches. Further, we show that iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803
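The core idea is that rows and columns containing ink produce peaks in axis-wise sums of a binarized image. A minimal sketch (toy binary image; the paper's pivoting and iterative refinement steps are omitted), in Python rather than the authors' C++:

import numpy as np

def segment(profile):
    # split a 1-D ink profile into runs of consecutive non-empty bins
    runs, start = [], None
    for i, v in enumerate(profile > 0):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start, i)); start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs

img = np.zeros((20, 40), dtype=int)         # toy binary image (1 = ink)
img[3:6, 5:15] = 1                          # a word
img[3:6, 20:33] = 1                         # another word, same line
img[10:14, 8:30] = 1                        # a second text line

for r0, r1 in segment(img.sum(axis=1)):     # row histogram -> text lines
    for c0, c1 in segment(img[r0:r1].sum(axis=0)):   # column histogram -> words
        print(f"text box: rows {r0}-{r1}, cols {c0}-{c1}")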
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message-passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During this process, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
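In the max-sum semiring ("product" is addition and "marginal" is a maximum), the two-factor update takes a particularly simple form. A minimal sketch (a toy chain of three random factors; not the paper's general semiring implementation):

import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=(3, 4))   # factor over (x, y)
g = rng.normal(size=(4, 5))   # factor over (y, z)
h = rng.normal(size=(5, 3))   # factor over (z, w)

for it in range(100):
    # equalize max-marginals of f and g over the shared variable y,
    # keeping f + g pointwise constant
    d = (g.max(axis=1) - f.max(axis=0)) / 2.0
    f += d[None, :]; g -= d[:, None]
    # same for g and h over the shared variable z
    d = (h.max(axis=1) - g.max(axis=0)) / 2.0
    g += d[None, :]; h -= d[:, None]
    bound = f.max() + g.max() + h.max()   # monotone upper bound on the
    if it % 20 == 0:                      # max-sum "partition function"
        print(f"iter {it:3d}  bound = {bound:.6f}")

Each update leaves f(x,y) + g(y,z) unchanged while replacing both max-marginals over y with their average, so the printed bound never increases, which is the monotonicity property the abstract describes.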
NASA Astrophysics Data System (ADS)
Li, Zhengguang; Lai, Siu-Kai; Wu, Baisheng
2018-07-01
Determining eigenvector derivatives is a challenging task due to the singularity of the coefficient matrices of the governing equations, especially for structural dynamic systems with repeated eigenvalues. An effective strategy is proposed to construct a non-singular coefficient matrix, which can be directly used to obtain the eigenvector derivatives with distinct and repeated eigenvalues. This approach also has the advantage that it requires only the eigenvalues and eigenvectors of interest, without solving the particular solutions of the eigenvector derivatives. The Symmetric Quasi-Minimal Residual (SQMR) method is then adopted to solve the governing equations; only the existing factored (shifted) stiffness matrix from an iterative eigensolution, such as the subspace iteration method or the Lanczos algorithm, is utilized. The present method can deal with both simple and repeated eigenvalues in a unified manner. Three numerical examples are given to illustrate the accuracy and validity of the proposed algorithm. Highly accurate approximations to the eigenvector derivatives are obtained within a few iteration steps, significantly reducing the computational effort. The method can be incorporated into a coupled eigensolver/derivative software module and is, in particular, applicable to finite element models with large sparse matrices.
Development of parallel algorithms for electrical power management in space applications
NASA Technical Reports Server (NTRS)
Berry, Frederick C.
1989-01-01
The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. The independent local problems produce results for voltage and power, which are then passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine whether any correction is needed in the local problems. The coordinator problem is also solved by an iterative method much like the local problems, again using the Newton-Raphson method. Each iteration at the coordination level therefore produces new values for the local problems, which must be solved again, along with the coordinator problem, until the convergence conditions are met.
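At the heart of each local problem is the Newton-Raphson iteration itself: linearize the mismatch equations with the Jacobian and solve for a correction. A minimal sketch (a hypothetical two-unknown bus with made-up susceptance and load values, not the report's system):

import numpy as np

def F(x):
    # toy mismatch equations for one unknown angle th and voltage v
    th, v = x
    return np.array([
        10.0 * v * np.sin(th) + 1.0,                    # active power
        -10.0 * v * np.cos(th) + 10.0 * v**2 + 0.5,     # reactive power
    ])

def J(x):
    th, v = x
    return np.array([
        [10.0 * v * np.cos(th),  10.0 * np.sin(th)],
        [10.0 * v * np.sin(th), -10.0 * np.cos(th) + 20.0 * v],
    ])

x = np.array([0.0, 1.0])                                # flat start
for k in range(10):
    dx = np.linalg.solve(J(x), -F(x))                   # linearize and solve
    x += dx
    if np.linalg.norm(dx) < 1e-12:
        break
print(f"solution after {k+1} iterations: theta={x[0]:.6f}, V={x[1]:.6f}")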
Automatic extraction of the mid-sagittal plane using an ICP variant
NASA Astrophysics Data System (ADS)
Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus
2008-03-01
Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
Compressed sensing with gradient total variation for low-dose CBCT reconstruction
NASA Astrophysics Data System (ADS)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung
2015-06-01
This paper describes the improvement of convergence speed with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this task we combine the GTV with a TV-norm regularization term to exploit sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV, is more computationally efficient, and converges faster to the desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that iteratively seeks a solution in the direction of the projected gradient while enforcing non-negativity of the solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, with the number of projections reduced by 50% relative to the FDK algorithm, in reconstructing the chest phantom images. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter, and motion artifacts of CBCT reconstruction.
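A minimal sketch of the generic projected-gradient building block (a 1-D toy problem with a smoothed TV penalty and made-up sizes; not the paper's GTV algorithm):

import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 40
x_true = np.zeros(n); x_true[20:50] = 1.0; x_true[70:90] = 0.5
A = rng.normal(size=(m, n)) / np.sqrt(m)      # random projection matrix
b = A @ x_true + 0.01 * rng.normal(size=m)    # noisy, undersampled data

lam, eps, step = 0.01, 1e-4, 0.05
x = np.zeros(n)
for k in range(1000):
    d = np.diff(x)
    w = d / np.sqrt(d**2 + eps)                # gradient of smoothed TV
    tv_grad = np.zeros(n)
    tv_grad[:-1] -= w
    tv_grad[1:]  += w
    grad = 2 * A.T @ (A @ x - b) + lam * tv_grad
    x = np.maximum(x - step * grad, 0.0)       # gradient step, then project
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))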
Canopy, Erin; Evans, Matt; Boehler, Margaret; Roberts, Nicole; Sanfey, Hilary; Mellinger, John
2015-10-01
Endoscopic retrograde cholangiopancreatography is a challenging procedure performed by surgeons and gastroenterologists. We employed cognitive task analysis to identify steps and decision points for this procedure. Standardized interviews were conducted with expert gastroenterologists (7) and surgeons (4) from 4 institutions. A procedural step and cognitive decision point protocol was created from audio-taped transcriptions and was refined by 5 additional surgeons. Conceptual elements, sequential actions, and decision points were iterated for 5 tasks: patient preparation, duodenal intubation, selective cannulation, imaging interpretation with related therapeutic intervention, and complication management. A total of 180 steps were identified. Gastroenterologists identified 34 steps not identified by surgeons, and surgeons identified 20 steps not identified by gastroenterologists. The findings suggest that for complex procedures performed by diverse practitioners, more experts may help delineate distinctive emphases differentiated by training background and type of practice. Copyright © 2015 Elsevier Inc. All rights reserved.
Gaze and Feet as Additional Input Modalities for Interacting with Geospatial Interfaces
NASA Astrophysics Data System (ADS)
Çöltekin, A.; Hempel, J.; Brychtova, A.; Giannopoulos, I.; Stellmach, S.; Dachselt, R.
2016-06-01
Geographic Information Systems (GIS) are complex software environments and we often work with multiple tasks and multiple displays when we work with GIS. However, user input is still limited to mouse and keyboard in most workplace settings. In this project, we demonstrate how the use of gaze and feet as additional input modalities can overcome time-consuming and annoying mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning of map visualizations. We first collected appropriate gestures in a preliminary user study with a small group of experts, and designed two interaction concepts based on their input. After the implementation, we evaluated the two concepts comparatively in another user study to identify strengths and shortcomings in both. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.
[Computers in biomedical research: I. Analysis of bioelectrical signals].
Vivaldi, E A; Maldonado, P
2001-08-01
A personal computer equipped with an analog-to-digital conversion card is able to input, store and display signals of biomedical interest. These signals can additionally be submitted to ad-hoc software for analysis and diagnosis. Data acquisition is based on the sampling of a signal at a given rate and amplitude resolution. The automation of signal processing conveys syntactic aspects (data transduction, conditioning and reduction); and semantic aspects (feature extraction to describe and characterize the signal and diagnostic classification). The analytical approach that is at the basis of computer programming allows for the successful resolution of apparently complex tasks. Two basic principles involved are the definition of simple fundamental functions that are then iterated and the modular subdivision of tasks. These two principles are illustrated, respectively, by presenting the algorithm that detects relevant elements for the analysis of a polysomnogram, and the task flow in systems that automate electrocardiographic reports.
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange among the nodes; for this, we apply an average consensus algorithm to perform the global computations efficiently. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
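The local/global split can be seen already in a centralized simulation of the power method. A minimal sketch (hypothetical sizes; the consensus round is idealized as an exact average):

import numpy as np

rng = np.random.default_rng(3)
n_nodes, dim = 4, 8
C = rng.normal(size=(dim, dim)); C = C @ C.T      # sample covariance matrix
rows = np.array_split(np.arange(dim), n_nodes)    # row block held by each node

v = np.ones(dim) / np.sqrt(dim)
for _ in range(100):
    # local task: node i computes its block of the product C v
    blocks = [C[r] @ v for r in rows]
    # global task: the squared norm is a sum of node-local sums, which an
    # average consensus round recovers (idealized here as an exact average)
    norm2 = n_nodes * np.mean([np.sum(b ** 2) for b in blocks])
    v = np.concatenate(blocks) / np.sqrt(norm2)
print("estimated top eigenvalue:", v @ C @ v)
print("numpy reference:         ", np.linalg.eigvalsh(C)[-1])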
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Matijevic, J. R.
1987-01-01
Novel system engineering techniques have been developed and applied to establishing structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical data base was developed which drew on: (1) automation/robot-technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies, while meeting programmatic objectives and schedule/cost constraints.
Clinical Reasoning Tasks and Resident Physicians: What Do They Reason About?
McBee, Elexis; Ratcliffe, Temple; Goldszmidt, Mark; Schuwirth, Lambert; Picho, Katherine; Artino, Anthony R; Masel, Jennifer; Durning, Steven J
2016-07-01
A framework of clinical reasoning tasks thought to occur in a clinical encounter was recently developed. It proposes that diagnostic and therapeutic reasoning comprise 24 tasks. The authors of the current study used this framework to investigate what internal medicine residents reason about when they approach straightforward clinical cases. Participants viewed three video-recorded clinical encounters portraying common diagnoses. After each video, participants completed a post-encounter form and think-aloud protocol. Two authors analyzed transcripts from the think-aloud protocols using a constant comparative approach. They conducted iterative coding of the utterances, classifying each according to the framework of clinical reasoning tasks. They evaluated the type, number, and sequence of tasks the residents used. Ten residents participated in the study in 2013-2014. Across all three cases, the residents employed 14 clinical reasoning tasks. Nearly all coded tasks were associated with framing the encounter or diagnosis. The order in which residents used specific tasks varied. The average number of tasks used per case was as follows: Case 1, 4.4 (range 1-10); Case 2, 4.6 (range 1-6); and Case 3, 4.7 (range 1-7). The residents used some tasks repeatedly; the average number of task utterances was 11.6, 13.2, and 14.7 for, respectively, Case 1, 2, and 3. Results suggest that the use of clinical reasoning tasks occurs in a varied, not sequential, process. The authors provide suggestions for strengthening the framework to more fully encompass the spectrum of reasoning tasks that occur in residents' clinical encounters.
Voltage scheduling for low power/energy
NASA Astrophysics Data System (ADS)
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since voltage is quadratically related to power. This dissertation considers the problem of lowering the supply voltage at (i) the system level and (ii) the behavioral level. At the system level, the voltage of a variable-voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size is also presented, given information about the task size (maximum, minimum), execution time (best case, worst case) and deadlines. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining throughput. Such a scheme has the advantage of allowing modules on critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial-time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimized. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
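The Lagrange-multiplier step can be illustrated on a stripped-down version of the problem. A minimal sketch (a simplified energy model with made-up workloads c_i, switching activities a_i and one shared deadline D; not the dissertation's algorithm): minimizing sum_i a_i c_i s_i^2 subject to sum_i c_i/s_i = D gives task speeds s_i proportional to a_i^(-1/3).

import numpy as np

rng = np.random.default_rng(4)
c = rng.uniform(1, 5, size=6)      # task workloads (cycles), made up
a = rng.uniform(0.5, 2, size=6)    # switching activities, made up
D = 10.0                           # shared deadline

# stationarity of the Lagrangian gives s_i^3 = lambda / (2 a_i),
# and the deadline constraint then fixes lambda:
s = a ** (-1.0/3.0) * np.sum(c * a ** (1.0/3.0)) / D
print("speeds:", np.round(s, 3), " deadline used:", np.sum(c / s))

energy = np.sum(a * c * s**2)
for _ in range(5):                 # random feasible points cannot beat it
    p = s * rng.uniform(0.8, 1.25, size=6)
    p *= np.sum(c / p) / D         # rescale back onto the deadline
    assert np.sum(a * c * p**2) >= energy - 1e-9
print("minimum energy:", round(energy, 4))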
NDARC NASA Design and Analysis of Rotorcraft - Input, Appendix 4
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NDARC code performs design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance analysis, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. The principal tasks (sizing, mission analysis, flight performance analysis) are shown in the figure as boxes with heavy borders. Heavy arrows show control of subordinate tasks. The aircraft description consists of all the information, input and derived, that defines the aircraft. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. This information can be the result of the sizing task; can come entirely from input, for a fixed model; or can come from the sizing task in a previous case or previous job. The aircraft description information is available to all tasks and all solutions. The sizing task determines the dimensions, power, and weight of a rotorcraft that can perform a specified set of design conditions and missions. The aircraft size is characterized by parameters such as design gross weight, weight empty, rotor radius, and engine power available. The relations between dimensions, power, and weight generally require an iterative solution. From the design flight conditions and missions, the task can determine the total engine power or the rotor radius (or both power and radius can be fixed), as well as the design gross weight, maximum takeoff weight, drive system torque limit, and fuel tank capacity. For each propulsion group, the engine power or the rotor radius can be sized. Missions are defined for the sizing task, and for the mission performance analysis. A mission consists of a number of mission segments, for which time, distance, and fuel burn are evaluated. For the sizing task, certain missions are designated to be used for design gross weight calculations; for transmission sizing; and for fuel tank sizing. The mission parameters include mission takeoff gross weight and useful load. For specified takeoff fuel weight with adjustable segments, the mission time or distance is adjusted so the fuel required for the mission equals the takeoff fuel weight. The mission iteration is on fuel weight or energy. Flight conditions are specified for the sizing task, and for the flight performance analysis. For the sizing task, certain flight conditions are designated to be used for design gross weight calculations; for transmission sizing; for maximum takeoff weight calculations; and for anti-torque or auxiliary thrust rotor sizing. The flight condition parameters include gross weight and useful load. For flight conditions and mission takeoff, the gross weight can be maximized, such that the power required equals the power available. A flight state is defined for each mission segment and each flight condition. The aircraft performance can be analyzed for the specified state, or a maximum effort performance can be identified. The maximum effort is specified in terms of a quantity such as best endurance or best range, and a variable such as speed, rate of climb, or altitude.
Five Bit, Five Gigasample TED Analog-to-Digital Converter Development.
1981-06-01
pliers. TRW uses two sources at present: materials grown by the Horizontal Bridgman technique from Crystal Specialties, and Czochralski from MRI. The...the circuit modelling and circuit design tasks. A number of design iterations were required to arrive at a satisfactory design. In order to make...made by modeling the TELD as a voltage-controlled current generator with a built-in time delay between impressed voltage and output current. Based on
Self-Encoded Spread Spectrum Modulation for Robust Anti-Jamming Communication
2009-06-30
experience in both theoretical and experimental aspects of RF and optical communications, multi-user CDMA systems, transmitter precoding and code...the performance of DS- and FH-SESS modulation in the presence of worst-case jamming, develop innovative SESS schemes that further exploit time and...Determine BER and AJ performance of the feedback and iterative detectors in DS-SESS under pulsed-noise and multi-tone jamming • Task 2: Develop a scheme
Phased models for evaluating the performability of computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.; Meyer, J. F.
1979-01-01
A phase-by-phase modelling technique is introduced to evaluate a fault tolerant system's ability to execute different sets of computational tasks during different phases of the control process. Intraphase processes are allowed to differ from phase to phase. The probabilities of interphase state transitions are specified by interphase transition matrices. Based on constraints imposed on the intraphase and interphase transition probabilities, various iterative solution methods are developed for calculating system performability.
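A minimal sketch (a hypothetical three-state system with made-up intraphase and interphase transition probabilities) of the phase-by-phase propagation the abstract describes:

import numpy as np

p = np.array([1.0, 0.0, 0.0])        # states: [ok, degraded, failed]

intra = [                            # one-step matrix per phase (rows sum to 1)
    np.array([[0.98, 0.015, 0.005], [0.0, 0.97, 0.03], [0.0, 0.0, 1.0]]),
    np.array([[0.95, 0.04, 0.01],   [0.0, 0.93, 0.07], [0.0, 0.0, 1.0]]),
]
inter = np.array([[0.99, 0.01, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
steps = [10, 5]                      # discrete time steps in each phase

for k, (M, n) in enumerate(zip(intra, steps)):
    p = p @ np.linalg.matrix_power(M, n)   # intraphase evolution
    if k < len(intra) - 1:
        p = p @ inter                      # interphase transition
print("P(ok), P(degraded), P(failed):", np.round(p, 4))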
ERIC Educational Resources Information Center
Hodge, David R.
2014-01-01
My work in the area of spirituality and religion builds on our profession's proud history of expanding diversity to include previously marginalized groups. Each iteration of diversity, however, has been met with critiques implicitly designed to affirm the status quo. In this article, I respond to criticisms that have been leveled against my…
Efficient numerical method of freeform lens design for arbitrary irradiance shaping
NASA Astrophysics Data System (ADS)
Wojtanowski, Jacek
2018-05-01
A computational method is presented to design a lens with a flat entrance surface and a freeform exit surface that transforms a collimated, generally non-uniform input beam into a beam with a desired irradiance distribution of arbitrary shape. The methodology is based on non-linear elliptic partial differential equations, known as Monge-Ampère PDEs. This paper describes an original numerical algorithm to solve this problem by applying the Gauss-Seidel method with simplified boundary conditions. A joint MATLAB-ZEMAX environment is used to implement and verify the method. To prove the efficiency of the proposed approach, an exemplary study is shown in which the designed lens faces a challenging illumination task. An analysis of solution stability, iteration-to-iteration ray mapping evolution (attached in video format), depth of focus and non-zero étendue efficiency is performed.
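The Gauss-Seidel sweep at the core of such solvers is easiest to see on a linear elliptic model problem. A minimal sketch (the Poisson equation on a unit square with a made-up grid size, not the full nonlinear Monge-Ampère system):

import numpy as np

n = 16                                 # interior grid points per axis
h = 1.0 / (n + 1)
u = np.zeros((n + 2, n + 2))           # Dirichlet boundary u = 0
f = np.ones((n + 2, n + 2))            # right-hand side of -Laplace(u) = f

for sweep in range(5000):
    diff = 0.0
    for i in range(1, n + 1):          # in-place sweep: each update uses
        for j in range(1, n + 1):      # the freshest neighboring values
            new = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                          + h * h * f[i, j])
            diff = max(diff, abs(new - u[i, j]))
            u[i, j] = new
    if diff < 1e-9:
        break
print(f"{sweep + 1} sweeps, u(center) = {u[n//2 + 1, n//2 + 1]:.5f}")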
Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution
NASA Astrophysics Data System (ADS)
Wymeersch, Henk; Moeneclaey, Marc
2005-12-01
As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
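The structure of such estimators can be illustrated without the decoder. A minimal sketch (uncoded BPSK, with channel-only posteriors standing in for the MAP decoder's soft symbol information; all values hypothetical), iterating between soft symbol estimates and the ML phase given those estimates:

import numpy as np

rng = np.random.default_rng(5)
N, sigma2, theta = 200, 0.5, 0.7        # symbols; noise variance per dim; true phase
s = rng.choice([-1.0, 1.0], size=N)     # BPSK symbols (unknown to the receiver)
noise = rng.normal(scale=np.sqrt(sigma2), size=(N, 2)) @ np.array([1.0, 1.0j])
r = s * np.exp(1j * theta) + noise

th = 0.0                                # initial phase estimate
for it in range(15):
    # soft symbol estimates E[s_k | r_k, th] for the current phase guess;
    # a code-aided receiver would use the decoder's posteriors instead
    s_soft = np.tanh(np.real(r * np.exp(-1j * th)) / sigma2)
    # ML phase given the soft symbols (note BPSK's inherent pi ambiguity,
    # which the code-aided scheme can resolve without a pilot sequence)
    th = np.angle(np.sum(s_soft * r))
print(f"estimated phase: {th:.4f}  (true: {theta})")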
Experimental investigations of helium cryotrapping by argon frost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mack, A.; Perinic, D.; Murdoch, D.
1992-03-01
At the Karlsruhe Nuclear Research Centre (KfK), cryopumping techniques are being investigated by which the gaseous exhausts from the NET/ITER reactor can be pumped out during the burn and dwell times. Cryosorption and cryotrapping are techniques which are suitable for this task. The aim of the investigations is to test the techniques under NET/ITER conditions and to determine optimum design data for a prototype. They involve measurement of the pumping speed as a function of the gas composition, gas flow and loading condition of the pump surfaces. The following parameters are subjected to variation: Ar/He ratio, specific helium volume flow rate, cryosurface temperature, process gas composition, impurities in the argon trapping gas, three-stage operation and two-stage operation. This paper is a description of the experiments on argon trapping techniques started in 1990. Eleven tests as well as the results derived from them are described.
Weinmann, Andreas; Storath, Martin
2015-01-01
Signals with discontinuities appear in many problems in the applied sciences, ranging from mechanics and electrical engineering to biology and medicine. The concrete data acquired are typically discrete, indirect and noisy measurements of some quantities describing the signal under consideration. The task is to restore the signal and, in particular, the discontinuities. In this respect, classical methods perform rather poorly, whereas non-convex non-smooth variational methods seem to be the correct choice. Examples are methods based on Mumford–Shah and piecewise constant Mumford–Shah functionals and discretized versions known as Blake–Zisserman and Potts functionals. Owing to their non-convexity, minimization of such functionals is challenging. In this paper, we propose a new iterative minimization strategy for Blake–Zisserman as well as Potts functionals and a related jump-sparsity problem dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments. PMID:27547074
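For direct measurements, the 1-D Potts problem (penalizing the number of jumps plus the squared data misfit) can be solved exactly by a classical dynamic program, which makes a useful baseline. A minimal sketch (synthetic step signal with a made-up noise level and jump penalty; the paper's scheme for indirect measurements is more general):

import numpy as np

rng = np.random.default_rng(6)
y = np.concatenate([np.full(30, 1.0), np.full(40, 3.0), np.full(30, 0.0)])
y += 0.3 * rng.normal(size=y.size)
gamma = 2.0                                      # jump penalty

n = y.size
c1 = np.concatenate([[0.0], np.cumsum(y)])       # prefix sums for fast
c2 = np.concatenate([[0.0], np.cumsum(y**2)])    # segment deviations

def dev(l, r):                                   # squared deviation on y[l..r]
    s1, s2, m = c1[r+1] - c1[l], c2[r+1] - c2[l], r - l + 1
    return s2 - s1 * s1 / m

B = np.full(n + 1, np.inf); B[0] = -gamma        # B[r] = best cost of y[0..r-1]
prev = np.zeros(n + 1, dtype=int)
for r in range(1, n + 1):                        # last segment starts at l
    for l in range(1, r + 1):
        cost = B[l - 1] + gamma + dev(l - 1, r - 1)
        if cost < B[r]:
            B[r], prev[r] = cost, l - 1

x = np.empty(n); r = n                           # backtrack segment means
while r > 0:
    l = prev[r]
    x[l:r] = np.mean(y[l:r])
    r = l
print("estimated jump locations:", np.nonzero(np.diff(x))[0] + 1)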
A Putative Multiple-Demand System in the Macaque Brain.
Mitchell, Daniel J; Bell, Andrew H; Buckley, Mark J; Mitchell, Anna S; Sallet, Jerome; Duncan, John
2016-08-17
In humans, cognitively demanding tasks of many types recruit common frontoparietal brain areas. Pervasive activation of this "multiple-demand" (MD) network suggests a core function in supporting goal-oriented behavior. A similar network might therefore be predicted in nonhuman primates that readily perform similar tasks after training. However, an MD network in nonhuman primates has not been described. Single-cell recordings from macaque frontal and parietal cortex show some similar properties to human MD fMRI responses (e.g., adaptive coding of task-relevant information). Invasive recordings, however, come from limited prespecified locations, so they do not delineate a macaque homolog of the MD system and their positioning could benefit from knowledge of where MD foci lie. Challenges of scanning behaving animals mean that few macaque fMRI studies specifically contrast levels of cognitive demand, so we sought to identify a macaque counterpart to the human MD system using fMRI connectivity in 35 rhesus macaques. Putative macaque MD regions, mapped from frontoparietal MD regions defined in humans, were found to be functionally connected under anesthesia. To further refine these regions, an iterative process was used to maximize their connectivity cross-validated across animals. Finally, whole-brain connectivity analyses identified voxels that were robustly connected to MD regions, revealing seven clusters across frontoparietal and insular cortex comparable to human MD regions and one unexpected cluster in the lateral fissure. The proposed macaque MD regions can be used to guide future electrophysiological investigation of MD neural coding and in task-based fMRI to test predictions of similar functional properties to human MD cortex. In humans, a frontoparietal "multiple-demand" (MD) brain network is recruited during a wide range of cognitively demanding tasks. Because this suggests a fundamental function, one might expect a similar network to exist in nonhuman primates, but this remains controversial. Here, we sought to identify a macaque counterpart to the human MD system using fMRI connectivity. Putative macaque MD regions were functionally connected under anesthesia and were further refined by iterative optimization. The result is a network including lateral frontal, dorsomedial frontal, and insular and inferior parietal regions closely similar to the human counterpart. The proposed macaque MD regions can be useful in guiding electrophysiological recordings or in task-based fMRI to test predictions of similar functional properties to human MD cortex. Copyright © 2016 Mitchell et al.
Concentrator enhanced solar arrays design study
NASA Technical Reports Server (NTRS)
Lott, D. R.
1978-01-01
The analysis and preliminary design of a 25 kW concentrator enhanced lightweight flexible solar array are presented. The study was organized into five major tasks: (1) assessment and specification of design requirements; (2) mechanical design; (3) electric design; (4) concentrator design; and (5) cost projection. The tasks were conducted in an iterative manner so as to best derive a baseline design selection. The objectives of the study are discussed and comparative configurations and mass data on the SEP (Solar Electric Propulsion) array design, concentrator design options and configuration/mass data on the selected concentrator enhanced solar array baseline design are presented. Design requirements supporting design analysis and detailed baseline design data are discussed. The results of the cost projection analysis and new technology are also discussed.
2010-12-01
A European Federation of Neurological Societies/Peripheral Nerve Society consensus guideline on the definition, investigation, and treatment of multifocal motor neuropathy (MMN) was published in 2006. The aim is to revise this guideline. Disease experts considered references retrieved from MEDLINE and Cochrane Systematic Reviews published between August 2004 and July 2009 and prepared statements that were agreed to in an iterative fashion. The Task Force agreed on Good Practice Points to define clinical and electrophysiological diagnostic criteria for MMN, investigations to be considered, and principal recommendations for treatment. © 2010 Peripheral Nerve Society.
NPSS Multidisciplinary Integration and Analysis
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Rasche, Joseph; Simons, Todd A.; Hoyniak, Daniel
2006-01-01
The objective of this task was to enhance the capability of the Numerical Propulsion System Simulation (NPSS) by expanding its reach into the high-fidelity multidisciplinary analysis area. The task investigated numerical techniques to convert between the cold static and hot running geometry of compressor blades. Blade deformations were calculated iteratively, coupling high-fidelity flow simulations with high-fidelity structural analysis of the compressor blade. The flow simulations were performed with the Advanced Ducted Propfan Analysis (ADPAC) code, while structural analyses were performed with the ANSYS code. High-fidelity analyses were used to evaluate the effects on performance of: variations in tip clearance, uncertainty in manufacturing tolerance, variable inlet guide vane scheduling, and the effects of rotational speed on the hot running geometry of the compressor blades.
Task Equivalence for Model and Human-Observer Comparisons in SPECT Localization Studies
NASA Astrophysics Data System (ADS)
Sen, Anando; Kalantari, Faraz; Gifford, Howard C.
2016-06-01
While mathematical model observers are intended for efficient assessment of medical imaging systems, their findings should be relevant for human observers as the primary clinical end users. We have investigated whether pursuing equivalence between the model and human-observer tasks can help ensure this goal. A localization receiver operating characteristic (LROC) study tested prostate lesion detection in simulated In-111 SPECT imaging with anthropomorphic phantoms. The test images were 2D slices extracted from reconstructed volumes. The iterative ordered sets expectation-maximization (OSEM) reconstruction algorithm was used with Gaussian postsmoothing. Variations in the number of iterations and the level of postfiltering defined the test strategies in the study. Human-observer performance was compared with that of a visual-search (VS) observer, a scanning channelized Hotelling observer, and a scanning channelized nonprewhitening (CNPW) observer. These model observers were applied with precise information about the target regions of interest (ROIs). ROI knowledge was a study variable for the human observers. In one study format, the humans read the SPECT image alone. With a dual-modality format, the SPECT image was presented alongside an anatomical image slice extracted from the density map of the phantom. Performance was scored by area under the LROC curve. The human observers performed significantly better with the dual-modality format, and correlation with the model observers was also improved. Given the human-observer data from the SPECT study format, the Pearson correlation coefficients for the model observers were 0.58 (VS), -0.12 (CH), and -0.23 (CNPW). The respective coefficients based on the human-observer data from the dual-modality study were 0.72, 0.27, and -0.11. These results point towards the continued development of the VS observer for enhancing task equivalence in model-observer studies.
Elion, Orit; Sela, Itamar; Bahat, Yotam; Siev-Ner, Itzhak; Weiss, Patrice L Tamar; Karni, Avi
2015-06-03
Does the learning of a balance and stability skill exhibit time-course phases and transfer limitations characteristic of the acquisition and consolidation of voluntary movement sequences? Here we followed the performance of young adults trained in maintaining balance while standing on a moving platform synchronized with a virtual reality road travel scene. The training protocol included eight 3 min long iterations of the road scene. Center of Pressure (CoP) displacements were analyzed for each task iteration within the training session, as well as during tests at 24h, 4 weeks and 12 weeks post-training, to test for consolidation phase ("offline") gains and assess retention. In addition, CoP displacements in reaction to external perturbations were assessed before and after the training session and in the 3 subsequent post-training assessments (stability tests). There were significant reductions in CoP displacements as experience accumulated within the session, with performance stabilizing by the end of the session. However, CoP displacements were further reduced at 24h post-training (delayed "offline" gains) and these gains were robustly retained. There was no transfer of the practice-related gains to performance in the stability tests. The time-course of learning the balance maintenance task, as well as the limitation on generalizing the gains to untrained conditions, are in line with the results of studies of manual movement skill learning. The current results support the conjecture that a similar repertoire of basic neuronal mechanisms of plasticity may underlie skill (procedural, "how to" knowledge) acquisition and skill memory consolidation in voluntary movement and balance maintenance tasks. Copyright © 2015 Elsevier B.V. All rights reserved.
Goyette, Sharon Ramos; McCoy, John G; Kennedy, Ashley; Sullivan, Meghan
2012-02-28
It has been well-established that men outperform women on some spatial tasks. The tools commonly used to demonstrate this difference (e.g. the Mental Rotations Task) typically involve problems and solutions that are presented in a context devoid of referents. The study presented here assessed whether the addition of referents (or "landmarks") would attenuate the well-established sex difference on the judgment of line orientation task (JLOT). Three versions of the JLOT were presented in a within-subjects design. The first iteration contained the original JLOT (JLOT 1). JLOT 2 contained three "landmarks" or referents and JLOT 3 contained only one landmark. The sex difference on JLOT 1 was completely negated by the addition of three landmarks on JLOT 2 or the addition of one landmark on JLOT 3. In addition, salivary testosterone was measured. In men, gains in performance on the JLOT due to the addition of landmarks were positively correlated with testosterone levels. This suggests that men with the highest testosterone levels benefited the most from the addition of landmarks. These data help to highlight different strategies used by men and women to solve spatial tasks. Copyright © 2011 Elsevier Inc. All rights reserved.
Convex Formulations of Learning from Crowds
NASA Astrophysics Data System (ADS)
Kajino, Hiroshi; Kashima, Hisashi
It has attracted considerable attention to use crowdsourcing services to collect large amounts of labeled data for machine learning, since crowdsourcing services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning: coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, there are two serious drawbacks in the existing approaches, namely (i) non-convexity and (ii) task homogeneity. Most of the existing methods consider true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only single homogeneous tasks, while in realistic situations, clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning, based on the resemblance between the proposed formulation and that of an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers.
Automated ILA design for synchronous sequential circuits
NASA Technical Reports Server (NTRS)
Liu, M. N.; Liu, K. Z.; Maki, G. K.; Whitaker, S. R.
1991-01-01
An iterative logic array (ILA) architecture for synchronous sequential circuits is presented. The technique utilizes linear algebra to produce the design equations. The ILA realization of synchronous sequential logic can be fully automated with a computer program. A programmable design procedure is proposed to fulfill the design task and layout generation. A software algorithm in the C language has been developed and tested to generate 1 micron CMOS layouts using the Hewlett-Packard FUNGEN module generator shell.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1985-01-01
Synopses are given for NASA supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.
Understanding Gulf War Illness: An Integrative Modeling Approach
2017-10-01
group (Task 2; Subtask 1). The latest iteration of this analysis focused on n=11 control animals and n=11 DFP-exposed animals without corticosterone... Groups: 1 – Control Group (Male and Female Intact); 2 – GWI Model – Cort+DFP (Male and Female OVX); 3 – Control OVX (Female); 4 – GWI Model + Enbrel...The designed protocol includes five basic experimental groups: Group 1 (Untreated Control, no toxic exposure); Group 2 (Toxic exposed, no
1988-01-01
Deblurring: This long-standing research area was wrapped up this year with the preparation of a major tutorial paper. This paper summarizes all of the work...that we have done. The iterative procedures were shown to perform significantly better at the deblurring task than Kalman filtering, Wiener filtering...suited to the resolution of multiple impulsive sources on a uniform background. Such applications occur in radio astronomy and in a number of
Quantum state matching of qubits via measurement-induced nonlinear transformations
NASA Astrophysics Data System (ADS)
Kálmán, Orsolya; Kiss, Tamás
2018-03-01
We consider the task of deciding whether an unknown qubit state falls in a prescribed neighborhood of a reference state. We assume that several copies of the unknown state are given and apply a unitary operation pairwise on them combined with a postselection scheme conditioned on the measurement result obtained on one of the qubits of the pair. The resulting transformation is a deterministic, nonlinear, chaotic map in the Hilbert space. We derive a class of these transformations capable of orthogonalizing nonorthogonal qubit states after a few iterations. These nonlinear maps orthogonalize states which correspond to the two different convergence regions of the nonlinear map. Based on the analysis of the border (the so-called Julia set) between the two regions of convergence, we show that it is always possible to find a map capable of deciding whether an unknown state is within a neighborhood of fixed radius around a desired quantum state. We analyze which one- and two-qubit operations would physically realize the scheme. It is possible to find a single two-qubit unitary gate for each map or, alternatively, a universal special two-qubit gate together with single-qubit gates in order to carry out the task. We note that it is enough to have a single physical realization of the required gates due to the iterative nature of the scheme.
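A known example of such protocols (for a suitable two-qubit gate and postselection outcome) acts on the amplitude ratio z = β/α of |ψ⟩ = α|0⟩ + β|1⟩ as the complex map z → z², whose Julia set is the unit circle. A minimal sketch of the resulting orthogonalization (illustrative values only):

import numpy as np

def iterate(z, n):
    for _ in range(n):
        z = z ** 2          # one pairwise-measurement round squares z
    return z

for z0 in (0.8 * np.exp(0.3j), 1.25 * np.exp(0.3j)):   # inside vs outside
    z = iterate(z0, 6)                                  # the unit circle
    beta2 = abs(z) ** 2 / (1 + abs(z) ** 2)             # population of |1>
    print(f"|z0| = {abs(z0):.2f}  ->  |beta|^2 after 6 rounds = {beta2:.2e}")

After six rounds the exponent is 2^6 = 64, so states starting just inside the unit circle collapse toward |0⟩ and those just outside toward |1⟩, which is the orthogonalization behavior the abstract exploits for state matching.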
An Iterative Approach To Development Of A PACS Display Workstation
NASA Astrophysics Data System (ADS)
O'Malley, Kathleen G.
1989-05-01
An iterative prototyping approach has been used in the development of requirements for a new user interface for the display workstation in the CommView system product line. This approach involves many steps, including development of the preliminary concept, validation and ranking of ideas within that concept, prototyping, evaluating, and revising. We describe in this paper the process undertaken to design and evaluate the new user interface. Staff at Abbott Northwestern Hospital, Bowman Gray/Baptist Hospital Medical Center, Duke University Medical Center, Georgetown University Medical Center and Robert Wood Johnson University Hospital participated in various aspects of the study. The subject population included radiologists, residents, technologists and staff physicians from several areas in the hospitals. Subjects participated in in-depth interviews, answered questionnaires, and performed specific tasks, to aid our development process. We feel this method has resulted in a product that will achieve a high level of customer satisfaction, developed in less time than a traditional approach. Some of the reasons we believe in the value of this approach are:
• Users may not be able to describe their needs in terms that designers are expecting, leading to misinterpretation;
• Users may not be able to choose between options without seeing them;
• Users' needs and choices evolve with experience;
• Users' true choices and needs may not seem logical to one not performing those tasks (i.e., the designers).
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features and suppressing noise in the original features. Moreover, conventional optimization methods often converge slowly when training CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) that simultaneously select relevant features. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the O(1/k) convergence rate of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
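Accelerated first-order methods of this family combine the current gradient with momentum built from past iterates. A minimal sketch (a toy least-squares objective, not the RCRF training problem; step size 1/L from the gradient's Lipschitz constant):

import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(60, 30)); b = rng.normal(size=60)

def grad(x):                       # gradient of 0.5 * ||Ax - b||^2
    return A.T @ (A @ x - b)

L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
x = np.zeros(30); x_prev = x.copy(); t, t_prev = 1.0, 1.0
for k in range(200):
    y = x + (t_prev - 1) / t * (x - x_prev)    # momentum from history
    x_prev = x
    x = y - grad(y) / L                        # gradient step at y
    t_prev, t = t, (1 + np.sqrt(1 + 4 * t * t)) / 2
print("gradient norm at solution:", np.linalg.norm(grad(x)))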
CuCrZr alloy microstructure and mechanical properties after hot isostatic pressing bonding cycles
NASA Astrophysics Data System (ADS)
Frayssines, P.-E.; Gentzbittel, J.-M.; Guilloud, A.; Bucci, P.; Soreau, T.; Francois, N.; Primaux, F.; Heikkinen, S.; Zacchia, F.; Eaton, R.; Barabash, V.; Mitteau, R.
2014-04-01
ITER first wall (FW) panels are a layered structure made of the three following materials: 316L(N) austenitic stainless steel, CuCrZr alloy and beryllium. Two hot isostatic pressing (HIP) cycles are included in the reference fabrication route to bond these materials together for the normal heat flux design supplied by the European Union (EU). This reference fabrication route ensures sufficiently good mechanical properties for the materials and joints, which fulfil the ITER mechanical specifications, but often results in a coarse grain size for the CuCrZr alloy, which is not favourable, especially, for the thermal creep properties of the FW panels. To limit the abnormal grain growth of CuCrZr and make the ITER FW fabrication route more reliable, a study began in 2010 in the EU in the frame of an ITER task agreement. Two material fabrication approaches have been investigated. The first one was dedicated to the fabrication of solid CuCrZr alloy in close collaboration with an industrial copper alloys manufacturer. The second approach investigated was the manufacturing of CuCrZr alloy using the powder metallurgy (PM) route and HIP consolidation. This paper presents the main mechanical and microstructural results associated with the two CuCrZr approaches mentioned above. The mechanical properties of solid CuCrZr, PM CuCrZr and joints (solid CuCrZr/solid CuCrZr and solid CuCrZr/316L(N) and PM CuCrZr/316L(N)) are also presented.
Acceleration of linear stationary iterative processes in multiprocessor computers. II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romm, Ya.E.
1982-05-01
For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982); English translation in Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n×n matrix and b is a real vector, with the common Euclidean norm. Existence and uniqueness of the solution are assumed, i.e., det(E − A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = Fx^(k), k = 0, 1, 2, ..., where the operator F maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant, and various values of the latter are investigated; it is assumed, in addition, that the processors perform the elementary binary arithmetic operations of addition and multiplication, and the estimates include only the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k + 1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k + 1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps in a time comparable to the operation time of logical elements. 6 references.
NASA Astrophysics Data System (ADS)
Magnen, Jacques; Unterberger, Jérémie
2012-03-01
Let B = (B_1(t), ..., B_d(t)) be a d-dimensional fractional Brownian motion with Hurst index α < 1/4, or more generally a Gaussian process whose paths have the same local regularity. Defining properly iterated integrals of B is a difficult task because of the low Hölder regularity index of its paths. Yet rough path theory shows it is the key to the construction of a stochastic calculus with respect to B, or to solving differential equations driven by B. We intend to show in a series of papers how to desingularize iterated integrals by a weak, singular non-Gaussian perturbation of the Gaussian measure defined by a limit-in-law procedure. Convergence is proved by using "standard" tools of constructive field theory, in particular cluster expansions and renormalization. These powerful tools allow optimal estimates, and call for an extension of Gaussian tools such as, for instance, the Malliavin calculus. After a first introductory paper [MagUnt1], this one concentrates on the details of the constructive proof of convergence for second-order iterated integrals, also known as Lévy area.
Performance improvement of robots using a learning control scheme
NASA Technical Reports Server (NTRS)
Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.
1987-01-01
Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are repeated in every cycle of the operation. An off-line learning control scheme is used here to modify the command function so as to produce smaller errors in the next operation. The learning scheme is based on knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically for a second-order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors in a single attempt. The scheme is then applied to a computer model of a robot system similar to the PUMA 560. Improved performance of the robot is shown for various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
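The cycle-to-cycle update is simple to demonstrate in simulation. A minimal sketch (a PD-type iterative learning update on a toy first-order servo model; gains and plant values are illustrative, not from the paper):

import numpy as np

T, dt = 200, 0.01
t = np.arange(T) * dt
ref = np.sin(2 * np.pi * t)                 # desired trajectory

def run(u):
    # toy first-order servo y' = -5y + 5u, forward-Euler discretization
    y, out = 0.0, np.empty(T)
    for k in range(T):
        y += dt * (-5.0 * y + 5.0 * u[k])
        out[k] = y
    return out

u = ref.copy()                              # cycle 0: command = reference
kp, kd = 10.0, 0.05                         # illustrative learning gains
for cycle in range(8):
    e = ref - run(u)
    print(f"cycle {cycle}: max |error| = {np.max(np.abs(e)):.5f}")
    # off-line update using the cycle's error (shifted by the plant's
    # one-step delay) and error rate; this choice of gains contracts the
    # error here since |1 - 5*dt*(kp + kd/dt)| < 1
    e_shift = np.append(e[1:], 0.0)
    u = u + kp * e_shift + kd * np.diff(np.append(e, 0.0)) / dt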
NASA Technical Reports Server (NTRS)
Crasner, Aaron I.; Scola, Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Thermal modeling software was used to run steady state thermal analyses, which were used to both validate the designs and recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
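A highly simplified sketch of the alternating maxi-min scheme described above, assuming a callable detectability(fluence, reg) that returns an array of local detectability values (all names and the choice of optimizer are illustrative stand-ins, not the paper's parameterization):

```python
from scipy.optimize import minimize

def alternating_maximin(detectability, fluence0, reg0, n_outer=10):
    """Alternate between updating the fluence-modulation coefficients and
    the regularization parameters so as to maximize the minimum local
    detectability (a maxi-min objective).
    """
    fluence, reg = fluence0, reg0
    for _ in range(n_outer):
        # Update fluence basis coefficients with regularization fixed
        fluence = minimize(lambda f: -detectability(f, reg).min(), fluence).x
        # Update regularization with fluence fixed
        reg = minimize(lambda r: -detectability(fluence, r).min(), reg).x
    return fluence, reg
```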
VA FitHeart, a Mobile App for Cardiac Rehabilitation: Usability Study.
Beatty, Alexis L; Magnusson, Sara L; Fortney, John C; Sayre, George G; Whooley, Mary A
2018-01-15
Cardiac rehabilitation (CR) improves outcomes for patients with ischemic heart disease or heart failure but is underused. New strategies to improve access to and engagement in CR are needed. There is considerable interest in technology-facilitated home CR. However, little is known about patient acceptance and use of mobile technology for CR. The aim of this study was to develop a mobile app for technology-facilitated home CR and seek to determine its usability. We recruited patients eligible for CR who had access to a mobile phone, tablet, or computer with Internet access. The mobile app includes physical activity goal setting, logs for tracking physical activity and health metrics (eg, weight, blood pressure, and mood), health education, reminders, and feedback. Study staff demonstrated the mobile app to participants in person and then observed participants completing prespecified tasks with the mobile app. Participants completed the System Usability Scale (SUS, 0-100), rated likelihood to use the mobile app (0-100), questionnaires on mobile app use, and participated in a semistructured interview. The Unified Theory of Acceptance and Use of Technology and the Theory of Planned Behavior informed the analysis. On the basis of participant feedback, we made iterative revisions to the mobile app between users. We conducted usability testing in 13 participants. The first version of the mobile app was used by the first 5 participants, and revised versions were used by the final 8 participants. From the first version to revised versions, task completion success rate improved from 44% (11/25 tasks) to 78% (31/40 tasks; P=.05), SUS improved from 54 to 76 (P=.04; scale 0-100, with 100 being the best usability), and self-reported likelihood of use remained high at 76 and 87 (P=.30; scale 0-100, with 100 being the highest likelihood). In interviews, patients expressed interest in tracking health measures ("I think it'll be good to track my exercise and to see what I'm doing"), a desire for introductory training ("Initially, training with a technical person, instead of me relying on myself"), and an expectation for sharing data with providers ("It would also be helpful to share with my doctor, it just being a matter of clicking a button and sharing it with my doctor"). With participant feedback and iterative revisions, we significantly improved the usability of a mobile app for CR. Patient expectations for using a mobile app for CR include tracking health metrics, introductory training, and sharing data with providers. Iterative mixed-method evaluation may be useful for improving the usability of health technology. ©Alexis L Beatty, Sara L Magnusson, John C Fortney, George G Sayre, Mary A Whooley. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 15.01.2018.
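For context, the SUS values reported above (54 improving to 76) come from the standard scoring of ten 5-point items. A minimal scorer, assuming responses coded 1–5 in questionnaire order:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring (0-100).

    responses: ten item ratings on a 1-5 scale, in questionnaire order.
    Odd-numbered items contribute (rating - 1), even-numbered items
    contribute (5 - rating); the sum is scaled by 2.5.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5
```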
Flow balancing orifice for ITER toroidal field coil
NASA Astrophysics Data System (ADS)
Litvinovich, A. V.; Y Rodin, I.; Kovalchuk, O. A.; Safonov, A. V.; Stepanov, D. B.; Guryeva, T. M.
2017-12-01
Flow balancing orifices (FBOs) are used in the International Thermonuclear Experimental Reactor (ITER) toroidal field coil to equalize the flow rate of cooling gas in the side double pancakes, which have different conductor lengths: 99 m and 305 m, respectively. FBOs consist of straight parts and elbows produced from a 316L stainless steel tube 21.34 x 2.11 mm, and of orifices made from a 316L stainless steel rod. Each right and left FBO contains 6 orifices; straight FBOs contain 4 and 6 orifices. Before manufacturing the qualification samples, the D.V. Efremov Institute of Electrophysical Apparatus (JSC NIIEFA) proposed to ITER a new approach providing a seamless connection between a tube and a plate, so the most critical weld, between the 1 mm thick orifice and the tube, was removed from the final FBO design. The proposed orifice diameter is three times smaller than the minimum requirement of ISO 5167, so the task was to determine the accuracy of the calculated flow characteristics at room temperature by comparison with experimental data. In 2015 the qualification samples of flow balancing orifices were produced and tested. The experimental data showed that the deviation of the calculated data is less than 7%. Based on this result and other tests, ITER approved the design of the FBOs, which made it possible to start serial production. In 2016 JSC NIIEFA delivered 50 FBOs to ITER, i.e. 24 left-side, 24 right-side and 2 straight FBOs. In order to verify the quality of the FBOs, a test facility at JSC NIIEFA was prepared. Helium leak-tightness testing at 10⁻⁹ m³·Pa/s at pressures up to 3 MPa, flow rate measurements at various pressure drops, and non-destructive tests of orifices and weld seams (ISO 5817, class B) were conducted. Other tests, such as dimensional checks and thermal cycling (300–80–300 K), were also carried out for each FBO.
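Because the orifice diameter here falls below the range ISO 5167 covers, the calculated flow characteristics had to be validated experimentally. For reference, the standard's orifice mass-flow relation, which such calculations start from, can be sketched as follows (parameter values are application-specific):

```python
import math

def orifice_mass_flow(C, beta, eps, d, dp, rho):
    """Mass flow through an orifice plate in the ISO 5167 form
    q_m = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho).

    C: discharge coefficient, beta: orifice/pipe diameter ratio,
    eps: expansibility factor, d: orifice diameter [m],
    dp: differential pressure [Pa], rho: upstream density [kg/m^3].
    """
    return (C / math.sqrt(1.0 - beta**4) * eps
            * (math.pi / 4.0) * d**2 * math.sqrt(2.0 * dp * rho))
```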
1993-12-31
A new cost function is postulated and an algorithm that employs this cost function is proposed for the learning of ... updates the controller parameters from time to time [53]. The learning control algorithm consists of updating the parameter estimates as used in the ... proposed cost function with the other learning-type algorithms, such as those based upon learning of iterative tasks [Kawamura-85], variable structure
A Portable Burn Pan for the Disposal of Excess Propellants
2015-11-01
project objective for the total mass of the HTU burn pan of less than 120 kg. The bonnet was made more durable while eliminating hazardous sharp edges ... remaining in the pan will need to be considered hazardous. The sponsoring facility representatives, Mr. Steve Thurmond and Ms. Ellen Clark, agreed to ... Don't need the door on the bonnet any more – remove from next iteration − Beef up the mounting of the legs on the base After-action Tasks (CRREL
Silicon material task. Part 3: Low-cost silicon solar array project
NASA Technical Reports Server (NTRS)
Roques, R. A.; Coldwell, D. M.
1977-01-01
The feasibility of a process for carbon reduction of low impurity silica in a plasma heat source was investigated to produce low-cost solar-grade silicon. Theoretical aspects of the reaction chemistry were studied with the aid of a computer program using iterative free energy minimization. These calculations indicate a threshold temperature exists at 2400 K below which no silicon is formed. The computer simulation technique of molecular dynamics was used to study the quenching of product species.
Air Asset to Mission Assignment for Dynamic High-Threat Environments in Real-Time
2015-03-01
Figure 2.1 Joint Air Tasking Cycle (JCS 2014). An iterative 120-hour cycle for planners within the ... minutes of on-station time, or "playtime", with a total of two GBU-16 laser-guided bombs (LGB) and an Advanced Targeting Forward Looking Infrared (ATFLIR ... probability of survival against the SA-2 and SA-3 systems, respectively. A GBU-16 LGB has no standoff capability and 90%, 60%, and 70% probability of
An iterative requirements specification procedure for decision support systems.
Brookes, C H
1987-08-01
Requirements specification is a key element in a DSS development project because it not only determines what is to be done, it also drives the evolution process. A procedure for requirements elicitation is described that is based on the decomposition of the DSS design task into a number of functions, subfunctions, and operators. It is postulated that the procedure facilitates the building of a DSS that is complete and integrates MIS, modelling and expert system components. Some examples given are drawn from the health administration field.
NASA Technical Reports Server (NTRS)
Volakis, John L.
1990-01-01
There are two tasks described in this report. First, an extension of a two-dimensional formulation is presented for a three-dimensional body of revolution. With the introduction of a Fourier expansion of the vector electric and magnetic fields, a coupled two-dimensional system is generated and solved via the finite element method. An exact boundary condition is employed to terminate the mesh, and the fast Fourier transform is used to evaluate the boundary integrals for low O(n) memory demand when an iterative solution algorithm is used. Second, the diffraction by a material discontinuity in a thick dielectric/ferrite layer is considered by modeling the layer as a distributed current sheet obeying generalized sheet transition conditions (GSTCs).
Dionne-Odom, J Nicholas; Willis, Danny G; Bakitas, Marie; Crandall, Beth; Grace, Pamela J
2015-01-01
Surrogate decision makers (SDMs) face difficult decisions at end of life (EOL) for decisionally incapacitated intensive care unit (ICU) patients. To identify and describe the underlying psychological processes of surrogate decision making for adults at EOL in the ICU. Qualitative case study design using a cognitive task analysis interviewing approach. Participants were recruited from October 2012 to June 2013 from an academic tertiary medical center's ICU located in the rural Northeastern United States. Nineteen SDMs for patients who had died in the ICU completed in-depth semistructured cognitive task analysis interviews. The conceptual framework formulated from data analysis reveals that three underlying, iterative, psychological dimensions (gist impressions, distressing emotions, and moral intuitions) impact an SDM's judgment about the acceptability of either the patient's medical treatments or his or her condition. The framework offers initial insights about the underlying psychological processes of surrogate decision making and may facilitate enhanced decision support for SDMs. Copyright © 2015 Elsevier Inc. All rights reserved.
Disentangling prototypicality and social desirability: the case of the KNOWI task.
Turan, Bulent
2011-01-01
The prototype of indicators of a relationship partner who can be trusted to be responsive at times of stress is one kind of social knowledge structure. The Knowledge of Indicators (KNOWI) Task assesses individual differences in knowledge about these prototypic indicators. In constructing the KNOWI, an iterative procedure was used in an attempt to identify those indicators for which ratings of prototypicality are not influenced by social desirability. Study 1 demonstrated that the correlation between ratings of prototypicality and social desirability is indeed eliminated for the final set of indicators retained in the KNOWI. Study 2 tested the prototype matching hypothesis: Comparing an actual partner to the prototype might shape global judgments about that partner's responsiveness. Because in Study 2 only those indicators that are uncorrelated with social desirability were used, this result cannot be explained by social desirability. These results support the construct validity of the indicators used in the KNOWI Task, which seems to be a precise assessment tool not influenced by social desirability.
Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent
2013-12-01
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
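A minimal sketch of the ITQ alternation on zero-centered, PCA-projected data (dimensions and iteration count are illustrative; the rotation update solves the orthogonal Procrustes problem via an SVD, as the paper describes):

```python
import numpy as np

def itq(X, n_iter=50, seed=0):
    """Iterative quantization (ITQ) sketch.

    X: zero-centered, PCA-projected data, shape (n_samples, n_bits).
    Alternates between assigning binary codes B = sign(X R) and
    updating the orthogonal rotation R to minimize quantization error.
    """
    rng = np.random.default_rng(seed)
    n_bits = X.shape[1]
    # Random orthogonal initialization of the rotation
    R, _ = np.linalg.qr(rng.normal(size=(n_bits, n_bits)))
    for _ in range(n_iter):
        B = np.sign(X @ R)                  # fix R, update binary codes
        U, _, Vt = np.linalg.svd(B.T @ X)   # fix B, update rotation
        R = (U @ Vt).T                      # Procrustes solution
    return np.sign(X @ R), R
```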
NASA Astrophysics Data System (ADS)
Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.
2017-06-01
We present a novel method to optimize the discrimination ability and noise robustness of composite filters. This method is based on iterative preprocessing of the training images, which extracts boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio of authentic faces and conferring immunity to intra-class variance and noise interference. By adding the training images directly, one can obtain a composite template with high discrimination ability and robustness for the face recognition task. The proposed composite correlation filter does not involve any of the complicated mathematical analysis and computation often required in the design of correlation algorithms. Simulation tests have been conducted to check the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method of counting true-positive and false-positive rates that involves the difference between the PCE and the threshold.
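One common definition of the PCE metric referenced above, sketched in Python (definitions vary in whether a small region around the peak is excluded; this version uses the whole correlation plane):

```python
import numpy as np

def pce(correlation_plane):
    """Peak-to-correlation energy of a correlation output plane:
    the peak energy divided by the total energy of the plane.
    A higher PCE indicates a sharper, more discriminant peak.
    """
    c = np.abs(correlation_plane) ** 2
    return c.max() / c.sum()
```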
NASA Astrophysics Data System (ADS)
Juárez, R.; Guirao, J.; Kolsek, A.; Lopez, A.; Pedroche, G.; Bertalot, L.; Udintsev, V. S.; Walsh, M. J.; Sauvan, P.; Sanz, J.
2018-05-01
The ITER equatorial port plugs are subject to a drained weight limit of 45 t. This limitation can conflict with their radiation shielding demands, although some weight margin is being discussed. The port interspaces are subject to a shutdown dose rate (SDDR) limit of 100 µSv h⁻¹ after 10⁶ s of cooling time. To meet it, the port plugs must show a neutron flux attenuation comparable to that of their neighborhood, despite the penetrations that host systems. Most of this task relies on the drawer shield module (DSM). In this work, two DSM concepts are analyzed from this perspective: the box-based DSM and the modular DSM. Regardless of the penetrations, the box-based DSM leads to port plugs that cannot meet both the weight and SDDR requirements. The modular DSM, on the contrary, shows a performance that allows for the adoption of this DSM concept or an equivalent one: a port may comply with both requirements at the same time, provided the penetrations are well designed.
The interactive evolution of human communication systems.
Fay, Nicolas; Garrod, Simon; Roberts, Leo; Swoboda, Nik
2010-04-01
This paper compares two explanations of the process by which human communication systems evolve: iterated learning and social collaboration. It then reports an experiment testing the social collaboration account. Participants engaged in a graphical communication task either as a member of a community, where they interacted with seven different partners drawn from the same pool, or as a member of an isolated pair, where they interacted with the same partner across the same number of games. Participants' horizontal, pair-wise interactions led "bottom up" to the creation of an effective and efficient shared sign system in the community condition. Furthermore, the community-evolved sign systems were as effective and efficient as the local sign systems developed by isolated pairs. Finally, and as predicted by a social collaboration account, and not by an iterated learning account, interaction was critical to the creation of shared sign systems, with different isolated pairs establishing different local sign systems and different communities establishing different global sign systems. Copyright © 2010 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.
2018-05-01
The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
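As a rough illustration of the neighborhood-communication idea (not the paper's scaled and preconditioned algorithm), a block-Jacobi iteration on the WLS normal equations, in which each sub-system solves for its own parameter block using its neighbors' current estimates, might look like this; convergence requires a suitably well-conditioned information matrix:

```python
import numpy as np

def block_jacobi_wls(H, W, y, blocks, n_iter=200):
    """Illustrative block-Jacobi iteration for the WLS normal equations
    (H^T W H) x = H^T W y. Each block of x is owned by one sub-system
    and updated using only the coupled (neighboring) blocks.
    """
    G = H.T @ W @ H          # information matrix (sparse in practice)
    z = H.T @ W @ y
    x = np.zeros(G.shape[0])
    for _ in range(n_iter):
        x_new = x.copy()
        for idx in blocks:   # idx: indices of parameters owned by one node
            Gii = G[np.ix_(idx, idx)]
            # Residual excluding this node's own contribution
            r = z[idx] - G[idx] @ x + Gii @ x[idx]
            x_new[idx] = np.linalg.solve(Gii, r)
        x = x_new
    return x
```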
Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B
2013-01-01
The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point-of-care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method that allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE--a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model, and interacts with hospital information systems. Proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
Sampson, Patrica; Freeman, Chris; Coote, Susan; Demain, Sara; Feys, Peter; Meadmore, Katie; Hughes, Ann-Marie
2016-02-01
Few interventions address multiple sclerosis (MS) arm dysfunction but robotics and functional electrical stimulation (FES) appear promising. This paper investigates the feasibility of combining FES with passive robotic support during virtual reality (VR) training tasks to improve upper limb function in people with multiple sclerosis (pwMS). The system assists patients in following a specified trajectory path, employing an advanced model-based paradigm termed iterative learning control (ILC) to adjust the FES to improve accuracy and maximise voluntary effort. Reaching tasks were repeated six times with ILC learning the optimum control action from previous attempts. A convenience sample of five pwMS was recruited from local MS societies, and the intervention comprised 18 one-hour training sessions over 10 weeks. The accuracy of tracking performance without FES and the amount of FES delivered during training were analyzed using regression analysis. Clinical functioning of the arm was documented before and after treatment with standard tests. Statistically significant results following training included: improved accuracy of tracking performance both when assisted and unassisted by FES; reduction in maximum amount of FES needed to assist tracking; and less impairment in the proximal arm that was trained. The system was well tolerated by all participants with no increase in muscle fatigue reported. This study confirms the feasibility of FES combined with passive robot assistance as a potentially effective intervention to improve arm movement and control in pwMS and provides the basis for a follow-up study.
Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M
2015-11-01
This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda
2017-01-01
Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results. PMID:28125609
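The decreasing-Pa modification described above can be sketched as a simple schedule (the linear decay and the bounds are our illustrative choices; the paper specifies only a maximum at initialization falling to a minimum at the final iteration):

```python
def pa_schedule(iteration, n_iterations, pa_max=0.5, pa_min=0.05):
    """Fraction Pa of nests replaced at a given iteration, decreasing
    from pa_max at the start (more exploration) to pa_min at the end
    (more exploitation).
    """
    frac = iteration / max(n_iterations - 1, 1)
    return pa_max - (pa_max - pa_min) * frac
```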
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to the previous works on approximate RL methods, KLSPI makes two progresses to eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
Improved silicon nitride for advanced heat engines
NASA Technical Reports Server (NTRS)
Yeh, Hun C.; Fang, Ho T.
1987-01-01
The technology base required to fabricate silicon nitride components with the strength, reliability, and reproducibility necessary for actual heat engine applications is presented. Task 2 was set up to develop test bars with high Weibull slope and greater high temperature strength, and to conduct an initial net shape component fabrication evaluation. Screening experiments were performed in Task 7 on advanced materials and processing for input to Task 2. The technical efforts performed in the second year of a 5-yr program are covered. The first iteration of Task 2 was completed as planned. Two half-replicated, fractional factorial (2^5), statistically designed matrix experiments were conducted. These experiments have identified Denka 9FW Si3N4 as an alternate raw material to GTE SN502 Si3N4 for subsequent process evaluation. A detailed statistical analysis was conducted to correlate processing conditions with as-processed test bar properties. One processing condition produced a material with a 97 ksi average room temperature MOR (100 percent of goal) with 13.2 Weibull slope (83 percent of goal); another condition produced 86 ksi (6 percent over baseline) room temperature strength with a Weibull slope of 20 (125 percent of goal).
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
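A minimal maximum-likelihood fit of a cumulative-Gaussian psychometric function to binary trial data, using the Nelder-Mead simplex mentioned above (this sketch omits the paper's bias-reduction term as well as lapse parameters):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(stimulus, response, mu0=0.0, sigma0=1.0):
    """Fit mean and spread of a cumulative-Gaussian psychometric
    function by maximum likelihood.

    stimulus: stimulus level per trial; response: 0/1 per trial.
    """
    stimulus = np.asarray(stimulus, dtype=float)
    response = np.asarray(response, dtype=float)

    def nll(params):
        mu, log_sigma = params
        p = norm.cdf(stimulus, loc=mu, scale=np.exp(log_sigma))
        p = np.clip(p, 1e-9, 1 - 1e-9)   # avoid log(0)
        return -np.sum(response * np.log(p) + (1 - response) * np.log(1 - p))

    res = minimize(nll, x0=[mu0, np.log(sigma0)], method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)
```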
Jagau, Thomas-C
2018-01-14
The impact of residual electron correlation beyond the equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) approximation on positions and widths of electronic resonances is investigated. To establish a method that accomplishes this task in an economical manner, several approaches proposed for the approximate treatment of triple excitations are reviewed with respect to their performance in the electron attachment (EA) variant of EOM-CC theory. The recently introduced EOM-CCSD(T)(a)* method [D. A. Matthews and J. F. Stanton, J. Chem. Phys. 145, 124102 (2016)], which includes non-iterative corrections to the reference and the target states, reliably reproduces vertical attachment energies from EOM-EA-CC calculations with single, double, and full triple excitations in contrast to schemes in which non-iterative corrections are applied only to the target states. Applications of EOM-EA-CCSD(T)(a)* augmented by a complex absorbing potential (CAP) to several temporary anions illustrate that shape resonances are well described by EOM-EA-CCSD, but that residual electron correlation often makes a non-negligible impact on their positions and widths. The positions of Feshbach resonances, on the other hand, are significantly improved when going from CAP-EOM-EA-CCSD to CAP-EOM-EA-CCSD(T)(a)*, but the correct energetic order of the relevant electronic states is still not achieved.
NASA Technical Reports Server (NTRS)
Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.
2013-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Mechanical placement collaboration reduced potential electromagnetic interference (EMI). Through application of newly selected electrical components and thermal analysis data, a total electronic chassis redesign was accomplished. Use of an innovative forced convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern to make efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.
Spacecraft Attitude Maneuver Planning Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2004-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial solution to initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
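A bare-bones version of the approach: constraints are folded into a cost function as penalties and a genetic algorithm searches the parameter space while carrying good solutions forward between generations (the operators and settings below are generic illustrations, not the paper's implementation):

```python
import numpy as np

def genetic_search(cost, n_params, pop_size=50, n_gen=200,
                   sigma=0.1, seed=0):
    """Minimal genetic-algorithm minimization of a penalized cost
    function over trajectory parameters."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, n_params))
    for _ in range(n_gen):
        fitness = np.array([cost(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]        # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_params) if n_params > 1 else 0
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            child += rng.normal(0.0, sigma, n_params)    # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmin([cost(ind) for ind in pop])]
```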
NASA Astrophysics Data System (ADS)
Lou, Yang
Photoacoustic computed tomography (PACT), also known as optoacoustic tomography (OAT), is an emerging imaging technique that has developed rapidly in recent years. The combination of the high optical contrast and the high acoustic resolution of this hybrid imaging technique makes it a promising candidate for human breast imaging, where conventional imaging techniques including X-ray mammography, B-mode ultrasound, and MRI suffer from low contrast, low specificity for certain breast types, and additional risks related to ionizing radiation. Though significant work has been done to push the frontier of PACT breast imaging, it is still challenging to successfully build a PACT breast imaging system and apply it to wide clinical use for various practical reasons. First, computer simulation studies are often conducted to guide imaging system designs, but the numerical phantoms employed in most previous works consist of simple geometries and do not reflect the true anatomical structures within the breast. Therefore the effectiveness of such simulation-guided PACT systems in clinical experiments will be compromised. Second, it is challenging to design a system to simultaneously illuminate the entire breast with limited laser power. Some heuristic designs have been proposed where the illumination is non-stationary during the imaging procedure, but the impact of employing such a design has not been carefully studied. Third, current PACT imaging systems are often optimized with respect to physical measures such as resolution or signal-to-noise ratio (SNR). It would be desirable to establish an assessing framework where the detectability of breast tumor can be directly quantified, so that the images produced by such optimized imaging systems are not only visually appealing, but most informative in terms of the tumor detection task. Fourth, when imaging a large three-dimensional (3D) object such as the breast, iterative reconstruction algorithms are often utilized to alleviate the need to collect densely sampled measurement data and hence the long scanning time. However, the heavy computation burden associated with iterative algorithms largely hinders their application in PACT breast imaging. This dissertation is dedicated to address these aforementioned problems in PACT breast imaging. A method that generates anatomically realistic numerical breast phantoms is first proposed to facilitate computer simulation studies in PACT. The non-stationary illumination designs for PACT breast imaging are then systematically investigated in terms of their impact on reconstructed images. We then apply signal detection theory to assess different system designs to demonstrate how an objective, task-based measure can be established for PACT breast imaging. To address the slow computation time of iterative algorithms for PACT imaging, we propose an acceleration method that employs an approximated but much faster adjoint operator during iterations, which can reduce the computation time by a factor of six without significantly compromising image quality. Finally, some clinical results are presented to demonstrate that PACT breast imaging can resolve most major and fine vascular structures within the breast, along with some pathological biomarkers that may indicate tumor development.
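The adjoint-approximation speedup described above can be sketched as a gradient-descent loop in which the exact adjoint is swapped for a cheaper approximation (the callables and their names are ours; step size and iteration count are illustrative). Using an inexact adjoint makes each iteration cheaper at the cost of an inexact gradient:

```python
def accelerated_lsq_recon(forward, approx_adjoint, data, x0,
                          step=1e-3, n_iter=50):
    """Gradient-descent sketch of iterative least-squares image
    reconstruction with an approximate adjoint operator.

    forward(x): system model applied to image estimate x (NumPy array).
    approx_adjoint(r): fast approximation of the adjoint applied to r.
    """
    x = x0.copy()
    for _ in range(n_iter):
        residual = forward(x) - data
        x = x - step * approx_adjoint(residual)   # approximate gradient step
    return x
```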
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, Russell; Pawlowski, Roger P.; Clarno, Kevin T.
Bison is being used in VERA in a variety of ways; this milestone will document an independent review of the current usage of Bison and provide guidance that will improve the accuracy, performance, and consistency in the ways that Bison is used. This task will entail running a suite of small, single- and multi-cycle problems with VERA-CS, followed by Bison, and Tiamat (inline), and evaluating the usage. It will also entail performing several detailed ramp-to-full-power solutions to compare the one-way coupled solver with the fully coupled Tiamat. This will include at least one iteration with the PHI team to incorporate some of the feedback and improve the usage. This work will also be completed in conjunction with an FMC task to evaluate the ability of Bison to model load-follow in a PWR.
Student Practices, Learning, and Attitudes When Using Computerized Ranking Tasks
NASA Astrophysics Data System (ADS)
Lee, Kevin M.; Prather, E. E.; Collaboration of Astronomy Teaching Scholars CATS
2011-01-01
Ranking Tasks are a novel type of conceptual exercise based on a technique called rule assessment. Ranking Tasks present students with a series of four to eight icons that describe slightly different variations of a basic physical situation. Students are then asked to identify the order, or ranking, of the various situations based on some physical outcome or result. The structure of Ranking Tasks makes it difficult for students to rely strictly on memorized answers and mechanical substitution of formulae. In addition, by changing the presentation of the different scenarios (e.g., photographs, line diagrams, graphs, tables, etc.) we find that Ranking Tasks require students to develop mental schema that are more flexible and robust. Ranking tasks may be implemented on the computer which requires students to order the icons through drag-and-drop. Computer implementation allows the incorporation of background material, grading with feedback, and providing additional similar versions of the task through randomization so that students can build expertise through practice. This poster will summarize the results of a study of student usage of computerized ranking tasks. We will investigate 1) student practices (How do they make use of these tools?), 2) knowledge and skill building (Do student scores improve with iteration and are there diminishing returns?), and 3) student attitudes toward using computerized Ranking Tasks (Do they like using them?). This material is based upon work supported by the National Science Foundation under Grant No. 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Determination and Control of Optical and X-Ray Wave Fronts
NASA Technical Reports Server (NTRS)
Kim, Young K.
1997-01-01
A successful design of a space-based or ground optical system requires an iterative procedure which includes the kinematics and dynamics of the system in operating environment, control synthesis and verification. To facilitate the task of designing optical wave front control systems being developed at NASA/MSFC, a multi-discipline dynamics and control tool has been developed by utilizing TREETOPS, a multi-body dynamics and control simulation, NASTRAN and MATLAB. Dynamics and control models of STABLE and ARIS were developed for TREETOPS simulation, and their simulation results are documented in this report.
Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN
Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl
2017-06-09
SNAP is a proxy application which simulates the computational motion of a neutral particle transport code, PARTISN. Here in this work, we have adapted parts of SNAP separately; we have re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement to the original schedule, and we have created multiple Kokkos implementations of the computational kernel of SNAP, displaying similar performance to the native Fortran. We then translate our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.
Developing a taxonomy for mission architecture definition
NASA Technical Reports Server (NTRS)
Neubek, Deborah J.
1990-01-01
The Lunar and Mars Exploration Program Office (LMEPO) was tasked to define candidate architectures for the Space Exploration Initiative to submit to NASA senior management and an externally constituted Outreach Synthesis Group. A systematic, structured process for developing, characterizing, and describing the alternate mission architectures, and applying this process to future studies was developed. The work was done in two phases: (1) national needs were identified and categorized into objectives achievable by the Space Exploration Initiative; and (2) a program development process was created which both hierarchically and iteratively describes the program planning process.
Burton, Harold; Sinclair, Robert J; Dixit, Sachin
2010-11-01
In the blind, occipital cortex showed robust activation to nonvisual stimuli in many prior functional neuroimaging studies. The cognitive processes represented by these activations are not fully determined, although a verbal recognition memory role has been demonstrated. In congenitally blind and sighted participants (10 per group), we contrasted responses to a vibrotactile one-back frequency retention task with 5-s delays and a vibrotactile amplitude-change task; both tasks involved the same vibration parameters. The one-back paradigm required continuous updating for working memory (WM). Findings in both groups confirmed roles in WM for right hemisphere dorsolateral prefrontal cortex (DLPFC) and dorsal/ventral attention components of posterior parietal cortex. Negative findings in bilateral ventrolateral prefrontal cortex suggested task performance without subvocalization. In bilateral occipital cortex, the blind showed comparable positive responses to both tasks, whereas WM evoked large negative responses in the sighted. Greater utilization of attention resources in the blind was suggested as causing larger responses in dorsal and ventral attention systems, right DLPFC, and persistent responses across delays between trials in somatosensory and premotor cortex. In the sighted, responses in somatosensory and premotor areas showed iterated peaks matched to stimulation trial intervals. The findings in occipital cortex of the blind suggest that tactile activations do not represent cognitive operations for a nonverbal WM task. However, these data suggest a role in sensory processing for tactile information in the blind that parallels a similar contribution for visual stimuli in occipital cortex of the sighted. © 2010 Wiley-Liss, Inc.
A multi-site cognitive task analysis for biomedical query mediation.
Hruby, Gregory W; Rasmussen, Luke V; Hanauer, David; Patel, Vimla L; Cimino, James J; Weng, Chunhua
2016-09-01
To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: "Identify potential index phenotype," "If needed, request EHR database access rights," and "Perform query and present output to medical researcher", and 8 are invalid. We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A Multi-Site Cognitive Task Analysis for Biomedical Query Mediation
Hruby, Gregory W.; Rasmussen, Luke V.; Hanauer, David; Patel, Vimla; Cimino, James J.; Weng, Chunhua
2016-01-01
Objective To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. Materials and Methods We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. Results The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: “Identify potential index phenotype,” “If needed, request EHR database access rights,” and “Perform query and present output to medical researcher”, and 8 are invalid. Discussion We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. Conclusions We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. PMID:27435950
NASA Astrophysics Data System (ADS)
Sips, A. C. C.; Giruzzi, G.; Ide, S.; Kessel, C.; Luce, T. C.; Snipes, J. A.; Stober, J. K.
2015-02-01
The development of operating scenarios is one of the key issues in the research for ITER which aims to achieve a fusion gain (Q) of ˜10, while producing 500 MW of fusion power for ≥300 s. The ITER Research plan proposes a success oriented schedule starting in hydrogen and helium, to be followed by a nuclear operation phase with a rapid development towards Q ˜ 10 in deuterium/tritium. The Integrated Operation Scenarios Topical Group of the International Tokamak Physics Activity initiates joint activities among worldwide institutions and experiments to prepare ITER operation. Plasma formation studies report robust plasma breakdown in devices with metal walls over a wide range of conditions, while other experiments use an inclined EC launch angle at plasma formation to mimic the conditions in ITER. Simulations of the plasma burn-through predict that at least 4 MW of Electron Cyclotron heating (EC) assist would be required in ITER. For H-modes at q95 ˜ 3, many experiments have demonstrated operation with scaled parameters for the ITER baseline scenario at ne/nGW ˜ 0.85. Most experiments, however, obtain stable discharges at H98(y,2) ˜ 1.0 only for βN = 2.0-2.2. For the rampup in ITER, early X-point formation is recommended, allowing auxiliary heating to reduce the flux consumption. A range of plasma inductance (li(3)) can be obtained from 0.65 to 1.0, with the lowest values obtained in H-mode operation. For the rampdown, the plasma should stay diverted maintaining H-mode together with a reduction of the elongation from 1.85 to 1.4. Simulations show that the proposed rampup and rampdown schemes developed since 2007 are compatible with the present ITER design for the poloidal field coils. At 13-15 MA and densities down to ne/nGW ˜ 0.5, long pulse operation (>1000 s) in ITER is possible at Q ˜ 5, useful to provide neutron fluence for Test Blanket Module assessments. ITER scenario preparation in hydrogen and helium requires high input power (>50 MW). H-mode operation in helium may be possible at input powers above 35 MW at a toroidal field of 2.65 T, for studying H-modes and ELM mitigation. In hydrogen, H-mode operation is expected to be marginal, even at 2.65 T with 60 MW of input power. Simulation code benchmark studies using hybrid and steady state scenario parameters have proved to be a very challenging and lengthy task of testing suites of codes, consisting of tens of sophisticated modules. Nevertheless, the general basis of the modelling appears sound, with substantial consistency among codes developed by different groups. For a hybrid scenario at 12 MA, the code simulations give a range for Q = 6.5-8.3, using 30 MW neutral beam injection and 20 MW ICRH. For non-inductive operation at 7-9 MA, the simulation results show more variation. At high edge pedestal pressure (Tped ˜ 7 keV), the codes predict Q = 3.3-3.8 using 33 MW NB, 20 MW EC, and 20 MW ion cyclotron to demonstrate the feasibility of steady-state operation with the day-1 heating systems in ITER. Simulations using a lower edge pedestal temperature (˜3 keV) but improved core confinement obtain Q = 5-6.5, when ECCD is concentrated at mid-radius and ˜20 MW off-axis current drive (ECCD or LHCD) is added. Several issues remain to be studied, including plasmas with dominant electron heating, mitigation of transient heat loads integrated in scenario demonstrations and (burn) control simulations in ITER scenarios.
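For orientation, the fusion gain Q quoted above is the ratio of fusion power to externally supplied heating power; a minimal worked example using only the figures given in this abstract:

```latex
Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}}, \qquad
Q \approx 10 \ \text{at} \ P_{\text{fusion}} = 500\,\text{MW}
\;\Longrightarrow\; P_{\text{heating}} \approx 50\,\text{MW}
```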
Kinematic path planning for space-based robotics
NASA Astrophysics Data System (ADS)
Seereeram, Sanjeev; Wen, John T.
1998-01-01
Future space robotics tasks require manipulators of significant dexterity, achievable through kinematic redundancy and modular reconfigurability, but with a corresponding complexity of motion planning. Existing research aims for full autonomy and completeness, at the expense of efficiency, generality, or even user friendliness. Commercial simulators require user-taught joint paths, a significant burden for assembly tasks subject to collision avoidance, kinematic and dynamic constraints. Our research has developed a Kinematic Path Planning (KPP) algorithm which bridges the gap between research and industry to produce a powerful and useful product. KPP consists of three key components: path-space iterative search, probabilistic refinement, and an operator guidance interface. The KPP algorithm has been successfully applied to the SSRMS for PMA relocation and dual-arm truss assembly tasks. Other KPP capabilities include Cartesian path following, hybrid Cartesian endpoint/intermediate via-point planning, redundancy resolution and path optimization. KPP incorporates supervisory (operator) input at any level of detail to influence the solution, yielding desirable/predictable paths for multi-jointed arms, avoiding obstacles and obeying manipulator limits. This software will eventually form a marketable robotic planner suitable for commercialization in conjunction with existing robotic CAD/CAM packages.
Adaptive distance metric learning for diffusion tensor image segmentation.
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858
FENDL: International reference nuclear data library for fusion applications
NASA Astrophysics Data System (ADS)
Pashchenko, A. B.; Wienke, H.; Ganesan, S.
1996-10-01
The IAEA Nuclear Data Section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available Fusion Evaluated Nuclear Data Library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries covering the necessary nuclear input for all physics and engineering aspects of the material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides; FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format; FENDL/DS-1.0: neutron activation data for dosimetry by foil activation; FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n), He-3(d,p) extracted from ENDF/B-6 and processed; FENDL/E-1.0: data for coupled neutron-photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3, or BROND-2, and a photon-atom interaction data library for 34 elements. The benchmark validation of FENDL-1 as required by the customer, i.e. the ITER team, is considered to be a task of high priority in the coming months. The well tested and validated nuclear data libraries in processed form of FENDL-2 are expected to be ready by mid-1996 for use by the ITER team in the final phase of ITER EDA after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be electronically transferred to users from the IAEA Nuclear Data Section online system through INTERNET. A grand total of 54 (sub)directories with 845 files, with a total size of about 2 million blocks or about 1 Gigabyte (1 block = 512 bytes) of numerical data, is currently available on-line.
Morie, K P; De Sanctis, P; Foxe, J J
2014-07-25
Task execution almost always occurs in the context of reward-seeking or punishment-avoiding behavior. As such, ongoing task-monitoring systems are influenced by reward anticipation systems. In turn, when a task has been executed either successfully or unsuccessfully, future iterations of that task will be re-titrated on the basis of the task outcome. Here, we examined the neural underpinnings of the task-monitoring and reward-evaluation systems to better understand how they govern reward-seeking behavior. Twenty-three healthy adult participants performed a task where they accrued points that equated to real world value (gift cards) by responding as rapidly as possible within an allotted timeframe, while success rate was titrated online by changing the duration of the timeframe dependent on participant performance. Informative cues initiated each trial, indicating the probability of potential reward or loss (four levels from very low to very high). We manipulated feedback by first informing participants of task success/failure, after which a second feedback signal indicated actual magnitude of reward/loss. High-density electroencephalography (EEG) recordings allowed for examination of event-related potentials (ERPs) to the informative cues and in turn, to both feedback signals. Distinct ERP components associated with reward cues, task-preparatory and task-monitoring processes, and reward feedback processes were identified. Unsurprisingly, participants displayed increased ERP amplitudes associated with task-preparatory processes following cues that predicted higher chances of reward. They also rapidly updated reward and loss prediction information dependent on task performance after the first feedback signal. Finally, upon reward receipt, initial reward probability was no longer taken into account. Rather, ERP measures suggested that only the magnitude of actual reward or loss was now processed. Reward and task-monitoring processes are clearly dissociable, but interact across very fast timescales to update reward predictions as information about task success or failure is accrued. Careful delineation of these processes will be useful in future investigations in clinical groups where such processes are suspected of having gone awry. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Morie, Kristen P.; De Sanctis, Pierfilippo; Foxe, John J.
2014-01-01
Task execution almost always occurs in the context of reward-seeking or punishment-avoiding behavior. As such, ongoing task monitoring systems are influenced by reward anticipation systems. In turn, when a task has been executed either successfully or unsuccessfully, future iterations of that task will be re-titrated on the basis of the task outcome. Here, we examined the neural underpinnings of the task-monitoring and reward-evaluation systems to better understand how they govern reward seeking behavior. Twenty-three healthy adult participants performed a task where they accrued points that equated to real world value (gift cards) by responding as rapidly as possible within an allotted timeframe, while success rate was titrated online by changing the duration of the timeframe dependent on participant performance. Informative cues initiated each trial, indicating the probability of potential reward or loss (four levels from very low to very high). We manipulated feedback by first informing participants of task success/failure, after which a second feedback signal indicated actual magnitude of reward/loss. High-density EEG recordings allowed for examination of event-related potentials (ERPs) to the informative cues and in turn, to both feedback signals. Distinct ERP components associated with reward cues, task preparatory and task monitoring processes, and reward feedback processes were identified. Unsurprisingly, participants displayed increased ERP amplitudes associated with task preparatory processes following cues that predicted higher chances of reward. They also rapidly updated reward and loss prediction information dependent on task performance after the first feedback signal. Finally, upon reward receipt, initial reward probability was no longer taken into account. Rather, ERP measures suggested that only the magnitude of actual reward or loss was now processed. Reward and task monitoring processes are clearly dissociable, but interact across very fast timescales to update reward predictions as information about task success or failure is accrued. Careful delineation of these processes will be useful in future investigations in clinical groups where such processes are suspected of having gone awry. PMID:24836852
Eye-in-Hand Manipulation for Remote Handling: Experimental Setup
NASA Astrophysics Data System (ADS)
Niu, Longchuan; Suominen, Olli; Aref, Mohammad M.; Mattila, Jouni; Ruiz, Emilio; Esque, Salvador
2018-03-01
A prototype for eye-in-hand manipulation in the context of remote handling in the International Thermonuclear Experimental Reactor (ITER) is presented in this paper. The setup consists of an industrial robot manipulator with a modified open control architecture and equipped with a pair of stereoscopic cameras, a force/torque sensor, and pneumatic tools. It is controlled through a haptic device in a mock-up environment. The industrial robot controller has been replaced by a single industrial PC running Xenomai that has a real-time connection to both the robot controller and another Linux PC running as the controller for the haptic device. The new remote handling control environment enables further development of advanced control schemes for autonomous and semi-autonomous manipulation tasks. This setup benefits from a stereovision system for accurate tracking of the target objects with irregular shapes. The overall environmental setup successfully demonstrates the required robustness and precision that remote handling tasks need.
Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit
O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R
2008-01-01
Background Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Results Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Conclusion Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers. PMID:18328109
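As a concrete illustration of the iterator-based interface described above, here is a minimal sketch using the documented Pybel API; the file name input.sdf is a placeholder, and the import path assumes OpenBabel 3.x (older releases expose a top-level pybel module).

```python
# Iterate over all molecules in an SDF file, compute a fingerprint,
# and emit a SMILES string for each -- the common tasks Pybel wraps.
from openbabel import pybel  # OpenBabel 3.x; older versions: import pybel

for mol in pybel.readfile("sdf", "input.sdf"):   # lazy iterator over molecules
    fp = mol.calcfp()                            # default fingerprint
    smiles = mol.write("smi")                    # convert without writing a file
    print(mol.title, mol.molwt, smiles.strip())
```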
Optimism as a Prior Belief about the Probability of Future Reward
Kalra, Aditi; Seriès, Peggy
2014-01-01
Optimists hold positive a priori beliefs about the future. In Bayesian statistical theory, a priori beliefs can be overcome by experience. However, optimistic beliefs can at times appear surprisingly resistant to evidence, suggesting that optimism might also influence how new information is selected and learned. Here, we use a novel Pavlovian conditioning task, embedded in a normative framework, to directly assess how trait optimism, as classically measured using self-report questionnaires, influences how learning about the association between visual targets and reward progresses. We find that trait optimism relates to an a priori belief about the likelihood of rewards, but not losses, in our task. Critically, this positive belief behaves like a probabilistic prior, i.e. its influence reduces with increasing experience. Contrary to findings in the literature related to unrealistic optimism and self-beliefs, it does not appear to influence the iterative learning process directly. PMID:24853098
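The claim that a prior's influence shrinks with experience can be made concrete with a standard Beta-Bernoulli update; the sketch below illustrates that general point only, not the paper's model, and the prior pseudo-counts and reward probability are invented for the example.

```python
# An "optimistic" Beta(8, 2) prior (prior mean 0.8) is gradually overcome
# by outcomes drawn from a true reward probability of 0.3.
import random

a, b = 8.0, 2.0                # optimistic prior pseudo-counts
true_p = 0.3
random.seed(0)
for n in (0, 10, 100, 1000):
    wins = sum(random.random() < true_p for _ in range(n))
    post_mean = (a + wins) / (a + b + n)   # Beta posterior mean after n trials
    print(n, round(post_mean, 3))          # drifts from 0.8 toward ~0.3
```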
Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit.
O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R
2008-03-09
Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers.
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in a multiphase flow system. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.
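For readers unfamiliar with the regularization step, the simplest Tikhonov-regularized least-squares reconstruction has a closed form; the sketch below shows that baseline form only (the paper's actual loss adds robustness and low-rank terms solved with a split Bregman scheme), and the matrix sizes and lambda are illustrative.

```python
# Baseline Tikhonov reconstruction:
#   x* = argmin ||A x - b||^2 + lam ||x||^2  =>  x* = (A^T A + lam I)^{-1} A^T b
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))           # under-determined sensitivity matrix
x_true = np.zeros(100); x_true[10:20] = 1.0  # simple "imaging target"
b = A @ x_true + 0.01 * rng.standard_normal(60)  # noisy capacitance data

lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(100), A.T @ b)
```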
Image annotation by deep neural networks with attention shaping
NASA Astrophysics Data System (ADS)
Zheng, Kexin; Lv, Shaohe; Ma, Fang; Chen, Fei; Jin, Chi; Dou, Yong
2017-07-01
Image annotation is a task of assigning semantic labels to an image. Recently, deep neural networks with visual attention have been utilized successfully in many computer vision tasks. In this paper, we show that the conventional attention mechanism is easily misled by the salient class, i.e., the attended region always contains part of the image area describing the content of the salient class at different attention iterations. To this end, we propose a novel attention shaping mechanism, which aims to maximize the non-overlapping area between consecutive attention processes by taking into account the history of previous attention vectors. Several weighting policies are studied to utilize the history information in different manners. In two benchmark datasets, i.e., PASCAL VOC2012 and MIRFlickr-25k, the average precision is improved by up to 10% in comparison with the state-of-the-art annotation methods.
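A toy sketch of the shaping idea follows: raw attention scores are penalized by a decayed sum of earlier attention maps before the softmax, so consecutive maps overlap less. The exponential-decay weighting is one hypothetical policy chosen for illustration, not necessarily one of the policies studied in the paper.

```python
# Suppress regions attended in earlier iterations so that consecutive
# attention maps cover different image areas.
import numpy as np

def shaped_attention(scores, history, lam=2.0, decay=0.5):
    """scores: raw attention logits per region; history: past attention maps."""
    penalty = sum(decay**k * h for k, h in enumerate(reversed(history)))
    z = scores - lam * penalty          # shift mass away from attended regions
    e = np.exp(z - np.max(z))
    return e / e.sum()                  # softmax over regions

scores = np.array([3.0, 1.0, 0.5, 0.2])  # region 0 is the salient class
history = []
for _ in range(3):
    att = shaped_attention(scores, history)
    history.append(att)
    print(att.round(2))                  # attention migrates off region 0
```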
Altered predictive capability of the brain network EEG model in schizophrenia during cognition.
Gomez-Pilar, Javier; Poza, Jesús; Gómez, Carlos; Northoff, Georg; Lubeiro, Alba; Cea-Cañas, Benjamín B; Molina, Vicente; Hornero, Roberto
2018-05-12
The study of the mechanisms involved in cognition is of paramount importance for the understanding of the neurobiological substrates in psychiatric disorders. Hence, this research is aimed at exploring the brain network dynamics during a cognitive task. Specifically, we analyze the predictive capability of the pre-stimulus theta activity to ascertain the functional brain dynamics during cognition in both healthy and schizophrenia subjects. Firstly, EEG recordings were acquired during a three-tone oddball task from fifty-one healthy subjects and thirty-five schizophrenia patients. Secondly, phase-based coupling measures were used to generate the time-varying functional network for each subject. Finally, pre-stimulus network connections were iteratively modified according to different models of network reorganization. This adjustment was applied by minimizing the prediction error through recurrent iterations, following the predictive coding approach. Both controls and schizophrenia patients followed a reinforcement of the secondary neural pathways (i.e., pathways between cortical brain regions weakly connected during pre-stimulus) for most of the subjects, though the ratio of controls that exhibited this behavior was statistically significantly higher than that of patients. These findings suggest that schizophrenia is associated with an impaired ability to modify brain network configuration during cognition. Furthermore, we provide direct evidence that the changes in phase-based brain network parameters from pre-stimulus to cognitive response in the theta band are closely related to the performance in important cognitive domains. Our findings not only contribute to the understanding of healthy brain dynamics, but also shed light on the altered predictive neuronal substrates in schizophrenia. Copyright © 2018 Elsevier B.V. All rights reserved.
Exploring the knowledge behind predictions in everyday cognition: an iterated learning study.
Stephens, Rachel G; Dunn, John C; Rao, Li-Lin; Li, Shu
2015-10-01
Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.
NASA Astrophysics Data System (ADS)
Ongena, Jef
2012-07-01
The JET Task Force Heating is proud to present this special issue. It is the result of hard and dedicated work by everybody participating in the Task Force over the last four years and gives an overview of the experimental and theoretical results obtained in the period 2008-2010 with radio frequency heating of JET fusion plasmas. Topics studied and reported in this issue are: investigations into the operation of lower hybrid heating accompanied by new modeling results; new experimental results and insights into the physics of various ion cyclotron range of frequencies (ICRF) heating scenarios; progress in studies of intrinsic and ion cyclotron wave-induced plasma rotation and flows; a summary of the developments over the last years in designing an ion cyclotron radiofrequency heating (ICRH) system that can cope with the presence of fast load variations in the edge, caused, e.g., by pellets or edge localized modes (ELMs) during H-Mode operation; an overview of the results obtained with the ITER-like antenna operating in H-Mode with a packed array of straps and power densities close to those of the projected ITER ICRH antenna; and, finally, a summary of the results obtained in applying ion cyclotron waves for wall conditioning of the tokamak. This issue would not have been possible without the strong motivation and efforts (sometimes truly heroic) of all colleagues of the JET Task Force Heating. A sincere word of thanks, therefore, to all authors and co-authors involved in the experiments, analysis and compilation of the papers. It was a special privilege to work with all of them during the past very intense years. Thanks also to all other European and non-European scientists who contributed to the JET scientific programme, the operations team of JET and the colleagues of the Close Support Unit in Culham. Thanks also to the editors, Editorial Board and referees of Plasma Physics and Controlled Fusion, together with the publishing staff of IOPP, who have not only supported but also contributed very substantially to this initiative. Without their dedication this issue would not have been possible in its present form. A special word of thanks to Marie-Line Mayoral and Joelle Mailloux for their precious help and very active support in running the JET Task Force Heating over the last years. Without Joelle and Marie-Line it would have been a much more daunting task to prepare JET operations, monitor progress during the experiments and edit the papers that are compiled here.
Wang, Jiexin; Uchibe, Eiji; Doya, Kenji
2017-01-01
EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters using the maximum of a lower bound of expected return, which makes gradient calculation and learning rate tuning unnecessary. Previous algorithms like Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration implemented mechanisms to discard useless low-return episodes either implicitly or using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean, and multiples of SDs from the mean. The simulation results of benchmark tasks of pendulum swing-up and cart-pole balancing, and standing up and balancing of a two-wheeled smartphone robot, showed improved performance. We further implemented the adaptive baseline with mean in our two-wheeled smartphone robot hardware to test its performance in the standing up and balancing task, and a view-based approaching task. Our results showed that with the adaptive baseline, the method outperformed the previous algorithms and achieved faster and more precise behaviors at a higher success rate. PMID:28167910
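The core update can be sketched in a few lines: sample parameters, compute returns, discard episodes whose return falls below an adaptive baseline (here the mean plus k standard deviations), and take a reward-weighted average of the survivors. This is an illustrative reconstruction of the general EM-style scheme, not the authors' exact algorithm; the toy objective is invented.

```python
# Reward-weighted EM-style update with an adaptive baseline.
import numpy as np

def em_update(thetas, returns, k=0.0):
    """thetas: (n, d) sampled parameters; returns: (n,) episode returns."""
    baseline = returns.mean() + k * returns.std()
    keep = returns > baseline                    # discard worse samples
    w = returns[keep] - baseline                 # non-negative weights
    return (w[:, None] * thetas[keep]).sum(0) / w.sum()

rng = np.random.default_rng(1)
theta = np.zeros(2)
for _ in range(50):
    samples = theta + 0.3 * rng.standard_normal((20, 2))  # exploration noise
    returns = -np.sum((samples - 1.0) ** 2, axis=1)       # toy objective
    theta = em_update(samples, returns)
print(theta.round(2))   # converges near the optimum [1. 1.]
```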
Berger, Marc L; Sox, Harold; Willke, Richard J; Brixner, Diana L; Eichler, Hans-Georg; Goettsch, Wim; Madigan, David; Makady, Amr; Schneeweiss, Sebastian; Tarricone, Rosanna; Wang, Shirley V; Watkins, John; Mullins, C Daniel
2017-09-01
Real-world evidence (RWE) includes data from retrospective or prospective observational studies and observational registries and provides insights beyond those addressed by randomized controlled trials. RWE studies aim to improve health care decision making. The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the International Society for Pharmacoepidemiology (ISPE) created a task force to make recommendations regarding good procedural practices that would enhance decision makers' confidence in evidence derived from real-world data (RWD) studies. Peer review by ISPOR/ISPE members and task force participants provided a consensus-building iterative process for the topics and framing of recommendations. The ISPOR/ISPE Task Force recommendations cover seven topics, such as study registration, replicability, and stakeholder involvement in RWE studies. These recommendations, in concert with earlier recommendations about study methodology, provide a trustworthy foundation for the expanded use of RWE in health care decision making. The focus of these recommendations is good procedural practices for studies that test a specific hypothesis in a specific population. We recognize that some of the recommendations in this report may not be widely adopted without appropriate incentives from decision makers, journal editors, and other key stakeholders. Copyright © 2017. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, C.B.; Haglund, R.C.; Miller, M.E.
1996-12-31
The Vanadium/Lithium system has been the recent focus of ANL's Blanket Technology Program, and for the last several years, ANL's Liquid Metal Blanket activities have been carried out in direct support of the ITER (International Thermonuclear Experimental Reactor) breeding blanket task area. A key feasibility issue for the ITER Vanadium/Lithium breeding blanket is the development of insulator coatings. Design calculations (Hua and Gohar) show that an electrically insulating layer is necessary to maintain an acceptably low magnetohydrodynamic (MHD) pressure drop in the current ITER design. Consequently, the decision was made to convert Argonne's Liquid Metal EXperiment (ALEX) from a 200 °C NaK facility to a 350 °C lithium facility. The upgraded facility was designed to produce MHD pressure drop data, test section voltage distributions, and heat transfer data for mid-scale test sections and blanket mockups at Hartmann numbers (M) and interaction parameters (N) in the range of 10³ to 10⁵ in lithium at 350 °C. Following completion of the upgrade work, a short performance test was conducted, followed by two longer, multiple-hour MHD tests, all at 230 °C. The modified ALEX facility performed up to expectations in the testing. MHD pressure drop and test section voltage distributions were collected at Hartmann numbers of 1000. 4 refs., 2 figs.
A new design approach to innovative spectrometers. Case study: TROPOLITE
NASA Astrophysics Data System (ADS)
Volatier, Jean-Baptiste; Baümer, Stefan; Kruizinga, Bob; Vink, Rob
2014-05-01
Designing a novel optical system is a nested iterative process. The optimization loop, from a starting point to a final system, is already mostly automated. However, this loop is part of a wider loop which is not. This wider loop starts with an optical specification and ends with a manufacturability assessment. When designing a new spectrometer with emphasis on weight and cost, numerous iterations between the optical and mechanical designers are inevitable. The optical designer must then be able to reliably produce optical designs based on new input gained from multidisciplinary studies. This paper presents a procedure that can automatically generate new starting points based on any kind of input or new constraint that might arise. These starting points can then be handed over to a generic optimization routine, making the design tasks extremely efficient. The optical designer's job is then not to design optical systems, but to meta-design a procedure that produces optical systems, paving the way for system-level optimization. We present here this procedure and its application to the design of TROPOLITE, a lightweight push-broom imaging spectrometer.
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphics processing units (GPUs). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
User-Centered Design of a Controller-Free Game for Hand Rehabilitation.
Proffitt, Rachel; Sevick, Marisa; Chang, Chien-Yen; Lange, Belinda
2015-08-01
The purpose of this study was to develop and test a hand therapy game using the Microsoft (Redmond, WA) Kinect® sensor with a customized videogame. Using the Microsoft Kinect sensor as an input device, a customized game for hand rehabilitation was developed that required players to perform various gestures to accomplish a virtual cooking task. Over the course of two iterative sessions, 11 participants with different levels of wrist, hand, and finger injuries interacted with the game in a single session, and user perspectives and feedback were obtained via a questionnaire and semistructured interviews. Participants reported high levels of enjoyment, specifically related to the challenging nature of the game and the visuals. Participant feedback from the first iterative round of testing was incorporated to produce a second prototype for the second round of testing. Additionally, participants expressed the desire to have the game adapt and be customized to their unique hand therapy needs. The game tested in this study has the potential to be a unique and cutting-edge method for the delivery of hand rehabilitation for a diverse population.
Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.
Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng
2015-02-01
This paper presents a general framework for solving the low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve a sparse-only or low-rank-only minimization problem with squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones that depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
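To make the reweighting idea concrete, here is a minimal IRLS sketch for a smoothed sparse least-squares problem; it illustrates the alternating weight/variable updates only, not the paper's joint low-rank-plus-sparse formulation, and the problem sizes, lambda, and epsilon are invented for the example.

```python
# IRLS for  min_x ||A x - b||^2 + lam * sum_i sqrt(x_i^2 + eps):
# each iteration updates weights from the current x, then solves a
# weighted ridge problem in closed form.
import numpy as np

def irls(A, b, lam=0.1, eps=1e-6, iters=50):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        w = 1.0 / np.sqrt(x**2 + eps)                    # reweighting step
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
x_hat = irls(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # recovers the sparse support
```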
Clay, Zanna; Pople, Sally; Hood, Bruce; Kita, Sotaro
2014-08-01
Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Herz, A.; Herz, E.; Center, K.; George, P.; Axelrad, P.; Mutschler, S.; Jones, B.
2016-09-01
The Space Surveillance Network (SSN) is tasked with the increasingly difficult mission of detecting, tracking, cataloging and identifying artificial objects orbiting the Earth, including active and inactive satellites, spent rocket bodies, and fragmented debris. Much of the architecture and operations of the SSN are limited and outdated. Efforts are underway to modernize some elements of the systems. Even so, the ability to maintain the best current Space Situational Awareness (SSA) picture and identify emerging events in a timely fashion could be significantly improved by leveraging non-traditional sensor sites. Orbit Logic, the University of Colorado and the University of Texas at Austin are developing an innovative architecture and operations concept to coordinate the tasking and observation information processing of non-traditional assets based on information-theoretic approaches. These confirmed tasking schedules and the resulting data can then be used to "inform" the SSN tasking process. The 'Heimdall Web' system comprises core tasking optimization components and accompanying Web interfaces within a secure, split architecture that will for the first time allow non-traditional sensors to support SSA and improve SSN tasking. Heimdall Web application components appropriately score/prioritize space catalog objects based on covariance, priority, observability, expected information gain, and probability of detect, and then coordinate an efficient sensor observation schedule for non-SSN sensors contributing to the overall SSA picture maintained by the Joint Space Operations Center (JSpOC). The Heimdall Web Ops concept supports sensor participation levels of "Scheduled", "Tasked" and "Contributing". Scheduled and Tasked sensors are provided optimized observation schedules or object tracking lists from central algorithms, while Contributing sensors review and select from a list of "desired track objects". All sensors are "Web Enabled" for tasking and feedback, supplying observation schedules, confirmed observations and related data back to Heimdall Web to complete the feedback loop for the next scheduling iteration.
NASA Astrophysics Data System (ADS)
Solomon, Justin; Ba, Alexandre; Diao, Andrew; Lo, Joseph; Bier, Elianna; Bochud, François; Gehm, Michael; Samei, Ehsan
2016-03-01
In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevance for modern non-linear CT systems due to possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters that were most reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylinder phantom (165 mm in diameter and 30 mm in height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, 4 repeats per signal). The phantom was voxelized and input into a commercial multi-material 3D printer (Objet Connex 350), with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm. Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2 backgrounds = 120 total conditions). Based on the observer model results, the dose reduction potential of SAFIRE was computed and compared between the uniform and textured phantom. The dose reduction potential of SAFIRE was found to be 23% based on the uniform phantom and 17% based on the textured phantom. This discrepancy demonstrates the need to consider background texture when assessing non-linear reconstruction algorithms.
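The observer-model step above can be illustrated with a minimal channelized Hotelling observer computation; the random "ROIs" and random channel matrix are stand-ins for real phantom data and channel profiles (e.g., Gabor channels), so only the linear algebra is meant to be instructive.

```python
# Channelized Hotelling observer: project ROIs onto channels, form the
# Hotelling template from the pooled channel covariance, report d'.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_ch, n_img = 256, 8, 200
U = rng.standard_normal((n_pix, n_ch))            # channel profiles (stand-in)
signal = np.zeros(n_pix); signal[100:120] = 0.5   # low-contrast insert profile
absent = rng.standard_normal((n_img, n_pix))      # noise-only ROIs
present = absent + signal                         # signal-present ROIs

va, vp = absent @ U, present @ U                  # channel outputs
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))           # pooled channel covariance
dv = vp.mean(axis=0) - va.mean(axis=0)
d_prime = np.sqrt(dv @ np.linalg.solve(S, dv))    # detectability index
print(round(float(d_prime), 2))
```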
Space Station: The next iteration
NASA Astrophysics Data System (ADS)
Foley, Theresa M.
1995-01-01
NASA's international space station is nearing the completion stage of its troublesome 10-year design phase. With a revised design and new management team, NASA is tasked to deliver the station on time at a budget acceptable to both Congress and the White House. For the next three years, NASA is using tried-and-tested Russian hardware as the technical centerpiece of the station. The new station configuration consists of eight pressurized modules in which the crew can live and work; a long metal truss to connect the pieces; a robot arm for exterior jobs; a solar power system; and a propulsion system for moving the facility in space.
Application of IPAD to missile design
NASA Technical Reports Server (NTRS)
Santa, J. E.; Whiting, T. R.
1974-01-01
The application of an integrated program for aerospace-vehicle design (IPAD) to the design of a tactical missile is examined. The feasibility of modifying a proposed IPAD system for aircraft design work for use in missile design is evaluated. The tasks, cost, and schedule for the modification are presented. The basic engineering design process is described, explaining how missile design is achieved through iteration of six logical problem solving functions throughout the system studies, preliminary design, and detailed design phases of a new product. Existing computer codes used in various engineering disciplines are evaluated for their applicability to IPAD in missile design.
The detection methods of dynamic objects
NASA Astrophysics Data System (ADS)
Knyazev, N. L.; Denisova, L. A.
2018-01-01
The article deals with the application of cluster analysis methods to the task of aircraft detection, based on partitioning samples of navigation parameters into groups (clusters). A modified cluster analysis method is suggested for searching for and detecting objects, iteratively combining them into clusters and then counting the clusters to increase the accuracy of aircraft detection. The course of the method's operation and the features of its implementation are considered. In conclusion, the efficiency of the proposed method of exact cluster analysis for finding targets is shown.
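A toy version of the iterative merge-and-count step makes the idea concrete; the 2-D detection points and the merge radius below are invented for illustration, and real navigation-parameter clustering would use an application-specific distance.

```python
# Iteratively merge nearby detections into clusters, then count clusters.
import numpy as np

def merge_clusters(points, radius=1.0):
    clusters = [p[None, :] for p in points]      # start with singleton clusters
    merged = True
    while merged:                                # repeat until no merge occurs
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                ci, cj = clusters[i].mean(0), clusters[j].mean(0)
                if np.linalg.norm(ci - cj) < radius:   # centroids close: merge
                    clusters[i] = np.vstack([clusters[i], clusters[j]])
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters

pts = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9], [9.0, 0.0]])
print(len(merge_clusters(pts)))                  # -> 3 detected objects
```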
McCormick, Cornelia; Ciaramelli, Elisa; De Luca, Flavia; Maguire, Eleanor A
2018-03-15
The hippocampus and ventromedial prefrontal cortex (vmPFC) are closely connected brain regions whose functions are still debated. In order to offer a fresh perspective on understanding the contributions of these two brain regions to cognition, in this review we considered cognitive tasks that usually elicit deficits in hippocampal-damaged patients (e.g., autobiographical memory retrieval), and examined the performance of vmPFC-lesioned patients on these tasks. We then took cognitive tasks where performance is typically compromised following vmPFC damage (e.g., decision making), and looked at how these are affected by hippocampal lesions. Three salient motifs emerged. First, there are surprising gaps in our knowledge about how hippocampal and vmPFC patients perform on tasks typically associated with the other group. Second, while hippocampal or vmPFC damage seems to adversely affect performance on so-called hippocampal tasks, the performance of hippocampal and vmPFC patients clearly diverges on classic vmPFC tasks. Third, although performance appears analogous on hippocampal tasks, on closer inspection, there are significant disparities between hippocampal and vmPFC patients. Based on these findings, we suggest a tentative hierarchical model to explain the functions of the hippocampus and vmPFC. We propose that the vmPFC initiates the construction of mental scenes by coordinating the curation of relevant elements from neocortical areas, which are then funneled into the hippocampus to build a scene. The vmPFC then engages in iterative re-initiation via feedback loops with neocortex and hippocampus to facilitate the flow and integration of the multiple scenes that comprise the coherent unfolding of an extended mental event. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.
2014-08-01
The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the TTF on the image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on the contrast and noise when using the lung kernel. Those results were then introduced into the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
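For reference, the NPW figure of merit used here is the matched-filter SNR; one common form, written with the TTF substituted for the MTF as described above (s is the expected imaged signal, K the noise covariance, W the task function and NPS the noise power spectrum):

```latex
\mathrm{SNR}^2_{\mathrm{NPW}}
  = \frac{(\mathbf{s}^{\mathsf T}\mathbf{s})^2}{\mathbf{s}^{\mathsf T}\mathbf{K}\,\mathbf{s}}
  = \frac{\left[\int |W(f)|^2\,\mathrm{TTF}^2(f)\,df\right]^2}
         {\int |W(f)|^2\,\mathrm{TTF}^2(f)\,\mathrm{NPS}(f)\,df}
```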
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations are also presented.
Performance evaluation of the Personal Mobility and Manipulation Appliance (PerMMA).
Wang, Hongwu; Xu, Jijie; Grindle, Garrett; Vazquez, Juan; Salatin, Ben; Kelleher, Annmarie; Ding, Dan; Collins, Diane M; Cooper, Rory A
2013-11-01
The Personal Mobility and Manipulation Appliance (PerMMA) is a recently developed personal assistance robot created to provide people with severe physical disabilities enhanced assistance in both mobility and manipulation. PerMMA aims to improve functional independence when a personal care attendant is not available on site. PerMMA integrates a smart powered wheelchair and two dexterous robotic arms to assist its users in completing essential mobility and manipulation tasks during basic and instrumental activities of daily living (ADL). Two user interfaces were developed: a local control interface and a remote operator controller. This paper reports on the evaluation of PerMMA with end users completing basic ADL tasks. Participants with both lower and upper extremity impairments (N=15) were recruited to operate PerMMA and complete up to five ADL tasks in a single session of no more than two hours (to avoid participant fatigue or frustration). The performance of PerMMA was evaluated with participants completing ADL tasks under two different control modes: local mode and cooperative control. The users' task completion performance and answers on pre/post-evaluation questionnaires demonstrated not only the ease of learning and usefulness of PerMMA, but also their attitudes toward assistance from advanced technology like PerMMA. As part of the iterative development process, the results of this work will serve as supporting evidence to identify design criteria and other areas for improvement of PerMMA. Copyright © 2013 IPEM. All rights reserved.
Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.
Islam, R; Weir, C; Del Fiol, G
2016-01-01
Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understanding and measuring clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed and audio-recorded, and the recordings were transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded the complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components: a clinical task complexity model with 13 complexity-contributing factors and 7 dimensions, and a patient complexity model with 11 complexity-contributing factors and 5 dimensions. A measurement model encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
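Inter-rater reliability via Cohen's kappa, as reported above, reduces to a short computation; a minimal sketch for two raters' categorical codes (the example codes are invented):

```python
import numpy as np

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    a, b = np.asarray(codes_a), np.asarray(codes_b)
    cats = np.union1d(a, b)
    p_o = np.mean(a == b)                                       # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Illustrative coding of 10 transcript segments by two raters
print(cohens_kappa(list("AABBCCABCA"), list("AABBCCBBCA")))
```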
Improved silicon carbide for advanced heat engines
NASA Technical Reports Server (NTRS)
Whalen, Thomas J.
1988-01-01
This is the third annual technical report for the program entitled Improved Silicon Carbide for Advanced Heat Engines, covering the period February 16, 1987 to February 15, 1988. The objective of the original program was the development of high-strength, high-reliability silicon carbide parts with complex shapes suitable for use in advanced heat engines. Injection molding was selected as the forming method because it is capable of forming complex parts adaptable for mass production on an economically sound basis. The goals of the revised program are to reach a Weibull characteristic strength of 550 MPa (80 ksi) and a Weibull modulus of 16 for bars tested in 4-point loading. Two tasks are discussed: Task 1, which involves materials and process improvements, and Task 2, which is an MOR bar matrix to improve strength and reliability. Many statistically designed experiments were completed under Task 1, improving the composition of the batches, the mixing of the powders, and the sinter and anneal cycles. The best results were obtained with an attritor mixing process, which yielded strengths in excess of 550 MPa (80 ksi) and an individual Weibull modulus of 16.8 for a 9-sample group. Strengths measured at 1200 and 1400 C were equal to the room-temperature strength. Annealing of machined test bars significantly improved the strength. Molding yields were measured and flaw distributions were observed to follow a Poisson process. The second iteration of the Task 2 matrix experiment is described.
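The Weibull targets quoted above (characteristic strength sigma_0 and modulus m) are conventionally estimated by linearizing the two-parameter Weibull model P_f = 1 - exp[-(sigma/sigma_0)^m]; a minimal sketch with invented fracture strengths:

```python
import numpy as np

# Hypothetical 4-point MOR fracture strengths in MPa (illustrative only)
sigma = np.sort(np.array([512., 538., 547., 561., 570., 583., 595., 604., 621.]))
n = sigma.size
pf = (np.arange(1, n + 1) - 0.5) / n           # median-rank failure probabilities

# Linearized Weibull model: ln(-ln(1 - Pf)) = m*ln(sigma) - m*ln(sigma0)
y = np.log(-np.log(1.0 - pf))
m, c = np.polyfit(np.log(sigma), y, 1)
sigma0 = np.exp(-c / m)                        # characteristic strength (63.2% Pf)
print(f"Weibull modulus m ~ {m:.1f}, characteristic strength ~ {sigma0:.0f} MPa")
```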
Validity of peer grading using Calibrated Peer Review in a guided-inquiry, conceptual physics course
NASA Astrophysics Data System (ADS)
Price, Edward; Goldberg, Fred; Robinson, Steve; McKean, Michael
2016-12-01
Constructing and evaluating explanations are important science practices, but in large classes it can be difficult to effectively engage students in these practices and provide feedback. Peer review and grading are scalable instructional approaches that address these concerns, but which raise questions about the validity of the peer grading process. Calibrated Peer Review (CPR) is a web-based system that scaffolds peer evaluation through a "calibration" process where students evaluate sample responses and receive feedback on their evaluations before evaluating their peers. Guided by an activity theory framework, we developed, implemented, and evaluated CPR-based tasks in guided-inquiry, conceptual physics courses for future teachers and general education students. The tasks were developed through iterative testing and revision. Effective tasks had specific and directed prompts and evaluation instructions. Using these tasks, over 350 students at three universities constructed explanations or analyzed physical phenomena, and evaluated their peers' work. By independently assessing students' responses, we evaluated the CPR calibration process and compared students' peer reviews with expert evaluations. On the tasks analyzed, peer scores were equivalent to our independent evaluations. On a written explanation item included on the final exam, students in the courses using CPR outperformed students in similar courses using traditional writing assignments without a peer evaluation element. Our research demonstrates that CPR can be an effective way to explicitly include the science practices of constructing and evaluating explanations into large classes without placing a significant burden on the instructor.
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern in which a description of a series of values is given to a constructor, and subsequent queries request values of that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445-style recurrence descriptions. It supports a variety of input parameters, including a start of the sequence, an end of the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine how the DateTime is presented. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator, where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes whose values can include multiple iterators. The constructed iterators return all permutations of the iterations of the hash by nested iteration of the embedded iterators. A hash simply maps a set of keys to values and is a very common data structure throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
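The modules themselves are Perl, but the nested-iteration idea behind Iterator::Hash translates directly to other languages; a rough Python analogue (names illustrative, not part of the modules):

```python
from itertools import product

def hash_iterator(spec):
    """Yield dicts covering all permutations of per-key value series,
    mimicking the nested iteration of the Perl Iterator::Hash module."""
    keys = list(spec)
    # Each value may be a list (an explicit series) or a scalar (a 1-element series)
    series = [v if isinstance(v, list) else [v] for v in spec.values()]
    for combo in product(*series):
        yield dict(zip(keys, combo))

for d in hash_iterator({"year": [2008, 2009], "day": ["01", "15"], "tag": "v1"}):
    print(d)   # 4 permutations: every (year, day) pair with tag fixed
```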
Software Would Largely Automate Design of Kalman Filter
NASA Technical Reports Server (NTRS)
Chuang, Jason C. H.; Negast, William J.
2005-01-01
Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are the selection of the filter's error states and the tuning of filter parameters, time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module incorporating all possible error states for a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
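The two nested optimization loops described above can be sketched generically; everything in this sketch (the state names, the scoring stub, the random-search tuning) is a hypothetical stand-in for ENFAD's Monte Carlo machinery:

```python
import itertools, random

def monte_carlo_score(error_states, params):
    """Hypothetical stand-in: run Monte Carlo navigation simulations and
    return a filter-performance score (higher is better)."""
    random.seed(hash((error_states, tuple(params))) & 0xFFFF)
    return random.random()

ALL_STATES = ("gyro_bias", "accel_bias", "clock_drift", "scale_factor")

best = (None, None, -1.0)
for k in range(1, len(ALL_STATES) + 1):
    for subset in itertools.combinations(ALL_STATES, k):   # outer loop: error states
        params = [1.0, 1.0]                                # initial noise parameters
        for _ in range(50):                                # inner loop: tune parameters
            trial = [p * random.uniform(0.8, 1.25) for p in params]
            if monte_carlo_score(subset, trial) > monte_carlo_score(subset, params):
                params = trial
        score = monte_carlo_score(subset, params)
        if score > best[2]:
            best = (subset, params, score)
print("selected error states:", best[0])
```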
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems is move-to-grasp problems, in which a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure: near the beginning of the task, the robot can only sense the target object coarsely or indirectly and makes gross motions toward the object, but after it has located and approached the object, it must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task in which Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
Open control/display system for a telerobotics work station
NASA Technical Reports Server (NTRS)
Keslowitz, Saul
1987-01-01
A working Advanced Space Cockpit was developed that integrated advanced control and display devices into a state-of-the-art multimicroprocessor hardware configuration, using window graphics and running under an object-oriented, multitasking real-time operating system environment. This Open Control/Display System supports the idea that the operator should be able to interactively monitor, select, control, and display information about many payloads aboard the Space Station using sets of I/O devices with a single, software-reconfigurable workstation. This is done while maintaining system consistency, yet the system remains completely open to new additions and advances in hardware and software. The Advanced Space Cockpit, linked to Grumman's Hybrid Computing Facility and Large Amplitude Space Simulator (LASS), was used to test the Open Control/Display System via full-scale simulation of the following tasks: telerobotic truss assembly, RCS and thermal bus servicing, CMG changeout, RMS constrained motion and space constructible radiator assembly, HPA coordinated control, and OMV docking and tumbling satellite retrieval. The proposed man-machine interface standard has evolved through many iterations of these tasks, and is based on feedback from NASA and Air Force personnel who performed them in the LASS.
Collaborative damage mapping for emergency response: the role of Cognitive Systems Engineering
NASA Astrophysics Data System (ADS)
Kerle, N.; Hoffman, R. R.
2013-01-01
Remote sensing is increasingly used to assess disaster damage, traditionally by professional image analysts. A recent alternative is crowdsourcing by volunteers experienced in remote sensing, using internet-based mapping portals. We identify a range of problems in current approaches, including how volunteers can best be instructed for the task, how to ensure that instructions are accurately understood and translate into valid results, and how the mapping scheme must be adapted to different map users' needs. The volunteers, the mapping organizers, and the map users all perform complex cognitive tasks, yet little is known about the actual information needs of the users. We also identify problematic assumptions about the capabilities of the volunteers, principally concerning their ability to perform the mapping and to understand mapping instructions unambiguously. We propose that any robust scheme for collaborative damage mapping must rely on Cognitive Systems Engineering and its principal method, Cognitive Task Analysis (CTA), to understand the information and decision requirements of the map and image users, and how the volunteers can be optimally instructed and their mapping contributions merged into suitable map products. We recommend an iterative approach involving map users, remote sensing specialists, cognitive systems engineers and instructional designers, as well as experimental psychologists.
NASA Astrophysics Data System (ADS)
Kohler, Sophie; Far, Aïcha Beya; Hirsch, Ernest
2007-01-01
This paper presents an original approach for the optimal 3D reconstruction of manufactured workpieces based on a priori planning of the task, enhanced on-line through dynamic adjustment of the lighting conditions, and built around a cognitive intelligent sensory system using so-called Situation Graph Trees. The system explicitly takes into account structural knowledge related to image acquisition conditions, the type of illumination sources, the contents of the scene (e.g., CAD models and tolerance information), etc. The approach proceeds in two steps. First, a so-called initialization phase, leading to the a priori task plan, collects this structural knowledge. This knowledge is conveniently encoded, as a sub-part, in the Situation Graph Tree that forms the backbone of the planning system and exhaustively specifies the behavior of the application. Second, the image is iteratively evaluated under the control of this Situation Graph Tree. The information describing the quality of the piece to analyze is thus extracted and further exploited for, e.g., inspection tasks. Lastly, the approach enables dynamic adjustment of the Situation Graph Tree, allowing the system to adapt itself to the actual run-time conditions of the application, thus providing a self-learning capability.
Feasibility of Active Machine Learning for Multiclass Compound Classification.
Lang, Tobias; Flachsenberg, Florian; von Luxburg, Ulrike; Rarey, Matthias
2016-01-25
A common task in the hit-to-lead process is classifying sets of compounds into multiple, usually structural, classes, which builds the groundwork for subsequent SAR studies. Machine learning techniques can automate this process by learning classification models from training compounds of each class. Gathering class information for compounds can be cost-intensive, as the required data needs to be provided by human experts or experiments. This paper studies whether active machine learning can reduce the required number of training compounds. Active learning is a machine learning method that processes class label data in an iterative fashion; it has gained much attention in a broad range of application areas. In this paper, an active learning method for multiclass compound classification is proposed. The method selects informative training compounds so as to optimally support the learning progress. The combination with human feedback leads to a semiautomated, interactive multiclass classification procedure. The method was investigated empirically on 15 compound classification tasks containing 86-2870 compounds in 3-38 classes. The empirical results show that active learning can solve these classification tasks using 10-80% of the data that standard learning techniques would require.
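A minimal uncertainty-sampling loop of the general kind described (a generic margin-based active learner on synthetic data, not the paper's method) might look as follows:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=32, n_classes=4,
                           n_informative=12, random_state=0)
labeled = list(range(40))                      # small initial training set
pool = [i for i in range(len(y)) if i not in labeled]

for round_ in range(10):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    top2 = np.sort(proba, axis=1)[:, -2:]      # two highest class probabilities
    margin = top2[:, 1] - top2[:, 0]           # small margin = informative compound
    query = [pool[i] for i in np.argsort(margin)[:20]]
    labeled += query                           # a human expert would label these
    pool = [i for i in pool if i not in query]
print("final training set size:", len(labeled))
```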
Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.
Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B
2015-09-01
Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image into different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap of 0.72±0.30 and 0.87±0.11 obtained with the two reference methods increased significantly with normalization. The second experiment was aimed at segmentation of the clavicles; the reference methods had average Jaccard indices of 0.57±0.26 and 0.53±0.26, which again increased significantly with normalization. The third experiment was detection of tuberculosis-related abnormalities in the lung fields, where the average area under the receiver operating characteristic curve of 0.72±0.14 and 0.79±0.06 for the reference methods increased significantly with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
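The described pipeline (band decomposition, energy scaling, reconstruction) can be sketched compactly; note that the paper scales energy locally within the lung fields and applies the method iteratively, whereas this illustration scales each difference-of-Gaussians band globally once, and the scales and reference energy are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_bands(img, sigmas=(1, 2, 4, 8), ref_energy=1.0):
    """Decompose into difference-of-Gaussian bands, scale each band's energy
    to a reference value, and reconstruct (global variant of the idea)."""
    img = img.astype(float)
    bands, prev = [], img
    for s in sigmas:
        smooth = gaussian_filter(img, s)
        bands.append(prev - smooth)             # band between successive scales
        prev = smooth
    out = prev                                   # residual low-pass
    for b in bands:
        energy = np.sqrt(np.mean(b ** 2)) + 1e-12
        out += b * (ref_energy / energy)         # scale band energy to reference
    return out
```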
Markel, D; El Naqa, I
2012-06-01
Positron emission tomography (PET) is a valuable resource for delineating the biological tumor volume (BTV) for image-guided radiotherapy. However, accurate and consistent image segmentation is a significant challenge in PET, owing to its low spatial resolution and high noise levels. Active contour methods based on level sets can be sensitive to noise and susceptible to failure in low-contrast regions. This work therefore evaluates a novel active contour algorithm applied to the task of PET tumor segmentation. The algorithm, based on maximizing the Jensen-Renyi divergence between regions of interest, was applied to segmenting lesions in 7 patients with T3-T4 pharyngolaryngeal squamous cell carcinoma and was implemented on an NVIDIA GeForce GTX 560M GPU. The cases were taken from the Louvain database, which includes contours of the macroscopically defined BTV drawn using histology of resected tissue. The images were pre-processed using denoising/deconvolution. The segmented volumes agreed well with the macroscopic contours, with an average concordance index and classification error of 0.6 ± 0.09 and 55 ± 16.5%, respectively. The algorithm in its present implementation requires approximately 0.5-1.3 s per iteration and can reach convergence within 10-30 iterations. The Jensen-Renyi active contour method was shown to be comparable to, and in terms of concordance to outperform, a variety of PET segmentation methods that have previously been evaluated using the same data. Further evaluation on a larger dataset, along with performance optimization, is necessary before clinical deployment. © 2012 American Association of Physicists in Medicine.
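The quantity being maximized can be written down directly: the Jensen-Renyi divergence between the intensity histograms inside and outside the contour, JR = H_alpha(w*p + (1-w)*q) - w*H_alpha(p) - (1-w)*H_alpha(q), with H_alpha the Renyi entropy. A minimal sketch (alpha, weights and histograms are illustrative; alpha in (0,1) keeps the divergence non-negative):

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(p, q, w=0.5, alpha=0.5):
    """JR divergence between two normalized histograms: entropy of the mixture
    minus the weighted entropies of the parts (zero iff p == q)."""
    mix = w * p + (1 - w) * q
    return renyi_entropy(mix, alpha) - (w * renyi_entropy(p, alpha)
                                        + (1 - w) * renyi_entropy(q, alpha))

inside = np.histogram(np.random.normal(5, 1, 4000), bins=64, range=(0, 10))[0]
outside = np.histogram(np.random.normal(2, 1, 4000), bins=64, range=(0, 10))[0]
inside, outside = inside / inside.sum(), outside / outside.sum()
print("JR divergence:", jensen_renyi(inside, outside))
```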
Zhao, Yu; Ge, Fangfei; Liu, Tianming
2018-07-01
fMRI data decomposition techniques have advanced significantly, from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpretation of the decomposed networks remains an open question, due to the lack of functional brain atlases, the lack of correspondence among decomposed or reconstructed networks across different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent work using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mislabelled training brain networks. However, training data preparation is one of the biggest obstacles for these supervised deep learning models in functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and sometimes even introduces label mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns functional brain network recognition into a fully automatic large-scale classification procedure. Our extensive experiments based on fMRI data from 1099 brains in ABIDE-II showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
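The iterative optimization loop at the heart of such a framework (train on current labels, replace labels with confident predictions, retrain) can be sketched generically; a small scikit-learn classifier stands in for the 3D CNN here, and the confidence threshold is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def io_training(X, weak_labels, n_rounds=5):
    """Iteratively optimized training: fit on current labels, then replace
    labels with confident predictions and refit. Assumes integer labels
    0..k-1 with every class present (so predict_proba columns align)."""
    labels = weak_labels.copy()
    for _ in range(n_rounds):
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
        clf.fit(X, labels)
        proba = clf.predict_proba(X)
        confident = proba.max(axis=1) > 0.9        # assumed confidence threshold
        labels[confident] = proba[confident].argmax(axis=1)
    return clf, labels
```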
NASA Astrophysics Data System (ADS)
Tian, Xiang-Dong
The purpose of this research is to simulate induction and measuring-while-drilling (MWD) logs. In log simulation there are two tasks. The first, the forward modeling procedure, is to compute the logs from a known formation. The second, the inversion procedure, is to determine the unknown properties of the formation from measured field logs. In general, the inversion procedure requires the solution of a forward model. In this study, a stable numerical method to simulate induction and MWD logs is presented. The proposed algorithm is based on a horizontal eigenmode expansion method. Vertical propagation of modes is modeled by a three-layer module, and multilayer cases are treated as a cascade of these modules. The mode tracing algorithm possesses stability characteristics superior to other methods. The method is applied to simulate logs in formations with both vertical and horizontal layers, and also to study the groove effects of the MWD tool; the results are very good. Two-dimensional inversion of induction logs is a nonlinear problem. The nonlinear functions of the apparent conductivity are expanded into a Taylor series; after truncating the higher-order terms, the functions are linearized. An iterative procedure is then devised to solve the inversion problem: in each iteration, the Jacobian matrix is calculated, and a small variation computed using the least-squares method is used to modify the background medium, until finally the inverted medium is obtained. The horizontal eigenstate method is used to solve the forward problem. It is found that a good inverted formation can be obtained from the measurements. To help the user simulate induction logs conveniently, a Wellog Simulator based on the X Window System was developed. The application software (FORTRAN codes) embedded in the Simulator simulates the responses of induction tools in layered formations with dipping beds. The graphical user interface of the Wellog Simulator is implemented with C and Motif. Through the user interface, the user can prepare the simulation data, select the tools, simulate the logs and plot the results.
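The linearize-and-iterate inversion described above is essentially a Gauss-Newton iteration; the following sketch uses a toy three-measurement forward model in place of the eigenmode log simulator, which is far beyond a snippet:

```python
import numpy as np

def forward(m):
    """Toy nonlinear forward model standing in for the eigenmode log simulator."""
    return np.array([m[0] * np.exp(-m[1]), m[0] + m[1] ** 2, np.sin(m[0]) + m[1]])

def jacobian(m, h=1e-6):
    """Finite-difference Jacobian of the forward model."""
    J = np.empty((3, m.size))
    for j in range(m.size):
        dm = np.zeros_like(m); dm[j] = h
        J[:, j] = (forward(m + dm) - forward(m - dm)) / (2 * h)
    return J

d_obs = forward(np.array([1.3, 0.7]))          # synthetic "field log"
m = np.array([1.0, 1.0])                       # initial background medium
for _ in range(20):                            # Gauss-Newton iterations
    r = d_obs - forward(m)
    J = jacobian(m)
    m += np.linalg.lstsq(J, r, rcond=None)[0]  # least-squares model update
print("inverted parameters:", m)
```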
Quantum learning of classical stochastic processes: The completely positive realization problem
NASA Astrophysics Data System (ADS)
Monràs, Alex; Winter, Andreas
2016-01-01
Among the many tasks in machine learning, a specially important one is the problem of inferring the latent variables of a system and their causal relations with the observed behavior. A paradigmatic instance is the task of inferring the hidden Markov model underlying a given stochastic process. This is known as the positive realization problem (PRP) [L. Benvenuti and L. Farina, IEEE Trans. Autom. Control 49(5), 651-664 (2004)] and constitutes a central problem in machine learning. The PRP and its solutions have far-reaching consequences in many areas of systems and control theory, and it is nowadays an important piece of the broad field of positive systems theory. We consider the scenario where the latent variables are quantum (i.e., quantum states of a finite-dimensional system) and the system dynamics is constrained only by physical transformations on the quantum system. The observable dynamics is then described by a quantum instrument, and the task is to determine which quantum instrument, if any, yields the process at hand by iterative application. We take as a starting point the theory of quasi-realizations, whence a description of the dynamics of the process is given in terms of linear maps on state vectors, and probabilities are given by linear functionals on the state vectors. This description, despite its remarkable resemblance to the hidden Markov model, or the iterated quantum instrument, is nevertheless devoid of any stochastic or quantum mechanical interpretation, as said maps fail to satisfy any positivity conditions. The completely positive realization problem then consists in determining whether an equivalent quantum mechanical description of the same process exists. We generalize some key results of stochastic realization theory, and show that the problem has deep connections with operator systems theory, giving possible insight into the lifting problem in quotient operator systems. Our results have potential applications in quantum machine learning, device-independent characterization and reverse-engineering of stochastic processes and quantum processors, and, more generally, of dynamical processes with quantum memory [M. Guţă, Phys. Rev. A 83(6), 062324 (2011); M. Guţă and N. Yamamoto, e-print arXiv:1303.3771 (2013)].
Bellesi, Luca; Wyttenbach, Rolf; Gaudino, Diego; Colleoni, Paolo; Pupillo, Francesco; Carrara, Mauro; Braghetti, Antonio; Puligheddu, Carla; Presilla, Stefano
2017-01-01
The aim of this work was to evaluate the detection of low-contrast objects and image quality in computed tomography (CT) phantom images acquired at different tube loadings (i.e., mAs) and reconstructed with different algorithms, in order to find settings that reduce the dose to the patient without degrading the image. Images of supra-slice low-contrast objects of a CT phantom were acquired using different mAs values. Images were reconstructed using filtered back projection (FBP), hybrid and iterative model-based methods. Image quality parameters were evaluated in terms of modulation transfer function, noise, and uniformity using two software resources. For the assessment of low-contrast detectability, studies based on both human (i.e., four-alternative forced-choice test) and model observers were performed across the various images. Compared to FBP, image quality parameters were improved by using iterative reconstruction (IR) algorithms. In particular, IR model-based methods provided a 60% noise reduction and a 70% dose reduction, preserving image quality and low-contrast detectability for human radiological evaluation. According to the model observer, the diameter of the minimum detectable detail was around 2 mm (up to 100 mAs); below 100 mAs, the model observer was unable to provide a result. IR methods improve CT protocol quality, providing a potential dose reduction while maintaining good image detectability. The model observer can in principle be useful to assist human performance in CT low-contrast detection tasks and in dose optimisation.
CT image reconstruction with half precision floating-point values.
Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc
2011-07-01
Analytic CT image reconstruction is a computationally demanding task, and the even more demanding iterative reconstruction algorithms are currently finding their way into clinical routine because their image quality is superior to analytic reconstruction. The authors thoroughly analyze a so far unconsidered but valuable feature of tomorrow's reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently. Instead of the standard 32 bit floating-point format (float), a recently standardized 16 bit floating-point format (half) is adopted for data representation in the image domain and in the rawdata domain. The reduction in the total data volume reduces traffic on the memory bus, the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (the gold standard) and half reconstructions are compared visually via difference images and by quantitative image quality evaluation, for analytic reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of the quantization noise caused by reducing the data precision of both rawdata and image data during reconstruction is negligible. This is clearly shown for filtered backprojection and for iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in the half format. In ordered subset SART reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half precision floating-point values thus make it possible to speed up CT image reconstruction without compromising image quality.
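The kind of comparison described can be reproduced in miniature: accumulate a backprojection-like sum in both precisions and measure the quantization noise as a relative RMS difference (the random "views" below are stand-ins for filtered projections):

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, size = 720, 256
image32 = np.zeros((size, size), dtype=np.float32)
image16 = np.zeros((size, size), dtype=np.float16)

for _ in range(n_views):
    # Stand-in for one filtered, backprojected view
    view = rng.normal(0.0, 1.0, (size, size)).astype(np.float32)
    image32 += view
    image16 += view.astype(np.float16)   # image kept in half precision throughout

diff = image32 - image16.astype(np.float32)
rel_rms = np.sqrt(np.mean(diff ** 2)) / np.sqrt(np.mean(image32 ** 2))
print(f"relative RMS quantization noise: {rel_rms:.2e}")
```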
Freire, Paulo G L; Ferrari, Ricardo J
2016-06-01
Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. Segmenting MS lesions in magnetic resonance imaging (MRI) is a very important task for assessing how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions, reducing the time spent on manual delineation as well as inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach: it iteratively segments brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate the technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using the Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has good spatial and volumetric agreement with raters' manual delineations. A comparison with the state-of-the-art also shows that our technique is comparable to, and in some cases better than, existing approaches, making it a viable alternative for automatic MS lesion segmentation in MRI. Copyright © 2016 Elsevier Ltd. All rights reserved.
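The refinement loop can be sketched as follows; scikit-learn has no Student's t mixture, so a Gaussian mixture stands in for the paper's t components, and the class counts, round counts and synthetic intensities are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def iterative_hyperintense_segmentation(intensities, n_rounds=3, k=3):
    """Iteratively refit a mixture on the most hyperintense class
    (Gaussian here; the paper uses Student's t components)."""
    voxels = intensities.reshape(-1, 1)
    mask = np.ones(len(voxels), dtype=bool)
    for _ in range(n_rounds):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(voxels[mask])
        labels = gmm.predict(voxels[mask])
        brightest = np.argmax(gmm.means_.ravel())   # most hyperintense component
        idx = np.where(mask)[0]
        mask[:] = False
        mask[idx[labels == brightest]] = True        # refine to that class only
    return mask   # True where voxels end up in the final hyperintense class

flair = np.concatenate([np.random.normal(100, 10, 50000),   # tissue
                        np.random.normal(160, 12, 8000),    # bright tissue
                        np.random.normal(220, 8, 500)])     # lesion-like tail
lesion_mask = iterative_hyperintense_segmentation(flair)
print("voxels flagged:", lesion_mask.sum())
```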
On-board autonomous attitude maneuver planning for planetary spacecraft using genetic algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2003-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and the initial and final conditions are time-varying. This paper presents an approach for attitude path planning that makes full use of a priori constraint knowledge and is computationally tractable enough to be executed on board a spacecraft. The approach is based on incorporating the constraints into a cost function and using a genetic algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used 'as is' or to initialize additional deterministic optimization algorithms. A number of example simulations are presented, including a generic Europa Orbiter spacecraft in cruise as well as in orbit around Europa. The search times are typically on the order of minutes, demonstrating the viability of the presented approach. The results are applicable to all future deep space missions where greater spacecraft autonomy is required. In addition, onboard autonomous attitude planning greatly facilitates navigation and science observation planning, thus benefiting Earth-orbiting missions as well.
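A minimal genetic-algorithm loop of the kind described, with an attitude trajectory encoded as a parameter vector and a keep-out constraint folded into the cost as a penalty (the encoding and penalty are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
N_WAYPTS = 8                     # trajectory encoded as 8 intermediate attitudes (rad)

def cost(traj, start=0.0, goal=2.5):
    path = np.concatenate(([start], traj, [goal]))
    slew = np.sum(np.diff(path) ** 2)                             # smooth, short slews
    keepout = np.sum(np.clip(0.3 - np.abs(path - 1.2), 0, None))  # celestial keep-out
    return slew + 50.0 * keepout                                  # constraint as penalty

pop = rng.uniform(0, 2.5, (60, N_WAYPTS))
for gen in range(200):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]             # elitist selection
    children = []
    while len(children) < 40:
        a, b = parents[rng.integers(0, 20, 2)]
        cut = rng.integers(1, N_WAYPTS)
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        child += rng.normal(0, 0.05, N_WAYPTS)          # mutation
        children.append(child)
    pop = np.vstack([parents, children])
best = pop[np.argmin([cost(ind) for ind in pop])]
print("best cost:", cost(best))
```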
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-07-01
X-ray micro- and nanotomography has evolved into a quantitative analysis tool, rather than a mere qualitative visualization technique, for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, however, noise has been projected from Radon space to Euclidean space, i.e., post-reconstruction noise cannot be expected to be random; it is correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks, where the later filtering steps provide a controlled level of texture removal. We give a hands-on explanation of the use of this iterative denoising approach. The validity and quality of the image enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise constant signals.
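The split into weak denoising subtasks can be sketched with scikit-image's nonlocal means: each pass targets only a fraction of the currently estimated noise level (the fraction and step count are assumptions, not the paper's settings):

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def iterative_nlm(img, n_steps=4, fraction=0.5):
    """Split denoising into weak subtasks: each pass removes only a fraction
    of the currently estimated noise, limiting texture removal per step."""
    out = img.astype(float)
    for _ in range(n_steps):
        sigma = estimate_sigma(out)
        out = denoise_nl_means(out, h=fraction * sigma, sigma=sigma,
                               patch_size=5, patch_distance=6, fast_mode=True)
    return out

noisy = np.clip(np.random.normal(0.5, 0.1, (128, 128)), 0, 1)
clean = iterative_nlm(noisy)
```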
The Roles and Developments needed for Diagnostics in the ITER Fusion Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Michael
2015-07-01
Harnessing fusion power on Earth is an important and challenging task. Excellent work has been carried out in this area over the years, with several demonstrations of the ability to produce power. Now a new large device, called ITER, is being constructed in the south of France. ITER is a large-scale scientific experiment that aims to demonstrate the possibility of producing commercial energy from fusion. The project is now well underway, with many teams working on the construction and completing various aspects of the design. The device will carry up to 15 MA of plasma current and produce about 500 MW of power, approximately 400 MW of it in high-energy neutrons. The typical electron temperatures inside the device are in the region of a few hundred million kelvin, and the plasma is confined using a magnetic field. The device pushes several boundaries beyond those of existing machines, so several technologies need to be developed or extended. This is especially true for the systems, or diagnostics, that measure the performance of the device and provide its control signals. A diagnostic set will be installed on the ITER machine to provide the measurements necessary to control, evaluate and optimize plasma performance in ITER and to further the understanding of plasma physics. These include, amongst others, measurements of the plasma shape, temperature, density, impurity concentration, and particle and energy confinement times. The system will comprise about 45 individual measuring systems drawn from the full range of modern plasma diagnostic techniques, including magnetics, lasers, X-rays, neutron cameras, impurity monitors, particle spectrometers, radiation bolometers, pressure and gas analysis, and optical fibres. These devices will have to work in the new and challenging environment inside the vacuum vessel and cope with a range of phenomena that extend current knowledge in the fusion field, one of them being the parasitic effect of the neutrons on the measurements, all while performing with great accuracy and precision. The levels of neutral particle flux, neutron flux and neutron fluence will be respectively about 5, 10 and 10,000 times higher than the harshest experienced in today's machines. The pulse length of the fusion reaction (the amount of time the reaction is sustained) will be about 100 times longer. (authors)
NASA Astrophysics Data System (ADS)
Morgan, Ashraf
The need for an accurate and reliable way to measure patient dose in multi-row detector computed tomography (MDCT) has increased significantly. This research focused on the possibility of measuring CT dose in air to estimate the Computed Tomography Dose Index (CTDI) for routine quality control purposes. A new elliptical CTDI phantom that better represents human geometry was manufactured to investigate the effect of subject shape on measured CTDI, and Monte Carlo simulation was utilized to determine the dose distribution in comparison to the traditional cylindrical CTDI phantom. This research also investigated the effect of Siemens Healthcare's newly developed iMAR (iterative metal artifact reduction) algorithm; an arthroplasty phantom was designed and manufactured for that purpose. The design of new phantoms was part of the research, as they mimic human geometry better than the existing CTDI phantom. The standard CTDI phantom is a right cylinder that does not adequately represent the geometry of the majority of the patient population. Any dose reduction algorithm used during a patient scan will not be exercised when scanning the standard CTDI phantom, so a better-designed phantom allows the use of dose reduction algorithms when measuring dose, which leads to better dose estimation and/or better understanding of dose delivery. Doses from the standard CTDI phantom and the newly designed phantoms were compared to doses measured in air. Iterative reconstruction is a promising technique for MDCT dose reduction and artifact correction. Iterative reconstruction algorithms have been developed to address specific imaging tasks, as is the case with iterative metal artifact reduction (iMAR), which was developed by Siemens for use with the company's future computed tomography platform. The goal of iMAR is to reduce metal artifacts when imaging patients with metal implants and to recover the CT numbers of tissues adjacent to the implant. This research evaluated iMAR's capability to recover CT numbers and reduce noise. The use of iMAR should also allow a lower tube voltage to be used instead of the 140 kVp frequently used to image patients with shoulder implants. The evaluations of image quality and dose reduction were carried out using an arthroplasty phantom.
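The CTDI quantities behind such measurements follow simple arithmetic: CTDI_w = (1/3)·CTDI_100,center + (2/3)·CTDI_100,periphery and CTDI_vol = CTDI_w / pitch. A small worked example with invented values:

```python
# Illustrative CTDI arithmetic (values are made up, not from this study)
ctdi_center = 10.2      # CTDI_100 at the phantom centre, mGy
ctdi_periphery = 14.7   # mean CTDI_100 at the peripheral holes, mGy
pitch = 1.2             # helical pitch

ctdi_w = ctdi_center / 3 + 2 * ctdi_periphery / 3   # weighted CTDI
ctdi_vol = ctdi_w / pitch                           # volume CTDI
print(f"CTDI_w = {ctdi_w:.1f} mGy, CTDI_vol = {ctdi_vol:.1f} mGy")
# prints: CTDI_w = 13.2 mGy, CTDI_vol = 11.0 mGy
```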
Physical and cognitive task analysis in interventional radiology.
Johnson, S; Healey, A; Evans, J; Murphy, M; Crawshaw, M; Gould, D
2006-01-01
To identify, describe and detail the cognitive thought processes, decision-making, and physical actions involved in the preparation and successful performance of core interventional radiology procedures. Five commonly performed core interventional radiology procedures were selected for cognitive task analysis, and several examples of each procedure being performed by consultant interventional radiologists were videoed. The videos of these procedures, and the steps required for a successful outcome, were analysed by a psychologist and an interventional radiologist. Once a skeleton algorithm of each procedure was defined, further refinement was achieved through individual interviews with consultant interventional radiologists. Additionally, a critique of each iteration of the established algorithm was sought from non-participating independent consultant interventional radiologists. Detailed task descriptions and decision protocols were developed for five interventional radiology procedures: arterial puncture, nephrostomy, venous access, biopsy (using both ultrasound and computed tomography), and percutaneous transhepatic cholangiogram. Identical tasks performed within these procedures were identified and standardized within the protocols. Complex procedures were broken down and their constituent processes identified. This might be suitable for use as a training protocol to provide universally acceptable safe practice at the most fundamental level. It is envisaged that data collected in this way can be used as an educational resource for trainees and could provide the basis for a training curriculum in interventional radiology, directing trainees towards safe practice of the highest standard and providing performance objectives for a simulator model.
ExM: System Support for Extreme-Scale, Many-Task Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, Daniel S
The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem solving methods and application classes. In this document, we report on combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at University of Chicago, over the first 8 months (through April 30, 2011).
A task-related and resting state realistic fMRI simulator for fMRI data validation
NASA Astrophysics Data System (ADS)
Hill, Jason E.; Liu, Xiangyu; Nutter, Brian; Mitra, Sunanda
2017-02-01
After more than 25 years of published functional magnetic resonance imaging (fMRI) studies, careful scrutiny reveals that most of the reported results lack fully decisive validation. The complex nature of fMRI data generation and acquisition results in unavoidable uncertainties in the true estimation and interpretation of both task-related activation maps and resting state functional connectivity networks, despite the use of various statistical data analysis methodologies. The goal of developing the proposed STANCE (Spontaneous and Task-related Activation of Neuronally Correlated Events) simulator is to generate realistic task-related and/or resting-state 4D blood oxygenation level dependent (BOLD) signals, given the experimental paradigm and scan protocol, by using digital phantoms of twenty normal brains available from BrainWeb (http://brainweb.bic.mni.mcgill.ca/brainweb/). The proposed simulator will include estimated system and modelled physiological noise as well as motion to serve as a reference to measured brain activities. In its current form, STANCE is a MATLAB toolbox with command line functions serving as an open-source add-on to SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8/). The STANCE simulator has been designed in a modular framework so that the hemodynamic response (HR) and various noise models can be iteratively improved to include evolving knowledge about such models.
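Task-related BOLD time series of the kind STANCE generates are commonly modelled by convolving the stimulus timing with a canonical double-gamma hemodynamic response; a minimal sketch (the gamma parameters below are the commonly used defaults, assumed here rather than taken from STANCE):

```python
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 120
t = np.arange(0, 32, tr)                       # HRF support in seconds
# Canonical double-gamma HRF: peak near 5 s minus a weighted undershoot near 15 s
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

stimulus = np.zeros(n_scans)
stimulus[10:15] = stimulus[50:55] = stimulus[90:95] = 1.0   # three task blocks

bold = np.convolve(stimulus, hrf)[:n_scans]                 # noiseless task signal
noisy_bold = bold + np.random.normal(0, 0.05, n_scans)      # add system noise
```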
NASA Astrophysics Data System (ADS)
Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric
2017-01-01
Recent Reinforcement Learning methods have made it possible to solve difficult, high-dimensional robotic tasks under unknown dynamics using iterative Linear Quadratic Gaussian control theory. These algorithms are based on building a local time-varying linear model of the dynamics from data gathered through interaction with the environment. In such tasks, the cost function is often expressed directly in terms of the state and control variables so that it can be locally quadratized to run the algorithm. If the cost is expressed in terms of other variables, a model is required to compute the cost function from the variables manipulated. We propose a method to learn the cost function directly from the data, in the same way as for the dynamics. This way, the cost function can be defined in terms of any measurable quantity and thus can be chosen more appropriately for the task to be carried out. With our method, any sensor information can be used to design the cost function. We demonstrate the efficiency of this method by simulating, with the V-REP software, the learning of a Cartesian positioning task on several industrial robots with different characteristics. The robots are controlled in joint space and no model is provided a priori. Our results are compared with another model-free technique, which writes the cost function as a state variable.
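Learning the cost from data, as proposed, amounts in the simplest case to fitting a local quadratic model c(z) ~ 0.5*z^T C z + b^T z + c0 to sampled (state-action, cost) pairs by least squares, so it can be plugged into the iLQG backward pass; a minimal sketch with synthetic samples (names illustrative):

```python
import numpy as np

def fit_quadratic_cost(Z, c):
    """Least-squares fit of c(z) ~ 0.5 z^T C z + b^T z + c0 from samples."""
    n = Z.shape[1]
    feats = [np.ones(len(Z))]                       # constant term
    feats += [Z[:, i] for i in range(n)]            # linear terms
    feats += [Z[:, i] * Z[:, j] for i in range(n) for j in range(i, n)]
    A = np.stack(feats, axis=1)
    w, *_ = np.linalg.lstsq(A, c, rcond=None)
    c0, b = w[0], w[1:n + 1]
    C = np.zeros((n, n))
    idx = n + 1
    for i in range(n):
        for j in range(i, n):
            C[i, j] = C[j, i] = w[idx] * (1.0 if i == j else 0.5)
            idx += 1
    return C * 2.0, b, c0     # scaled so c(z) = 0.5 z^T C z + b^T z + c0

Z = np.random.randn(500, 3)                         # sampled (state, control) vectors
c = 0.5 * np.sum(Z ** 2, axis=1) + Z[:, 0] + 1.0    # synthetic measured cost
C, b, c0 = fit_quadratic_cost(Z, c)                 # recovers C = I, b = (1,0,0), c0 = 1
```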
The skills and experience of GRADE methodologists can be assessed with a simple tool.
Norris, Susan L; Meerpohl, Joerg J; Akl, Elie A; Schünemann, Holger J; Gartlehner, Gerald; Chen, Yaolong; Whittington, Craig
2016-11-01
To suggest approaches for guideline developers on how to assess a methodologist's expertise with Grading of Recommendations Assessment, Development and Evaluation (GRADE) methods and tasks and to provide a set of minimum skills and experience required to perform specific tasks related to guideline development using GRADE. We used an iterative and consensus-based process involving individuals with in-depth experience with GRADE. We considered four main tasks: (1) development of key questions; (2) assessment of the certainty of effect estimates; (3) development of recommendations; and (4) teaching GRADE. There are three basic approaches to determine a methodologist's skill set. First, self-report of knowledge, skills, and experience with a standardized "GRADE curriculum vitae (CV)" focused on each of the GRADE-related tasks; second, demonstration of skills using worked examples; third, a formal evaluation using a written or oral test. We suggest that the GRADE CV is likely to be useful and feasible to implement. We also suggest minimum training including attendance at one or more full-day workshops and familiarity with the main GRADE publications and the GRADE handbook. The selection of a GRADE methodologist must be a thoughtful, reasoned decision, informed by the criteria suggested in this article and tailored to the specific project. Our suggested approaches need further pilot testing and validation. Copyright © 2016 Elsevier Inc. All rights reserved.
Berger, Marc L; Sox, Harold; Willke, Richard J; Brixner, Diana L; Eichler, Hans-Georg; Goettsch, Wim; Madigan, David; Makady, Amr; Schneeweiss, Sebastian; Tarricone, Rosanna; Wang, Shirley V; Watkins, John; Daniel Mullins, C
2017-09-01
Real-world evidence (RWE) includes data from retrospective or prospective observational studies and observational registries and provides insights beyond those addressed by randomized controlled trials. RWE studies aim to improve health care decision making. The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the International Society for Pharmacoepidemiology (ISPE) created a task force to make recommendations regarding good procedural practices that would enhance decision makers' confidence in evidence derived from RWE studies. Peer review by ISPOR/ISPE members and task force participants provided a consensus-building iterative process for the topics and framing of recommendations. The ISPOR/ISPE Task Force recommendations cover seven topics, including study registration, replicability, and stakeholder involvement in RWE studies. These recommendations, in concert with earlier recommendations about study methodology, provide a trustworthy foundation for the expanded use of RWE in health care decision making. The focus of these recommendations is good procedural practices for studies that test a specific hypothesis in a specific population. We recognize that some of the recommendations in this report may not be widely adopted without appropriate incentives from decision makers, journal editors, and other key stakeholders. © 2017 The Authors. Pharmacoepidemiology & Drug Safety published by John Wiley & Sons Ltd.
Tommasino, Paolo; Campolo, Domenico
2017-02-03
In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. Although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control in the task space, i.e., to solve the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e., posture-dependent torques such as gravitational torques) separately from dynamic, velocity-dependent torques. The separation principle has traditionally been applied at the joint torque level. Our main contribution is to apply the separation principle to the Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture the motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; and (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Siewerdsen, J; Stayman, J
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often based either on heuristics or on the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low-dose 3D scout as a patient-specific anatomical model together with a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d', computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted in which the minimum d' over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d', with an 8-48% improvement, consistent with the maxi-min objective. In addition, d' was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
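The maxi-min objective can be sketched generically; here scipy's differential evolution stands in for the CMA-ES optimizer used in the work, and the d' predictor is a hypothetical stub (a real implementation would propagate noise and resolution through the MBIR image model and enforce the total-dose constraint):

```python
import numpy as np
from scipy.optimize import differential_evolution

N_BASES, N_LOCS = 6, 5     # Gaussian fluence bases; d' evaluation locations

def detectability(coeffs):
    """Hypothetical stand-in: predict d' at each location for the fluence
    pattern given by Gaussian-basis coefficients."""
    weights = np.abs(np.sin(np.outer(np.arange(1, N_LOCS + 1), coeffs[:N_BASES])))
    return weights.sum(axis=1) + 0.1

def neg_min_dprime(coeffs):
    # Maxi-min objective: maximize the worst-case d' over all locations
    return -detectability(coeffs).min()

res = differential_evolution(neg_min_dprime, bounds=[(0, 1)] * N_BASES,
                             seed=0, maxiter=100)
print("worst-case d' achieved:", -res.fun)
```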
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2002-01-01
CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Of the modeling methods available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, the choice of level of detail dependent on the predictions required. Although GOMS has proven useful in HCI, tools to support the construction of GOMS models have not yet come into general use.
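The PERT charts generated by Apex encode, in essence, a longest-path computation over a DAG of operators: the predicted task time is the critical path through interleaved cognitive, perceptual, and motor operators. A minimal Python sketch, with made-up operator durations and dependencies rather than the actual CPM-GOMS operator set:

```python
from functools import lru_cache

# Hypothetical operator durations (ms) and dependencies for one move-and-click.
durations = {"perceive": 100, "decide": 50, "move_mouse": 300, "click": 100}
depends_on = {"decide": ("perceive",), "move_mouse": ("perceive",),
              "click": ("decide", "move_mouse")}

@lru_cache(maxsize=None)
def finish_time(op):
    """Earliest finish: own duration plus the latest-finishing prerequisite."""
    preds = depends_on.get(op, ())
    return durations[op] + max((finish_time(p) for p in preds), default=0)

# The predicted task time is the finish time of the final operator: the
# cognitive "decide" step (150 ms) interleaves with the motor "move_mouse"
# step (400 ms), so the critical path runs through the motor operator.
print(finish_time("click"))  # 500
```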
Fundamentals of neurosurgery: virtual reality tasks for training and evaluation of technical skills.
Choudhury, Nusrat; Gélinas-Phaneuf, Nicholas; Delorme, Sébastien; Del Maestro, Rolando
2013-11-01
Technical skills training in neurosurgery is mostly done in the operating room. New educational paradigms are encouraging the development of novel training methods for surgical skills. Simulation could answer some of these needs. This article presents the development of a conceptual training framework for use on a virtual reality neurosurgical simulator. Appropriate tasks were identified by reviewing neurosurgical oncology curricula requirements and performing cognitive task analyses of basic techniques and representative surgeries. The tasks were then elaborated into training modules by including learning objectives, instructions, levels of difficulty, and performance metrics. Surveys and interviews were iteratively conducted with subject matter experts to delimitate, review, discuss, and approve each of the development stages. Five tasks were selected as representative of basic and advanced neurosurgical skill. These tasks were: 1) ventriculostomy, 2) endoscopic nasal navigation, 3) tumor debulking, 4) hemostasis, and 5) microdissection. The complete training modules were structured into easy, intermediate, and advanced settings. Performance metrics were also integrated to provide feedback on outcome, efficiency, and errors. The subject matter experts deemed the proposed modules as pertinent and useful for neurosurgical skills training. The conceptual framework presented here, the Fundamentals of Neurosurgery, represents a first attempt to develop standardized training modules for technical skills acquisition in neurosurgical oncology. The National Research Council Canada is currently developing NeuroTouch, a virtual reality simulator for cranial microneurosurgery. The simulator presently includes the five Fundamentals of Neurosurgery modules at varying stages of completion. A first pilot study has shown that neurosurgical residents obtained higher performance scores on the simulator than medical students. Further work will validate its components and use in a training curriculum. Copyright © 2013 N. Choudhury. Published by Elsevier Inc. All rights reserved.
Brydges, Christopher R; Barceló, Francisco
2018-01-01
Cognitive control warrants efficient task performance in dynamic and changing environments through adjustments in executive attention, stimulus and response selection. The well-known P300 component of the human event-related potential (ERP) has long been proposed to index "context-updating"-critical for cognitive control-in simple target detection tasks. However, task switching ERP studies have revealed both target P3 (300-350 ms) and later sustained P3-like potentials (400-1,200 ms) to first targets ensuing transition cues, although it remains unclear whether these target P3-like potentials also reflect context updating operations. To address this question, we applied novel single-trial EEG analyses-residue iteration decomposition (RIDE)-in order to disentangle target P3 sub-components in a sample of 22 young adults while they either repeated or switched (updated) task rules. The rationale was to revise the context updating hypothesis of P300 elicitation in the light of new evidence suggesting that "the context" consists of not only the sensory units of stimulation, but also associated motor units, and intermediate low- and high-order sensorimotor units, all of which may need to be dynamically updated on a trial by trial basis. The results showed functionally distinct target P3-like potentials in stimulus-locked, response-locked, and intermediate RIDE component clusters overlying parietal and frontal regions, implying multiple functionally distinct, though temporarily overlapping context updating operations. These findings support a reformulated version of the context updating hypothesis, and reveal a rich family of distinct target P3-like sub-components during the reactive control of target detection in task-switching, plausibly indexing the complex and dynamic workings of frontoparietal cortical networks subserving cognitive control.
Pacchiarotti, Isabella; Bond, David J.; Baldessarini, Ross J.; Nolen, Willem A.; Grunze, Heinz; Licht, Rasmus W.; Post, Robert M.; Berk, Michael; Goodwin, Guy M.; Sachs, Gary S.; Tondo, Leonardo; Findling, Robert L.; Youngstrom, Eric A.; Tohen, Mauricio; Undurraga, Juan; González-Pinto, Ana; Goldberg, Joseph F.; Yildiz, Ayşegül; Altshuler, Lori L.; Calabrese, Joseph R.; Mitchell, Philip B.; Thase, Michael E.; Koukopoulos, Athanasios; Colom, Francesc; Frye, Mark A.; Malhi, Gin S.; Fountoulakis, Konstantinos N.; Vázquez, Gustavo; Perlis, Roy H.; Ketter, Terence A.; Cassidy, Frederick; Akiskal, Hagop; Azorin, Jean-Michel; Valentí, Marc; Mazzei, Diego Hidalgo; Lafer, Beny; Kato, Tadafumi; Mazzarini, Lorenzo; Martínez-Aran, Anabel; Parker, Gordon; Souery, Daniel; Özerdem, Ayşegül; McElroy, Susan L.; Girardi, Paolo; Bauer, Michael; Yatham, Lakshmi N.; Zarate, Carlos A.; Nierenberg, Andrew A.; Birmaher, Boris; Kanba, Shigenobu; El-Mallakh, Rif S.; Serretti, Alessandro; Rihmer, Zoltan; Young, Allan H.; Kotzalidis, Georgios D.; MacQueen, Glenda M.; Bowden, Charles L.; Ghaemi, S. Nassir; Lopez-Jaramillo, Carlos; Rybakowski, Janusz; Ha, Kyooseob; Perugi, Giulio; Kasper, Siegfried; Amsterdam, Jay D.; Hirschfeld, Robert M.; Kapczinski, Flávio; Vieta, Eduard
2014-01-01
Objective The risk-benefit profile of antidepressant medications in bipolar disorder is controversial. When conclusive evidence is lacking, expert consensus can guide treatment decisions. The International Society for Bipolar Disorders (ISBD) convened a task force to seek consensus recommendations on the use of antidepressants in bipolar disorders. Method An expert task force iteratively developed consensus through serial consensus-based revisions using the Delphi method. Initial survey items were based on systematic review of the literature. Subsequent surveys included new or reworded items and items that needed to be rerated. This process resulted in the final ISBD Task Force clinical recommendations on antidepressant use in bipolar disorder. Results There is striking incongruity between the wide use of and the weak evidence base for the efficacy and safety of antidepressant drugs in bipolar disorder. Few well-designed, long-term trials of prophylactic benefits have been conducted, and there is insufficient evidence for treatment benefits with antidepressants combined with mood stabilizers. A major concern is the risk for mood switch to hypomania, mania, and mixed states. Integrating the evidence and the experience of the task force members, a consensus was reached on 12 statements on the use of antidepressants in bipolar disorder. Conclusions Because of limited data, the task force could not make broad statements endorsing antidepressant use but acknowledged that individual bipolar patients may benefit from antidepressants. Regarding safety, serotonin reuptake inhibitors and bupropion may have lower rates of manic switch than tricyclic and tetracyclic antidepressants and norepinephrine-serotonin reuptake inhibitors. The frequency and severity of antidepressant-associated mood elevations appear to be greater in bipolar I than bipolar II disorder. Hence, in bipolar I patients antidepressants should be prescribed only as an adjunct to mood-stabilizing medications. PMID:24030475
Elementary students' engagement in failure-prone engineering design tasks
NASA Astrophysics Data System (ADS)
Andrews, Chelsea Joy
Although engineering education has been practiced at the undergraduate level for over a century, only fairly recently has the field broadened to include the elementary level; the pre-college division of the American Society of Engineering Education was established in 2003. As a result, while recent education standards require engineering in elementary schools, current studies are still filling in basic research on how best to design and implement elementary engineering activities. One area in need of investigation is how students engage with physical failure in design tasks. In this dissertation, I explore how upper elementary students engage in failure-prone engineering design tasks in an out-of-school environment. In a series of three empirical case studies, I look closely at how students evaluate failed tests and decide on changes to their design constructions, how their reasoning evolves as they repeatedly encounter physical failure, and how students and facilitators co-construct testing norms where repetitive failure is manageable. I also briefly investigate how students' engagement differs in a task that features near-immediate success. By closely examining student groups' discourse and their interactions with their design constructions, I found that these students: are able to engage in iteration and see failure-as-feedback with minimal externally-imposed structure; seem to be designing in a more sophisticated manner, attending to multiple causal factors, after experiencing repetitive failure; and are able to manage the stress and frustration of repetitive failure, provided the co-constructed testing norms of the workshop environment are supportive of failure management. These results have both pedagogical implications, in terms of how to create and facilitate design tasks, and methodological implications--namely, I highlight the particular insights afforded by a case study approach for analyzing engagement in design tasks.
Advanced Platform Systems Technology study. Volume 4: Technology advancement program plan
NASA Technical Reports Server (NTRS)
1983-01-01
An overview study of the major technology definition tasks and subtasks along with their interfaces and interrelationships is presented. Although not specifically indicated in the diagram, iterations were required at many steps to finalize the results. The development of the integrated technology advancement plan was initiated by using the results of the previous two tasks, i.e., the trade studies and the preliminary cost and schedule estimates for the selected technologies. Descriptions for the development of each viable technology advancement were drawn from the trade studies. Additionally, a logic flow diagram depicting the steps in developing each technology element was developed along with descriptions for each of the major elements. Next, major elements of the logic flow diagrams were time phased, and that allowed the definition of a technology development schedule that was consistent with the space station program schedule when possible. Schedules show the major milestones, including tests required as described in the logic flow diagrams.
Optimal quantum operations at zero energy cost
NASA Astrophysics Data System (ADS)
Chiribella, Giulio; Yang, Yuxiang
2017-08-01
Quantum technologies are developing powerful tools to generate and manipulate coherent superpositions of different energy levels. Envisaging a new generation of energy-efficient quantum devices, here we explore how coherence can be manipulated without exchanging energy with the surrounding environment. We start from the task of converting a coherent superposition of energy eigenstates into another. We identify the optimal energy-preserving operations, both in the deterministic and in the probabilistic scenario. We then design a recursive protocol, wherein a branching sequence of energy-preserving filters increases the probability of success while reaching maximum fidelity at each iteration. Building on the recursive protocol, we construct efficient approximations of the optimal fidelity-probability trade-off, by taking coherent superpositions of the different branches generated by probabilistic filtering. The benefits of this construction are illustrated in applications to quantum metrology, quantum cloning, coherent state amplification, and ancilla-driven computation. Finally, we extend our results to transitions where the input state is generally mixed and we apply our findings to the task of purifying quantum coherence.
Dionne-Odom, J. Nicholas; Willis, Danny G.; Bakitas, Marie; Crandall, Beth; Grace, Pamela J.
2014-01-01
Background Surrogate decision-makers (SDMs) face difficult decisions at end of life (EOL) for decisionally incapacitated intensive care unit (ICU) patients. Purpose Identify and describe the underlying psychological processes of surrogate decision-making for adults at EOL in the ICU. Method Qualitative case study design using a cognitive task analysis (CTA) interviewing approach. Participants were recruited from October 2012 to June 2013 from an academic tertiary medical center’s ICU located in the rural Northeastern United States. Nineteen SDMs for patients who had died in the ICU completed in-depth semi-structured CTA interviews. Discussion The conceptual framework formulated from data analysis reveals three underlying, iterative psychological dimensions (gist impressions, distressing emotions, and moral intuitions) that impact an SDM’s judgment about the acceptability of either the patient’s medical treatments or his or her condition. Conclusion The framework offers initial insights about the underlying psychological processes of surrogate decision-making and may facilitate enhanced decision support for SDMs. PMID:25982772
Ranade-Kharkar, Pallavi; Norlin, Chuck; Del Fiol, Guilherme
2017-01-01
Complex and chronic conditions in pediatric patients with special needs often result in large and diverse patient care teams. Having a comprehensive view of the care teams is crucial to achieving effective and efficient care coordination for these vulnerable patients. In this study, we iteratively design and develop two alternative user interfaces (graphical and tabular) of a prototype of a tool for visualizing and managing care teams and conduct a formative assessment of the usability, usefulness, and efficiency of the tool. The median time to task completion for the 21 study participants was less than 7 seconds for 19 out of the 22 usability tasks. While both the prototype formats were well-liked in terms of usability and usefulness, the tabular format was rated higher for usefulness (p=0.02). Inclusion of CareNexus-like tools in electronic and personal health records has the potential to facilitate care coordination in complex pediatric patients. PMID:29854215
Object class segmentation of RGB-D video using recurrent convolutional neural networks.
Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven
2017-04-01
Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Developmental Stages of a Community–University Partnership
Allen, Michele L.; Svetaz, María Veronica; Hurtado, G. Ali; Linares, Roxana; Garcia-Huidobro, Diego; Hurtado, Monica
2013-01-01
Background: Strong and sustained community–university partnerships are necessary for community-based participatory translational research. Little attention has been paid to understanding the trajectory of research partnerships from a developmental perspective. Objective: To propose a framework describing partnership development and maturation based on Erikson’s eight stages of psychosocial development and describe how our collaboration is moving through those stages. Methods: Collaborators engaged in three rounds of iterative reflection regarding characteristics and contributors to the maturation of the Padres Informados/Jovenes Preparados (Informed Parents/Prepared Youth [PI/JP]) partnership. Lessons Learned: Each stage is characterized by broad developmental partnership tasks. Conflict or tension within the partnership is often a part of achieving the associated tasks. The strengths developed at each stage prepare the partnership for challenges associated with subsequent stages. Conclusions: This framework could provide a means for partnerships to reflect on their strengths and challenges at a given time point, and to help understand why some partnerships fail whereas others achieve maturity. PMID:24056509
Liu, Yijin; Meirer, Florian; Williams, Phillip A.; Wang, Junyue; Andrews, Joy C.; Pianetta, Piero
2012-01-01
Transmission X-ray microscopy (TXM) has been well recognized as a powerful tool for non-destructive investigation of the three-dimensional inner structure of a sample with spatial resolution down to a few tens of nanometers, especially when combined with synchrotron radiation sources. Recent developments of this technique have presented a need for new tools for both system control and data analysis. Here a software package developed in MATLAB for script command generation and analysis of TXM data is presented. The first toolkit, the script generator, allows automating complex experimental tasks which involve up to several thousand motor movements. The second package was designed to accomplish computationally intense tasks such as data processing of mosaic and mosaic tomography datasets; dual-energy contrast imaging, where data are recorded above and below a specific X-ray absorption edge; and TXM X-ray absorption near-edge structure imaging datasets. Furthermore, analytical and iterative tomography reconstruction algorithms were implemented. The compiled software package is freely available. PMID:22338691
Buller, David B; Berwick, Marianne; Shane, James; Kane, Ilima; Lantz, Kathleen; Buller, Mary Klein
2013-09-01
Smart phones are changing health communication for Americans. User-centered production of a mobile application for sun protection is reported. Focus groups (n = 16 adults) provided input on the mobile application concept. Four rounds of usability testing were conducted with 22 adults to develop the interface. An iterative programming procedure moved from a specification document to the final mobile application, named Solar Cell. Adults desired a variety of sun protection advice, identified few barriers to use and were willing to input personal data. The Solar Cell prototype was improved from round 1 (seven of 12 tasks completed) to round 2 (11 of 12 tasks completed) of usability testing and was interoperable across handsets and networks. The fully produced version was revised during testing. Adults rated Solar Cell as highly user friendly (mean = 5.06). The user-centered process produced a mobile application that should help many adults manage sun safety.
Evaluation of Life Sciences Glovebox (LSG) and Multi-Purpose Crew Restraint Concepts
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban
2005-01-01
Within the scope of the Multi-purpose Crew Restraints for Long Duration Spaceflights project, funded by Code U, it was proposed to conduct a series of evaluations on the ground and on the KC-135 to investigate the human factors issues concerning confined/unique workstations, such as the design of crew restraints. The usability of multiple crew restraints was evaluated for use with the Life Sciences Glovebox (LSG) and for performing general purpose tasks. The purpose of the KC-135 microgravity evaluation was: (1) to investigate the usability and effectiveness of the concepts developed, (2) to gather recommendations for further development of the concepts, and (3) to verify the validity of the existing requirements. Some designs had already been tested during a March KC-135 evaluation, and testing revealed the need for modifications/enhancements. This flight was designed to test the new iterations, as well as some new concepts. This flight also involved higher fidelity tasks in the LSG, and the addition of load cells on the gloveports.
A comparison of representations for discrete multi-criteria decision problems
Gettinger, Johannes; Kiesling, Elmar; Stummer, Christian; Vetschera, Rudolf
2013-01-01
Discrete multi-criteria decision problems with numerous Pareto-efficient solution candidates place a significant cognitive burden on the decision maker. An interactive, aspiration-based search process that iteratively progresses toward the most preferred solution can alleviate this task. In this paper, we study three ways of representing such problems in a DSS, and compare them in a laboratory experiment using subjective and objective measures of the decision process as well as solution quality and problem understanding. In addition to an immediate user evaluation, we performed a re-evaluation several weeks later. Furthermore, we consider several levels of problem complexity and user characteristics. Results indicate that different problem representations have a considerable influence on search behavior, although long-term consistency appears to remain unaffected. We also found interesting discrepancies between subjective evaluations and objective measures. Conclusions from our experiments can help designers of DSS for large multi-criteria decision problems to fit problem representations to the goals of their system and the specific task at hand. PMID:24882912
Automated flight path planning for virtual endoscopy.
Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S
1998-05-01
In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.
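The core iteration described here (nudging an initial path uphill on a distance-to-wall map until it rides the medial axis) can be sketched in a few lines. This 2D toy, using scipy's Euclidean distance transform on a synthetic tube, is an illustration of the idea only, not the authors' 3D implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy 2D "lumen": a horizontal tube in a binary image (True = inside).
lumen = np.zeros((64, 256), dtype=bool)
lumen[20:44, :] = True

# Distance-to-wall map; its ridge approximates the medial axis.
dist = distance_transform_edt(lumen)
gy, gx = np.gradient(dist)

# Start with a rough path along the tube, offset from the centreline,
# then iteratively nudge each point uphill on the distance map.
path = np.stack([np.full(50, 24.0), np.linspace(5, 250, 50)], axis=1)
for _ in range(100):
    iy = np.clip(path[:, 0].round().astype(int), 0, 63)
    ix = np.clip(path[:, 1].round().astype(int), 0, 255)
    path[:, 0] += 0.5 * gy[iy, ix]
    path[:, 1] += 0.5 * gx[iy, ix]

print(path[:, 0].mean())  # rows converge toward the tube centreline near 31.5
```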
Usability testing of an mHealth device for swallowing therapy in head and neck cancer survivors.
Constantinescu, Gabriela; Kuffel, Kristina; King, Ben; Hodgetts, William; Rieger, Jana
2018-04-01
The objective of this study was to conduct the first patient usability testing of a mobile health (mHealth) system for in-home swallowing therapy. Five participants with a history of head and neck cancer evaluated the mHealth system. After completing an in-application (app) tutorial with the clinician, participants were asked to independently complete five tasks: pair the device to the smartphone, place the device correctly, exercise, interpret progress displays, and close the system. Quantitative and qualitative methods were used to evaluate the effectiveness, efficiency, and satisfaction with the system. Critical changes to the app were found in three of the tasks, resulting in recommendations for the next iteration. These issues were related to ease of Bluetooth pairing, placement of device, and interpretation of statistics. Usability testing with patients identified issues that were essential to address prior to implementing the mHealth system in subsequent clinical trials. Of the usability methods used, video observation (synced screen capture with videoed gestures) revealed the most information.
Image Edge Tracking via Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Li, Ruowei; Wu, Hongkun; Liu, Shilong; Rahman, M. A.; Liu, Sanchi; Kwok, Ngai Ming
2018-04-01
A good edge plot should use continuous thin lines to describe the complete contour of the captured object. However, the detection of weak edges is a challenging task because of the associated low pixel intensities. Ant Colony Optimization (ACO) has been employed by many researchers to address this problem. The algorithm is a meta-heuristic method developed by mimicking the natural behaviour of ants. It uses iterative searches to find the optimal solution that cannot be found via traditional optimization approaches. In this work, ACO is employed to track and repair broken edges obtained via a conventional Sobel edge detector to produce a result with more connected edges.
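A minimal numpy sketch of the pheromone loop behind this kind of edge repair: ants walk a toy gradient image, step probabilities weight pheromone against gradient strength, and whole-path deposits let trails form across a gap in the edge. The transition rule, deposit rule, and all constants here are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gradient-magnitude image: a horizontal edge broken between columns 12-17.
grad = np.zeros((32, 32))
grad[16, 2:12] = grad[16, 18:30] = 1.0

pher = np.full(grad.shape, 0.1)            # initial pheromone field
alpha, beta, rho = 1.0, 2.0, 0.05          # pheromone weight, heuristic weight, evaporation

def neighbours(r, c):
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr or dc) and 0 <= r + dr < 32 and 0 <= c + dc < 32]

for _ in range(50):                        # ACO iterations
    deposits = np.zeros_like(pher)
    for _ in range(20):                    # ants per iteration
        r, c = 16, int(rng.integers(0, 32))   # start somewhere on the edge row
        visited = [(r, c)]
        for _ in range(40):                # steps per ant
            nbrs = neighbours(r, c)
            w = np.array([pher[p] ** alpha * (grad[p] + 0.01) ** beta for p in nbrs])
            r, c = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
            visited.append((r, c))
        quality = np.mean([grad[p] for p in visited])  # reward edge-following paths
        for p in visited:
            deposits[p] += quality
    pher = (1 - rho) * pher + deposits

# Pheromone accumulates across the gap on the edge row, linking the fragments.
print(pher[16, 12:18].mean(), ">", pher[10, 12:18].mean())
```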
Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives
Wang, Weiping; He, Lei
2014-01-01
Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although very limitedly. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be extracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. Then the corresponding optimization problem is formulated, and an efficient genetic algorithm with a refined crossover operator (GA-RX) is proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method is validated using 2 simulations based on 2 different scenarios. PMID:25140328
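As a flavour of the approach, here is a bare-bones genetic algorithm for a toy multi-UAV task assignment (chromosome = task-to-UAV mapping, fitness = balanced workload). The one-point crossover below is the textbook operator, not the paper's refined RX operator, and the fitness function is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

n_tasks, n_uavs, pop_size = 12, 3, 40
costs = rng.uniform(1, 5, n_tasks)        # toy per-task costs

def fitness(chrom):
    """Minimise the busiest UAV's load (negated so higher is better)."""
    loads = np.array([costs[chrom == u].sum() for u in range(n_uavs)])
    return -loads.max()

pop = rng.integers(0, n_uavs, (pop_size, n_tasks))
for _ in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_tasks)                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        m = rng.random(n_tasks) < 0.05                   # mutation
        child[m] = rng.integers(0, n_uavs, m.sum())
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print(best, -fitness(best))   # assignment and the resulting busiest-UAV load
```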
Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1995-01-01
A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.
Hidden Markov models of biological primary sequence information.
Baldi, P; Chauvin, Y; Hunkapiller, T; McClure, M A
1994-01-01
Hidden Markov model (HMM) techniques are used to model families of biological sequences. A smooth and convergent algorithm is introduced to iteratively adapt the transition and emission parameters of the models from the examples in a given family. The HMM approach is applied to three protein families: globins, immunoglobulins, and kinases. In all cases, the models derived capture the important statistical characteristics of the family and can be used for a number of tasks, including multiple alignments, motif detection, and classification. For K sequences of average length N, this approach yields an effective multiple-alignment algorithm which requires O(KN²) operations, linear in the number of sequences. PMID:8302831
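The scoring side of the HMM approach (how well a sequence fits a trained model) reduces to the forward algorithm. A toy two-state numpy sketch with invented parameters; profile HMMs for protein families add match/insert/delete states, but the recursion is the same:

```python
import numpy as np

# Toy 2-state HMM over a 4-letter alphabet (a crude nucleotide-style model).
pi = np.array([0.5, 0.5])                      # initial state distribution
A = np.array([[0.9, 0.1],                      # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.4, 0.4, 0.1, 0.1],            # emission probabilities per state
              [0.1, 0.1, 0.4, 0.4]])

def log_likelihood(obs):
    """Forward algorithm with per-step scaling for numerical stability."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]          # propagate, then weight by emission
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

seq = [0, 1, 0, 0, 2, 3, 3, 2]                 # integer-encoded sequence
print(log_likelihood(seq))
```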
Zebrafish tracking using convolutional neural networks.
Xu, Zhiping; Cheng, Xi En
2017-02-17
Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.
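A hedged PyTorch sketch of the ingredients named above: data augmentation plus an iterative training loop for a small identity classifier. The network, the flip-based augmentation, and the random stand-in batch are all illustrative assumptions; the paper's actual architecture and augmentation scheme are not given in the abstract.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_fish = 5                                    # identities to distinguish

def augment(x):
    """Cheap stand-in augmentation: random horizontal/vertical flips."""
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-1])
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-2])
    return x

net = nn.Sequential(                          # small illustrative CNN classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, n_fish),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch; real use would feed per-fish image crops from the tracker.
images = torch.rand(32, 3, 64, 64)
labels = torch.randint(0, n_fish, (32,))

for round_ in range(3):                       # iterative training: in a real
    for _ in range(10):                       # pipeline, labels would be refined
        loss = loss_fn(net(augment(images)), labels)   # between rounds
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"round {round_}: loss {loss.item():.3f}")
```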
An incremental block-line-Gauss-Seidel method for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Napolitano, M.; Walters, R. W.
1985-01-01
A block-line-Gauss-Seidel (LGS) method is developed for solving the incompressible and compressible Navier-Stokes equations in two dimensions. The method requires only one block-tridiagonal solution process per iteration and is consequently faster per step than the linearized block-ADI methods. Results are presented for both incompressible and compressible separated flows: in all cases the proposed block-LGS method is more efficient than the block-ADI methods. Furthermore, for high Reynolds number weakly separated incompressible flow in a channel, which proved to be an impossible task for a block-ADI method, solutions have been obtained very efficiently by the new scheme.
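The "one tridiagonal solve per line, using freshly updated neighbours" structure of a line-Gauss-Seidel sweep is easy to see on a model problem. The sketch below applies it to the 2D Laplace equation (a stand-in for the Navier-Stokes systems in the paper), with a Thomas-algorithm tridiagonal solver:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Line-Gauss-Seidel for the 2D Laplace equation: each sweep solves one
# tridiagonal system per grid row, and row i immediately uses the already
# updated values of row i-1 (the Gauss-Seidel ingredient).
n = 32
u = np.zeros((n, n))
u[0, :] = 1.0                                  # top boundary held at 1
lo = np.full(n - 2, -1.0)
di = np.full(n - 2, 4.0)
up = np.full(n - 2, -1.0)
for _ in range(200):
    for i in range(1, n - 1):
        rhs = u[i - 1, 1:-1] + u[i + 1, 1:-1]  # row i-1 is already updated
        rhs[0] += u[i, 0]                      # left/right boundary terms
        rhs[-1] += u[i, -1]
        u[i, 1:-1] = thomas(lo, di, up, rhs)
print(u[n // 2, n // 2])                       # tends to 0.25 at the centre
```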
The case for reassessment of health care technology. Once is not enough.
Banta, H D; Thacker, S B
1990-07-11
Assessment of health care technologies should be an iterative process, not a single event. In the United States there are an increasing number of organized attempts at reassessment of technologies by the health industry, professional societies, and national government agencies, such as the Medical Necessity Project of Blue Cross/Blue Shield, the Clinical Efficacy Assessment Project of the American College of Physicians, and the work of the US Preventive Services Task Force. We examine four clinical practices--electronic fetal monitoring, episiotomy, electroencephalography, and hysterectomy--to illustrate the need to continuously reassess existing technologies and to challenge our current inertia in this critical arena of health practice.
Spaceport Command and Control System Support Software Development
NASA Technical Reports Server (NTRS)
Brunotte, Leonard
2016-01-01
The Spaceport Command and Control System (SCCS) is a project developed and used by NASA at Kennedy Space Center in order to control and monitor the Space Launch System (SLS) at the time of its launch. One integral subteam under SCCS is the one assigned to the development of a data set building application to be used both on the launch pad and in the Launch Control Center (LCC) at the time of launch. This web application was developed in Ruby on Rails, a web framework using the Ruby object-oriented programming language, by a team of approximately 15 employees. Because this application is such a huge undertaking with many facets and iterations, there were a few areas in which work could be more easily organized and expedited. As an intern working with this team, I was charged with the task of writing web applications that fulfilled this need, creating a virtual and highly customizable whiteboard in order to allow engineers to keep track of build iterations and their status. Additionally, I developed a knowledge capture web application wherein any engineer or contractor within SCCS could ask a question, answer an existing question, or leave a comment on any question or answer, similar to Stack Overflow.
Assessment of prostate cancer detection with a visual-search human model observer
NASA Astrophysics Data System (ADS)
Sen, Anando; Kalantari, Faraz; Gifford, Howard C.
2014-03-01
Early staging of prostate cancer (PC) is a significant challenge, in part because of the small tumor sizes involved. Our long-term goal is to determine realistic diagnostic task performance benchmarks for standard PC imaging with single photon emission computed tomography (SPECT). This paper reports on a localization receiver operating characteristic (LROC) validation study comparing human and model observers. The study made use of a digital anthropomorphic phantom and one-cm tumors within the prostate and pelvic lymph nodes. Uptake values were consistent with data obtained from clinical In-111 ProstaScint scans. The SPECT simulation modeled a parallel-hole imaging geometry with medium-energy collimators. Nonuniform attenuation and distance-dependent detector response were accounted for both in the imaging and the ordered-subset expectation-maximization (OSEM) iterative reconstruction. The observer study made use of 2D slices extracted from reconstructed volumes. All observers were informed about the prostate and nodal locations in an image. Iteration number and the level of postreconstruction smoothing were study parameters. The results show that a visual-search (VS) model observer correlates better with the average detection performance of human observers than does a scanning channelized nonprewhitening (CNPW) model observer.
NASA Astrophysics Data System (ADS)
Liu, J.; Suo, X. M.; Zhou, S. S.; Meng, S. Q.; Chen, S. S.; Mu, H. P.
2016-12-01
The tracking of the migration of the ice frontal surface is crucial for understanding the underlying physical mechanisms in freezing soil. Owing to its distinct advantages, including non-invasive sensing, high safety, low cost and high data acquisition speed, electrical capacitance tomography (ECT) is considered to be a promising visualization measurement method. In this paper, the ECT method is used to visualize the migration of the ice frontal surface in freezing soil. With the main motivation of improving imaging quality, a loss function with multiple regularizers that incorporate the prior information related to the imaging objects is proposed to cast the ECT image reconstruction task into an optimization problem. An iteration scheme that integrates the superiority of the split Bregman iteration (SBI) method is developed for searching for the optimal solution of the proposed loss function. An unclosed electrodes sensor is designed to satisfy the requirements of practical measurements. An experimental system of one-dimensional freezing in soil is constructed, and the migration of the ice frontal surface during the freezing of a wet soil sample containing five percent moisture is measured. The visualization measurement results validate the feasibility and effectiveness of the ECT visualization method.
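The split Bregman machinery referenced above can be shown on the simplest relative of the problem: 1D total-variation denoising of a piecewise-constant profile (a crude stand-in for an ice-front permittivity step). The regularizer, weights, and data below are illustrative; the paper's multi-regularizer ECT formulation is not reproduced:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the closed-form l1 proximal step."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_split_bregman(f, lam=1.0, mu=1.0, n_iter=100):
    """min_u 0.5*||u - f||^2 + lam*||D u||_1 via split Bregman iteration."""
    n = len(f)
    D = np.eye(n, k=1) - np.eye(n)             # forward-difference operator
    D[-1, :] = 0.0                             # no difference at the last sample
    A = np.eye(n) + mu * D.T @ D               # fixed u-update system matrix
    d = np.zeros(n)
    b = np.zeros(n)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))   # quadratic subproblem
        Du = D @ u
        d = shrink(Du + b, lam / mu)                     # l1 subproblem
        b += Du - d                                      # Bregman update
    return u

rng = np.random.default_rng(3)
truth = np.repeat([0.0, 1.0, 0.3], 50)         # piecewise-constant profile
noisy = truth + 0.15 * rng.standard_normal(truth.size)
clean = tv_denoise_split_bregman(noisy)
print(np.abs(clean - truth).mean(), "<", np.abs(noisy - truth).mean())
```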
Applying Evolutionary Prototyping In Developing LMIS: A Spatial Web-Based System For Land Management
NASA Astrophysics Data System (ADS)
Agustiono, W.
2018-01-01
Software development is a difficult task, especially for software designed to comply with regulations that are constantly being introduced or changed, where it is almost impossible to complete the development process in a single pass. Even when it is possible, the developers may face a bulk of rework to fix the design to meet the specified needs. This iterative work also takes additional time and potentially leads to failing to meet the original schedule and budget. Given such inevitable changes, it is essential for developers to carefully consider and use an appropriate method that will help them carry out software project development. This research aims to examine the implementation of a software development method called evolutionary prototyping for developing software for regulatory compliance. It investigates the development of the Land Management Information System (pseudonym), initiated by the Australian government, for use by farmers to meet the regulatory demands of the Soil and Land Conservation Act. By doing so, it sought to provide an understanding of the efficacy of evolutionary prototyping in helping developers address frequently changing requirements and iterative work while still staying on schedule. The findings also offer useful practical insights for other developers who seek to build similar regulatory compliance software.
Automatic programming via iterated local search for dynamic job shop scheduling.
Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen
2015-01-01
Dispatching rules have been commonly used in practice for making sequencing and scheduling decisions. Due to specific characteristics of each manufacturing system, there is no universal dispatching rule that can dominate in all situations. Therefore, it is important to design specialized dispatching rules to enhance the scheduling performance for each manufacturing environment. Evolutionary computation approaches such as tree-based genetic programming (TGP) and gene expression programming (GEP) have been proposed to facilitate the design task through automatic design of dispatching rules. However, these methods are still limited by their high computational cost and low exploitation ability. To overcome this problem, we develop a new approach to automatic programming via iterated local search (APRILS) for dynamic job shop scheduling. The key idea of APRILS is to perform multiple local searches started with programs modified from the best obtained programs so far. The experiments show that APRILS outperforms TGP and GEP in most simulation scenarios in terms of effectiveness and efficiency. The analysis also shows that programs generated by APRILS are more compact than those obtained by genetic programming. An investigation of the behavior of APRILS suggests that the good performance of APRILS comes from the balance between exploration and exploitation in its search mechanism.
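The iterated-local-search backbone of APRILS, stripped of its program-space specifics, fits in a few lines: local search to a local optimum, a perturbation "kick", and an acceptance test, repeated. The toy objective below (total tardiness of a job sequence) and both operators are illustrative placeholders, not the paper's dispatching-rule representation:

```python
import random

random.seed(4)

# Toy single-machine sequencing instance standing in for a scheduling program.
jobs = list(range(10))
due = [random.randint(5, 40) for _ in jobs]
proc = [random.randint(1, 10) for _ in jobs]

def tardiness(seq):
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += max(0, t - due[j])
    return total

def local_search(seq):
    """Best-improvement with adjacent swaps until no move helps."""
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if tardiness(cand) < tardiness(seq):
                seq, improved = cand, True
    return seq

def perturb(seq):
    """Kick: relocate a random job to a random position."""
    s = seq[:]
    j = s.pop(random.randrange(len(s)))
    s.insert(random.randrange(len(s) + 1), j)
    return s

best = local_search(jobs[:])
for _ in range(100):
    cand = local_search(perturb(best))
    if tardiness(cand) <= tardiness(best):   # accept ties to keep moving
        best = cand
print(best, tardiness(best))
```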
Impact of extrinsic factors on fine motor performance of children attending day care.
Corsi, Carolina; Santos, Mariana Martins Dos; Marques, Luísa de Andrade Perez; Rocha, Nelci Adriana Cicuto Ferreira
2016-12-01
To assess the impact of extrinsic factors on the fine motor performance of children aged two years. 73 children attending public day care centers and 21 attending private day care centers were assessed. The day care environment was evaluated using the Infant/Toddler Environment Rating Scale - Revised Edition (ITERS-R), fine motor performance was assessed through the Bayley Scales of Infant and Toddler Development - III (BSITD-III), and socioeconomic data, maternal education and time since starting at the day care were collected through interviews. Spearman's correlation coefficient was calculated to assess the association between the studied variables. Time at the day care was positively correlated with the children's performance in some fine motor tasks of the BSITD-III, showing that the activities developed in day care centers were important for the refinement of specific motor skills, while overall fine motor performance on the scale was associated with maternal education and the ITERS-R sub-item "language and understanding". Extrinsic factors such as higher maternal education and the quality of day care centers are associated with fine motor performance in children attending day care. Copyright © 2016 Sociedade de Pediatria de São Paulo. Published by Elsevier Editora Ltda. All rights reserved.
TARGET - TASK ANALYSIS REPORT GENERATION TOOL, VERSION 1.0
NASA Technical Reports Server (NTRS)
Ortiz, C. J.
1994-01-01
The Task Analysis Report Generation Tool, TARGET, is a graphical interface tool used to capture procedural knowledge and translate that knowledge into a hierarchical report. TARGET is based on VISTA, a knowledge acquisition tool developed by the Naval Systems Training Center. TARGET assists a programmer and/or task expert organize and understand the steps involved in accomplishing a task. The user can label individual steps in the task through a dialogue-box and get immediate graphical feedback for analysis. TARGET users can decompose tasks into basic action kernels or minimal steps to provide a clear picture of all basic actions needed to accomplish a job. This method allows the user to go back and critically examine the overall flow and makeup of the process. The user can switch between graphics (box flow diagrams) and text (task hierarchy) versions to more easily study the process being documented. As the practice of decomposition continues, tasks and their subtasks can be continually modified to more accurately reflect the user's procedures and rationale. This program is designed to help a programmer document an expert's task thus allowing the programmer to build an expert system which can help others perform the task. Flexibility is a key element of the system design and of the knowledge acquisition session. If the expert is not able to find time to work on the knowledge acquisition process with the program developer, the developer and subject matter expert may work in iterative sessions. TARGET is easy to use and is tailored to accommodate users ranging from the novice to the experienced expert systems builder. TARGET is written in C-language for IBM PC series and compatible computers running MS-DOS and Microsoft Windows version 3.0 or 3.1. No source code is supplied. The executable also requires 2Mb of RAM, a Microsoft compatible mouse, a VGA display and an 80286, 386 or 486 processor machine. The standard distribution medium for TARGET is one 5.25 inch 360K MS-DOS format diskette. TARGET was developed in 1991.
Mixed Initiative Visual Analytics Using Task-Driven Recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Cramer, Nicholas O.; Israel, David
2015-12-07
Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
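The "local" idea (updating values only on a subset of the state space per iteration) can be seen on a tabular toy problem. This sketch runs value-iteration backups on a rotating quarter of a 1D shortest-path MDP; the paper's neural-network function approximation and admissibility analysis are not reproduced:

```python
import numpy as np

# 1D shortest-path MDP: states 0..N-1, unit step cost, absorbing goal at N-1.
N, gamma = 20, 0.95
V = np.zeros(N)

def backup(s):
    """Bellman optimality backup; actions move one state left or right."""
    if s == N - 1:
        return 0.0                             # absorbing goal, zero cost-to-go
    targets = (max(s - 1, 0), min(s + 1, N - 1))
    return max(-1.0 + gamma * V[t] for t in targets)

for k in range(200):
    subset = range(k % 4, N, 4)                # update only a quarter of the
    for s in subset:                           # state space in each iteration
        V[s] = backup(s)

# Once V has converged, the greedy policy heads for the goal from every state.
go_right = [-1.0 + gamma * V[min(s + 1, N - 1)] >=
            -1.0 + gamma * V[max(s - 1, 0)] for s in range(N - 1)]
print(all(go_right))  # True
```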
Thakur, Shalabh; Guttman, David S
2016-06-30
Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .
Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy
NASA Astrophysics Data System (ADS)
Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges
2016-05-01
It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT) that only requires a small field of view due to the potential reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal view numbers decrease with the total exposure levels for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to be more in favor of larger number of views while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.
NASA Astrophysics Data System (ADS)
Albert, L.; Rottensteiner, F.; Heipke, C.
2015-08-01
Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.
Ide, Jaime S; Nedic, Sanja; Wong, Kin F; Strey, Shmuel L; Lawson, Elizabeth A; Dickerson, Bradford C; Wald, Lawrence L; La Camera, Giancarlo; Mujica-Parodi, Lilianne R
2018-07-01
Oxytocin (OT) is an endogenous neuropeptide that, while originally thought to promote trust, has more recently been found to be context-dependent. Here we extend experimental paradigms previously restricted to de novo decision-to-trust, to a more realistic environment in which social relationships evolve in response to iterative feedback over twenty interactions. In a randomized, double blind, placebo-controlled within-subject/crossover experiment of human adult males, we investigated the effects of a single dose of intranasal OT (40 IU) on Bayesian expectation updating and reinforcement learning within a social context, with associated brain circuit dynamics. Subjects participated in a neuroeconomic task (Iterative Trust Game) designed to probe iterative social learning while their brains were scanned using ultra-high field (7T) fMRI. We modeled each subject's behavior using Bayesian updating of belief-states ("willingness to trust") as well as canonical measures of reinforcement learning (learning rate, inverse temperature). Behavioral trajectories were then used as regressors within fMRI activation and connectivity analyses to identify corresponding brain network functionality affected by OT. Behaviorally, OT reduced feedback learning, without bias with respect to positive versus negative reward. Neurobiologically, reduced learning under OT was associated with muted communication between three key nodes within the reward circuit: the orbitofrontal cortex, amygdala, and lateral (limbic) habenula. Our data suggest that OT, rather than inspiring feelings of generosity, instead attenuates the brain's encoding of prediction error and therefore its ability to modulate pre-existing beliefs. This effect may underlie OT's putative role in promoting what has typically been reported as 'unjustified trust' in the face of information that suggests likely betrayal, while also resolving apparent contradictions with regard to OT's context-dependent behavioral effects. Copyright © 2018 Elsevier Inc. All rights reserved.
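A minimal computational reading of the behavioral result can be sketched as follows, assuming a standard Rescorla-Wagner prediction-error update and a logistic (inverse-temperature) choice rule rather than the authors' full Bayesian belief-state model; partner probabilities and parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trust_game(p_reciprocate, alpha=0.3, beta=3.0, n_trials=20):
    """Simulate one subject in an iterated trust game.

    p_reciprocate : probability the pre-programmed partner reciprocates
    alpha         : learning rate (how strongly prediction errors update belief)
    beta          : inverse temperature (choice determinism)
    """
    belief = 0.5                      # initial willingness to trust
    history = []
    for _ in range(n_trials):
        p_invest = 1.0 / (1.0 + np.exp(-beta * (belief - 0.5)))  # logistic choice
        invest = rng.random() < p_invest
        if invest:
            reward = 1.0 if rng.random() < p_reciprocate else 0.0
            belief += alpha * (reward - belief)   # prediction-error update
        history.append((invest, belief))
    return history

# A lower alpha mimics the reduced feedback learning reported under oxytocin.
for label, a in [("placebo-like", 0.4), ("OT-like", 0.1)]:
    final_belief = simulate_trust_game(p_reciprocate=0.8, alpha=a)[-1][1]
    print(label, "final belief:", round(final_belief, 3))
```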
A Model of Supervisor Decision-Making in the Accommodation of Workers with Low Back Pain.
Williams-Whitt, Kelly; Kristman, Vicki; Shaw, William S; Soklaridis, Sophie; Reguly, Paula
2016-09-01
Purpose To explore supervisors' perspectives and decision-making processes in the accommodation of back injured workers. Methods Twenty-three semi-structured, in-depth interviews were conducted with supervisors from eleven Canadian organizations about their role in providing job accommodations. Supervisors were identified through an on-line survey and interviews were recorded, transcribed and entered into NVivo software. The initial analyses identified common units of meaning, which were used to develop a coding guide. Interviews were coded, and a model of supervisor decision-making was developed based on the themes, categories and connecting ideas identified in the data. Results The decision-making model includes a process element that is described as iterative "trial and error" decision-making. Medical restrictions are compared to job demands, employee abilities and available alternatives. A feasible modification is identified through brainstorming and then implemented by the supervisor. Resources used for brainstorming include information, supervisor experience and autonomy, and organizational supports. The model also incorporates the experience of accommodation as a job demand that causes strain for the supervisor. Accommodation demands affect the supervisor's attitude, brainstorming and monitoring effort, and communication with returning employees. Resources and demands have a combined effect on accommodation decision complexity, which in turn affects the quality of the accommodation option selected. If the employee is unable to complete the tasks or is reinjured during the accommodation, the decision cycle repeats. More frequent iteration through the trial and error process reduces the likelihood of return to work success. Conclusion A series of propositions is developed to illustrate the relationships among categories in the model. The model and propositions show: (a) the iterative, problem solving nature of the RTW process; (b) decision resources necessary for accommodation planning, and (c) the impact accommodation demands may have on supervisors and RTW quality.
Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy.
Bian, Junguo; Sharp, Gregory C; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges
2016-05-07
It is well known that projections acquired over an angular range slightly exceeding 180° (a so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection-data and reconstruction-image discretization, physical factors, and data noise), short-scan reconstructions may have different appearances and properties from full-scan (360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which requires only a small field of view, because of the potential reduction in imaging time and dose. In this work, we studied the image-quality trade-off for full-, short-, and full/short-scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluations were performed. The results indicate that the optimal exposure level and number of views lie in the middle of the studied range for both FBP and TV-based iterative algorithms, and that the optimum is object-dependent and task-dependent. The optimal number of views decreases with the total exposure level for both algorithms. The results also indicate slight differences between FBP and TV-based iterative algorithms in the image-quality trade-off: FBP favors a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than FBP. These studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.
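For readers unfamiliar with the TV-minimization ingredient, the sketch below shows the regularizer in its simplest setting: gradient descent on a smoothed TV denoising objective. This is only a surrogate for intuition; the actual iterative CBCT reconstruction also involves the projection model and data-fidelity constraints, and the step sizes and weights here are illustrative.

```python
import numpy as np

def tv_denoise(noisy, lam=0.2, step=0.1, n_iter=200, eps=1e-6):
    """Minimize 0.5*||u - noisy||^2 + lam*TV(u) by gradient descent on an
    epsilon-smoothed total-variation term (np.roll wraps at the borders,
    a crude but adequate boundary treatment for a sketch)."""
    u = noisy.copy()
    for _ in range(n_iter):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        # Divergence of the normalized gradient field (negative TV gradient).
        div = (gx / mag - np.roll(gx / mag, 1, axis=1)
               + gy / mag - np.roll(gy / mag, 1, axis=0))
        u -= step * ((u - noisy) - lam * div)
    return u

phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
noisy = phantom + 0.2 * np.random.default_rng(1).standard_normal(phantom.shape)
print("error std before/after:",
      round(float((noisy - phantom).std()), 3),
      round(float((tv_denoise(noisy) - phantom).std()), 3))
```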
Scalable and Axiomatic Ranking of Network Role Similarity
Jin, Ruoming; Lee, Victor E.; Li, Longjie
2014-01-01
A key task in analyzing social networks and other complex networks is role analysis: describing and categorizing nodes according to how they interact with other nodes. Two nodes have the same role if they interact with equivalent sets of neighbors. The most fundamental role equivalence is automorphic equivalence. Unfortunately, the fastest known algorithms for graph automorphism are nonpolynomial. Moreover, since exact equivalence is rare, a more meaningful task is measuring the role similarity between any two nodes. This task is closely related to the structural or link-based similarity problem that SimRank addresses. However, SimRank and other existing similarity measures are not sufficient because they are not guaranteed to recognize automorphically or structurally equivalent nodes. This paper makes two contributions. First, we present and justify several axiomatic properties necessary for a role similarity measure or metric. Second, we present RoleSim, a new similarity metric which satisfies these axioms and which can be computed with a simple iterative algorithm. We rigorously prove that RoleSim satisfies all these axiomatic properties. We also introduce Iceberg RoleSim, a scalable algorithm which discovers all pairs with RoleSim scores above a user-defined threshold θ. We demonstrate the interpretative power of RoleSim on both synthetic and real datasets. PMID:25383066
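A compact sketch of the RoleSim-style iteration follows, with one deliberate simplification: a greedy weighted matching of neighbours replaces the exact maximal matching used in the paper, which keeps the example short at the cost of exactness. The decay factor and graph are illustrative.

```python
from itertools import product

def rolesim(adj, beta=0.15, n_iter=10):
    """Iteratively compute RoleSim-style similarity scores.

    adj  : dict node -> list of neighbours
    beta : decay factor in the RoleSim recurrence
    """
    nodes = list(adj)
    S = {(u, v): 1.0 for u, v in product(nodes, nodes)}  # all-ones start
    for _ in range(n_iter):
        S_new = {}
        for u, v in product(nodes, nodes):
            Nu, Nv = adj[u], adj[v]
            if not Nu or not Nv:
                S_new[(u, v)] = beta
                continue
            pairs = sorted(((S[(x, y)], x, y) for x in Nu for y in Nv),
                           reverse=True)
            used_x, used_y, total = set(), set(), 0.0
            for s, x, y in pairs:            # greedy matching of neighbours
                if x not in used_x and y not in used_y:
                    used_x.add(x); used_y.add(y); total += s
            S_new[(u, v)] = (1 - beta) * total / max(len(Nu), len(Nv)) + beta
        S = S_new
    return S

# Star graph: the two leaves are automorphically equivalent.
adj = {0: [1, 2], 1: [0], 2: [0]}
S = rolesim(adj)
print(round(S[(1, 2)], 4))   # the leaves reach the maximal score, 1.0
```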
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of numerically intensive systems over loosely coupled multiprocessor architectures. Such analytical models are of significant importance to many real-time systems that must meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the cooperating processors of a parallel computer. First, the analytical system is coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly accounts for the computation capability of the processors, the network topology, the communication time, and possible congestion. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
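The scheduling core can be sketched as plain simulated annealing over task-to-processor assignments. The cost below (makespan plus a communication penalty for cut task-graph edges) is a stand-in for the paper's richer nonanalytical cost function, and all task times and temperatures are illustrative.

```python
import math
import random

def schedule_cost(assign, task_time, comm, n_proc):
    """Makespan plus a simple penalty for task-graph edges cut by the mapping."""
    load = [0.0] * n_proc
    for t, p in enumerate(assign):
        load[p] += task_time[t]
    comm_cost = sum(w for (a, b), w in comm.items() if assign[a] != assign[b])
    return max(load) + comm_cost

def anneal(task_time, comm, n_proc, T0=5.0, cooling=0.995, n_steps=5000, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(n_proc) for _ in task_time]
    cost = schedule_cost(assign, task_time, comm, n_proc)
    T = T0
    for _ in range(n_steps):
        t = rng.randrange(len(task_time))     # move one task to another processor
        old = assign[t]
        assign[t] = rng.randrange(n_proc)
        new_cost = schedule_cost(assign, task_time, comm, n_proc)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / T):
            cost = new_cost                   # accept (possibly uphill) move
        else:
            assign[t] = old                   # reject, restore previous mapping
        T *= cooling
    return assign, cost

task_time = [3, 1, 4, 1, 5, 9, 2, 6]
comm = {(0, 1): 2.0, (2, 3): 1.0, (4, 5): 3.0}   # task-graph edges with weights
print(anneal(task_time, comm, n_proc=3))
```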
Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm
NASA Astrophysics Data System (ADS)
Sun, Haisheng; Xu, Rui; Chen, Huaping
2018-04-01
To minimize makespan when scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed in this paper. Because the problem is a multi-dimensional discrete one, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. To improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the average task processing time; on the other hand, an effective adaptive learning-rate function related to the iteration count is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving these two individuals, and the remaining initial individuals are generated at random. Finally, the sampling process is divided into two parts: sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions but also converges faster.
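A stripped-down PBIL loop for this problem might look as follows. The growing learning rate is a stand-in for the paper's adaptive schedule, and the Max-Min/Min-Min seeding and IGA sampling stage are omitted; all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pbil_schedule(proc_time, n_machines, pop=50, n_iter=200):
    """PBIL for independent-task scheduling (minimize makespan).
    One independent categorical distribution per task, as in per-component
    PBIL; the learning rate grows with the iteration count."""
    n_tasks = len(proc_time)
    P = np.full((n_tasks, n_machines), 1.0 / n_machines)  # probability model
    best, best_make = None, np.inf
    for it in range(n_iter):
        lr = 0.05 + 0.25 * it / n_iter            # adaptive learning rate
        samples = np.array([[rng.choice(n_machines, p=P[t])
                             for t in range(n_tasks)] for _ in range(pop)])
        loads = np.zeros((pop, n_machines))
        for k in range(pop):
            for t in range(n_tasks):
                loads[k, samples[k, t]] += proc_time[t]
        makespans = loads.max(axis=1)
        k_best = makespans.argmin()
        if makespans[k_best] < best_make:
            best, best_make = samples[k_best].copy(), makespans[k_best]
        # Shift the model toward the best individual of this generation.
        for t in range(n_tasks):
            target = np.eye(n_machines)[samples[k_best, t]]
            P[t] = (1 - lr) * P[t] + lr * target
            P[t] /= P[t].sum()
    return best, best_make

proc = rng.integers(1, 20, size=12)
print(pbil_schedule(proc, n_machines=3))
```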
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technique for selecting the effective wavelet scales for estimating handgrip force from surface electromyograms (SEMG). The SEMG signals corresponding to gripping force were collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule for combining wavelet scales based on the sensitivity of each scale and selected the appropriate combination using sequence combination analysis (SCA). The SCA results indicated that scale combination VI is suitable for estimating force from the extensors and combination V for the flexors. The proposed method was compared to two former methods on prolonged static and force-varying contraction tasks. The experimental results showed that the root mean square errors of the proposed method for both task types were less than 20%. The accuracy and robustness of the handgrip force estimates derived by the proposed method are better than those obtained by the former methods.
Brehmer, Matthew; Ingram, Stephen; Stray, Jonathan; Munzner, Tamara
2014-12-01
For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview, an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system "in the wild", and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of "exploring" a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology.
Gene selection for microarray data classification via subspace learning and manifold regularization.
Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui
2017-12-19
With the rapid development of DNA microarray technology, large amounts of genomic data have been generated. Classification of these microarray data is a challenging task because gene expression data often comprise thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with irrelevant and redundant genes removed. Compared with the original data, the selected gene subset benefits the classification task. We formulate gene selection as a manifold-regularized subspace learning problem. In detail, a projection matrix is used to project the original high-dimensional microarray data into a lower-dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator for the different genes. An iterative update algorithm is developed to solve the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods for microarray data classification.
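One plausible instantiation of such a formulation (not necessarily the authors' exact objective) is sketched below: a projection matrix W is fit against a PCA target with a graph-Laplacian manifold term and an L2,1 row-sparsity penalty handled by IRLS-style reweighting, and genes are ranked by the row norms of W. All weights and sizes are illustrative.

```python
import numpy as np

def select_genes(X, k=5, alpha=0.1, gamma=0.5, n_iter=30, n_neighbors=5):
    """Rank genes by the row norms of W learned from
        min_W ||X W - U||_F^2 + alpha*tr(W^T X^T L X W) + gamma*||W||_{2,1},
    with U a PCA target and L a k-NN graph Laplacian over the samples."""
    n, d = X.shape
    Xc = X - X.mean(0)
    U = np.linalg.svd(Xc, full_matrices=False)[0][:, :k]   # low-dim target
    # k-NN adjacency over samples -> graph Laplacian (manifold term)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Adj = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D2[i])[1:n_neighbors + 1]:
            Adj[i, j] = Adj[j, i] = 1.0
    L = np.diag(Adj.sum(1)) - Adj
    A = X.T @ X + alpha * X.T @ L @ X
    W = np.linalg.solve(A + gamma * np.eye(d), X.T @ U)    # ridge start
    for _ in range(n_iter):
        # IRLS reweighting of the L2,1 penalty by current row norms.
        Dm = np.diag(1.0 / (2.0 * (np.sqrt((W ** 2).sum(1)) + 1e-8)))
        W = np.linalg.solve(A + gamma * Dm, X.T @ U)
    return np.argsort(-np.sqrt((W ** 2).sum(1)))           # most important first

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100))
labels = np.sign(rng.standard_normal(40))
X[:, :3] += 3.0 * labels[:, None]          # three genes carry the class signal
print(select_genes(X)[:5])                 # the informative genes should lead
```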
Understanding linear measurement: A comparison of filipino and new zealand children
NASA Astrophysics Data System (ADS)
Irwin, Kathryn C.; Ell, Fiona R.; Vistro-Yu, Catherine P.
2004-06-01
An understanding of linear measurement depends on principles that include standard unit size, iteration of units, numbering of a unit at its end, and partial units for measuring continuous length. Children may learn these principles at school, for example through experience with informal measurement, or they may learn them through use of measurement in society. This study compared the application of these principles by children aged 8 and 9 from the Philippines and New Zealand. These countries were selected because they have quite different curricula, societal influences and economies. Ninety-one children were interviewed individually on a common set of unusual tasks that were designed to tap underlying principles. Results showed many similarities and some differences between countries. Most tasks requiring visualisation and informal units were done more accurately by New Zealand children. Some tasks involving the use of a conventional ruler were done more accurately by Filipino children. These differences appear to be related to differences in curricula and possibly to differences in societal use of measurement. We suggest that these results, like those of other writers cited, demonstrate the need for extensive work on the underlying concepts in measurement through work on informal measurement and a careful transition between informal and formal measurement.
Temporal neural networks and transient analysis of complex engineering systems
NASA Astrophysics Data System (ADS)
Uluyol, Onder
A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
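The gamma memory at the heart of the LOGF neuron is a cascaded leaky recurrence. The sketch below implements only that filter bank, not the full network or the BTT training; the order and the parameter mu are illustrative.

```python
import numpy as np

def gamma_memory(x, order=3, mu=0.4):
    """Digital gamma filter bank:
        g_0[t] = x[t]
        g_k[t] = (1 - mu) * g_k[t-1] + mu * g_{k-1}[t-1],  k = 1..order.
    Each tap is a progressively smoother, delayed trace of the input,
    which is what lets a LOGF-style neuron see several time scales at once."""
    T = len(x)
    g = np.zeros((order + 1, T))
    g[0] = x
    for t in range(1, T):
        for k in range(1, order + 1):
            g[k, t] = (1 - mu) * g[k, t - 1] + mu * g[k - 1, t - 1]
    return g  # rows are the memory taps fed to the neuron's spatial weights

taps = gamma_memory(np.sin(np.linspace(0, 6 * np.pi, 100)), order=3, mu=0.4)
print(taps.shape)  # (4, 100): the input plus three filtered traces
```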
Joint image restoration and location in visual navigation system
NASA Astrophysics Data System (ADS)
Wu, Yuefeng; Sang, Nong; Lin, Wei; Shao, Yuanjie
2018-02-01
Image location methods are a key technology in visual navigation. Most previous image location methods simply assume ideal inputs, without taking into account real-world degradations (e.g., low resolution and blur). In view of such degradations, conventional methods first perform image restoration and then match the restored image against the reference image. However, by treating restoration and location separately, a defective restoration output can corrupt the localization result. In this paper, we present a joint image restoration and location (JRL) method, which utilizes a sparse representation prior to handle the challenging problem of low-quality image location. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of a dictionary constructed from the reference image. By iteratively solving the image restoration in pursuit of the sparsest representation, our method achieves simultaneous restoration and location. Based on this prior, we demonstrate that the restoration task and the location task can benefit greatly from each other. Extensive experiments on real scene images with Gaussian blur are carried out, and our joint model outperforms conventional methods that treat the two tasks independently.
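The mechanics of the sparse-representation prior can be sketched with plain ISTA on a toy problem: the same sparse code that restores the degraded patch also points to the matching reference-dictionary atom. The blur matrix and dimensions below are illustrative, not the paper's setup.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_restore_locate(y, D, B, lam=0.05, n_iter=300):
    """ISTA for  min_a 0.5*||y - B D a||^2 + lam*||a||_1.

    D : dictionary whose columns are (vectorized) candidate patches from the
        reference image; B : known degradation (e.g., blur) operator.
    The sparse code both restores the patch (x = D a) and locates it
    (the dominant coefficient indexes the matching reference patch)."""
    A = B @ D
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft(a - step * A.T @ (A @ a - y), step * lam)
    return D @ a, int(np.argmax(np.abs(a)))       # restored patch, match index

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 20))                 # 20 candidate reference patches
D /= np.linalg.norm(D, axis=0)
B = np.eye(64) * 0.7 + np.eye(64, k=1) * 0.3      # toy 1-D "blur"
y = B @ D[:, 7] + 0.01 * rng.standard_normal(64)  # degraded view of patch 7
print(joint_restore_locate(y, D, B)[1])           # expect 7
```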
A methodology for image quality evaluation of advanced CT systems.
Wilson, Joshua M; Christianson, Olav I; Richard, Samuel; Samei, Ehsan
2013-03-01
This work involved the development of a phantom-based method to quantify the performance of tube current modulation and iterative reconstruction in modern computed tomography (CT) systems. The quantification included resolution, HU accuracy, noise, and noise texture accounting for the impact of contrast, prescribed dose, reconstruction algorithm, and body size. A 42-cm-long, 22.5-kg polyethylene phantom was designed to model four body sizes. Each size was represented by a uniform section, for the measurement of the noise-power spectrum (NPS), and a feature section containing various rods, for the measurement of HU and the task-based modulation transfer function (TTF). The phantom was scanned on a clinical CT system (GE, 750HD) using a range of tube current modulation settings (NI levels) and reconstruction methods (FBP and ASIR30). An image quality analysis program was developed to process the phantom data to calculate the targeted image quality metrics as a function of contrast, prescribed dose, and body size. The phantom fabrication closely followed the design specifications. In terms of tube current modulation, the tube current and resulting image noise varied as a function of phantom size as expected based on the manufacturer specification: from the 16- to 37-cm section, the HU contrast for each rod was inversely related to phantom size, and noise was relatively constant (<5% change). With iterative reconstruction, the TTF exhibited a contrast dependency with better performance for higher contrast objects. At low noise levels, TTFs of iterative reconstruction were better than those of FBP, but at higher noise, that superiority was not maintained at all contrast levels. Relative to FBP, the NPS of iterative reconstruction exhibited an ~30% decrease in magnitude and a 0.1 mm⁻¹ shift in the peak frequency. Phantom and image quality analysis software were created for assessing CT image quality over a range of contrasts, doses, and body sizes. The testing platform enabled robust NPS, TTF, HU, and pixel noise measurements as a function of body size capable of characterizing the performance of reconstruction algorithms and tube current modulation techniques.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then added to the quadratic optimization model step by step until all dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels when adding dose constraints, a so-called geometric distance, defined on the transformed standard quadratic form of the fluence map optimization model, guides the voxel selection. The new geometric distance sorting technique largely reduces the unexpected increase of the objective function value that constraint adding inevitably causes, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is also given, and a proposition is proved to support the heuristic. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm was tested on four cases (head-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is a somewhat more efficient technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
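A heavily simplified version of the constraint-adding loop is sketched below. For brevity, each inner QP is solved with a penalized projected-gradient iteration rather than an interior point method, and candidate voxels are ranked by simple dose excess, of which the paper's geometric-distance sorting is a refinement; all matrices and thresholds are toy values.

```python
import numpy as np

def solve_penalized(D, p, capped, cap, w, rho=50.0, n_iter=400):
    """Projected gradient for ||D w - p||^2 plus a quadratic penalty that
    enforces dose caps on the voxels in `capped` (stand-in for the
    interior-point QP solves)."""
    step = 0.5 / (np.linalg.norm(D, 2) ** 2 * (1 + rho))
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ w - p)
        if capped:
            idx = list(capped)
            over = np.maximum(D[idx] @ w - cap, 0.0)
            grad += 2 * rho * D[idx].T @ over
        w = np.maximum(w - step * grad, 0.0)      # fluence stays nonnegative
    return w

def fmo_dose_volume(D, p, oar, cap, frac_allowed, max_rounds=20):
    """Add dose caps voxel by voxel until the dose-volume constraint holds:
    at most `frac_allowed` of OAR voxels may exceed `cap`."""
    w = np.zeros(D.shape[1])
    capped = set()
    for _ in range(max_rounds):
        w = solve_penalized(D, p, capped, cap, w)
        dose = D @ w
        if len([v for v in oar if dose[v] > cap]) <= frac_allowed * len(oar):
            break
        hot = [v for v in oar if dose[v] > cap and v not in capped]
        hot.sort(key=lambda v: dose[v] - cap, reverse=True)
        capped.update(hot[:2])                    # a few new constraints per round
    return w, capped

rng = np.random.default_rng(0)
D = rng.random((30, 8))                           # toy dose-influence matrix
p = np.ones(30); p[20:] = 0.3                     # prescription; last 10 are OAR
print(fmo_dose_volume(D, p, oar=range(20, 30), cap=0.5, frac_allowed=0.2)[1])
```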
The Effect of Iteration on the Design Performance of Primary School Children
ERIC Educational Resources Information Center
Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.
2015-01-01
Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…
Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.
Wei, Qinglai; Liu, Derong; Lin, Hanquan
2016-03-01
In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
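For orientation, classical tabular value iteration is shown below; the ADP algorithm described above generalizes this recursion to undiscounted nonlinear systems with neural-network function approximation, whereas the toy here uses a discount factor so that the tabular recursion converges. The transition and cost tables are illustrative.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Classical value iteration: V_{i+1}(s) = min_a [R(s,a) + gamma * E V_i].
    Costs are minimized, matching the optimal-control convention; V = 0 is an
    admissible (positive semi-definite) initialization."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * P @ V          # P[s, a, s']: Q[s, a] = R + gamma*sum P*V
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

# Toy 2-state, 2-action problem.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.1, 0.9], [0.8, 0.2]]])          # transition probabilities
R = np.array([[1.0, 2.0],
              [0.5, 3.0]])                        # stage cost R[s, a]
V, policy = value_iteration(P, R)
print(V.round(3), policy)
```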
Efficient ICCG on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Hammond, Steven W.; Schreiber, Robert
1989-01-01
Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
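One standard way to expose the parallelism in the triangular solves is level scheduling: a static dependence analysis groups unknowns into levels whose members can be solved concurrently. The sketch below emulates that schedule serially; the level computation is the point, not the (here sequential) execution, and the matrix is a toy example.

```python
import numpy as np

def levels_lower(L):
    """Partition the forward solve with lower-triangular L into levels:
    unknown i lands one level above the deepest unknown it depends on.
    All unknowns within a level are independent tasks."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [j for j in range(i) if L[i, j] != 0.0]
        level[i] = 1 + max((level[j] for j in deps), default=-1)
    return level

def level_scheduled_solve(L, b):
    """Forward substitution ordered by levels (serial emulation of the
    parallel schedule)."""
    n = L.shape[0]
    x = np.zeros(n)
    lev = levels_lower(L)
    for l in range(lev.max() + 1):
        for i in np.where(lev == l)[0]:        # independent within a level
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0, 0, 0],
              [1.0, 2, 0, 0],
              [0.0, 0, 2, 0],
              [0.0, 1, 1, 2]])
b = np.array([2.0, 4, 2, 6])
print(level_scheduled_solve(L, b), np.linalg.solve(L, b))  # should agree
```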
NASA Technical Reports Server (NTRS)
Manro, M. E.
1983-01-01
Two separated-flow computer programs and a semiempirical method for incorporating experimentally measured separated-flow effects into a linear aeroelastic analysis were evaluated. The three-dimensional leading-edge vortex (LEV) code is an improved panel method for three-dimensional inviscid flow over a wing with leading-edge vortex separation. The governing equations are the linear flow differential equation with nonlinear boundary conditions. The solution is iterative; the position as well as the strength of the vortex is determined. Cases for both full- and partial-span vortices were executed. The predicted pressures are good and adequately reflect changes in configuration.
No-reference image quality assessment for horizontal-path imaging scenarios
NASA Astrophysics Data System (ADS)
Rios, Carlos; Gladysz, Szymon
2013-05-01
There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
Blade loss transient dynamics analysis, volume 1. Task 2: TETRA 2 theoretical development
NASA Technical Reports Server (NTRS)
Gallardo, Vincente C.; Black, Gerald
1986-01-01
The theoretical development of the forced steady-state analysis of the structural dynamic response of a turbine engine having nonlinear connecting elements is discussed. Based on modal synthesis and the principle of harmonic balance, the governing relations are the compatibility of displacements at the nonlinear connecting elements. There are four displacement compatibility equations at each nonlinear connection, which are solved by iteration for the principal harmonic of the excitation frequency. The resulting computer program, TETRA 2, combines the original TETRA transient analysis (with flexible bladed disk) with the steady-state capability. A more versatile nonlinear rub or bearing element, which contains a hardening (or softening) spring, with or without deadband, is also incorporated.
Operations mission planner beyond the baseline
NASA Technical Reports Server (NTRS)
Biefeld, Eric; Cooper, Lynne
1991-01-01
The scheduling of Space Station Freedom must satisfy four major requirements. It must ensure efficient housekeeping operations, maximize the collection of science, respond to changes in tasking and available resources, and accommodate such changes in a manner that minimizes disruption of the station's ongoing operations. While meeting these requirements, the scheduler must cope with the complexity, scope, and flexibility of SSF operations, which means dealing with an astronomical number of possible schedules. The Operations Mission Planner (OMP) is centered around minimally disruptive replanning and the use of heuristics to limit search during scheduling. OMP has already demonstrated several artificial-intelligence-based scheduling techniques, such as Interleaved Iterative Refinement and Bottleneck Identification using Process Chronologies.
Geo-Engineering through Internet Informatics (GEMINI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watney, W. Lynn; Doveton, John H.; Victorine, John R.
GEMINI will resolve reservoir parameters that control well performance; characterize subtle reservoir properties important in understanding and modeling hydrocarbon pore volume and fluid flow; expedite recognition of bypassed, subtle, and complex oil and gas reservoirs at regional and local scale; differentiate commingled reservoirs; build integrated geologic and engineering models based on real-time, iterative solutions to evaluate reservoir-management options for improved recovery; provide practical tools to assist the geoscientist, engineer, and petroleum operator in making their tasks more efficient and effective; enable evaluations to be made at different scales, ranging from individual well, through lease and field, to play and region (a scalable information infrastructure); and provide training and technology transfer to evaluate capabilities of the client.
The assessment of function: How is it measured? A clinical perspective
Reiman, Michael P; Manske, Robert C
2011-01-01
Testing for outcome or performance can take many forms; including multiple iterations of self-reported measures of function (an assessment of the individual’s perceived dysfunction) and/or clinical special tests (which are primarily assessments of impairments). Typically absent within these testing mechanisms is whether or not one can perform a specific task associated with function. The paper will operationally define function, discuss the construct of function within the disablement model, will overview the multi-dimensional nature of ‘function’ as a concept, will examine the current evidence for functional testing methods, and will propose a functional testing continuum. Limitations of functional performance testing will be discussed including recommendations for future research. PMID:22547919
Application development environment for advanced digital workstations
NASA Astrophysics Data System (ADS)
Valentino, Daniel J.; Harreld, Michael R.; Liu, Brent J.; Brown, Matthew S.; Huang, Lu J.
1998-06-01
One remaining barrier to the clinical acceptance of electronic imaging and information systems is the difficulty of providing intuitive access to the information needed for a specific clinical task (such as reaching a diagnosis or tracking clinical progress). The purpose of this research was to create a development environment that enables the design and implementation of advanced digital imaging workstations. We used formal data and process modeling to identify the diagnostic and quantitative data that radiologists use and the tasks that they typically perform to make clinical decisions. We studied a diverse range of radiology applications, including diagnostic neuroradiology in an academic medical center, pediatric radiology in a children's hospital, screening mammography in a breast cancer center, and thoracic radiology consultation for an oncology clinic. We used object-oriented analysis to develop software toolkits that enable a programmer to rapidly implement applications that closely match clinical tasks. The toolkits support browsing patient information, integrating patient images and reports, manipulating images, and making quantitative measurements on images. Collectively, we refer to these toolkits as the UCLA Digital ViewBox toolkit (ViewBox/Tk). We used the ViewBox/Tk to rapidly prototype and develop a number of diverse medical imaging applications. Our task-based toolkit approach enabled rapid and iterative prototyping of workstations that matched clinical tasks. The toolkit functionality and performance provided a 'hands-on' feeling for manipulating images and for accessing textual information and reports. The toolkits directly support a new concept for protocol-based reading of diagnostic studies. The design supports the implementation of network-based application services (e.g., prefetching, workflow management, and post-processing) that will facilitate the development of future clinical applications.
ITER Construction—Plant System Integration
NASA Astrophysics Data System (ADS)
Tada, E.; Matsuda, S.
2009-02-01
This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays the central role in constructing ITER and bringing it into operation. Since most of the ITER components are to be provided in kind by the member countries, integrated project management must be scoped in advance of the real work. This includes design, procurement, system assembly, testing, licensing, and commissioning of ITER.
User-Driven Sampling Strategies in Image Exploitation
Harvey, Neal R.; Porter, Reid B.
2013-12-23
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. Furthermore, in preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
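A toy quantification of the idea, under strong assumptions: the "user" is simulated as someone who labels a visible misclassification at each round, and the learner is a nearest-centroid classifier. None of this mirrors the paper's actual tools or data; it only contrasts user-driven selection with labeling everything up front.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class toy "image feature" data.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.repeat([0, 1], 200)

def fit_predict(idx):
    """Nearest-centroid classifier trained only on the labelled indices."""
    c = [X[idx][y[idx] == k].mean(axis=0) for k in (0, 1)]
    return np.argmin([((X - ck) ** 2).sum(1) for ck in c], axis=0)

labelled = list(rng.choice(len(X), 4, replace=False))
# Make sure both classes are present initially.
labelled += [int(np.where(y == 0)[0][0]), int(np.where(y == 1)[0][0])]

for step in range(10):
    pred = fit_predict(np.array(labelled))
    errors = np.where(pred != y)[0]          # mistakes visible to the user
    if len(errors) == 0:
        break
    # User-driven sampling: the user clicks on a mistake they notice,
    # rather than the computer querying its most uncertain sample.
    labelled.append(int(rng.choice(errors)))

print("error rate:", (fit_predict(np.array(labelled)) != y).mean())
```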
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and iteratively reweighted nuclear norm (IRNN) methods.
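The flavour of truncated-nuclear-norm completion can be conveyed with a simplified stand-in (not TNNR-WRE itself): soft-threshold all singular values except the largest r, re-impose the observed entries, and iterate. The rank, threshold, and data below are illustrative.

```python
import numpy as np

def tnn_complete(M_obs, mask, r=2, tau=1.0, n_iter=200):
    """Matrix completion with a truncated-nuclear-norm flavour: at each
    iteration shrink every singular value *except the largest r* (those are
    what the truncated norm leaves unpenalized), then restore the observed
    entries."""
    X = M_obs.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[r:] = np.maximum(s[r:] - tau, 0.0)     # soft-threshold the tail only
        X = (U * s) @ Vt
        X[mask] = M_obs[mask]                    # keep known entries fixed
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 truth
mask = rng.random(A.shape) < 0.5                 # observe half the entries
M_obs = np.where(mask, A, 0.0)
X = tnn_complete(M_obs, mask, r=3, tau=0.5)
print("relative error on missing entries:",
      round(float(np.linalg.norm((X - A)[~mask]) / np.linalg.norm(A[~mask])), 3))
```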
NASA Astrophysics Data System (ADS)
Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua
2016-03-01
Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates be restricted to a bounded domain or that the loss function be strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence of the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using their associated integral operators and on probability inequalities for random variables with values in a Hilbert space.
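A bare-bones, unregularized variant of such an online pairwise update is sketched below with a Gaussian kernel and the polynomially decaying step sizes mentioned in the abstract. The constants and data are illustrative, and no claim is made that this matches OPERA line by line; it only shows the shape of a pairwise least-squares gradient step on a kernel expansion.

```python
import numpy as np

def gauss_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def online_pairwise(X, y, gamma0=0.5, theta=0.5):
    """Online pairwise least squares in an RKHS: at step t the loss couples
    the new example with each earlier example, and the iterate is a kernel
    expansion over all points seen so far. Steps decay as gamma0 * t^(-theta)."""
    n = len(X)
    alpha = np.zeros(n)                   # coefficients of f_t on seen points
    K = np.array([[gauss_kernel(a, b) for b in X] for a in X])
    for t in range(1, n):
        gamma_t = gamma0 * t ** (-theta)
        f_xt = alpha[:t] @ K[:t, t]       # f_t at x_t
        f_xj = alpha[:t] @ K[:t, :t]      # f_t at each earlier x_j
        resid = (f_xt - f_xj) - (y[t] - y[:t])   # pairwise residuals
        # Gradient step: the pair loss pushes -resid onto x_t's coefficient
        # and +resid_j onto each earlier point's coefficient.
        alpha[t] -= gamma_t * resid.mean()
        alpha[:t] += gamma_t * resid / t
    return alpha, K

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (60, 1))
y = np.sin(X[:, 0])                        # pairwise targets are y_t - y_j
alpha, K = online_pairwise(X, y)
pred = K @ alpha                           # determined only up to a constant
print("corr with targets:", round(float(np.corrcoef(pred, y)[0, 1]), 3))
```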
User-driven sampling strategies in image exploitation
NASA Astrophysics Data System (ADS)
Harvey, Neal; Porter, Reid
2013-12-01
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H
2017-07-01
The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, achievable dose reduction using a commercially available IR algorithm without loss of low contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low contrast disks-three different contrast levels and seven different diameters-was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low contrast detectability (as compared to full-dose FBP images) whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based protocol changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
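A generic CHO computation is easy to sketch: channelize signal-present and signal-absent images, pool the channel covariances, and read off the Hotelling detectability index. Difference-of-Gaussians channels are used here for brevity (Gabor or Laguerre-Gauss banks are common alternatives), and nothing below reproduces the study's actual phantom data or scanner settings.

```python
import numpy as np

def dog_channels(size, n_channels=5):
    """Radially symmetric difference-of-Gaussians channel bank."""
    yy, xx = np.mgrid[:size, :size] - size // 2
    r2 = xx ** 2 + yy ** 2
    chans = []
    for k in range(n_channels):
        s1 = 1.5 * 2 ** k
        ch = np.exp(-r2 / (2 * s1 ** 2)) - np.exp(-r2 / (2 * (1.67 * s1) ** 2))
        chans.append((ch / np.linalg.norm(ch)).ravel())
    return np.array(chans)                 # shape (n_channels, size*size)

def cho_dprime(imgs_signal, imgs_noise, T):
    """Channelized Hotelling observer detectability for a signal-known task."""
    vs = imgs_signal.reshape(len(imgs_signal), -1) @ T.T   # channel outputs
    vn = imgs_noise.reshape(len(imgs_noise), -1) @ T.T
    dmu = vs.mean(0) - vn.mean(0)
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))                # pooled covariance
    return float(np.sqrt(dmu @ np.linalg.solve(S, dmu)))

rng = np.random.default_rng(0)
size, n = 32, 200
yy, xx = np.mgrid[:size, :size] - size // 2
disk = (np.hypot(xx, yy) < 4).astype(float) * 0.5   # low-contrast disk signal
noise = rng.standard_normal((2 * n, size, size))
T = dog_channels(size)
print("d' =", round(cho_dprime(disk + noise[:n], noise[n:], T), 2))
```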
Sorgi, Kristen M; van 't Wout, Mascha
2016-12-30
This study evaluated the influence of self-reported levels of depression on interpersonal strategic decision making when interacting with partners who differed in their predetermined tendency to cooperate in three separate computerized iterated Prisoner's Dilemma Games (iPDGs). Across 29 participants, cooperation was lowest when interacting with a predominantly defecting partner and highest when interacting with a predominantly cooperating partner. Greater depression severity was related to steadier and continued cooperation over trials with the cooperating partner, seeming to reflect a prosocial response tendency when interacting with this partner. With the unbiased partner, depression severity was associated with a more volatile response pattern in reaction to cooperation and defection by this partner. Severity of depression did not influence cooperation with a defecting partner or expectations about partner cooperation reported before the task began. Taken together, these data appear to show that in predominately positive interactions, as in the cooperating partner condition, depression is associated with less volatile, more consistent cooperation. When such clear feedback is absent, as in the unbiased partner condition, depression is associated with more volatile behavior. Nonetheless, participants were generally able to adapt their behavior accordingly in this dynamic interpersonal decision making context. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
HEALTH GeoJunction: place-time-concept browsing of health publications.
MacEachren, Alan M; Stryker, Michael S; Turton, Ian J; Pezanowski, Scott
2010-05-18
The volume of health science publications is escalating rapidly. Thus, keeping up with developments is becoming harder as is the task of finding important cross-domain connections. When geographic location is a relevant component of research reported in publications, these tasks are more difficult because standard search and indexing facilities have limited or no ability to identify geographic foci in documents. This paper introduces HEALTH GeoJunction, a web application that supports researchers in the task of quickly finding scientific publications that are relevant geographically and temporally as well as thematically. HEALTH GeoJunction is a geovisual analytics-enabled web application providing: (a) web services using computational reasoning methods to extract place-time-concept information from bibliographic data for documents and (b) visually-enabled place-time-concept query, filtering, and contextualizing tools that apply to both the documents and their extracted content. This paper focuses specifically on strategies for visually-enabled, iterative, facet-like, place-time-concept filtering that allows analysts to quickly drill down to scientific findings of interest in PubMed abstracts and to explore relations among abstracts and extracted concepts in place and time. The approach enables analysts to: find publications without knowing all relevant query parameters, recognize unanticipated geographic relations within and among documents in multiple health domains, identify the thematic emphasis of research targeting particular places, notice changes in concepts over time, and notice changes in places where concepts are emphasized. PubMed is a database of over 19 million biomedical abstracts and citations maintained by the National Center for Biotechnology Information; achieving quick filtering is an important contribution due to the database size. Including geography in filters is important due to rapidly escalating attention to geographic factors in public health. The implementation of mechanisms for iterative place-time-concept filtering makes it possible to narrow searches efficiently and quickly from thousands of documents to a small subset that meet place-time-concept constraints. Support for a more-like-this query creates the potential to identify unexpected connections across diverse areas of research. Multi-view visualization methods support understanding of the place, time, and concept components of document collections and enable comparison of filtered query results to the full set of publications.
Quantum learning of classical stochastic processes: The completely positive realization problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monràs, Alex; Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543; Winter, Andreas
2016-01-15
Among several tasks in Machine Learning, a specially important one is the problem of inferring the latent variables of a system and their causal relations with the observed behavior. A paradigmatic instance of this is the task of inferring the hidden Markov model underlying a given stochastic process. This is known as the positive realization problem (PRP) [L. Benvenuti and L. Farina, IEEE Trans. Autom. Control 49(5), 651–664 (2004)] and constitutes a central problem in machine learning. The PRP and its solutions have far-reaching consequences in many areas of systems and control theory, and it is nowadays an important piece in the broad field of positive systems theory. We consider the scenario where the latent variables are quantum (i.e., quantum states of a finite-dimensional system) and the system dynamics is constrained only by physical transformations on the quantum system. The observable dynamics is then described by a quantum instrument, and the task is to determine which quantum instrument (if any) yields the process at hand by iterative application. We take as a starting point the theory of quasi-realizations, whence a description of the dynamics of the process is given in terms of linear maps on state vectors, and probabilities are given by linear functionals on the state vectors. This description, despite its remarkable resemblance to the hidden Markov model, or the iterated quantum instrument, is however devoid of any stochastic or quantum mechanical interpretation, as said maps fail to satisfy any positivity conditions. The completely positive realization problem then consists in determining whether an equivalent quantum mechanical description of the same process exists. We generalize some key results of stochastic realization theory, and show that the problem has deep connections with operator systems theory, giving possible insight into the lifting problem in quotient operator systems. Our results have potential applications in quantum machine learning, device-independent characterization and reverse-engineering of stochastic processes and quantum processors, and more generally, of dynamical processes with quantum memory [M. Guţă, Phys. Rev. A 83(6), 062324 (2011); M. Guţă and N. Yamamoto, e-print http://arxiv.org/abs/1303.3771 (2013)].
NASA Astrophysics Data System (ADS)
Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut
2017-04-01
Scientific communities generate complex simulations through the orchestration of semi-structured analysis pipelines, which involves executing large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling the ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case: a workflow that requires the execution of a continuum ice-flow model and a discrete-element-based calving model in an iterative manner. Apart from model execution, the workflow also contains data-format conversion tasks that link the ice-flow and calving steps through sequential, nested and iterative stages. This makes managing and monitoring all processing tasks, including data management and transfer, more complex. From the implementation perspective, this workflow model was initially developed as a set of scripts with static input and output references. As more scripts and modifications were introduced to meet user requirements, debugging and validating results became cumbersome. To address these problems, we identified the need for a high-level scientific workflow tool through which all of the above processes can be carried out in an efficient and usable manner. We decided to use the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of high-level scientific workflow middleware makes results easier to reproduce and provides a reusable, portable workflow template that can be deployed across different computing infrastructures. Acknowledgements This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
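Conceptually, the coupled workflow reduces to a loop of the following shape. The sketch uses local subprocess calls with hypothetical script and file names (flow_to_hidem.py, hidem_to_flow.py, the HiDEM binary name, and the .nc/.dat files are all assumptions) purely to show the data flow; the actual pipeline is expressed as a UNICORE workflow over distributed HPC resources rather than a local shell loop.

```python
import os
import shutil
import subprocess

N_CYCLES = 10                      # number of flow/calving coupling cycles
os.makedirs("archive", exist_ok=True)

for cycle in range(N_CYCLES):
    # 1. Continuum ice-flow step (Elmer/Ice reads a solver input file).
    subprocess.run(["ElmerSolver", "ice_flow.sif"], check=True)
    # 2. Convert the ice-flow output into the calving model's input format
    #    (hypothetical converter script).
    subprocess.run(["python", "flow_to_hidem.py", "flow_out.nc",
                    "hidem_in.dat"], check=True)
    # 3. Discrete-element calving/crevassing step (binary name assumed).
    subprocess.run(["HiDEM", "hidem_in.dat"], check=True)
    # 4. Feed the post-calving geometry back into the next ice-flow cycle.
    subprocess.run(["python", "hidem_to_flow.py", "hidem_out.dat",
                    "ice_flow.sif"], check=True)
    # Keep per-cycle results so individual steps can be validated later.
    shutil.copy("flow_out.nc", f"archive/flow_cycle_{cycle}.nc")
```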
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.
2014-08-21
In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
NASA Astrophysics Data System (ADS)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.
2014-08-01
In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.
Kushniruk, A. W.; Patel, V. L.; Cimino, J. J.
1997-01-01
This paper describes an approach to the evaluation of health care information technologies based on usability engineering and a methodological framework from the study of medical cognition. The approach involves the collection of a rich set of data, including video recordings of health care workers as they interact with systems such as computerized patient records and decision support tools. The methodology can be applied in the laboratory setting, typically involving subjects "thinking aloud" as they interact with a system. A similar approach to data collection and analysis can also be extended to the study of computer systems in the "live" environment of hospital clinics. Our approach is also influenced by work in the area of cognitive task analysis, which aims to characterize the decision making and reasoning of subjects with varied levels of expertise as they interact with information technology in carrying out representative tasks. The stages involved in conducting cognitively based usability analyses are detailed, and the application of such analyses in the iterative process of system and interface development is discussed. PMID:9357620
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
Application Agreement and Integration Services
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Hall, Brendan; Schweiker, Kevin
2013-01-01
Application agreement and integration services are required by distributed, fault-tolerant, safety-critical systems to assure required performance. An analysis of distributed and hierarchical agreement strategies is developed against the backdrop of observed agreement failures in fielded systems. The documented work was performed under NASA Task Order NNL10AB32T, Validation And Verification of Safety-Critical Integrated Distributed Systems Area 2. This document is intended to satisfy the requirements for deliverable 5.2.11 under Task 4.2.2.3. This report discusses the challenges of maintaining application agreement and integration services. A literature search is presented that documents previous work in the area of replica determinism. Sources of non-deterministic behavior are identified, and examples are presented where system-level agreement failed to be achieved. We then explore how TTEthernet services can be extended to supply some interesting application agreement frameworks. This document assumes that the reader is familiar with the TTEthernet protocol; the reader is advised to read the TTEthernet protocol standard [1] before reading this document, as this document does not reiterate the content of the standard.
Mindtagger: A Demonstration of Data Labeling in Knowledge Base Construction.
Shin, Jaeho; Ré, Christopher; Cafarella, Michael
2015-08-01
End-to-end knowledge base construction systems using statistical inference are enabling more people to automatically extract high-quality domain-specific information from unstructured data. As a result of deploying the DeepDive framework across several domains, we found new challenges in debugging and improving such end-to-end systems to construct high-quality knowledge bases. DeepDive has an iterative development cycle in which users improve the data. To help our users, we needed to develop principles for analyzing the system's errors as well as to provide tooling for inspecting and labeling the various data products of the system. We created guidelines for error analysis modeled after our colleagues' best practices, in which data labeling plays a critical role in every step of the analysis. To enable more productive and systematic data labeling, we created Mindtagger, a versatile tool that can be configured to support a wide range of tasks. In this demonstration, we show in detail what data labeling tasks are modeled in our error analysis guidelines and how each of them is performed using Mindtagger.
NASA Technical Reports Server (NTRS)
Size, P.; Takeuchi, Esther S.
1993-01-01
The purpose of this contract is to evaluate parametrically the effects of various factors, including electrolyte type, electrolyte concentration, depolarizer type, and cell configuration, on lithium cell electrical performance and safety. This effort shall allow for the selection and optimization of cell designs for future NASA applications while maintaining close ties with WGL's continuous improvements in manufacturing processes and lithium cell design. Taguchi experimental design techniques are employed in this task; they allow a maximum amount of information to be obtained while requiring significantly fewer cells than a full factorial design would. Acceptance testing for this task is modeled after NASA Document EP5-83-025, Revision C, for cell weights, OCVs, and load voltages. The performance attributes studied in this effort are fresh capacity and start-up characteristics evaluated at two rates and two temperatures, shelf-life characteristics including start-up and capacity retention, and iterative microcalorimetry measurements. Abuse testing includes forced overdischarge at two rates with and without diode protection, temperature-tolerance testing, and shorting tests at three rates with measurement of the heat generated during shorting conditions.
The reliability and stability of visual working memory capacity.
Xu, Z; Adam, K C S; Fang, X; Vogel, E K
2018-04-01
Because of the central role of working memory capacity in cognition, many studies have used short measures of working memory capacity to examine its relationship to other domains. Here, we measured the reliability and stability of visual working memory capacity, measured using a single-probe change detection task. In Experiment 1, the participants (N = 135) completed a large number of trials of a change detection task (540 in total, 180 each of set sizes 4, 6, and 8). With large numbers of both trials and participants, reliability estimates were high (α > .9). We then used an iterative down-sampling procedure to create a look-up table for expected reliability in experiments with small sample sizes. In Experiment 2, the participants (N = 79) completed 31 sessions of single-probe change detection. The first 30 sessions took place over 30 consecutive days, and the last session took place 30 days later. This unprecedented number of sessions allowed us to examine the effects of practice on stability and internal reliability. Even after much practice, individual differences were stable over time (average between-session r = .76).
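A minimal sketch of such an iterative down-sampling procedure, assuming simulated change-detection data and Cronbach's alpha as the reliability metric (the study's exact procedure and data are not reproduced here):

```python
# Repeatedly subsample trials, recompute internal reliability, and
# tabulate the average as a look-up value for small experiments.
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(data):
    """data: participants x items (trials); classic alpha formula."""
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Simulate 135 participants x 540 trials of accuracy driven by a stable
# per-participant capacity plus trial noise (assumed generative model).
ability = rng.normal(0.75, 0.1, size=(135, 1))
trials = (rng.random((135, 540)) < ability).astype(float)

lookup = {}
for n_trials in (30, 60, 120, 240, 540):
    alphas = [cronbach_alpha(trials[:, rng.choice(540, n_trials, replace=False)])
              for _ in range(200)]          # 200 down-sampling iterations
    lookup[n_trials] = np.mean(alphas)
print(lookup)   # expected reliability grows with the number of trials
```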
Semi-supervised prediction of gene regulatory networks using machine learning algorithms.
Patel, Nihir; Wang, Jason T L
2015-10-01
Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely, support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabelled data for training. We investigated inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabelled data. We then applied our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluated the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.
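The iterative extraction of reliable negatives from unlabelled data can be sketched roughly as follows; the synthetic features, seed size, and fixed iteration count are illustrative assumptions, not the article's exact procedure:

```python
# Iterative reliable-negative selection for semi-supervised training:
# train on positives vs. current negatives, score the unlabelled pool,
# and keep the least positive-looking examples as the next negatives.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_pos = rng.normal(1.0, 1.0, size=(50, 10))     # known regulatory pairs
X_unl = rng.normal(0.0, 1.0, size=(500, 10))    # unlabelled gene pairs

reliable_neg = X_unl[rng.choice(len(X_unl), 50, replace=False)]  # seed
for _ in range(5):                               # iterative refinement
    X = np.vstack([X_pos, reliable_neg])
    y = np.r_[np.ones(len(X_pos)), np.zeros(len(reliable_neg))]
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    scores = clf.predict_proba(X_unl)[:, 1]     # P(interaction)
    reliable_neg = X_unl[np.argsort(scores)[:50]]  # least positive-like
```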
Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K
2014-12-01
An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.
Generalized PSF modeling for optimized quantitation in PET imaging.
Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman
2017-06-21
Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution-degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.
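As a toy illustration of PSF mismatch (not the OS-EM/PET pipeline used in the paper), a 1-D signal blurred by a "true" Gaussian PSF can be deconvolved with under-, exactly-, and over-estimated PSF widths; the Richardson-Lucy scheme, widths, and iteration count below are assumptions:

```python
# 1-D Richardson-Lucy deconvolution with deliberately mismatched PSFs,
# illustrating how the modelled PSF width changes recovered contrast.
import numpy as np

def gauss(sigma, n=21):
    x = np.arange(n) - n // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def richardson_lucy(y, psf, n_iter=50):
    x = np.full_like(y, y.mean())                  # positive start image
    for _ in range(n_iter):
        blur = np.convolve(x, psf, mode="same")
        x *= np.convolve(y / np.maximum(blur, 1e-12), psf[::-1], mode="same")
    return x

truth = np.zeros(200); truth[90:110] = 1.0         # a small "tumour"
y = np.convolve(truth, gauss(3.0), mode="same")    # true PSF, sigma = 3
for sigma in (1.5, 3.0, 4.5):                      # under / matched / over
    rec = richardson_lucy(y, gauss(sigma))
    print(f"modelled sigma={sigma}: recovered max = {rec.max():.3f}")
```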
Chocholik, Joan K.; Bouchard, Susan E.; Tan, Joseph K. H.; Ostrow, David N.
1999-01-01
Objectives: To determine the relevant weighted goals and criteria for use in the selection of an automated patient care information system (PCIS) using a modified Delphi technique to achieve consensus. Design: A three-phase, six-round modified Delphi process was implemented by a ten-member PCIS selection task force. The first phase consisted of an exploratory round. It was followed by the second phase, of two rounds, to determine the selection goals and finally the third phase, of three rounds, to finalize the selection criteria. Results: Consensus on the goals and criteria for selecting a PCIS was measured during the Delphi process by reviewing the mean and standard deviation of the previous round's responses. After the study was completed, the results were analyzed using a limits-of-agreement indicator that showed strong agreement of each individual's responses between each of the goal determination rounds. Further analysis for variability in the group's response showed a significant movement to consensus after the first goal-determination iteration, with consensus reached on all goals by the end of the second iteration. Conclusion: The results indicated that the relevant weighted goals and criteria used to make the final decision for an automated PCIS were developed as a result of strong agreement among members of the PCIS selection task force. It is therefore recognized that the use of the Delphi process was beneficial in achieving consensus among clinical and nonclinical members in a relatively short time while avoiding a decision based on political biases and the “groupthink” of traditional committee meetings. The results suggest that improvements could be made in lessening the number of rounds by having information available through side conversations, by having other statistical indicators besides the mean and standard deviation available between rounds, and by having a content expert address questions between rounds. PMID:10332655
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, we focus on time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources. Two main challenges are considered: (i) modeling the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), and the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J; Tsui, B; Noo, F
Purpose: To develop a feature-preserving model-based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed an MBIR method with curvature-based regularization. The novel regularization encourages the formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue, an iterative-reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out by inserting the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer. The senior author receives financial support from Siemens GmbH Healthcare.
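The iterative-reweighting idea (freezing image-dependent weights at the previous iterate so each step becomes an easy quadratic solve) can be sketched generically; this 1-D edge-preserving denoising example is an assumed stand-in, not the authors' curvature functional:

```python
# Iteratively reweighted regularization: weights computed from the
# previous iterate turn a non-convex penalty into a quadratic step.
import numpy as np

rng = np.random.default_rng(2)
truth = np.r_[np.zeros(50), np.ones(50)]          # a sharp "septation" edge
y = truth + rng.normal(0, 0.2, truth.size)        # noisy observation

n = y.size
D = np.diff(np.eye(n), axis=0)                    # finite-difference operator
lam, eps = 2.0, 1e-3
x = y.copy()
for _ in range(20):
    w = 1.0 / (np.abs(D @ x) + eps)               # weights from previous iterate
    # minimize ||x - y||^2 + lam * sum_i w_i (D x)_i^2  (closed form)
    x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
print(np.round(x[45:55], 2))                      # edge preserved, noise reduced
```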
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near-optimal.
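A minimal numeric sketch of this kind of stochastic-iteration coupling, with a toy feedback map and assumed schedules for the relaxation factor and history count (not the paper's actual scheme):

```python
# Relax a noisy "Monte Carlo" evaluation into the running solution with a
# decreasing relaxation factor while the number of histories grows.
import numpy as np

rng = np.random.default_rng(3)

def mc_power_estimate(power, n_histories):
    """Stand-in for a Monte Carlo flux solve: a noisy fixed-point map."""
    target = 0.5 * (power + 1.0)                  # toy thermal feedback
    return target + rng.normal(0, 1.0 / np.sqrt(n_histories), power.shape)

power = np.ones(10)                               # initial power profile
n_hist = 1000
for k in range(1, 21):
    alpha = 1.0 / k                               # decreasing relaxation factor
    estimate = mc_power_estimate(power, n_hist)
    power = (1 - alpha) * power + alpha * estimate
    n_hist = int(n_hist * 1.3)                    # more histories each iteration
print(power.round(3))                             # converges toward 1.0
```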
Sanabria, Federico; Killeen, Peter R
2008-01-01
Background The inability to inhibit reinforced responses is a defining feature of ADHD associated with impulsivity. The Spontaneously Hypertensive Rat (SHR) has been extolled as an animal model of ADHD, but there is no clear experimental evidence of inhibition deficits in SHR. Attempts to demonstrate these deficits may have suffered from methodological and analytical limitations. Methods We provide a rationale for using two complementary response-withholding tasks to doubly dissociate impulsivity from motivational and motor processes. In the lever-holding task (LHT), continual lever depression was required for a minimum interval. Under a differential reinforcement of low rates schedule (DRL), a minimum interval was required between lever presses. Both tasks were studied using SHR and two normotensive control strains, Wistar-Kyoto (WKY) and Long Evans (LE), over an overlapping range of intervals (1 – 5 s for LHT and 5 – 60 s for DRL). Lever-holding and DRL performance was characterized as the output of a mixture of two processes, timing and iterative random responding; we call this account of response inhibition the Temporal Regulation (TR) model. In the context of TR, impulsivity was defined as a bias toward premature termination of the timed intervals. Results The TR model provided an accurate description of LHT and DRL performance. On the basis of TR parameter estimates, SHRs were more impulsive than LE rats across tasks and target times. WKY rats produced substantially shorter timed responses in the lever-holding task than in DRL, suggesting a motivational or motor deficit. The precision of timing by SHR, as measured by the variance of their timed intervals, was excellent, flouting expectations from ADHD research. Conclusion This research validates the TR model of response inhibition and supports SHR as an animal model of ADHD-related impulsivity. It indicates, however, that SHR's impulse-control deficit is not caused by imprecise timing. The use of ad hoc impulsivity metrics and of WKY as control strain for SHR impulsivity are called into question. PMID:18261220
A comparison of linear interpolation models for iterative CT reconstruction.
Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric
2016-12-01
Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluating forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but the significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context in which they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but a strong edge-preserving penalty can dramatically reduce the magnitude of these differences. In many scenarios, Joseph's method seems to offer an interesting compromise between bias and computational effort. The distance-driven method offers the possibility to reduce bias, but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Last, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. The authors also find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interplay together.
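For reference, the essence of a Joseph-style linearly interpolating forward projector can be sketched for a single 2-D ray; the parallel-beam geometry and x-dominant ray are simplifying assumptions that omit the scanner details compared in the paper:

```python
# Joseph-style ray sum: step one pixel at a time along the dominant axis
# and linearly interpolate along the other axis.
import numpy as np

def joseph_ray_sum(img, slope, intercept):
    """Line integral of img along y = slope*x + intercept (|slope| <= 1)."""
    ny, nx = img.shape
    total = 0.0
    for x in range(nx):
        y = slope * x + intercept
        y0 = int(np.floor(y))
        if 0 <= y0 < ny - 1:
            frac = y - y0
            total += (1 - frac) * img[y0, x] + frac * img[y0 + 1, x]
    return total * np.sqrt(1.0 + slope**2)   # per-step path length

img = np.zeros((64, 64)); img[28:36, 28:36] = 1.0
print(joseph_ray_sum(img, 0.0, 31.0))  # horizontal ray through the block: 8.0
```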
Multisociety task force recommendations of competencies in Pulmonary and Critical Care Medicine.
Buckley, John D; Addrizzo-Harris, Doreen J; Clay, Alison S; Curtis, J Randall; Kotloff, Robert M; Lorin, Scott M; Murin, Susan; Sessler, Curtis N; Rogers, Paul L; Rosen, Mark J; Spevetz, Antoinette; King, Talmadge E; Malhotra, Atul; Parsons, Polly E
2009-08-15
Numerous accrediting organizations are calling for competency-based medical education that would help define specific specialties and serve as a foundation for ongoing assessment throughout a practitioner's career. Pulmonary Medicine and Critical Care Medicine are two distinct subspecialties, yet many individual physicians have expertise in both because of overlapping content. Establishing specific competencies for these subspecialties identifies educational goals for trainees and guides practitioners through their lifelong learning. To define specific competencies for graduates of fellowships in Pulmonary Medicine and Internal Medicine-based Critical Care. A Task Force composed of representatives from key stakeholder societies convened to identify and define specific competencies for both disciplines. Beginning with a detailed list of existing competencies from diverse sources, the Task Force categorized each item into one of six core competency headings. Each individual item was reviewed by committee members individually, in group meetings, and conference calls. Nominal group methods were used for most items to retain the views and opinions of the minority perspective. Controversial items underwent additional whole group discussions with iterative modified-Delphi techniques. Consensus was ultimately determined by a simple majority vote. The Task Force identified and defined 327 specific competencies for Internal Medicine-based Critical Care and 276 for Pulmonary Medicine, each with a designation as either: (1) relevant, but competency is not essential or (2) competency essential to the specialty. Specific competencies in Pulmonary and Critical Care Medicine can be identified and defined using a multisociety collaborative approach. These recommendations serve as a starting point and set the stage for future modification to facilitate maximum quality of care as the specialties evolve.
Leep Hunderfund, Andrea N; Reed, Darcy A; Starr, Stephanie R; Havyer, Rachel D; Lang, Tara R; Norby, Suzanne M
2017-09-01
To identify approaches to operationalizing the development of competence in Accreditation Council for Graduate Medical Education (ACGME) milestones. The authors reviewed all 25 "Milestone Project" documents available on the ACGME Web site on September 11, 2013, using an iterative process to identify approaches to operationalizing the development of competence in the milestones associated with each of 601 subcompetencies. Fifteen approaches were identified. Ten focused on attributes and activities of the learner, such as their ability to perform different, increasingly difficult tasks (304/601; 51%), perform a task better and faster (171/601; 45%), or perform a task more consistently (123/601; 20%). Two approaches focused on context, inferring competence from performing a task in increasingly difficult situations (236/601; 29%) or an expanding scope of engagement (169/601; 28%). Two used socially defined indicators of competence such as progression from "learning" to "teaching," "leading," or "role modeling" (271/601; 45%). One approach focused on the supervisor's role, inferring competence from a decreasing need for supervision or assistance (151/601; 25%). Multiple approaches were often combined within a single set of milestones (mean 3.9, SD 1.6). Initial ACGME milestones operationalize the development of competence in many ways. These findings offer insights into how physicians understand and assess the developmental progression of competence and an opportunity to consider how different approaches may affect the validity of milestone-based assessments. The results of this analysis can inform the work of educators developing or revising milestones, interpreting milestone data, or creating assessment tools to inform milestone-based performance measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallenbeck, L.D.; Harpole, K.J.; Gerard, M.G.
The work reported here covers Budget Phase I of the project. The principal tasks in Budget Phase I are the Reservoir Analysis and Characterization Task and the Advanced Technology Definition Task. Completion of these tasks has enabled an optimum carbon dioxide (CO₂) flood project to be designed and evaluated from an economic and risk-analysis standpoint. Field implementation of the project has been recommended to the working interest owner of the South Cowden Unit (SCU) and approval has been obtained. The current project has focused on reducing the initial investment cost by utilizing horizontal injection wells and concentrating the project in the best productivity area of the field. An innovative CO₂ purchase agreement (no take-or-pay requirements, CO₂ purchase price tied to the West Texas Intermediate crude oil price) and gas recycle agreements (expensing costs as opposed to large capital investments for compression) were negotiated to further improve project economics. A detailed reservoir characterization study was completed by an integrated team of geoscientists and engineers. The study consisted of detailed core description, integration of log response with core descriptions, mapping of the major flow units, evaluation of porosity and permeability relationships, geostatistical analysis of permeability trends, and direct integration of reservoir performance with the geological interpretation. The study methodology fostered iterative bidirectional feedback between the reservoir characterization team and the reservoir engineering/simulation team to allow simultaneous refinement and convergence of the geological interpretation with the reservoir model. The fundamental conclusion from the study is that South Cowden exhibits favorable enhanced oil recovery characteristics, particularly reservoir quality and continuity.
Young, John Q; Hasser, Caitlin; Hung, Erick K; Kusz, Martin; O'Sullivan, Patricia S; Stewart, Colin; Weiss, Andrea; Williams, Nancy
2018-07-01
To develop entrustable professional activities (EPAs) for psychiatry and to demonstrate an innovative, validity-enhancing methodology that may be relevant to other specialties. A national task force employed a three-stage process from May 2014 to February 2017 to develop EPAs for psychiatry. In stage 1, the task force used an iterative consensus-driven process to construct proposed EPAs. Each included a title, full description, and relevant competencies. In stage 2, the task force interviewed four nonpsychiatric experts in EPAs and further revised the EPAs. In stage 3, the task force performed a Delphi study of national experts in psychiatric education and assessment. All survey participants completed a brief training program on EPAs. Quantitative and qualitative analysis led to further modifications. Essentialness was measured on a five-point scale. EPAs were included if the content validity index was at least 0.8 and the lower end of the asymmetric confidence interval was not lower than 4.0. Stages 1 and 2 yielded 24 and 14 EPAs, respectively. In stage 3, 31 of the 39 invited experts participated in both rounds of the Delphi study. Round 1 reduced the proposed EPAs to 13. Ten EPAs met the inclusion criteria in Round 2. The final EPAs provide a strong foundation for competency-based assessment in psychiatry. Methodological features such as critique by nonpsychiatry experts, a national Delphi study with frame-of-reference training, and stringent inclusion criteria strengthen the content validity of the findings and may serve as a model for future efforts in other specialties.
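The stated inclusion rule can be sketched directly; the bootstrap percentile interval below is an assumed stand-in for the asymmetric confidence interval the task force used:

```python
# Delphi inclusion rule: keep an EPA if the content validity index
# (share of experts rating it >= 4 of 5) is at least 0.8 and the lower
# CI bound on the mean rating stays at or above 4.0.
import numpy as np

rng = np.random.default_rng(4)

def keep_epa(ratings, n_boot=2000):
    ratings = np.asarray(ratings, dtype=float)
    cvi = np.mean(ratings >= 4)
    boot_means = [rng.choice(ratings, ratings.size, replace=True).mean()
                  for _ in range(n_boot)]
    lower = np.percentile(boot_means, 2.5)       # percentile (asymmetric) CI
    return cvi >= 0.8 and lower >= 4.0

print(keep_epa([5, 5, 4, 4, 5, 4, 5, 4, 4, 5]))  # True: strong consensus
print(keep_epa([5, 3, 4, 2, 5, 4, 3, 4, 4, 5]))  # False: too much spread
```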
Designing Real-time Decision Support for Trauma Resuscitations
Yadav, Kabir; Chamberlain, James M.; Lewis, Vicki R.; Abts, Natalie; Chawla, Shawn; Hernandez, Angie; Johnson, Justin; Tuveson, Genevieve; Burd, Randall S.
2016-01-01
Background Use of electronic clinical decision support (eCDS) has been recommended to improve implementation of clinical decision rules. Many eCDS tools, however, are designed and implemented without taking into account the context in which clinical work is performed. Implementation of the pediatric traumatic brain injury (TBI) clinical decision rule at one Level I pediatric emergency department includes an electronic questionnaire triggered when ordering a head computed tomography using computerized physician order entry (CPOE). Providers use this CPOE tool in less than 20% of trauma resuscitation cases. A human factors engineering approach could identify the implementation barriers that are limiting the use of this tool. Objectives The objective was to design a pediatric TBI eCDS tool for trauma resuscitation using a human factors approach. The hypothesis was that clinical experts will rate a usability-enhanced eCDS tool better than the existing CPOE tool for user interface design and suitability for clinical use. Methods This mixed-methods study followed usability evaluation principles. Pediatric emergency physicians were surveyed to identify barriers to using the existing eCDS tool. Using standard trauma resuscitation protocols, a hierarchical task analysis of pediatric TBI evaluation was developed. Five clinical experts, all board-certified pediatric emergency medicine faculty members, then iteratively modified the hierarchical task analysis until reaching consensus. The software team developed a prototype eCDS display using the hierarchical task analysis. Three human factors engineers provided feedback on the prototype through a heuristic evaluation, and the software team refined the eCDS tool using a rapid prototyping process. The eCDS tool then underwent iterative usability evaluations by the five clinical experts using video review of 50 trauma resuscitation cases. A final eCDS tool was created based on their feedback, with content analysis of the evaluations performed to ensure all concerns were identified and addressed. Results Among 26 EPs (76% response rate), the main barriers to using the existing tool were that the information displayed is redundant and does not fit clinical workflow. After the prototype eCDS tool was developed based on the trauma resuscitation hierarchical task analysis, the human factors engineers rated it to be better than the CPOE tool for nine of 10 standard user interface design heuristics on a three-point scale. The eCDS tool was also rated better for clinical use on the same scale, in 84% of 50 expert–video pairs, and was rated equivalent in the remainder. Clinical experts also rated barriers to use of the eCDS tool as being low. Conclusions An eCDS tool for diagnostic imaging designed using human factors engineering methods has improved perceived usability among pediatric emergency physicians. PMID:26300010
The Exception Does Not Rule: Attention Constrains Form Preparation in Word Production
O’Séaghdha, Pádraig G.; Frazer, Alexandra K.
2014-01-01
Form preparation in word production, the benefit of exploiting a useful common sound (such as the first phoneme) of iteratively spoken small groups of words, is notoriously fastidious, exhibiting a seemingly categorical, all-or-none character, and a corresponding susceptibility to ‘killers’ of preparation. In particular, the presence of a single exception item in a group of otherwise phonologically consistent words has been found to eliminate the benefit of knowing a majority characteristic. This has been interpreted to mean that form preparation amounts to partial production, and thus provides a window on fundamental processes of phonological word encoding (e.g., Levelt et al., 1999). However, preparation of only fully distributed properties appears to be non-optimal, and is difficult to reconcile with the sensitivity of cognitive responses to probabilities in other domains. We show here that the all-or-none characteristic of form preparation is specific to task format. Preparation for sets that included an exception item occurred in ecologically valid production tasks, picture naming (Experiment 1), and word naming (Experiment 2). Preparation failed only in the commonly used, but indirect and resource-intensive, associative cuing task (Experiment 3). We outline an account of form preparation in which anticipation of word-initial phonological fragments uses a limited capacity, sustained attentional capability that points to rather than enacts possibilities for imminent speech. PMID:24548328
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, T.; Nakamura, H.; Kawamura, Y.
JAEA (Japan Atomic Energy Agency) manages two tritium handling laboratories: the Tritium Processing Laboratory (TPL) in Tokai and the DEMO-RD building in Rokkasho. TPL has accumulated gram-level tritium safe-handling experience, without any accidental tritium release to the environment, for more than 25 years. Recently, our activities have focused on three categories, as follows. First, the development of a detritiation system for ITER; this task is a pilot-scale demonstration test of a wet scrubber column (SC) with a processing capacity of a few hundred m³/h. Second, the DEMO-RD tasks focus on investigating the general issues required for DEMO-RD design, such as structural materials like RAFM (reduced-activation ferritic/martensitic) steels and SiC/SiC, functional materials like tritium breeder and neutron multiplier, and tritium itself. For the last four years we have devoted considerable time and resources to the construction of the DEMO-RD facility and to its licensing, and we have just started the actual research program with tritium and other radioisotopes. This tritium task includes tritium accountancy and basic tritium safety research, such as tritium interactions with the various materials that will be used for DEMO-RD, and their durability. The third category is the recovery work from the Great East Japan Earthquake (2011). It is worth noting that, despite the high magnitude of the earthquake, TPL confined tritium properly without any accidental release.
McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E
2014-07-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.
Automated Illustration of Patients' Instructions
Bui, Duy; Nakamura, Carlos; Bray, Bruce E.; Zeng-Treitler, Qing
2012-01-01
A picture can be a powerful communication tool. However, creating pictures to illustrate patient instructions can be a costly and time-consuming task. Building on our prior research in this area, we developed a computer application that automatically converts text to pictures using natural language processing and computer graphics techniques. After iterative testing, the automated illustration system was evaluated using 49 previously unseen cardiology discharge instructions. The completeness of the system-generated illustrations was assessed by three raters using a three-level scale. The average inter-rater agreement for text correctly represented in the pictograph was about 66 percent. Since illustration in this context is intended to enhance rather than replace text, these results support the feasibility of conducting automated illustration. PMID:23304392
On Design Mining: Coevolution and Surrogate Models.
Preen, Richard J; Bull, Larry
2017-01-01
Design mining is the use of computational intelligence techniques to iteratively search and model the attribute space of physical objects evaluated directly through rapid prototyping to meet given objectives. It enables the exploitation of novel materials and processes without formal models or complex simulation. In this article, we focus upon the coevolutionary nature of the design process when it is decomposed into concurrent sub-design-threads due to the overall complexity of the task. Using an abstract, tunable model of coevolution, we consider strategies to sample subthread designs for whole-system testing and how best to construct and use surrogate models within the coevolutionary scenario. Drawing on our findings, we then describe the effective design of an array of six heterogeneous vertical-axis wind turbines.
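A rough sketch of a surrogate-assisted design loop of this kind, reduced to a single design thread (no coevolving subthreads) with an assumed objective and model choice:

```python
# Expensive "physical" evaluations train a cheap surrogate, which then
# proposes the next candidate design to fabricate and test.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)

def physical_test(design):
    """Stand-in for a rapid-prototyped evaluation (e.g. turbine output)."""
    return -np.sum((design - 0.6) ** 2) + rng.normal(0, 0.01)

X = rng.random((5, 3))                            # initial sampled designs
y = np.array([physical_test(d) for d in X])
for _ in range(15):
    surrogate = GaussianProcessRegressor().fit(X, y)
    candidates = rng.random((200, 3))             # cheap candidate pool
    best = candidates[np.argmax(surrogate.predict(candidates))]
    X = np.vstack([X, best])                      # fabricate & test the pick
    y = np.append(y, physical_test(best))
print(X[np.argmax(y)].round(2))                   # approaches [0.6 0.6 0.6]
```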
Motor–sensory convergence in object localization: a comparative study in rats and humans
Horev, Guy; Saig, Avraham; Knutsen, Per Magne; Pietr, Maciej; Yu, Chunxiu; Ahissar, Ehud
2011-01-01
In order to identify basic aspects in the process of tactile perception, we trained rats and humans in similar object localization tasks and compared the strategies used by the two species. We found that rats integrated temporally related sensory inputs (‘temporal inputs’) from early whisk cycles with spatially related inputs (‘spatial inputs’) to align their whiskers with the objects; their perceptual reports appeared to be based primarily on this spatial alignment. In a similar manner, human subjects also integrated temporal and spatial inputs, but relied mainly on temporal inputs for object localization. These results suggest that during tactile object localization, an iterative motor–sensory process gradually converges on a stable percept of object location in both species. PMID:21969688
Decentralized Estimation and Control for Preserving the Strong Connectivity of Directed Graphs.
Sabattini, Lorenzo; Secchi, Cristian; Chopra, Nikhil
2015-10-01
In order to accomplish cooperative tasks, decentralized systems are required to communicate among each other. Thus, maintaining the connectivity of the communication graph is a fundamental issue. Connectivity maintenance has been extensively studied in the last few years, but generally considering undirected communication graphs. In this paper, we introduce a decentralized control and estimation strategy to maintain the strong connectivity property of directed communication graphs. In particular, we introduce a hierarchical estimation procedure that implements power iteration in a decentralized manner, exploiting an algorithm for balancing strongly connected directed graphs. The output of the estimation system is then utilized for guaranteeing preservation of the strong connectivity property. The control strategy is validated by means of analytical proofs and simulation results.
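As a centralized reference for the building block involved, plain power iteration on the weighted adjacency matrix of a small strongly connected digraph looks as follows; the paper's contribution is performing this estimation hierarchically and in a decentralized manner, which this sketch does not attempt:

```python
# Power iteration on a strongly connected (and aperiodic) digraph's
# adjacency matrix: converges to the dominant (Perron) eigenvector.
import numpy as np

A = np.array([[0, 1, 0, 0],       # directed cycle 1->2->3->1 plus a
              [0, 0, 1, 0],       # branch 3->4->1: strongly connected
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

v = np.ones(4)
for _ in range(100):              # power iteration
    v = A @ v
    v /= np.linalg.norm(v)
eigval = v @ A @ v                # Rayleigh-quotient estimate (||v|| = 1)
print(eigval.round(3), v.round(3))
```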
Gradually including potential users: A tool to counter design exclusions.
Zitkus, Emilene; Langdon, Patrick; Clarkson, P John
2018-01-01
The paper describes an iterative development process used to understand the suitability of different inclusive design evaluation tools applied in design practice. At the end of this process, a tool named the Inclusive Design Advisor was developed, combining data on the design features of small appliances with ergonomic task demands, anthropometric data, and exclusion data. When auditing a new design, the tool examines the exclusion that each design feature can cause and then gives objective recommendations directly related to those features. Interactively, it allows designers or clients to balance design changes against the exclusion caused. It presents the type of information that enables designers and clients to discuss user needs and make more inclusive design decisions.
ɛ-subgradient algorithms for bilevel convex optimization
NASA Astrophysics Data System (ADS)
Helou, Elias S.; Simões, Lucas E. A.
2017-05-01
This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.
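A tiny numeric sketch of the bilevel setting (minimizing an outer objective over the minimizers of an inner one), using a generic iterative-regularization step with assumed diminishing step sizes rather than the paper's specific ɛ-subgradient method:

```python
# Minimize f(x) = (x-2)^2 over argmin g, where g is minimized on [-1, 1]:
# the constrained optimum is x = 1. Each step combines a subgradient of g
# with a vanishing multiple of a subgradient of f.
import numpy as np

f = lambda x: (x - 2.0) ** 2                # outer objective
df = lambda x: 2.0 * (x - 2.0)
g = lambda x: max(abs(x) - 1.0, 0.0) ** 2   # inner: minimized on [-1, 1]
def dg(x):                                  # (sub)gradient of g
    if abs(x) <= 1.0:
        return 0.0
    return 2.0 * (abs(x) - 1.0) * np.sign(x)

x = 5.0
for k in range(1, 20001):
    lam = 1.0 / np.sqrt(k)                  # diminishing step size
    sigma = 1.0 / np.sqrt(k)                # vanishing outer weight
    x -= lam * (dg(x) + sigma * df(x))
print(round(x, 3))                          # approaches 1.0, the bilevel optimum
```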
Numerical methods for the design of gradient-index optical coatings.
Anzengruber, Stephan W; Klann, Esther; Ramlau, Ronny; Tonova, Diana
2012-12-01
We formulate the problem of designing gradient-index optical coatings as the task of solving a system of operator equations. We use iterative numerical procedures known from the theory of inverse problems to solve it with respect to the coating refractive index profile and thickness. The mathematical derivations necessary for the application of the procedures are presented, and different numerical methods (Landweber, Newton, and Gauss-Newton methods, Tikhonov minimization with surrogate functionals) are implemented. Procedures for the transformation of the gradient coating designs into quasi-gradient ones (i.e., multilayer stacks of homogeneous layers with different refractive indices) are also developed. The design algorithms work with physically available coating materials that could be produced with the modern coating technologies.
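Of the iterative procedures named above, the Landweber iteration is the simplest to sketch; a linear toy problem stands in for the nonlinear coating operator, so this shows only the update rule, not the optical model:

```python
# Landweber iteration for A x = b:  x_{k+1} = x_k + w * A^T (b - A x_k).
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(30, 10))           # forward operator (toy)
x_true = rng.normal(size=10)            # e.g. refractive-index parameters
b = A @ x_true                          # measured data (noise-free here)

w = 1.0 / np.linalg.norm(A, 2) ** 2     # step size below 2 / ||A||^2
x = np.zeros(10)
for _ in range(2000):
    x += w * A.T @ (b - A @ x)
print(np.allclose(x, x_true, atol=1e-3))  # True: converges on clean data
```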
A Toolset for Supporting Iterative Human-Automation Interaction in Design
NASA Technical Reports Server (NTRS)
Feary, Michael S.
2010-01-01
The addition of automation has greatly extended humans' capability to accomplish tasks, including those that are difficult, complex and safety critical. The majority of Human-Automation Interaction (HAI) results in more efficient and safe operations; however, certain unexpected automation behaviors or "automation surprises" can be frustrating and, in certain safety critical operations (e.g. transportation, manufacturing control, medicine), may result in injuries or the loss of life (Mellor, 1994; Leveson, 1995; FAA, 1995; BASI, 1998; Sheridan, 2002). This paper describes the development of a design tool that enables the rapid development and evaluation of automation prototypes. The ultimate goal of the work is to provide a design platform upon which automation surprise vulnerability analyses can be integrated.
Convergence Results on Iteration Algorithms to Linear Systems
Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo
2014-01-01
To solve large-scale linear systems, backward and Jacobi iteration algorithms are employed, for which convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that several well-known iterative algorithms can be deduced from it. The most important results are the convergence proofs. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations share the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and exhibit the merits of backward methods. PMID:24991640
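For reference, here is a minimal sketch of the textbook Jacobi iteration with an explicit spectral-radius check, the quantity on which convergence results of this kind turn (the unified backward iteration proposed in the paper is not reproduced here):

```python
import numpy as np

def jacobi(A, b, iters=500, tol=1e-10):
    """Textbook Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    D = np.diag(A)                          # diagonal entries of A
    R = A - np.diag(D)                      # off-diagonal part
    M = -R / D[:, None]                     # iteration matrix D^{-1}(D - A)
    rho = np.max(np.abs(np.linalg.eigvals(M)))
    assert rho < 1, f"Jacobi may diverge: spectral radius = {rho:.3f}"
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant matrix => spectral radius < 1, so it converges.
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b), np.linalg.solve(A, b))  # the two should agree
```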
Goal-Directed Decision Making with Spiking Neurons.
Friedrich, Johannes; Lengyel, Máté
2016-02-03
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors 0270-6474/16/361529-18$15.00/0.
Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*
NASA Astrophysics Data System (ADS)
Shimomura, Y.
1994-05-01
The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team, with support from the Parties' four Home Teams. An overview of ITER design activities is presented.
Li, Ke; Gomez-Cardona, Daniel; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong
2015-01-01
Purpose: For a given imaging task and patient size, the optimal selection of x-ray tube potential (kV) and tube current-rotation time product (mAs) is pivotal in achieving the maximal radiation dose reduction while maintaining the needed diagnostic performance. Although contrast-to-noise ratio (CNR)-based strategies can be used to optimize kV/mAs for computed tomography (CT) imaging systems employing the linear filtered backprojection (FBP) reconstruction method, a more general framework needs to be developed for systems using the nonlinear statistical model-based iterative reconstruction (MBIR) method. The purpose of this paper is to present such a unified framework for the optimization of kV/mAs selection for both FBP- and MBIR-based CT systems. Methods: The optimal selection of kV and mAs was formulated as a constrained optimization problem to minimize the objective function, Dose(kV,mAs), under the constraint that the achievable detectability index d′(kV,mAs) is not lower than the prescribed value d′Rx for a given imaging task. Since it is difficult to analytically model the dependence of d′ on kV and mAs for the highly nonlinear MBIR method, this constrained optimization problem is solved with comprehensive measurements of Dose(kV,mAs) and d′(kV,mAs) at a variety of kV–mAs combinations, after which the overlay of the dose contours and d′ contours is used to graphically determine the optimal kV–mAs combination to achieve the lowest dose while maintaining the needed detectability for the given imaging task. As an example, d′ for a 17 mm hypoattenuating liver lesion detection task was experimentally measured with an anthropomorphic abdominal phantom at four tube potentials (80, 100, 120, and 140 kV) and fifteen mA levels (25 and 50–700) with a sampling interval of 50 mA at a fixed rotation time of 0.5 s, which corresponded to a dose (CTDIvol) range of [0.6, 70] mGy. Using the proposed method, the optimal kV and mA that minimized dose for the prescribed detectability level of d′Rx = 16 were determined. As another example, the optimal kV and mA for an 8 mm hyperattenuating liver lesion detection task were also measured using the developed framework. Both an in vivo animal study and a human subject study were used as demonstrations of how the developed framework can be applied to the clinical work flow. Results: For the first task, the optimal kV and mAs were measured to be 100 and 500, respectively, for FBP, which corresponded to a dose level of 24 mGy. In comparison, the optimal kV and mAs for MBIR were 80 and 150, respectively, which corresponded to a dose level of 4 mGy. The topographies of the iso-d′ map and the iso-CNR map were the same for FBP; thus, the use of d′- and CNR-based optimization methods generated the same results for FBP. However, the topographies of the iso-d′ and iso-CNR map were significantly different in MBIR; the CNR-based method overestimated the performance of MBIR, predicting an overly aggressive dose reduction factor. For the second task, the developed framework generated the following optimization results: for FBP, kV = 140, mA = 350, dose = 37.5 mGy; for MBIR, kV = 120, mA = 250, dose = 18.8 mGy. Again, the CNR-based method overestimated the performance of MBIR. Results of the preliminary in vivo studies were consistent with those of the phantom experiments. Conclusions: A unified and task-driven kV/mAs optimization framework has been developed in this work. The framework is applicable to both linear and nonlinear CT systems such as those using the MBIR method.
As expected, the developed framework can be reduced to the conventional CNR-based kV/mAs optimization frameworks if the system is linear. For MBIR-based nonlinear CT systems, however, the developed task-based kV/mAs optimization framework is needed to achieve the maximal dose reduction while maintaining the desired diagnostic performance. PMID:26328971
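Because the optimization is carried out over measured dose and detectability surfaces rather than an analytic model, the selection step amounts to a constrained search over a grid. A hypothetical sketch follows; all surfaces and numbers below are toy surrogates, not the paper's measurements:

```python
import numpy as np

# Hypothetical stand-ins for the measured surfaces Dose(kV, mAs) and
# d'(kV, mAs); in practice both come from phantom measurements on a grid.
kv_grid = np.array([80, 100, 120, 140])
mas_grid = np.arange(25, 351, 25)
scale = np.outer(kv_grid / 80.0, mas_grid / 100.0)
dose = 5.0 * scale                          # toy dose surface (mGy)
dprime = 8.0 * np.sqrt(scale)               # toy detectability surface

d_target = 16.0                             # prescribed detectability d'Rx
feasible = dprime >= d_target               # constraint: d' >= d'Rx
masked = np.where(feasible, dose, np.inf)   # infeasible points cannot win
i, j = np.unravel_index(np.argmin(masked), dose.shape)
print(f"optimum: {kv_grid[i]} kV, {mas_grid[j]} mAs, dose {dose[i, j]:.1f} mGy")
```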
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.
Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M
2014-12-01
To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.
Rater variables associated with ITER ratings.
Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin
2013-10-01
Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4 % of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log transformed time taken to complete the ITER [β = -0.06, 95 % confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.
Ginsburg, Shiphra; Eva, Kevin; Regehr, Glenn
2013-10-01
Although scores on in-training evaluation reports (ITERs) are often criticized for poor reliability and validity, ITER comments may yield valuable information. The authors assessed across-rotation reliability of ITER scores in one internal medicine program, ability of ITER scores and comments to predict postgraduate year three (PGY3) performance, and reliability and incremental predictive validity of attendings' analysis of written comments. Numeric and narrative data from the first two years of ITERs for one cohort of residents at the University of Toronto Faculty of Medicine (2009-2011) were assessed for reliability and predictive validity of third-year performance. Twenty-four faculty attendings rank-ordered comments (without scores) such that each resident was ranked by three faculty. Mean ITER scores and comment rankings were submitted to regression analyses; dependent variables were PGY3 ITER scores and program directors' rankings. Reliabilities of ITER scores across nine rotations for 63 residents were 0.53 for both postgraduate year one (PGY1) and postgraduate year two (PGY2). Interrater reliabilities across three attendings' rankings were 0.83 for PGY1 and 0.79 for PGY2. There were strong correlations between ITER scores and comments within each year (0.72 and 0.70). Regressions revealed that PGY1 and PGY2 ITER scores collectively explained 25% of variance in PGY3 scores and 46% of variance in PGY3 rankings. Comment rankings did not improve predictions. ITER scores across multiple rotations showed decent reliability and predictive validity. Comment ranks did not add to the predictive ability, but correlation analyses suggest that trainee performance can be measured through these comments.
High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.
Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K
2018-01-01
Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potentially harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.
NASA Astrophysics Data System (ADS)
Qin, Cheng-Zhi; Zhan, Lijun
2012-06-01
As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
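As background on the serial baseline such work parallelizes, below is a minimal sequential sketch of flow accumulation under the simpler single-flow-direction (D8) rule, assuming the DEM has already been preprocessed to remove depressions and flats; the paper's CUDA-parallel multiple-flow-direction algorithm is considerably more involved:

```python
import numpy as np

def flow_accumulation_d8(dem):
    """Sequential D8 flow accumulation for a depression-free DEM."""
    rows, cols = dem.shape
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    # Flow direction: index of the steepest-descent neighbor, -1 for outlets.
    fdir = -np.ones((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            best, drop = -1, 0.0
            for k, (dr, dc) in enumerate(offs):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    d = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if d > drop:
                        best, drop = k, d
            fdir[r, c] = best
    # Accumulate downstream in topological order (each cell counts itself).
    acc = np.ones((rows, cols))
    indeg = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            if fdir[r, c] >= 0:
                dr, dc = offs[fdir[r, c]]
                indeg[r + dr, c + dc] += 1
    stack = [(r, c) for r in range(rows) for c in range(cols)
             if indeg[r, c] == 0]
    while stack:
        r, c = stack.pop()
        if fdir[r, c] >= 0:
            dr, dc = offs[fdir[r, c]]
            acc[r + dr, c + dc] += acc[r, c]
            indeg[r + dr, c + dc] -= 1
            if indeg[r + dr, c + dc] == 0:
                stack.append((r + dr, c + dc))
    return acc
```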
A model of supervisor decision-making in the accommodation of workers with low back pain
Williams-Whitt, Kelly; Kristman, Vicki; Shaw, William S.; Soklaridis, Sophie; Reguly, Paula
2016-01-01
PURPOSE To explore supervisors’ perspectives and decision-making processes in the accommodation of back injured workers. METHODS Twenty-three semi-structured, in-depth interviews were conducted with supervisors from eleven Canadian organizations about their role in providing job accommodations. Supervisors were identified through an on-line survey and interviews were recorded, transcribed and entered into NVivo software. The initial analyses identified common units of meaning, which were used to develop a coding guide. Interviews were coded, and a model of supervisor decision-making was developed based on the themes, categories and connecting ideas identified in the data. RESULTS The decision-making model includes a process element that is described as iterative “trial and error” decision-making. Medical restrictions are compared to job demands, employee abilities and available alternatives. A feasible modification is identified through brainstorming and then implemented by the supervisor. Resources used for brainstorming include information, supervisor experience and autonomy, and organizational supports. The model also incorporates the experience of accommodation as a job demand that causes strain for the supervisor. Accommodation demands affect the supervisor’s attitude, brainstorming and monitoring effort and communication with returning employees. Resources and demands have a combined effect on accommodation decision complexity, which in turn affects the quality of the accommodation option selected. If the employee is unable to complete the tasks or is reinjured during the accommodation, the decision cycle repeats. More frequent iteration through the trial and error process reduces the likelihood of return to work success. CONCLUSIONS A series of propositions is developed to illustrate the relationships among categories in the model. The model and propositions show: a) the iterative, problem solving nature of the RTW process; b) decision resources necessary for accommodation planning, and c) the impact accommodation demands may have on supervisors and RTW quality. PMID:26811170
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, B; Fujita, A; Buch, K
Purpose: To investigate the correlation between a texture analysis-based model observer and human observers in the task of diagnosing ischemic infarct in non-contrast head CT of adults. Methods: Non-contrast head CTs of five patients (2 M, 3 F; 58–83 y) with ischemic infarcts were retro-reconstructed using FBP and Adaptive Statistical Iterative Reconstruction (ASIR) of various levels (10–100%). Six neuroradiologists reviewed each image and scored image quality for diagnosing acute infarcts on a 9-point Likert scale in a blinded test. These scores were averaged across the observers to produce the average human observer responses. The chief neuroradiologist placed multiple ROIs over the infarcts. These ROIs were entered into a texture analysis software package. Forty-two features per image, including 11 GLRL, 5 GLCM, 4 GLGM, 9 Laws, and 13 2-D features, were computed and averaged over the images per dataset. The Fisher coefficient (ratio of between-class variance to in-class variance) was calculated for each feature to identify the most discriminating features from each matrix that separate the different confidence scores most efficiently. The 15 features with the highest Fisher coefficient were entered into linear multivariate regression for iterative modeling. Results: Multivariate regression analysis resulted in the best prediction model of the confidence scores after three iterations (df=11, F=11.7, p-value<0.0001). The model-predicted scores and human observer scores were highly correlated (R=0.88, R-sq=0.77). The root-mean-square and maximal residuals were 0.21 and 0.44, respectively. The residual scatter plot appeared random, symmetric, and unbiased. Conclusion: For diagnosis of ischemic infarct in non-contrast head CT in adults, the predicted image quality scores from the texture analysis-based model observer were highly correlated with those of human observers for various noise levels. A texture-based model observer can characterize image quality of low-contrast, subtle texture changes in addition to human observers.
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Wu, Caiyun; Zhao, Yue; McDonough, Joseph M.; Capraro, Anthony; Torigian, Drew A.; Campbell, Robert M.
2017-03-01
Lung delineation via dynamic 4D thoracic magnetic resonance imaging (MRI) is necessary for quantitative image analysis for studying pediatric respiratory diseases such as thoracic insufficiency syndrome (TIS). This task is very challenging because of the often-extreme malformations of the thorax in TIS, the lack of signal from bone and connective tissues resulting in inadequate image quality, abnormal thoracic dynamics, and the inability of the patients to cooperate with the protocol needed to get good quality images. We propose an interactive fuzzy connectedness approach as a potential practical solution to this difficult problem. Manual segmentation is too labor intensive, especially due to the 4D nature of the data, and can lead to low repeatability of the segmentation results. Registration-based approaches are somewhat inefficient and may produce inaccurate results due to accumulated registration errors and inadequate boundary information. The proposed approach works in a manner resembling the Iterative Livewire tool but uses iterative relative fuzzy connectedness (IRFC) as the delineation engine. Seeds needed by IRFC are set manually and are propagated from slice to slice, decreasing the needed human labor, and then a fuzzy connectedness map is automatically calculated almost instantaneously. If the segmentation is acceptable, the user selects the "next" slice. Otherwise, the seeds are refined and the process continues. Although human interaction is needed, an advantage of the method is the high level of efficient user control over the process and the absence of any need for post hoc refinement of the results. Dynamic MRI sequences from 5 pediatric TIS patients involving 39 3D spatial volumes are used to evaluate the proposed approach. The method is compared to two other IRFC strategies with a higher level of automation. The proposed method yields an overall true positive and false positive volume fraction of 0.91 and 0.03, respectively, and a Hausdorff boundary distance of 2 mm.
NASA Technical Reports Server (NTRS)
Marley, Mark
2015-01-01
After discovery, the first task of exoplanet science is characterization. However, experience has shown that the limited spectral range and resolution of most directly imaged exoplanet data require an iterative approach to spectral modeling. Simple, brown dwarf-like models must first be tested to ascertain whether they are both adequate to reproduce the available data and consistent with additional constraints, including the age of the system and available limits on the planet's mass and luminosity, if any. When agreement is lacking, progressively more complex solutions must be considered, including non-solar composition, partial cloudiness, and disequilibrium chemistry. Such additional complexity must be balanced against an understanding of the limitations of the atmospheric models themselves. For example, while great strides have been made in improving the opacities of important molecules, particularly NH3 and CH4, at high temperatures, much more work is needed to understand the opacity of atomic Na and K. The highly pressure-broadened fundamental band of Na and K in the optical stretches into the near-infrared, strongly influencing the spectral shape of the Y and J spectral bands. Discerning gravity and atmospheric composition is difficult, if not impossible, without both good atomic opacities and an excellent understanding of the relevant atmospheric chemistry. I will present examples of the iterative process of directly imaged exoplanet characterization as applied to both known and potentially newly discovered exoplanets, with a focus on constraints provided by GPI spectra. If a new GPI planet is lacking, as a case study I will discuss HR 8799 c and d and explain why some solutions, such as spatially inhomogeneous cloudiness, introduce their own additional layers of complexity. If spectra of new planets from GPI are available, I will explain the modeling process in the context of understanding these new worlds.
Ikeda, Mitsuru
2017-01-01
Information extraction and knowledge discovery regarding adverse drug reactions (ADRs) from large-scale clinical texts are very useful but demanding processes. Two major difficulties of this task are the lack of domain experts for labeling examples and the intractable processing of unstructured clinical texts. Although most previous work has addressed these issues by applying semisupervised learning to the former and a word-based approach to the latter, these approaches face complexity in acquiring initial labeled data and ignore the structured sequences of natural language. In this study, we propose automatic data labeling by distant supervision, where knowledge bases are exploited to assign an entity-level relation label to each drug-event pair in the texts, and we then use patterns to characterize the ADR relation. Multiple-instance learning with an expectation-maximization method is employed to estimate the model parameters. The method applies transductive learning to iteratively reassign the probability of each unknown drug-event pair at training time. In experiments with 50,998 discharge summaries, we evaluate our method by varying a large number of parameters, that is, pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. In these evaluations, our proposed method outperforms the word-based feature for NB-EM (iEM), MILR, and TSVM with F1 score improvements of 11.3%, 9.3%, and 6.5%, respectively. PMID:29090077
A free interactive matching program
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.-F. Ostiguy
1999-04-16
For physicists and engineers involved in the design and analysis of beamlines (transfer lines or insertions), the lattice function matching problem is central and can be time-consuming because it involves constrained nonlinear optimization. For such problems, convergence can be difficult to obtain in general without expert human intervention. Over the years, powerful codes have been developed to assist beamline designers. The canonical example is MAD (Methodical Accelerator Design), developed at CERN by Christophe Iselin. MAD, through a specialized command language, allows one to solve a wide variety of problems, including matching problems. Although in principle the MAD command interpreter can be run interactively, in practice the solution of a matching problem involves a sequence of independent trial runs. Unfortunately, but perhaps not surprisingly, there still exist relatively few tools exploiting the resources offered by modern environments to assist lattice designers with this routine and repetitive task. In this paper, we describe a fully interactive lattice matching program, written in C++ and assembled using freely available software components. An important feature of the code is that the evolution of the lattice functions during the nonlinear iterative process can be graphically monitored in real time; the user can dynamically interrupt the iterations at will to introduce new variables, freeze existing ones into their current state and/or modify constraints. The program runs under both UNIX and Windows NT.
Thermonuclear Power Engineering: 60 Years of Research. What Comes Next?
NASA Astrophysics Data System (ADS)
Strelkov, V. S.
2017-12-01
This paper summarizes the results of more than half a century of research on high-temperature plasmas heated to temperatures of more than 100 million degrees (10^4 eV) and magnetically insulated from the walls. The energy of light-element fusion can be used for electric power generation or as a source of fissionable fuel production (development of a fusion neutron source, FNS). The main results of studies of tokamak plasmas, which were obtained in the Soviet Union with the greatest degree of thermal plasma isolation among all other types of devices, are presented. As a result, research programs of other countries were redirected to tokamaks. Later, on the basis of the analysis of numerous experiments, the international fusion community gradually came to the opinion that it is possible to build a tokamak (ITER) with Q > 1 (where Q is the ratio of the fusion power to the external power injected into the plasma). The ITER program objective is to achieve Q = 1-10 for a discharge time of up to 1000 s. The implementation of this goal does not solve the problem of steady-state operation. The solution to this problem is a reliable first wall and current generation. This is a task of the next fusion power plant construction stage, called DEMO. Comparison of DEMO and FNS parameters shows that, at this development stage, the operating parameters and conditions of these devices are identical.
Image super-resolution via adaptive filtering and regularization
NASA Astrophysics Data System (ADS)
Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming
2014-11-01
Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from the low-resolution image coupled with some prior knowledge as a regularization term. One classic approach regularizes the image by total variation (TV) and/or wavelet or some other transform, which introduces some artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial relevance among pixels first, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered as the regularization term. Concretely, by transforming the original model, the SR problem can be solved by two alternating iterative sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image can be obtained by solving the first and second sub-problems, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.
Insect anaphylaxis: where are we? The stinging facts 2012.
Tracy, James M; Khan, Fatima S; Demain, Jeffrey G
2012-08-01
Insect allergy remains an important cause of morbidity and mortality in the United States. In 2011, the third iteration of the stinging insect hypersensitivity practice parameter was published, the first being published in 1999 and the second in 2004. Since the 2004 edition, our understanding of insect hypersensitivity has continued to expand and has been incorporated into the 2011 edition. This work will review the relevant changes in the management of insect hypersensitivity occurring since 2004 and present our current understanding of the insect hypersensitivity diagnosis and management. Since the 2004 commissioning by the Joint Task Force (JTF) on Practice Parameters of 'Stinging insect hypersensitivity: a practice parameter update', there have been important contributions to our understanding of insect allergy. These contributions were incorporated into the 2011 iteration. Similar efforts were made by the European Allergy Asthma and Clinical Immunology Interest Group in 2005 and most recently in 2011 by the British Society of Allergy and Clinical Immunology. Our understanding of insect allergy, including the natural history, epidemiology, diagnostic testing, and risk factors, has greatly expanded. This evolution of knowledge should provide improved long-term management of stinging insect hypersensitivity. This review will focus primarily on the changes between the 2004 and 2011 stinging insect practice parameter commissioned by the JTF on Practice Parameters, but will, where appropriate, highlight the differences between working groups.
NASA Technical Reports Server (NTRS)
1996-01-01
Solving for the displacements of free-free coupled systems acted upon by static loads is a common task in the aerospace industry. Often, these problems are solved by static analysis with inertia relief. This technique allows for a free-free static analysis by balancing the applied loads with the inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus the displacement-dependent loads. A launch vehicle being acted upon by an aerodynamic loading can have such applied loads. The final displacements of such systems are commonly determined with iterative solution techniques. Unfortunately, these techniques can be time consuming and labor intensive. Because the coupled system equations for free-free systems with displacement-dependent loads can be written in closed form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. An MSC/NASTRAN (MacNeal-Schwendler Corporation/NASA Structural Analysis) DMAP (Direct Matrix Abstraction Program) Alter was used to include displacement-dependent loads in static analysis with inertia relief. It efficiently solved a common aerospace problem that typically has been solved with an iterative technique.
Zhu, Dianwen; Li, Changqing
2014-12-01
Fluorescence molecular tomography (FMT) is a promising imaging modality and has been actively studied in the past two decades, since it can locate a specific tumor position three-dimensionally in small animals. However, it remains a challenging task to obtain fast, robust and accurate reconstructions of the fluorescent probe distribution in small animals due to the large computational burden, the noisy measurements and the ill-posed nature of the inverse problem. In this paper we propose a nonuniform preconditioning method in combination with L1 regularization and an ordered subsets technique (NUMOS) to take care of the different updating needs at different pixels, to enhance sparsity and suppress noise, and to further boost convergence of approximate solutions for fluorescence molecular tomography. Using both simulated data and a phantom experiment, we found that the proposed nonuniform updating method outperforms its popular uniform counterpart by obtaining a more localized, less noisy, more accurate image. The computational cost was greatly reduced as well. The ordered subsets (OS) technique provided additional 5-fold and 3-fold speed enhancements for the simulation and phantom experiments, respectively, without degrading image quality. When compared with popular L1 algorithms such as the iterative soft-thresholding algorithm (ISTA) and the fast iterative soft-thresholding algorithm (FISTA), NUMOS also outperforms them by obtaining a better image in a much shorter period of time.
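For orientation, the uniform-step baseline mentioned above (ISTA) can be stated in a few lines. This is the textbook algorithm used for comparison, not the proposed NUMOS method, which replaces the uniform step with pixel-wise preconditioning and ordered subsets:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=500):
    """Uniform-step ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)            # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```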
Iterative methods for mixed finite element equations
NASA Technical Reports Server (NTRS)
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived and then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
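To make the structure of such indefinite saddle-point systems concrete, here is a minimal sketch of the classic Uzawa iteration, in which a displacement-type solve plays the role of the preconditioner. It illustrates the general family only, not the specific Hu-Washizu-based algorithms of the paper:

```python
import numpy as np

def uzawa(A, B, f, g, omega=1.0, iters=5000, tol=1e-10):
    """Uzawa iteration for the saddle-point system [A B^T; B 0][u; p] = [f; g].

    The displacement-type solve with A acts as the preconditioner; omega
    must be small enough relative to the Schur complement B A^{-1} B^T.
    """
    p = np.zeros(B.shape[0])
    u = np.zeros(B.shape[1])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)   # displacement solve
        r = B @ u - g                          # constraint residual
        if np.linalg.norm(r) < tol:
            break
        p = p + omega * r                      # multiplier update
    return u, p
```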
ITER EDA Newsletter. Volume 3, no. 2
NASA Astrophysics Data System (ADS)
1994-02-01
This issue of the ITER EDA (Engineering Design Activities) Newsletter contains reports on the Fifth ITER Council Meeting held in Garching, Germany, January 27-28, 1994, a visit (January 28, 1994) of an international group of Harvard Fellows to the San Diego Joint Work Site, the Inauguration Ceremony of the EC-hosted ITER joint work site in Garching (January 28, 1994), on an ITER Technical Meeting on Assembly and Maintenance held in Garching, Germany, January 19-26, 1994, and a report on a Technical Committee Meeting on radiation effects on in-vessel components held in Garching, Germany, November 15-19, 1993, as well as an ITER Status Report.
Experimental Evidence on Iterated Reasoning in Games
Grehl, Sascha; Tutić, Andreas
2015-01-01
We present experimental evidence on two forms of iterated reasoning in games, i.e. backward induction and interactive knowledge. Besides reliable estimates of the cognitive skills of the subjects, our design allows us to disentangle two possible explanations for the observed limits in performed iterated reasoning: Restrictions in subjects’ cognitive abilities and their beliefs concerning the rationality of co-players. In comparison to previous literature, our estimates regarding subjects’ skills in iterated reasoning are quite pessimistic. Also, we find that beliefs concerning the rationality of co-players are completely irrelevant in explaining the observed limited amount of iterated reasoning in the dirty faces game. In addition, it is demonstrated that skills in backward induction are a solid predictor for skills in iterated knowledge, which points to some generalized ability of the subjects in iterated reasoning. PMID:26312486
Pastores, Stephen M.; Martin, Greg S.; Baumann, Michael H.; Curtis, J. Randall; Farmer, J. Christopher; Fessler, Henry E.; Gupta, Rakesh; Hill, Nicholas S.; Hyzy, Robert C.; Kvetan, Vladimir; MacGregor, Drew A.; O’Grady, Naomi P.; Ognibene, Frederick P.; Rubenfeld, Gordon D.; Sessler, Curtis N.; Siegal, Eric; Simpson, Steven Q.; Spevetz, Antoinette; Ward, Nicholas S.; Zimmerman, Janice L.
2014-01-01
Objectives Multiple training pathways are recognized by the Accreditation Council for Graduate Medical Education (ACGME) for internal medicine (IM) physicians to certify in critical care medicine (CCM) via the American Board of Internal Medicine. While each involves 1 year of clinical fellowship training in CCM, substantive differences in training requirements exist among the various pathways. The Critical Care Societies Collaborative convened a task force to review these CCM pathways and to provide recommendations for unified and coordinated training requirements for IM-based physicians. Participants A group of CCM professionals certified in pulmonary-CCM and/or IM-CCM from ACGME-accredited training programs who have expertise in education, administration, research, and clinical practice. Data Sources and Synthesis Relevant published literature was accessed through a MEDLINE search and references provided by all task force members. Material published by the ACGME, American Board of Internal Medicine, and other specialty organizations was also reviewed. Collaboratively and iteratively, the task force reached consensus using a roundtable meeting, electronic mail, and conference calls. Main Results Internal medicine-CCM–based fellowships have disparate program requirements compared to other internal medicine subspecialties and adult CCM fellowships. Differences between IM-CCM and pulmonary-CCM programs include the ratio of key clinical faculty to fellows and a requirement to perform 50 therapeutic bronchoscopies. Competency-based training was considered uniformly desirable for all CCM training pathways. Conclusions The task force concluded that requesting competency-based training and minimizing variations in the requirements for IM-based CCM fellowship programs will facilitate effective CCM training for both programs and trainees. PMID:24637881
Kim, Dongkue; Park, Sangsoo; Jeong, Myung Ho; Ryu, Jeha
2018-02-01
In percutaneous coronary intervention (PCI), cardiologists must study two different X-ray image sources: a fluoroscopic image and an angiogram. Manipulating a guidewire while alternately monitoring the two separate images on separate screens requires a deep understanding of the anatomy of coronary vessels and substantial training. We propose 2D/2D spatiotemporal image registration of the two images in a single image in order to provide cardiologists with enhanced visual guidance in PCI. The proposed 2D/2D spatiotemporal registration method uses a cross-correlation of the two ECG series in each image to temporally synchronize the two separate images and register an angiographic image onto the fluoroscopic image. A guidewire centerline is then extracted from the fluoroscopic image in real time, and the alignment of the centerline with the vessel outlines of the chosen angiographic image is optimized using the iterative closest point algorithm for spatial registration. A proof-of-concept evaluation with a phantom coronary vessel model with engineering students showed an error reduction rate greater than 74% for wrong insertions into nontarget branches compared to the non-registration method, and more than a 47% reduction in task completion time in performing guidewire manipulation for very difficult tasks. Evaluation with a small number of experienced doctors shows a potentially significant reduction in both task completion time and error rate for difficult tasks. The total registration time with real procedure X-ray (angiographic and fluoroscopic) images takes approximately 60 ms, which is within the fluoroscopic image acquisition rate of 15 Hz. By providing cardiologists with better visual guidance in PCI, the proposed spatiotemporal image registration method is shown to be useful in advancing the guidewire to the coronary vessel branches, especially those that are difficult to insert into.
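The spatial-registration step relies on the iterative closest point (ICP) algorithm. A minimal 2-D point-to-point variant is sketched below, alternating nearest-neighbor correspondences with a closed-form SVD alignment; it is purely illustrative of the optimization the paper performs between the guidewire centerline and the vessel outlines:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=50, tol=1e-6):
    """Rigid 2-D ICP: returns rotation R and translation t mapping src to dst."""
    tree = cKDTree(dst)                      # nearest-neighbor structure
    R, t = np.eye(2), np.zeros(2)
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(cur)          # closest-point correspondences
        matched = dst[idx]
        # Closed-form rigid alignment of the matched sets (SVD / Umeyama).
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step        # apply incremental transform
        R, t = R_step @ R, R_step @ t + t_step
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```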
Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography
NASA Astrophysics Data System (ADS)
Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.
1985-06-01
Two "point-by-point" iteration procedures, namely, Iterative Least Square Technique (ILST) and Simultaneous Iterative Reconstructive Technique (SIRT) were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used in forming the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.
From Amorphous to Defined: Balancing the Risks of Spiral Development
2007-04-30
[Figure residue removed: time-series plot, x-axis Time (weeks), of work packages started and active in each phase (Requirements, Technology, Design, Manufacturing, Use) for iteration 1 of the JavelinCalibration model.]
Bounded-Angle Iterative Decoding of LDPC Codes
NASA Technical Reports Server (NTRS)
Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2009-01-01
Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
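A minimal sketch of the underlying geometric test: the received word and a candidate codeword are treated as vectors in n-dimensional Euclidean space, and the decoder output is accepted only when the angle between them falls within a prescribed bound. This illustrates the acceptance criterion only; the actual decoder modification is specified in the original report:

```python
import numpy as np

def accept_by_angle(received, decoded_bits, max_angle_rad):
    """Accept a decoded word only if its angle to the received vector is small.

    received: real-valued channel outputs, length n.
    decoded_bits: 0/1 array from the iterative decoder, length n.
    """
    c = 1.0 - 2.0 * decoded_bits              # BPSK map: bits {0,1} -> {+1,-1}
    cosang = received @ c / (np.linalg.norm(received) * np.linalg.norm(c))
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))
    return angle <= max_angle_rad             # rejection -> detected failure
```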
Close-out report with links to abstracts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marmar, Earl S.
This grant provided A/V support for two technical meetings of the Edge Coordinating Committee: (1) Nov 13, 2013 (co-located with the APS-DPP meeting in Denver, CO) https://ecc.mit.edu/fall-2013-technical-meeting#overlay-context=ecc-meetings; (2) April 28-May 1, 2015 (embedded sessions in the Transport Task Force Meeting, Salem, MA) http://www-internal.psfc.mit.edu/TTF2015/index.html. The ultimate goal of the U.S. Transport Task Force is to develop a physics-based understanding of particle, momentum and heat transport in magnetic fusion devices. This understanding should be of sufficient depth that it allows the development of predictive models of plasma transport that can be validated against experiment, and then used to anticipate the future performance of burning plasmas in ITER, as well as to provide guidance for the design of next-step fusion nuclear science facilities. To achieve success in transport science, it is essential to characterize local fluctuations and transport in toroidal plasmas, to understand the basic mechanisms responsible for transport, and ultimately to control these transport processes. These goals must be pursued in multiple areas, and these topics evolve in order to reflect current interests.
Defocus and magnification dependent variation of TEM image astigmatism.
Yan, Rui; Li, Kunpeng; Jiang, Wen
2018-01-10
Daily alignment of the microscope is a prerequisite to reaching optimal lens conditions for high resolution imaging in cryo-EM. In this study, we have investigated how image astigmatism varies with the imaging conditions (e.g. defocus, magnification). We have found that the large change of defocus/magnification between visual correction of astigmatism and subsequent data collection tasks, or during data collection, will inevitably result in undesirable astigmatism in the final images. The dependence of astigmatism on the imaging conditions varies significantly from time to time, so that it cannot be reliably compensated by pre-calibration of the microscope. Based on these findings, we recommend that the same magnification and the median defocus of the intended defocus range for final data collection are used in the objective lens astigmatism correction task during microscope alignment and in the focus mode of the iterative low-dose imaging. It is also desirable to develop a fast, accurate method that can perform dynamic correction of the astigmatism for different intended defocuses during automated imaging. Our findings also suggest that the slope of astigmatism changes caused by varying defocuses can be used as a convenient measurement of objective lens rotation symmetry and potentially an acceptance test of new electron microscopes.
Total quality management: It works for aerospace information services
NASA Technical Reports Server (NTRS)
Erwin, James; Eberline, Carl; Colquitt, Wanda
1993-01-01
Today we are in the midst of information and 'total quality' revolutions. At the NASA STI Program's Center for AeroSpace Information (CASI), we are focused on using continuous improvement techniques to enrich today's services and products and to ensure that tomorrow's technology supports the TQM-based improvement of future STI program products and services. The Continuous Improvements Program at CASI is the foundation for Total Quality Management in products and services. The focus is customer-driven; its goal, to identify processes and procedures that can be improved and new technologies that can be integrated with the processes to gain efficiencies, provide effectiveness, and promote customer satisfaction. This Program seeks to establish quality through an iterative defect-prevention approach that is based on the incorporation of standards and measurements into the processing cycle. Four projects are described that utilize cross-functional, problem-solving teams for identifying requirements and defining tasks and task standards, management participation, attention to critical processes, and measurable long-term goals. The implementation of these projects provides the customer with measurably improved access to information that is provided through several channels: the NASA STI Database, document requests for microfiche and hardcopy, and the Centralized Help Desk.
NASA Technical Reports Server (NTRS)
Hall, E. J.; Topp, D. A.; Delaney, R. A.
1996-01-01
The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.
A CNN Regression Approach for Real-Time 2D/3D Registration.
Shun Miao; Wang, Z Jane; Rui Liao
2016-05-01
In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.
Deep Learning: A Primer for Radiologists.
Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An
2017-01-01
Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
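To make the iterative weight-adjustment idea concrete, here is a minimal NumPy sketch of a one-hidden-layer network trained by back-propagating a corrective error signal on XOR data; the layer sizes, learning rate, and iteration count are arbitrary illustrative choices, not taken from the article.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # example inputs
y = np.array([[0], [1], [1], [0]], float)              # target outputs (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)          # weighted connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    delta2 = (out - y) * out * (1 - out)      # error signal at the output
    delta1 = (delta2 @ W2.T) * (1 - h**2)     # back-propagated to the hidden layer
    W2 -= lr * h.T @ delta2; b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1; b1 -= lr * delta1.sum(axis=0)

print(np.round(out.ravel(), 2))               # typically approaches [0, 1, 1, 0]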
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This Remedial Investigation (RI) Work Plan has been developed as part of the US Department of Energy's (DOE's) investigation of the Groundwater Operable Unit (GWOU) at Oak Ridge National Laboratory (ORNL) located near Oak Ridge, Tennessee. The first iteration of the GWOU RI Work Plan is intended to serve as a strategy document to guide the ORNL GWOU RI. The Work Plan provides a rationale and organization for groundwater data acquisition, monitoring, and remedial actions to be performed during implementation of environmental restoration activities associated with the ORNL GWOU. It is important to note that the RI Work Plan for the ORNL GWOU is not a prototypical work plan. The RI will be conducted using annual work plans to manage the work activities, and task reports will be used to document the results of the investigations. Sampling and analysis results will be compiled and reported annually with a review of data relative to risk (screening-level risk assessment review) for groundwater. This Work Plan outlines the overall strategy for the RI and defines tasks which are to be conducted during the initial phase of investigation. This plan is presented with the understanding that more specific addenda to the plan will follow.
Design and field test of collaborative tools in the service of an innovative organization
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Beler, N.; Parfouru, S.
2012-07-01
This paper presents the design process of ICT-based collaborative tools aimed at supporting the tasks of the team that manages an outage of an energy production plant for maintenance activities. The design process follows an iterative and multidisciplinary approach, based on modeling the collective tasks of the outage management team in the light of Socio Organizational and Human (SOH) field studies, and on the state of the art of ICT. Field testing of the designed collaborative tools plays an important role in this approach, allowing the operational world to be taken into account, but it also involves risks that must be managed. To implement the tools on all the production plants, we build an 'operational concept' with a level of description that permits the tools to evolve and allows some local adaptations. The field tests provide lessons on the ICT topics, for example: the status of remote access tools, the potential for using information entered by one actor for several individual and collective purposes, the actors' perception of the tools' meaning, and the requirements for supporting the implementation of change. (authors)
Developing collaborative classifiers using an expert-based model
Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan
2009-01-01
This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.
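As a rough Python illustration of this staged strategy (not the authors' expert-based system), the sketch below trains one classifier per stage and hands low-accuracy regions of one parametric dimension to the next stage; the synthetic data, the 0.9 accuracy threshold, the binning on feature 0, and the use of reference labels to score regions are all assumptions made for the sketch.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5)); y = (X[:, 0] * X[:, 1] > 0).astype(int)
Xtr, ytr, Xva, yva = X[:1000], y[:1000], X[1000:], y[1000:]

stages = [DecisionTreeClassifier(max_depth=3), MLPClassifier(max_iter=2000)]
done = np.zeros(len(Xva), bool)     # regions already classified acceptably
pred = np.empty(len(Xva), int)

for clf in stages:                  # iteratively apply classification methods
    clf.fit(Xtr, ytr)
    p = clf.predict(Xva)
    # split one parametric dimension into bins and keep the bins where
    # this stage performs well; low-accuracy bins pass to the next stage
    bins = np.digitize(Xva[:, 0], np.linspace(-2, 2, 8))
    for b in np.unique(bins):
        m = (bins == b) & ~done
        if m.any() and (p[m] == yva[m]).mean() >= 0.9:
            pred[m], done[m] = p[m], True

pred[~done] = p[~done]              # the final stage covers whatever remains
print("overall accuracy:", round((pred == yva).mean(), 2))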
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB, extending one's modeling with scripting in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and postprocessing. First, the head script launches COMSOL with MATLAB, defines initial values of all parameters, refers to the objective function J described in the objective-function file, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment through the API interface), and the defined optimization parameters are changed so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, fmincon returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in chosen case studies in the fields of technical cybernetics and bioengineering.
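The MATLAB/COMSOL loop described above can be mimicked in Python; in the sketch below, scipy.optimize.minimize plays the role of fmincon, and a cheap analytic function stands in for the COMSOL model call made through the LiveLink API, since the actual multiphysics model cannot be reproduced here.

import numpy as np
from scipy.optimize import minimize

def run_model(params):
    # Stand-in for the COMSOL solve invoked via the API in each iteration;
    # a real setup would update model parameters and re-run the simulation.
    x, y = params
    return (x - 1.2) ** 2 + 5.0 * (y + 0.3) ** 2

def objective(params):
    return run_model(params)     # objective function J evaluated per iteration

x0 = np.array([0.0, 0.0])                     # initial parameter values
bounds = [(-5.0, 5.0), (-5.0, 5.0)]           # simple box constraints
res = minimize(objective, x0, bounds=bounds)  # fmincon-like constrained search
print(res.x, res.fun, res.status)             # optimized parameters, J, exit flag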
Cross-domain active learning for video concept detection
NASA Astrophysics Data System (ADS)
Li, Huan; Li, Chao; Shi, Yuan; Xiong, Zhang; Hauptmann, Alexander G.
2011-08-01
As video data from a variety of different domains (e.g., news, documentaries, entertainment) have distinctive data distributions, cross-domain video concept detection becomes an important task, in which one can reuse the labeled data of one domain to benefit the learning task in another domain with insufficient labeled data. In this paper, we approach this problem by proposing a cross-domain active learning method which iteratively queries labels of the most informative samples in the target domain. Traditional active learning assumes that the training (source domain) and test data (target domain) are from the same distribution. However, it may fail when the two domains have different distributions, because querying informative samples according to a base learner that initially learned from the source domain may no longer be helpful for the target domain. In our paper, we use the Gaussian random field model as the base learner, which has the advantage of exploring the distributions in both domains, and adopt uncertainty sampling as the query strategy. Additionally, we present an instance-weighting trick to accelerate the adaptability of the base learner, and develop an efficient model-updating method which can significantly speed up the active learning process. Experimental results on TRECVID collections highlight the effectiveness of the proposed method.
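A bare-bones sketch of the uncertainty-sampling loop might look as follows, with a logistic model standing in for the Gaussian random field base learner and without the instance-weighting and fast-update tricks; the synthetic source/target shift and the query budget are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (300, 2)); ys = (Xs.sum(1) > 0).astype(int)   # source: labeled
Xt = rng.normal(0.7, 1.2, (300, 2)); yt = (Xt.sum(1) > 0).astype(int)   # target: shifted

labeled = list(range(5))                      # a few initial target labels
clf = LogisticRegression()

for it in range(20):                          # active-learning iterations
    clf.fit(np.vstack([Xs, Xt[labeled]]), np.concatenate([ys, yt[labeled]]))
    proba = clf.predict_proba(Xt)[:, 1]
    uncertainty = -np.abs(proba - 0.5)        # closest to 0.5 = most informative
    uncertainty[labeled] = -np.inf            # never re-query known samples
    labeled.append(int(np.argmax(uncertainty)))   # simulated oracle supplies yt here

print("target accuracy:", (clf.predict(Xt) == yt).mean())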
Highlights of X-Stack ExM Deliverable Swift/T
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wozniak, Justin M.
Swift/T is a key success of the ExM ('System support for extreme-scale, many-task applications') X-Stack project, which proposed to use concurrent dataflow as an innovative programming model to exploit extreme parallelism in exascale computers. The Swift/T component of the project reimplemented the Swift language from scratch to allow applications that compose scientific modules together to be built and run on available petascale computers (Blue Gene, Cray). Swift/T does this via a new compiler and runtime that generate and execute the application as an MPI program. We assume that mission-critical emerging exascale applications will be composed as scalable applications using existing software components, connected by data dependencies. Developers wrap native code fragments using a higher-level language, then build composite applications to form a computational experiment. This exemplifies hierarchical concurrency: lower-level messaging libraries are used for fine-grained parallelism; high-level control is used for inter-task coordination. These patterns are best expressed with dataflow, but static DAGs (i.e., other workflow languages) limit the applications that can be built; they do not provide the expressiveness of Swift, such as conditional execution, iteration, and recursive functions.
Lyon, Louisa; Burnet, Philip WJ; Kew, James NC; Corti, Corrado; Rawlins, J Nicholas P; Lane, Tracy; De Filippis, Bianca; Harrison, Paul J; Bannerman, David M
2011-01-01
Group II metabotropic glutamate receptors (mGluR2 and mGluR3, encoded by GRM2 and GRM3) are implicated in hippocampal function and cognition, and in the pathophysiology and treatment of schizophrenia and other psychiatric disorders. However, pharmacological and behavioral studies with group II mGluR agonists and antagonists have produced complex results. Here, we studied hippocampus-dependent memory in GRM2/3 double knockout (GRM2/3−/−) mice in an iterative sequence of experiments. We found that they were impaired on appetitively motivated spatial reference and working memory tasks, and on a spatial novelty preference task that relies on animals' exploratory drive, but were unimpaired on aversively motivated spatial memory paradigms. GRM2/3−/− mice also performed normally on an appetitively motivated, non-spatial, visual discrimination task. These results likely reflect an interaction between GRM2/3 genotype and the arousal-inducing properties of the experimental paradigm. The deficit seen on appetitive and exploratory spatial memory tasks may be absent in aversive tasks because the latter induce higher levels of arousal, which rescue spatial learning. Consistent with an altered arousal–cognition relationship in GRM2/3−/− mice, injection stress worsened appetitively motivated, spatial working memory in wild-types, but enhanced performance in GRM2/3−/− mice. GRM2/3−/− mice were also hypoactive in response to amphetamine. This fractionation of hippocampus-dependent memory depending on the appetitive-aversive context is to our knowledge unique, and suggests a role for group II mGluRs at the interface of arousal and cognition. These arousal-dependent effects may explain apparently conflicting data from previous studies, and have translational relevance for the involvement of these receptors in schizophrenia and other disorders. PMID:21832989
Zhang, Huaguang; Song, Ruizhuo; Wei, Qinglai; Zhang, Tieyan
2011-12-01
In this paper, a novel heuristic dynamic programming (HDP) iteration algorithm is proposed to solve the optimal tracking control problem for a class of nonlinear discrete-time systems with time delays. The novel algorithm contains state updating, control policy iteration, and performance index iteration. To get the optimal states, the states are also updated. Furthermore, 'backward iteration' is applied to the state updating. Two neural networks are used to approximate the performance index function and compute the optimal control policy, facilitating the implementation of the HDP iteration algorithm. Finally, we present two examples to demonstrate the effectiveness of the proposed HDP iteration algorithm.
RJMCMC based Text Placement to Optimize Label Placement and Quantity
NASA Astrophysics Data System (ADS)
Touya, Guillaume; Chassin, Thibaud
2018-05-01
Label placement is a tedious task in map design, and its automation has long been a goal for researchers in cartography, but also in computational geometry. Methods that search for an optimal or nearly optimal solution satisfying a set of constraints, such as avoiding label overlaps, have been proposed in the literature. Most of these methods focus on finding the optimal positions for a given set of labels, but rarely allow the removal of labels as part of the optimization. This paper proposes to apply an optimization technique called Reversible-Jump Markov Chain Monte Carlo, which makes it easy to model the removal or addition of labels during the optimization iterations. The method, still quite preliminary, is tested on a real dataset, and the first results are encouraging.
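The add/remove flavor of the search can be conveyed with a heavily simplified Metropolis-style sketch (a true reversible-jump sampler would also carry proposal-ratio and dimension-matching terms, omitted here); point coordinates, candidate offsets, and energy weights are all invented.

import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(30, 2))      # map features to label (or not)
OFFSETS = np.array([[0.3, 0.3], [-0.3, 0.3], [0.3, -0.3], [-0.3, -0.3]])

def energy(state):
    # penalize label-label overlaps, reward every label actually placed
    placed = [points[i] + OFFSETS[s] for i, s in state.items()]
    overlaps = sum(np.linalg.norm(a - b) < 0.5
                   for k, a in enumerate(placed) for b in placed[k + 1:])
    return 5.0 * overlaps - len(placed)

state, E = {}, 0.0                             # feature index -> offset choice
for it in range(5000):
    prop = dict(state)
    i = int(rng.integers(len(points)))
    if i in prop and rng.random() < 0.3:
        del prop[i]                            # "death" move: drop a label
    else:
        prop[i] = int(rng.integers(len(OFFSETS)))  # "birth"/relocate move
    dE = energy(prop) - E
    if dE < 0 or rng.random() < np.exp(-dE):   # Metropolis acceptance rule
        state, E = prop, E + dE

print(len(state), "of", len(points), "labels placed, energy", round(E, 1))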
Region growing using superpixels with learned shape prior
NASA Astrophysics Data System (ADS)
Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro
2017-11-01
Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization that have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions, provided they can be found fast enough and are sufficiently accurate for the purpose. In this paper we perform an experimental study that indicates that genetic algorithms are well suited to the vehicle routing problem.
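For flavor, here is a deliberately stripped-down evolutionary loop on a single-route instance, a toy stand-in for the full VRP (one vehicle, no capacities, swap mutation and truncation selection only); all coordinates and population settings are invented.

import numpy as np

rng = np.random.default_rng(0)
customers = rng.uniform(0, 100, size=(15, 2))   # toy customer locations

def route_length(perm):
    pts = customers[perm]
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

def mutate(perm):
    a, b = rng.choice(len(perm), 2, replace=False)
    child = perm.copy(); child[a], child[b] = child[b], child[a]
    return child

pop = [rng.permutation(len(customers)) for _ in range(60)]
for gen in range(300):                          # iterative improvement
    pop.sort(key=route_length)                  # rank by fitness
    elite = pop[:20]                            # truncation selection
    pop = elite + [mutate(elite[int(rng.integers(20))]) for _ in range(40)]

best = min(pop, key=route_length)
print("best route length:", round(route_length(best), 1))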
Alexander, Carla S; Pappas, Gregory; Henley, Yvonne; Kangalawe, Angela Kaiza; Oyebola, Folaju Olusegun; Obiefune, Michael; Nwene, Ejike; Stanis-Ezeobi, Winifred; Enejoh, Victor; Nwizu, Chidi; Nwandu, Anthea Nwandu; Memiah, Peter; Etienne-Mesubi, Martine; Oni, Babatunji; Amoroso, Anthony; Redfield, Robert R
2015-08-01
Pain management (PM) has not been routinely incorporated into HIV/AIDS care and treatment in resource-constrained settings. We describe training for multidisciplinary teams tasked with integrating care management into HIV clinics to address pain for persons living with HIV in Nigeria. Education on PM was provided to mixed-disciplinary teams including didactic and iterative sessions following home and hospital visits. Participants identified challenges and performed group problem solving. HIV trainers identified barriers to introducing PM reflecting views of the patient, providers, culture, and the health environment. Implementation strategies included (1) building upon existing relationships; (2) preliminary advocacy; (3) attention to staff needs; and (4) structured data review. Implementing PM in Nigerian HIV clinics requires recognition of cultural beliefs. © The Author(s) 2014.
NASA Technical Reports Server (NTRS)
Lewis, Clayton; Wilde, Nick
1989-01-01
Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.
An adaptive, object oriented strategy for base calling in DNA sequence analysis.
Giddings, M C; Brumley, R L; Haker, M; Smith, L M
1993-01-01
An algorithm has been developed for the determination of nucleotide sequence from data produced in fluorescence-based automated DNA sequencing instruments employing the four-color strategy. This algorithm takes advantage of object oriented programming techniques for modularity and extensibility. The algorithm is adaptive in that data sets from a wide variety of instruments and sequencing conditions can be used with good results. Confidence values are provided on the base calls as an estimate of accuracy. The algorithm iteratively employs confidence determinations from several different modules, each of which examines a different feature of the data for accurate peak identification. Modules within this system can be added or removed for increased performance or for application to a different task. In comparisons with commercial software, the algorithm performed well. PMID:8233787
Study of phase clustering method for analyzing large volumes of meteorological observation data
NASA Astrophysics Data System (ADS)
Volkov, Yu. V.; Krutikov, V. A.; Botygin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.
2017-11-01
The article describes an iterative parallel phase-grouping algorithm for temperature field classification. The algorithm is based on a modified method of structure formation using the analytic signal. The developed method makes it possible to address climate classification as well as climatic zoning for any temporal or spatial scale. Applied to surface temperature measurement series, the algorithm finds climatic structures with correlated changes of the temperature field, supports conclusions about climate uniformity in a given area, and tracks climate changes over time by analyzing shifts in the type groups. The information on climate type groups specific to selected geographical areas is supplemented by a genetic scheme of class distribution that depends on changes in the level of mutual correlation between ground temperature monthly averages.
Optimal strategies for throwing accurately
NASA Astrophysics Data System (ADS)
Venkadesan, M.; Mahadevan, L.
2017-04-01
The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed-accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error.
Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation
Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan
2013-01-01
The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
Microprocessor utilization in search and rescue missions
NASA Technical Reports Server (NTRS)
Schwartz, M.; Bashkow, T.
1978-01-01
The position of an emergency transmitter may be determined by measuring the Doppler shift of the distress signal as received by an orbiting satellite. This requires the computation of an initial estimate and refinement of this estimate through an iterative, nonlinear, least squares estimation. A version of the algorithm was implemented and tested by locating a transmitter on the premises and obtaining observations from a satellite. The computer used was an IBM 360/95. The position was determined within the desired 10 km radius accuracy. The feasibility of performing the same task in real time using microprocessor technology was then determined: the least squares algorithm was implemented on an Intel 8080 microprocessor. The results indicate that a microprocessor can easily match the IBM implementation in accuracy and can operate inside the time limitations set.
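The estimation step described above is a classic iterative nonlinear least squares refinement; the sketch below shows a generic Gauss-Newton loop on a hypothetical 2-D analogue in which range-like observations replace the Doppler measurements, with sensor positions, noise level, and starting guess invented for the example.

import numpy as np

def gauss_newton(residuals, x0, n_iter=10, eps=1e-6):
    # refine x by repeatedly linearizing the residual vector around the
    # current estimate (forward-difference Jacobian, least-squares step)
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        r = residuals(x)
        J = np.empty((len(r), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x); dx[j] = eps
            J[:, j] = (residuals(x + dx) - r) / eps
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

sensors = np.array([[0, 10], [8, 2], [4, 9], [10, 10]], float)
truth = np.array([3.0, 4.0])
rng = np.random.default_rng(1)
obs = np.linalg.norm(sensors - truth, axis=1) + 0.01 * rng.standard_normal(4)

est = gauss_newton(lambda x: np.linalg.norm(sensors - x, axis=1) - obs, x0=[5.0, 5.0])
print(est)    # converges near (3, 4) from the initial estimate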
Absorbing Software Testing into the Scrum Method
NASA Astrophysics Data System (ADS)
Tuomikoski, Janne; Tervonen, Ilkka
In this paper we study how to absorb software testing into the Scrum method. We conducted the research as action research during the years 2007-2008, with three iterations. The results showed that testing can, and even should, be absorbed into the Scrum method. The testing team was merged into the Scrum teams. The teams can now deliver better working software in a shorter time, because testing keeps track of the progress of the development. Team spirit is also higher, because the Scrum team members are committed to the same goal. The biggest change from the test manager's point of view was the organized Product Owner Team. The test manager no longer has a testing team, and in the future all testing tasks have to be assigned through the Product Backlog.
Performance analysis of improved iterated cubature Kalman filter and its application to GNSS/INS.
Cui, Bingbo; Chen, Xiyuan; Xu, Yuan; Huang, Haoqian; Liu, Xiao
2017-01-01
In order to improve the accuracy and robustness of GNSS/INS navigation systems, an improved iterated cubature Kalman filter (IICKF) is proposed that accounts for state-dependent noise and system uncertainty. First, a simplified framework of the iterated Gaussian filter is derived by using a damped Newton-Raphson algorithm and an online noise estimator. Then the effect of state-dependent noise arising from the iterated update is analyzed theoretically, and an augmented form of the CKF algorithm is applied to improve the estimation accuracy. The performance of IICKF is verified by field test and numerical simulation, and the results reveal that, compared with the non-iterated filter, the iterated filter is less sensitive to system uncertainty, and that IICKF improves the accuracy of yaw, roll, and pitch by 48.9%, 73.1%, and 83.3%, respectively, compared with the traditional iterated KF. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Reducing the latency of the Fractal Iterative Method to half an iteration
NASA Astrophysics Data System (ADS)
Béchet, Clémentine; Tallon, Michel
2013-12-01
The fractal iterative method for atmospheric tomography (FRiM-3D) was introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach that avoids iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the 'warm-start' strategy in adaptive optics. To our knowledge, this particular way of using the warm start has not been reported before. Furthermore, with the requirement of iterating to compute the commands removed, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to at most half the cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the ESO Octopus simulator, we demonstrate the benefit of this approach. We finally demonstrate the robustness of this new implementation with respect to increasing measurement noise, wind speed, and even modeling errors.
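FRiM-3D's solver is not reproduced here, but the value of warm-starting can be shown with a generic iterative solver: starting a single conjugate-gradient iteration from the previous time step's solution, rather than from zero, already lands much closer to the answer when the right-hand side changes only slightly between frames; the matrix, noise level, and sizes below are invented.

import numpy as np

def cg(A, b, x0, n_iter):
    # a few conjugate-gradient iterations for A x = b from a given start
    x = x0.copy(); r = b - A @ x; p = r.copy()
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50)); A = M @ M.T + 50 * np.eye(50)   # SPD system
x_prev = np.linalg.solve(A, rng.standard_normal(50))   # previous frame's solution
b = A @ x_prev + 0.05 * rng.standard_normal(50)        # slightly changed RHS

exact = np.linalg.solve(A, b)
cold = cg(A, b, np.zeros(50), n_iter=1)                # cold start from zero
warm = cg(A, b, x_prev, n_iter=1)                      # warm start from last frame
print(np.linalg.norm(cold - exact), np.linalg.norm(warm - exact))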
Wilson, Michael P.; Nordstrom, Kimberly; Anderson, Eric L.; Ng, Anthony T.; Zun, Leslie S.; Peltzer-Jones, Jennifer M.; Allen, Michael H.
2017-01-01
Introduction The emergency medical evaluation of psychiatric patients presenting to United States emergency departments (ED), usually termed “medical clearance,” often varies between EDs. A task force of the American Association for Emergency Psychiatry (AAEP), consisting of physicians from emergency medicine, physicians from psychiatry and a psychologist, was convened to form consensus recommendations for the medical evaluation of psychiatric patients presenting to U.S.EDs. Methods The task force reviewed existing literature on the topic of medical evaluation of psychiatric patients in the ED and then combined this with expert consensus. Consensus was achieved by group discussion as well as iterative revisions of the written document. The document was reviewed and approved by the AAEP Board of Directors. Results Eight recommendations were formulated. These recommendations cover various topics in emergency medical examination of psychiatric patients, including goals of medical screening in the ED, the identification of patients at low risk for co-existing medical disease, key elements in the ED evaluation of psychiatric patients including those with cognitive disorders, specific language replacing the term “medical clearance,” and the need for better science in this area. Conclusion The evidence indicates that a thorough history and physical examination, including vital signs and mental status examination, are the minimum necessary elements in the evaluation of psychiatric patients. With respect to laboratory testing, the picture is less clear and much more controversial. PMID:28611885
Formative evaluation of a patient-specific clinical knowledge summarization tool
Del Fiol, Guilherme; Mostafa, Javed; Pu, Dongqiuye; Medlin, Richard; Slager, Stacey; Jonnalagadda, Siddhartha R.; Weir, Charlene R.
2015-01-01
Objective To iteratively design a prototype of a computerized clinical knowledge summarization (CKS) tool aimed at helping clinicians finding answers to their clinical questions; and to conduct a formative assessment of the usability, usefulness, efficiency, and impact of the CKS prototype on physicians’ perceived decision quality compared with standard search of UpToDate and PubMed. Materials and methods Mixed-methods observations of the interactions of 10 physicians with the CKS prototype vs. standard search in an effort to solve clinical problems posed as case vignettes. Results The CKS tool automatically summarizes patient-specific and actionable clinical recommendations from PubMed (high quality randomized controlled trials and systematic reviews) and UpToDate. Two thirds of the study participants completed 15 out of 17 usability tasks. The median time to task completion was less than 10 s for 12 of the 17 tasks. The difference in search time between the CKS and standard search was not significant (median = 4.9 vs. 4.5 min). Physician’s perceived decision quality was significantly higher with the CKS than with manual search (mean = 16.6 vs. 14.4; p = 0.036). Conclusions The CKS prototype was well-accepted by physicians both in terms of usability and usefulness. Physicians perceived better decision quality with the CKS prototype compared to standard search of PubMed and UpToDate within a similar search time. Due to the formative nature of this study and a small sample size, conclusions regarding efficiency and efficacy are exploratory. PMID:26612774
Jacova, Claudia; McGrenere, Joanna; Lee, Hyunsoo S; Wang, William W; Le Huray, Sarah; Corenblith, Emily F; Brehmer, Matthew; Tang, Charlotte; Hayden, Sherri; Beattie, B Lynn; Hsiung, Ging-Yuek R
2015-01-01
Cognitive Testing on Computer (C-TOC) is a novel computer-based test battery developed to improve both usability and validity in the computerized assessment of cognitive function in older adults. C-TOC's usability was evaluated concurrently with its iterative development to version 4 in subjects with and without cognitive impairment, and health professional advisors representing different ethnocultural groups. C-TOC version 4 was then validated against neuropsychological tests (NPTs), and by comparing performance scores of subjects with normal cognition, Cognitive Impairment Not Dementia (CIND) and Alzheimer disease. C-TOC's language tests were validated in subjects with aphasic disorders. The most important usability issue that emerged from consultations with 27 older adults and with 8 cultural advisors was the test-takers' understanding of the task, particularly executive function tasks. User interface features did not pose significant problems. C-TOC version 4 tests correlated with comparator NPT (r=0.4 to 0.7). C-TOC test scores were normal (n=16)>CIND (n=16)>Alzheimer disease (n=6). All normal/CIND NPT performance differences were detected on C-TOC. Low computer knowledge adversely affected test performance, particularly in CIND. C-TOC detected impairments in aphasic disorders (n=11). In general, C-TOC had good validity in detecting cognitive impairment. Ensuring test-takers' understanding of the tasks, and considering their computer knowledge appear important steps towards C-TOC's implementation.
Home Health Nurse Collaboration in the Medical Neighborhood of Children with Medical Complexity.
Nageswaran, Savithri; Golden, Shannon L
2016-10-01
The objectives of this study were to describe how home healthcare nurses collaborate with other clinicians caring for children with medical complexity, and identify barriers to collaboration within the medical neighborhood. Using qualitative data obtained from 20 semistructured interviews (15 English, 5 Spanish) with primary caregivers of children with medical complexity and 18 home healthcare nurses, researchers inquired about experiences with home healthcare nursing services for these children. During an iterative analysis process, recurrent themes were identified by their prevalence and salience in the data. Home healthcare nurses collaborate with many providers within the medical neighborhood of children with medical complexity and perform many different collaborative tasks. This collaboration is valued by caregivers and nurses, but is inconsistent. Home healthcare nurses' communication with other clinicians is important to the delivery of good-quality care to children with medical complexity at home, but is not always present. Home healthcare nurses reported inability to share clinical information with other clinicians, not receiving child-specific information, and lack of support for clinical problem-solving as concerns. Barriers for optimal collaboration included lack of preparedness of parents, availability of physicians for clinical support, reimbursement for collaborative tasks, variability in home healthcare nurses' tasks, and problems at nursing agency level. Home healthcare nurses' collaboration with other clinicians is important, but problems exist in the current system of care. Optimizing collaboration between home healthcare nurses and other clinicians will likely have a positive impact on these children and their families.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
Comparisons of Observed Process Quality in German and American Infant/Toddler Programs
ERIC Educational Resources Information Center
Tietze, Wolfgang; Cryer, Debby
2004-01-01
Observed process quality in infant/toddler classrooms was compared in Germany (n = 75) and the USA (n = 219). Process quality was assessed with the Infant/Toddler Environment Rating Scale (ITERS) and parent attitudes about ITERS content with the ITERS Parent Questionnaire (ITERSPQ). The ITERS had comparable reliabilities in the two countries and…
A Monte Carlo Study of an Iterative Wald Test Procedure for DIF Analysis
ERIC Educational Resources Information Center
Cao, Mengyang; Tay, Louis; Liu, Yaowu
2017-01-01
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes and propose two mixed iterative algorithms. None of the proposed algorithms requires any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms for solving the multiple-set split equality problem.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.
Integrated Collaborative Model in Research and Education with Emphasis on Small Satellite Technology
1996-01-01
feedback; the number of iterations in a complete iteration is referred to as loop depth or iteration depth, g(i). A data packet or packet is data... loop depth, g(i)) is either a finite (constant or variable) or an infinite value. 1) Finite loop depth, variable number of iterations: Some problems... design time. The time needed for the first packet to leave and a new initial data to be introduced to the iteration is min(R * (g(k) * (N+I) + k-1
AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.
Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S
2017-09-01
Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost in quality. Copyright © 2017 Elsevier Inc. All rights reserved.
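For context, the baseline that such pipelines accelerate is an exhaustive dictionary search; the sketch below shows that step in miniature, with random vectors standing in for Bloch-simulated fingerprints and invented (T1, T2) ranges.

import numpy as np

rng = np.random.default_rng(0)
n_atoms, seq_len = 5000, 300

dictionary = rng.standard_normal((n_atoms, seq_len))             # stand-in fingerprints
params = rng.uniform([300, 20], [2000, 200], size=(n_atoms, 2))  # (T1, T2) in ms

# normalize atoms so matching reduces to a maximal inner product
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match(fingerprint):
    # exhaustive search: return the (T1, T2) of the best-correlated atom
    scores = dictionary @ (fingerprint / np.linalg.norm(fingerprint))
    return params[np.argmax(scores)]

measured = dictionary[1234] + 0.1 * rng.standard_normal(seq_len)
print(match(measured))    # recovers the parameters of atom 1234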
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of the two load iteration equations that the iterative method uses, in combination with results of a calibration data analysis, for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the two load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine whether the primary gage sensitivity of a balance gage exists. It is then shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of the non-linear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct-read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
Mission of ITER and Challenges for the Young
NASA Astrophysics Data System (ADS)
Ikeda, Kaname
2009-02-01
It is recognized that the ongoing effort to provide sufficient energy for the wellbeing of the globe's population and to power the world economy is of the greatest importance. ITER is a joint international research and development project that aims to demonstrate the scientific and technical feasibility of fusion power. It represents the responsible actions of governments whose countries comprise over half the world's population, to create fusion power as a source of clean, economic, carbon dioxide-free energy. This is the most important science initiative of our time. The partners in the Project—the ITER Parties—are the European Union, Japan, the People's Republic of China, India, the Republic of Korea, the Russian Federation and the USA. ITER will be constructed in Europe, at Cadarache in the South of France. The talk will illustrate the genesis of the ITER Organization, the ongoing work at the Cadarache site and the planned schedule for construction. There will also be an explanation of the unique aspects of international collaboration that have been developed for ITER. Although the present focus of the project is construction activities, ITER is also a major scientific and technological research program, for which the best of the world's intellectual resources is needed. Challenges for the young, imperative for fulfillment of the objective of ITER will be identified. It is important that young students and researchers worldwide recognize the rapid development of the project, and the fundamental issues that must be overcome in ITER. The talk will also cover the exciting career and fellowship opportunities for young people at the ITER Organization.
New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses a method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present research paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using an iterative NLLS estimator based on nonlinear studentized residuals, has been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models, with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
ERIC Educational Resources Information Center
Smith, Michael
1990-01-01
Presents several examples of the iteration method using computer spreadsheets. Examples included are simple iterative sequences and the solution of equations using the Newton-Raphson formula, linear interpolation, and interval bisection. (YP)
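The spreadsheet-style recurrences mentioned here translate directly into short loops; below is a minimal sketch of two of them, Newton-Raphson and interval bisection, applied to x^2 - 2 = 0 (an example of our choosing, not from the article).

def newton_raphson(f, df, x0, n=10):
    # fill-down recurrence x_{k+1} = x_k - f(x_k) / f'(x_k)
    x = x0
    for _ in range(n):
        x = x - f(x) / df(x)
    return x

def bisection(f, lo, hi, n=30):
    # repeatedly halve a bracket known to contain a sign change
    for _ in range(n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

f = lambda x: x**2 - 2          # root at sqrt(2)
print(newton_raphson(f, lambda x: 2 * x, x0=1.0))
print(bisection(f, 1.0, 2.0))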
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
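The dynamic-programming machinery the authors rely on can be illustrated compactly. The sketch below runs value iteration on a toy MDP; the random transition tensor and rewards are hypothetical stand-ins for a service-composition model (states encoding composition progress, actions selecting services), not the paper's data.

```python
# A minimal value-iteration sketch on a toy MDP with hypothetical dynamics.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # R[s, a]

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V             # Q[s, a] = R[s, a] + gamma * sum_s' P*V
    V_new = Q.max(axis=1)             # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)             # greedy policy w.r.t. converged values
print(V, policy)
```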
Overview of the JET results in support to ITER
NASA Astrophysics Data System (ADS)
Litaudon, X.; Abduallev, S.; Abhangi, M.; Abreu, P.; Afzal, M.; Aggarwal, K. M.; Ahlgren, T.; Ahn, J. H.; Aho-Mantila, L.; Aiba, N.; Airila, M.; Albanese, R.; Aldred, V.; Alegre, D.; Alessi, E.; Aleynikov, P.; Alfier, A.; Alkseev, A.; Allinson, M.; Alper, B.; Alves, E.; Ambrosino, G.; Ambrosino, R.; Amicucci, L.; Amosov, V.; Andersson Sundén, E.; Angelone, M.; Anghel, M.; Angioni, C.; Appel, L.; Appelbee, C.; Arena, P.; Ariola, M.; Arnichand, H.; Arshad, S.; Ash, A.; Ashikawa, N.; Aslanyan, V.; Asunta, O.; Auriemma, F.; Austin, Y.; Avotina, L.; Axton, M. D.; Ayres, C.; Bacharis, M.; Baciero, A.; Baião, D.; Bailey, S.; Baker, A.; Balboa, I.; Balden, M.; Balshaw, N.; Bament, R.; Banks, J. W.; Baranov, Y. F.; Barnard, M. A.; Barnes, D.; Barnes, M.; Barnsley, R.; Baron Wiechec, A.; Barrera Orte, L.; Baruzzo, M.; Basiuk, V.; Bassan, M.; Bastow, R.; Batista, A.; Batistoni, P.; Baughan, R.; Bauvir, B.; Baylor, L.; Bazylev, B.; Beal, J.; Beaumont, P. S.; Beckers, M.; Beckett, B.; Becoulet, A.; Bekris, N.; Beldishevski, M.; Bell, K.; Belli, F.; Bellinger, M.; Belonohy, É.; Ben Ayed, N.; Benterman, N. A.; Bergsåker, H.; Bernardo, J.; Bernert, M.; Berry, M.; Bertalot, L.; Besliu, C.; Beurskens, M.; Bieg, B.; Bielecki, J.; Biewer, T.; Bigi, M.; Bílková, P.; Binda, F.; Bisoffi, A.; Bizarro, J. P. S.; Björkas, C.; Blackburn, J.; Blackman, K.; Blackman, T. R.; Blanchard, P.; Blatchford, P.; Bobkov, V.; Boboc, A.; Bodnár, G.; Bogar, O.; Bolshakova, I.; Bolzonella, T.; Bonanomi, N.; Bonelli, F.; Boom, J.; Booth, J.; Borba, D.; Borodin, D.; Borodkina, I.; Botrugno, A.; Bottereau, C.; Boulting, P.; Bourdelle, C.; Bowden, M.; Bower, C.; Bowman, C.; Boyce, T.; Boyd, C.; Boyer, H. J.; Bradshaw, J. M. A.; Braic, V.; Bravanec, R.; Breizman, B.; Bremond, S.; Brennan, P. D.; Breton, S.; Brett, A.; Brezinsek, S.; Bright, M. D. J.; Brix, M.; Broeckx, W.; Brombin, M.; Brosławski, A.; Brown, D. P. D.; Brown, M.; Bruno, E.; Bucalossi, J.; Buch, J.; Buchanan, J.; Buckley, M. A.; Budny, R.; Bufferand, H.; Bulman, M.; Bulmer, N.; Bunting, P.; Buratti, P.; Burckhart, A.; Buscarino, A.; Busse, A.; Butler, N. K.; Bykov, I.; Byrne, J.; Cahyna, P.; Calabrò, G.; Calvo, I.; Camenen, Y.; Camp, P.; Campling, D. C.; Cane, J.; Cannas, B.; Capel, A. J.; Card, P. J.; Cardinali, A.; Carman, P.; Carr, M.; Carralero, D.; Carraro, L.; Carvalho, B. B.; Carvalho, I.; Carvalho, P.; Casson, F. J.; Castaldo, C.; Catarino, N.; Caumont, J.; Causa, F.; Cavazzana, R.; Cave-Ayland, K.; Cavinato, M.; Cecconello, M.; Ceccuzzi, S.; Cecil, E.; Cenedese, A.; Cesario, R.; Challis, C. D.; Chandler, M.; Chandra, D.; Chang, C. S.; Chankin, A.; Chapman, I. T.; Chapman, S. C.; Chernyshova, M.; Chitarin, G.; Ciraolo, G.; Ciric, D.; Citrin, J.; Clairet, F.; Clark, E.; Clark, M.; Clarkson, R.; Clatworthy, D.; Clements, C.; Cleverly, M.; Coad, J. P.; Coates, P. A.; Cobalt, A.; Coccorese, V.; Cocilovo, V.; Coda, S.; Coelho, R.; Coenen, J. W.; Coffey, I.; Colas, L.; Collins, S.; Conka, D.; Conroy, S.; Conway, N.; Coombs, D.; Cooper, D.; Cooper, S. R.; Corradino, C.; Corre, Y.; Corrigan, G.; Cortes, S.; Coster, D.; Couchman, A. S.; Cox, M. P.; Craciunescu, T.; Cramp, S.; Craven, R.; Crisanti, F.; Croci, G.; Croft, D.; Crombé, K.; Crowe, R.; Cruz, N.; Cseh, G.; Cufar, A.; Cullen, A.; Curuia, M.; Czarnecka, A.; Dabirikhah, H.; Dalgliesh, P.; Dalley, S.; Dankowski, J.; Darrow, D.; Davies, O.; Davis, W.; Day, C.; Day, I. E.; De Bock, M.; de Castro, A.; de la Cal, E.; de la Luna, E.; De Masi, G.; de Pablos, J. 
L.; De Temmerman, G.; De Tommasi, G.; de Vries, P.; Deakin, K.; Deane, J.; Degli Agostini, F.; Dejarnac, R.; Delabie, E.; den Harder, N.; Dendy, R. O.; Denis, J.; Denner, P.; Devaux, S.; Devynck, P.; Di Maio, F.; Di Siena, A.; Di Troia, C.; Dinca, P.; D'Inca, R.; Ding, B.; Dittmar, T.; Doerk, H.; Doerner, R. P.; Donné, T.; Dorling, S. E.; Dormido-Canto, S.; Doswon, S.; Douai, D.; Doyle, P. T.; Drenik, A.; Drewelow, P.; Drews, P.; Duckworth, Ph.; Dumont, R.; Dumortier, P.; Dunai, D.; Dunne, M.; Ďuran, I.; Durodié, F.; Dutta, P.; Duval, B. P.; Dux, R.; Dylst, K.; Dzysiuk, N.; Edappala, P. V.; Edmond, J.; Edwards, A. M.; Edwards, J.; Eich, Th.; Ekedahl, A.; El-Jorf, R.; Elsmore, C. G.; Enachescu, M.; Ericsson, G.; Eriksson, F.; Eriksson, J.; Eriksson, L. G.; Esposito, B.; Esquembri, S.; Esser, H. G.; Esteve, D.; Evans, B.; Evans, G. E.; Evison, G.; Ewart, G. D.; Fagan, D.; Faitsch, M.; Falie, D.; Fanni, A.; Fasoli, A.; Faustin, J. M.; Fawlk, N.; Fazendeiro, L.; Fedorczak, N.; Felton, R. C.; Fenton, K.; Fernades, A.; Fernandes, H.; Ferreira, J.; Fessey, J. A.; Février, O.; Ficker, O.; Field, A.; Fietz, S.; Figueiredo, A.; Figueiredo, J.; Fil, A.; Finburg, P.; Firdaouss, M.; Fischer, U.; Fittill, L.; Fitzgerald, M.; Flammini, D.; Flanagan, J.; Fleming, C.; Flinders, K.; Fonnesu, N.; Fontdecaba, J. M.; Formisano, A.; Forsythe, L.; Fortuna, L.; Fortuna-Zalesna, E.; Fortune, M.; Foster, S.; Franke, T.; Franklin, T.; Frasca, M.; Frassinetti, L.; Freisinger, M.; Fresa, R.; Frigione, D.; Fuchs, V.; Fuller, D.; Futatani, S.; Fyvie, J.; Gál, K.; Galassi, D.; Gałązka, K.; Galdon-Quiroga, J.; Gallagher, J.; Gallart, D.; Galvão, R.; Gao, X.; Gao, Y.; Garcia, J.; Garcia-Carrasco, A.; García-Muñoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaudio, P.; Gauthier, E.; Gear, D. F.; Gee, S. J.; Geiger, B.; Gelfusa, M.; Gerasimov, S.; Gervasini, G.; Gethins, M.; Ghani, Z.; Ghate, M.; Gherendi, M.; Giacalone, J. C.; Giacomelli, L.; Gibson, C. S.; Giegerich, T.; Gil, C.; Gil, L.; Gilligan, S.; Gin, D.; Giovannozzi, E.; Girardo, J. B.; Giroud, C.; Giruzzi, G.; Glöggler, S.; Godwin, J.; Goff, J.; Gohil, P.; Goloborod'ko, V.; Gomes, R.; Gonçalves, B.; Goniche, M.; Goodliffe, M.; Goodyear, A.; Gorini, G.; Gosk, M.; Goulding, R.; Goussarov, A.; Gowland, R.; Graham, B.; Graham, M. E.; Graves, J. P.; Grazier, N.; Grazier, P.; Green, N. R.; Greuner, H.; Grierson, B.; Griph, F. S.; Grisolia, C.; Grist, D.; Groth, M.; Grove, R.; Grundy, C. N.; Grzonka, J.; Guard, D.; Guérard, C.; Guillemaut, C.; Guirlet, R.; Gurl, C.; Utoh, H. H.; Hackett, L. J.; Hacquin, S.; Hagar, A.; Hager, R.; Hakola, A.; Halitovs, M.; Hall, S. J.; Hallworth Cook, S. P.; Hamlyn-Harris, C.; Hammond, K.; Harrington, C.; Harrison, J.; Harting, D.; Hasenbeck, F.; Hatano, Y.; Hatch, D. R.; Haupt, T. D. V.; Hawes, J.; Hawkes, N. C.; Hawkins, J.; Hawkins, P.; Haydon, P. W.; Hayter, N.; Hazel, S.; Heesterman, P. J. L.; Heinola, K.; Hellesen, C.; Hellsten, T.; Helou, W.; Hemming, O. N.; Hender, T. C.; Henderson, M.; Henderson, S. S.; Henriques, R.; Hepple, D.; Hermon, G.; Hertout, P.; Hidalgo, C.; Highcock, E. G.; Hill, M.; Hillairet, J.; Hillesheim, J.; Hillis, D.; Hizanidis, K.; Hjalmarsson, A.; Hobirk, J.; Hodille, E.; Hogben, C. H. A.; Hogeweij, G. M. D.; Hollingsworth, A.; Hollis, S.; Homfray, D. A.; Horáček, J.; Hornung, G.; Horton, A. R.; Horton, L. D.; Horvath, L.; Hotchin, S. P.; Hough, M. R.; Howarth, P. J.; Hubbard, A.; Huber, A.; Huber, V.; Huddleston, T. M.; Hughes, M.; Huijsmans, G. T. A.; Hunter, C. L.; Huynh, P.; Hynes, A. 
M.; Iglesias, D.; Imazawa, N.; Imbeaux, F.; Imríšek, M.; Incelli, M.; Innocente, P.; Irishkin, M.; Ivanova-Stanik, I.; Jachmich, S.; Jacobsen, A. S.; Jacquet, P.; Jansons, J.; Jardin, A.; Järvinen, A.; Jaulmes, F.; Jednoróg, S.; Jenkins, I.; Jeong, C.; Jepu, I.; Joffrin, E.; Johnson, R.; Johnson, T.; Johnston, Jane; Joita, L.; Jones, G.; Jones, T. T. C.; Hoshino, K. K.; Kallenbach, A.; Kamiya, K.; Kaniewski, J.; Kantor, A.; Kappatou, A.; Karhunen, J.; Karkinsky, D.; Karnowska, I.; Kaufman, M.; Kaveney, G.; Kazakov, Y.; Kazantzidis, V.; Keeling, D. L.; Keenan, T.; Keep, J.; Kempenaars, M.; Kennedy, C.; Kenny, D.; Kent, J.; Kent, O. N.; Khilkevich, E.; Kim, H. T.; Kim, H. S.; Kinch, A.; king, C.; King, D.; King, R. F.; Kinna, D. J.; Kiptily, V.; Kirk, A.; Kirov, K.; Kirschner, A.; Kizane, G.; Klepper, C.; Klix, A.; Knight, P.; Knipe, S. J.; Knott, S.; Kobuchi, T.; Köchl, F.; Kocsis, G.; Kodeli, I.; Kogan, L.; Kogut, D.; Koivuranta, S.; Kominis, Y.; Köppen, M.; Kos, B.; Koskela, T.; Koslowski, H. R.; Koubiti, M.; Kovari, M.; Kowalska-Strzęciwilk, E.; Krasilnikov, A.; Krasilnikov, V.; Krawczyk, N.; Kresina, M.; Krieger, K.; Krivska, A.; Kruezi, U.; Książek, I.; Kukushkin, A.; Kundu, A.; Kurki-Suonio, T.; Kwak, S.; Kwiatkowski, R.; Kwon, O. J.; Laguardia, L.; Lahtinen, A.; Laing, A.; Lam, N.; Lambertz, H. T.; Lane, C.; Lang, P. T.; Lanthaler, S.; Lapins, J.; Lasa, A.; Last, J. R.; Łaszyńska, E.; Lawless, R.; Lawson, A.; Lawson, K. D.; Lazaros, A.; Lazzaro, E.; Leddy, J.; Lee, S.; Lefebvre, X.; Leggate, H. J.; Lehmann, J.; Lehnen, M.; Leichtle, D.; Leichuer, P.; Leipold, F.; Lengar, I.; Lennholm, M.; Lerche, E.; Lescinskis, A.; Lesnoj, S.; Letellier, E.; Leyland, M.; Leysen, W.; Li, L.; Liang, Y.; Likonen, J.; Linke, J.; Linsmeier, Ch.; Lipschultz, B.; Liu, G.; Liu, Y.; Lo Schiavo, V. P.; Loarer, T.; Loarte, A.; Lobel, R. C.; Lomanowski, B.; Lomas, P. J.; Lönnroth, J.; López, J. M.; López-Razola, J.; Lorenzini, R.; Losada, U.; Lovell, J. J.; Loving, A. B.; Lowry, C.; Luce, T.; Lucock, R. M. A.; Lukin, A.; Luna, C.; Lungaroni, M.; Lungu, C. P.; Lungu, M.; Lunniss, A.; Lupelli, I.; Lyssoivan, A.; Macdonald, N.; Macheta, P.; Maczewa, K.; Magesh, B.; Maget, P.; Maggi, C.; Maier, H.; Mailloux, J.; Makkonen, T.; Makwana, R.; Malaquias, A.; Malizia, A.; Manas, P.; Manning, A.; Manso, M. E.; Mantica, P.; Mantsinen, M.; Manzanares, A.; Maquet, Ph.; Marandet, Y.; Marcenko, N.; Marchetto, C.; Marchuk, O.; Marinelli, M.; Marinucci, M.; Markovič, T.; Marocco, D.; Marot, L.; Marren, C. A.; Marshal, R.; Martin, A.; Martin, Y.; Martín de Aguilera, A.; Martínez, F. J.; Martín-Solís, J. R.; Martynova, Y.; Maruyama, S.; Masiello, A.; Maslov, M.; Matejcik, S.; Mattei, M.; Matthews, G. F.; Maviglia, F.; Mayer, M.; Mayoral, M. L.; May-Smith, T.; Mazon, D.; Mazzotta, C.; McAdams, R.; McCarthy, P. J.; McClements, K. G.; McCormack, O.; McCullen, P. A.; McDonald, D.; McIntosh, S.; McKean, R.; McKehon, J.; Meadows, R. C.; Meakins, A.; Medina, F.; Medland, M.; Medley, S.; Meigh, S.; Meigs, A. G.; Meisl, G.; Meitner, S.; Meneses, L.; Menmuir, S.; Mergia, K.; Merrigan, I. R.; Mertens, Ph.; Meshchaninov, S.; Messiaen, A.; Meyer, H.; Mianowski, S.; Michling, R.; Middleton-Gear, D.; Miettunen, J.; Militello, F.; Militello-Asp, E.; Miloshevsky, G.; Mink, F.; Minucci, S.; Miyoshi, Y.; Mlynář, J.; Molina, D.; Monakhov, I.; Moneti, M.; Mooney, R.; Moradi, S.; Mordijck, S.; Moreira, L.; Moreno, R.; Moro, F.; Morris, A. W.; Morris, J.; Moser, L.; Mosher, S.; Moulton, D.; Murari, A.; Muraro, A.; Murphy, S.; Asakura, N. N.; Na, Y. 
S.; Nabais, F.; Naish, R.; Nakano, T.; Nardon, E.; Naulin, V.; Nave, M. F. F.; Nedzelski, I.; Nemtsev, G.; Nespoli, F.; Neto, A.; Neu, R.; Neverov, V. S.; Newman, M.; Nicholls, K. J.; Nicolas, T.; Nielsen, A. H.; Nielsen, P.; Nilsson, E.; Nishijima, D.; Noble, C.; Nocente, M.; Nodwell, D.; Nordlund, K.; Nordman, H.; Nouailletas, R.; Nunes, I.; Oberkofler, M.; Odupitan, T.; Ogawa, M. T.; O'Gorman, T.; Okabayashi, M.; Olney, R.; Omolayo, O.; O'Mullane, M.; Ongena, J.; Orsitto, F.; Orszagh, J.; Oswuigwe, B. I.; Otin, R.; Owen, A.; Paccagnella, R.; Pace, N.; Pacella, D.; Packer, L. W.; Page, A.; Pajuste, E.; Palazzo, S.; Pamela, S.; Panja, S.; Papp, P.; Paprok, R.; Parail, V.; Park, M.; Parra Diaz, F.; Parsons, M.; Pasqualotto, R.; Patel, A.; Pathak, S.; Paton, D.; Patten, H.; Pau, A.; Pawelec, E.; Soldan, C. Paz; Peackoc, A.; Pearson, I. J.; Pehkonen, S.-P.; Peluso, E.; Penot, C.; Pereira, A.; Pereira, R.; Pereira Puglia, P. P.; Perez von Thun, C.; Peruzzo, S.; Peschanyi, S.; Peterka, M.; Petersson, P.; Petravich, G.; Petre, A.; Petrella, N.; Petržilka, V.; Peysson, Y.; Pfefferlé, D.; Philipps, V.; Pillon, M.; Pintsuk, G.; Piovesan, P.; Pires dos Reis, A.; Piron, L.; Pironti, A.; Pisano, F.; Pitts, R.; Pizzo, F.; Plyusnin, V.; Pomaro, N.; Pompilian, O. G.; Pool, P. J.; Popovichev, S.; Porfiri, M. T.; Porosnicu, C.; Porton, M.; Possnert, G.; Potzel, S.; Powell, T.; Pozzi, J.; Prajapati, V.; Prakash, R.; Prestopino, G.; Price, D.; Price, M.; Price, R.; Prior, P.; Proudfoot, R.; Pucella, G.; Puglia, P.; Puiatti, M. E.; Pulley, D.; Purahoo, K.; Pütterich, Th.; Rachlew, E.; Rack, M.; Ragona, R.; Rainford, M. S. J.; Rakha, A.; Ramogida, G.; Ranjan, S.; Rapson, C. J.; Rasmussen, J. J.; Rathod, K.; Rattá, G.; Ratynskaia, S.; Ravera, G.; Rayner, C.; Rebai, M.; Reece, D.; Reed, A.; Réfy, D.; Regan, B.; Regaña, J.; Reich, M.; Reid, N.; Reimold, F.; Reinhart, M.; Reinke, M.; Reiser, D.; Rendell, D.; Reux, C.; Reyes Cortes, S. D. A.; Reynolds, S.; Riccardo, V.; Richardson, N.; Riddle, K.; Rigamonti, D.; Rimini, F. G.; Risner, J.; Riva, M.; Roach, C.; Robins, R. J.; Robinson, S. A.; Robinson, T.; Robson, D. W.; Roccella, R.; Rodionov, R.; Rodrigues, P.; Rodriguez, J.; Rohde, V.; Romanelli, F.; Romanelli, M.; Romanelli, S.; Romazanov, J.; Rowe, S.; Rubel, M.; Rubinacci, G.; Rubino, G.; Ruchko, L.; Ruiz, M.; Ruset, C.; Rzadkiewicz, J.; Saarelma, S.; Sabot, R.; Safi, E.; Sagar, P.; Saibene, G.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Salmon, R.; Salzedas, F.; Samaddar, D.; Samm, U.; Sandiford, D.; Santa, P.; Santala, M. I. K.; Santos, B.; Santucci, A.; Sartori, F.; Sartori, R.; Sauter, O.; Scannell, R.; Schlummer, T.; Schmid, K.; Schmidt, V.; Schmuck, S.; Schneider, M.; Schöpf, K.; Schwörer, D.; Scott, S. D.; Sergienko, G.; Sertoli, M.; Shabbir, A.; Sharapov, S. E.; Shaw, A.; Shaw, R.; Sheikh, H.; Shepherd, A.; Shevelev, A.; Shumack, A.; Sias, G.; Sibbald, M.; Sieglin, B.; Silburn, S.; Silva, A.; Silva, C.; Simmons, P. A.; Simpson, J.; Simpson-Hutchinson, J.; Sinha, A.; Sipilä, S. K.; Sips, A. C. C.; Sirén, P.; Sirinelli, A.; Sjöstrand, H.; Skiba, M.; Skilton, R.; Slabkowska, K.; Slade, B.; Smith, N.; Smith, P. G.; Smith, R.; Smith, T. J.; Smithies, M.; Snoj, L.; Soare, S.; Solano, E. R.; Somers, A.; Sommariva, C.; Sonato, P.; Sopplesa, A.; Sousa, J.; Sozzi, C.; Spagnolo, S.; Spelzini, T.; Spineanu, F.; Stables, G.; Stamatelatos, I.; Stamp, M. F.; Staniec, P.; Stankūnas, G.; Stan-Sion, C.; Stead, M. J.; Stefanikova, E.; Stepanov, I.; Stephen, A. V.; Stephen, M.; Stevens, A.; Stevens, B. 
D.; Strachan, J.; Strand, P.; Strauss, H. R.; Ström, P.; Stubbs, G.; Studholme, W.; Subba, F.; Summers, H. P.; Svensson, J.; Świderski, Ł.; Szabolics, T.; Szawlowski, M.; Szepesi, G.; Suzuki, T. T.; Tál, B.; Tala, T.; Talbot, A. R.; Talebzadeh, S.; Taliercio, C.; Tamain, P.; Tame, C.; Tang, W.; Tardocchi, M.; Taroni, L.; Taylor, D.; Taylor, K. A.; Tegnered, D.; Telesca, G.; Teplova, N.; Terranova, D.; Testa, D.; Tholerus, E.; Thomas, J.; Thomas, J. D.; Thomas, P.; Thompson, A.; Thompson, C.-A.; Thompson, V. K.; Thorne, L.; Thornton, A.; Thrysøe, A. S.; Tigwell, P. A.; Tipton, N.; Tiseanu, I.; Tojo, H.; Tokitani, M.; Tolias, P.; Tomeš, M.; Tonner, P.; Towndrow, M.; Trimble, P.; Tripsky, M.; Tsalas, M.; Tsavalas, P.; Tskhakaya jun, D.; Turner, I.; Turner, M. M.; Turnyanskiy, M.; Tvalashvili, G.; Tyrrell, S. G. J.; Uccello, A.; Ul-Abidin, Z.; Uljanovs, J.; Ulyatt, D.; Urano, H.; Uytdenhouwen, I.; Vadgama, A. P.; Valcarcel, D.; Valentinuzzi, M.; Valisa, M.; Vallejos Olivares, P.; Valovic, M.; Van De Mortel, M.; Van Eester, D.; Van Renterghem, W.; van Rooij, G. J.; Varje, J.; Varoutis, S.; Vartanian, S.; Vasava, K.; Vasilopoulou, T.; Vega, J.; Verdoolaege, G.; Verhoeven, R.; Verona, C.; Verona Rinati, G.; Veshchev, E.; Vianello, N.; Vicente, J.; Viezzer, E.; Villari, S.; Villone, F.; Vincenzi, P.; Vinyar, I.; Viola, B.; Vitins, A.; Vizvary, Z.; Vlad, M.; Voitsekhovitch, I.; Vondráček, P.; Vora, N.; Vu, T.; Pires de Sa, W. W.; Wakeling, B.; Waldon, C. W. F.; Walkden, N.; Walker, M.; Walker, R.; Walsh, M.; Wang, E.; Wang, N.; Warder, S.; Warren, R. J.; Waterhouse, J.; Watkins, N. W.; Watts, C.; Wauters, T.; Weckmann, A.; Weiland, J.; Weisen, H.; Weiszflog, M.; Wellstood, C.; West, A. T.; Wheatley, M. R.; Whetham, S.; Whitehead, A. M.; Whitehead, B. D.; Widdowson, A. M.; Wiesen, S.; Wilkinson, J.; Williams, J.; Williams, M.; Wilson, A. R.; Wilson, D. J.; Wilson, H. R.; Wilson, J.; Wischmeier, M.; Withenshaw, G.; Withycombe, A.; Witts, D. M.; Wood, D.; Wood, R.; Woodley, C.; Wray, S.; Wright, J.; Wright, J. C.; Wu, J.; Wukitch, S.; Wynn, A.; Xu, T.; Yadikin, D.; Yanling, W.; Yao, L.; Yavorskij, V.; Yoo, M. G.; Young, C.; Young, D.; Young, I. D.; Young, R.; Zacks, J.; Zagorski, R.; Zaitsev, F. S.; Zanino, R.; Zarins, A.; Zastrow, K. D.; Zerbini, M.; Zhang, W.; Zhou, Y.; Zilli, E.; Zoita, V.; Zoletnik, S.; Zychor, I.; JET Contributors
2017-10-01
The 2014-2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with the ITER first-wall materials has successfully taken place since their installation in 2011. A new multi-machine scaling of the type I ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L-H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the influence of the first-wall material on the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at β_N ~ 1.8 and n/n_GW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high-performance experiments. Prospects for the coming D-T campaign and the 14 MeV neutron calibration strategy are reviewed.
NASA Astrophysics Data System (ADS)
Li, Zhifu; Hu, Yueming; Li, Di
2016-08-01
For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, which comprises an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller is used to improve performance along the iteration direction and the feedback controllers are used to improve performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Second, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given, which contain a linear matrix inequality (LMI). Moreover, the LMI can be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
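The split between learning along the iteration axis and feedback along the time axis can be made concrete with a minimal sketch; the scalar plant and the hand-picked gains below are hypothetical, standing in for the LMI-designed gain matrices of the paper.

```python
# A minimal feedback feed-forward ILC sketch on a hypothetical scalar plant.
import numpy as np

T, n_trials = 50, 30
a, b = 0.9, 0.5                       # plant: x[t+1] = a*x[t] + b*u[t]
y_ref = np.sin(np.linspace(0.0, np.pi, T))   # desired output after each input
L_ff, K_fb = 0.8, 0.4                 # hand-picked learning and feedback gains

u_ff = np.zeros(T)                    # feed-forward input, refined across trials
for trial in range(n_trials):
    x = 0.0
    e = np.zeros(T)
    for t in range(T):
        u = u_ff[t] + K_fb * (y_ref[t] - x)  # feedback acts along the time axis
        x = a * x + b * u                    # plant update
        e[t] = y_ref[t] - x                  # error after applying u[t]
    u_ff += L_ff * e                  # learning acts along the iteration axis

print(np.max(np.abs(e)))              # final-iteration tracking error
```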
PREFACE: Progress in the ITER Physics Basis
NASA Astrophysics Data System (ADS)
Ikeda, K.
2007-06-01
I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. Their support and encouragement for the preparation of the PIPB were fundamental to its completion. I am pleased to witness the extensive collaborations, the excellent working relationships and the free exchange of views that have been developed among scientists working on magnetic fusion, and I would particularly like to acknowledge the importance which they assign to ITER in their research. This close collaboration and the spirit of free discussion will be essential to the success of ITER. Finally, the PIPB identifies issues which remain in the projection of burning plasma performance to the ITER scale and in the control of burning plasmas. Continued R&D is therefore called for to reduce the uncertainties associated with these issues and to ensure the efficient operation and exploitation of ITER. It is important that the international fusion community maintains a high level of collaboration in the future to address these issues and to prepare the physics basis for ITER operation. ITPA Coordination Committee R. Stambaugh (Chair of ITPA CC, General Atomics, USA) D.J. Campbell (Previous Chair of ITPA CC, European Fusion Development Agreement—Close Support Unit, ITER Organization) M. Shimada (Co-Chair of ITPA CC, ITER Organization) R. Aymar (ITER International Team, CERN) V. Chuyanov (ITER Organization) J.H. Han (Korea Basic Science Institute, Korea) Y. Huo (Zhengzhou University, China) Y.S. Hwang (Seoul National University, Korea) N. Ivanov (Kurchatov Institute, Russia) Y. Kamada (Japan Atomic Energy Agency, Naka, Japan) P.K. Kaw (Institute for Plasma Research, India) S.
Konovalov (Kurchatov Institute, Russia) M. Kwon (National Fusion Research Center, Korea) J. Li (Academy of Science, Institute of Plasma Physics, China) S. Mirnov (TRINITI, Russia) Y. Nakamura (National Institute for Fusion Studies, Japan) H. Ninomiya (Japan Atomic Energy Agency, Naka, Japan) E. Oktay (Department of Energy, USA) J. Pamela (European Fusion Development Agreement—Close Support Unit) C. Pan (Southwestern Institute of Physics, China) F. Romanelli (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, Italy and European Fusion Development Agreement—Close Support Unit) N. Sauthoff (Princeton Plasma Physics Laboratory, USA and Oak Ridge National Laboratories, USA) Y. Saxena (Institute for Plasma Research, India) Y. Shimomura (ITER Organization) R. Singh (Institute for Plasma Research, India) S. Takamura (Nagoya University, Japan) K. Toi (National Institute for Fusion Studies, Japan) M. Wakatani (Kyoto University, Japan (deceased)) H. Zohm (Max-Planck-Institut für Plasmaphysik, Garching, Germany)
Progress in Development of the ITER Plasma Control System Simulation Platform
NASA Astrophysics Data System (ADS)
Walker, Michael; Humphreys, David; Sammuli, Brian; Ambrosino, Giuseppe; de Tommasi, Gianmaria; Mattei, Massimiliano; Raupp, Gerhard; Treutterer, Wolfgang; Winter, Axel
2017-10-01
We report on progress made and expected uses of the Plasma Control System Simulation Platform (PCSSP), the primary test environment for development of the ITER Plasma Control System (PCS). PCSSP will be used for verification and validation of the ITER PCS Final Design for First Plasma, to be completed in 2020. We discuss the objectives of PCSSP, its overall structure, selected features, application to existing devices, and expected evolution over the lifetime of the ITER PCS. We describe an archiving solution for simulation results, methods for incorporating physics models of the plasma and physical plant (tokamak, actuator, and diagnostic systems) into PCSSP, and defining characteristics of models suitable for a plasma control development environment such as PCSSP. Applications of PCSSP simulation models including resistive plasma equilibrium evolution are demonstrated. PCSSP development supported by ITER Organization under ITER/CTS/6000000037. Resistive evolution code developed under General Atomics' Internal funding. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
NASA Astrophysics Data System (ADS)
Greenfield, Charles M.
2017-10-01
The US Burning Plasma Organization is pleased to welcome Dr. Bernard Bigot, who will give an update on progress in the ITER Project. Dr. Bigot took over as Director General of the ITER Organization in early 2015, following a distinguished career that included serving as Chairman and CEO of the French Alternative Energies and Atomic Energy Commission and as High Commissioner for ITER in France. During his tenure at ITER the project has moved into high gear, with rapid progress evident on the construction site and preparation of a staged schedule and research plan leading from where we are today all the way to full DT operation. In an unprecedented international effort, seven partners (China, the European Union, India, Japan, Korea, Russia and the United States) have pooled their financial and scientific resources to build the biggest fusion reactor in history. ITER will open the way to the next step: a demonstration fusion power plant. All DPP attendees are welcome to attend this ITER town meeting.
Yu, Catherine H; Stacey, Dawn; Sale, Joanna; Hall, Susan; Kaplan, David M; Ivers, Noah; Rezmovitz, Jeremy; Leung, Fok-Han; Shah, Baiju R; Straus, Sharon E
2014-01-22
Care of patients with diabetes often occurs in the context of other chronic illness. Competing disease priorities and competing patient-physician priorities present challenges in the provision of care for the complex patient. Guideline implementation interventions to date do not acknowledge these intricacies of clinical practice. As a result, patients and providers are left overwhelmed and paralyzed by the sheer volume of recommendations and tasks. An individualized approach to the patient with diabetes and multiple comorbid conditions using shared decision-making (SDM) and goal setting has been advocated as a patient-centred approach that may facilitate prioritization of treatment options. Furthermore, incorporating interprofessional integration into practice may overcome barriers to implementation. However, these strategies have not been taken up extensively in clinical practice. To systematically develop and test an interprofessional SDM and goal-setting toolkit for patients with diabetes and other chronic diseases, following the Knowledge to Action framework. 1. Feasibility study: Individual interviews with primary care physicians, nurses, dietitians, pharmacists, and patients with diabetes will be conducted, exploring their experiences with shared decision-making and priority-setting, including facilitators and barriers, the relevance of a decision aid and toolkit for priority-setting, and how best to integrate it into practice.2. Toolkit development: Based on this data, an evidence-based multi-component SDM toolkit will be developed. The toolkit will be reviewed by content experts (primary care, endocrinology, geriatricians, nurses, dietitians, pharmacists, patients) for accuracy and comprehensiveness.3. Heuristic evaluation: A human factors engineer will review the toolkit and identify, list and categorize usability issues by severity.4. Usability testing: This will be done using cognitive task analysis.5. Iterative refinement: Throughout the development process, the toolkit will be refined through several iterative cycles of feedback and redesign. Interprofessional shared decision-making regarding priority-setting with the use of a decision aid toolkit may help prioritize care of individuals with multiple comorbid conditions. Adhering to principles of user-centered design, we will develop and refine a toolkit to assess the feasibility of this approach.
NASA Astrophysics Data System (ADS)
Ongena, Jef; Mailloux, Joelle; Mayoral, Marie-Line
2009-04-01
This special cluster of papers summarizes the work accomplished during the last three years in the framework of the Task Force Heating at JET, whose mission it is to study the optimisation of heating systems for plasma heating and current drive, launching and deposition questions and the physics of plasma rotation. Good progress and new physics insights have been obtained with the three heating systems available at JET: lower hybrid (LH), ion cyclotron resonance heating (ICRH) and neutral beam injection (NBI). Topics covered in the present issue are the use of edge gas puffing to improve the coupling of LH waves at large distances between the plasma separatrix and the LH launcher. Closely linked with this topic are detailed studies of the changes in LH coupling due to modifications in the scrape-off layer during gas puffing and simultaneous application of ICRH. We revisit the fundamental ICRH heating of D plasmas, include new physics results made possible by recently installed new diagnostic capabilities on JET and point out caveats for ITER when NBI is simultaneously applied. Other topics are the study of the anomalous behaviour of fast ions from NBI, and a study of toroidal rotation induced by ICRH, both again with possible implications for ITER. In finalizing this cluster of articles, thanks are due to all colleagues involved in preparing and executing the JET programme under EFDA in recent years. We want to thank the EFDA leadership for the special privilege of appointing us as Leaders or Deputies of Task Force Heating, a wonderful and hardworking group of colleagues. Thanks also to all other European and non-European scientists who contributed to the JET scientific programme, the Operations team of JET and the colleagues of the Close Support Unit (CSU). Thanks are also due to the Editors, Editorial Board and referees of Plasma Physics and Controlled Fusion together with the publishing staff of IOP Publishing who have supported and contributed substantially to this initiative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodge, C.T.; Rong, J.; Dodge, C.W.
2014-06-15
Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model-based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120 kVp, 0.8 s, 40 mm beam width, large SFOV, 2.5 mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% and 70% ASiR; and MBIR. A task-based MTF (MTF_task) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTF_task was used to compare the performance of the reconstruction algorithms with decreasing CTDIvol from 24 mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTF_task at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTF_task improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTF_task were lower than for FBP (0.18). For medium- and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1-6 mGy) levels, whereas MBIR is comparable at higher dose levels (12-24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%-12.5% of those currently used in the clinic. However, for imaging low- and medium-contrast targets, FBP performed the best across all dose levels. For assessing MTF from different reconstruction algorithms, task-based MTF measurements are necessary.
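For readers unfamiliar with the MTF values quoted above, a minimal sketch of the generic edge-based MTF computation follows; the synthetic edge profile is a hypothetical stand-in for profiles measured across the Catphan targets, and the paper's exact task-based estimator may differ in detail.

```python
# A minimal sketch: differentiate an edge spread function (ESF) into a line
# spread function (LSF), then Fourier-transform to obtain the MTF.
import numpy as np

dx = 0.05                                # pixel pitch in cm (20 pixels per cm)
x = np.arange(-2.0, 2.0, dx)
esf = 1.0 / (1.0 + np.exp(-x / 0.08))    # synthetic blurred edge profile

lsf = np.gradient(esf, dx)               # LSF is the derivative of the ESF
mtf = np.abs(np.fft.rfft(lsf))           # |FT(LSF)| gives the MTF
mtf /= mtf[0]                            # normalise to unity at zero frequency
freqs = np.fft.rfftfreq(lsf.size, d=dx)  # spatial frequency axis in lp/cm

idx = np.argmin(np.abs(freqs - 5.0))     # read off the MTF near 5 lp/cm
print(freqs[idx], mtf[idx])
```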
NASA Astrophysics Data System (ADS)
Ding, Huanjun; Gao, Hao; Zhao, Bo; Cho, Hyo-Min; Molloi, Sabee
2014-10-01
Both computer simulations and experimental phantom studies were carried out to investigate the radiation dose reduction achievable with tensor-framelet-based iterative image reconstruction (TFIR) for a dedicated high-resolution spectral breast computed tomography (CT) system based on a silicon strip photon-counting detector. The simulation was performed with a 10 cm-diameter water phantom including three contrast materials (polyethylene, 8 mg ml-1 iodine and B-100 bone-equivalent plastic). In the experimental study, the data were acquired with a 1.3 cm-diameter polymethylmethacrylate (PMMA) phantom containing iodine in three concentrations (8, 16 and 32 mg ml-1) at various radiation doses (1.2, 2.4 and 3.6 mGy), and CT images were then reconstructed using the filtered-back-projection (FBP) and TFIR techniques, respectively. Image quality was compared between the two techniques by quantitative analysis of the contrast-to-noise ratio (CNR) and of the spatial resolution, the latter evaluated using the task-based modulation transfer function (MTF). Both the simulation and experimental results indicated that the task-based MTF obtained from TFIR reconstruction with one-third of the radiation dose was comparable to that from the FBP reconstruction for a low-contrast target. For a high-contrast target, TFIR was substantially superior to FBP reconstruction in terms of spatial resolution. In addition, TFIR was able to achieve a factor of 1.6-1.8 increase in CNR, depending on the target contrast level. This study demonstrates that TFIR can reduce the required radiation dose by two-thirds relative to the FBP technique, achieving much better CNR and spatial resolution for a high-contrast target while retaining similar spatial resolution for a low-contrast target. The TFIR technique has been implemented on a graphics processing unit system and takes approximately 10 s to reconstruct a single-slice CT image; it can potentially be used in a future multi-slit multi-slice spiral CT system.
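The CNR figure of merit used in this comparison is simple enough to state in code; the ROI samples below are hypothetical stand-ins for pixel values drawn from an iodine target and the PMMA background.

```python
# A minimal CNR sketch: |mean difference| over background noise.
import numpy as np

rng = np.random.default_rng(2)
target_roi = rng.normal(120.0, 8.0, size=500)      # pixel values in the target
background_roi = rng.normal(40.0, 8.0, size=500)   # pixel values in background

cnr = abs(target_roi.mean() - background_roi.mean()) / background_roi.std()
print(cnr)
```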
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time-marching methods have several advantages over classical multi-step explicit time-marching schemes and non-iterative implicit time-marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers to solve steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step.
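To make concrete the role GMRES plays inside an implicit time-marching loop, here is a minimal sketch using SciPy's gmres on a backward-Euler step for a 1D diffusion operator; the operator is a hypothetical stand-in for a linearised Navier-Stokes Jacobian, not the authors' solver.

```python
# A minimal sketch: GMRES as the inner linear solver of an implicit time march.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres

n, dt, nu = 200, 1e-3, 1.0
dx = 1.0 / (n + 1)
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2  # 1D Laplacian
A = (identity(n) - dt * nu * L).tocsr()   # backward-Euler system matrix

u = np.exp(-100.0 * (np.linspace(0.0, 1.0, n) - 0.5) ** 2)  # initial profile
for step in range(10):                    # implicit time marching
    u, info = gmres(A, u)                 # inner GMRES solve at each time step
    assert info == 0                      # info == 0 signals convergence
print(u.max())                            # peak decays under diffusion
```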
No-go theorem for iterations of unknown quantum gates
NASA Astrophysics Data System (ADS)
Soleimanifar, Mehdi; Karimipour, Vahid
2016-01-01
We propose a no-go theorem by proving the impossibility of constructing a deterministic quantum circuit that iterates a unitary oracle by calling it only once. Different schemes are provided to bypass this result and to realize the iteration approximately. The optimal scheme is also studied. An interesting observation is that for a large number of iterations, a trivial strategy such as using the identity channel has the optimal performance, and preprocessing, postprocessing, or using resources like entanglement does not help at all. Intriguingly, once the number of iterations is large enough, it does not affect the performance of the proposed schemes.
Comparing direct and iterative equation solvers in a large structural analysis software system
NASA Technical Reports Server (NTRS)
Poole, E. L.
1991-01-01
Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
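Of the iterative solvers described, the Jacobi-preconditioned conjugate gradient method is the simplest to sketch; the tridiagonal SPD matrix below is a hypothetical stand-in for a structural stiffness matrix, and SciPy routines replace the system's own Choleski and PCG implementations.

```python
# A minimal sketch comparing a direct sparse solve with Jacobi-preconditioned CG.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, spsolve

n = 500
K = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # SPD stand-in
f = np.ones(n)

x_direct = spsolve(K, f)              # direct sparse solve (Choleski-like role)

M = diags(1.0 / K.diagonal())         # Jacobi preconditioner: inverse of diag(K)
x_pcg, info = cg(K, f, M=M)           # preconditioned conjugate gradient
print(info, np.max(np.abs(x_pcg - x_direct)))   # info == 0 means converged
```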
Upwind relaxation methods for the Navier-Stokes equations using inner iterations
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.
1992-01-01
A subsonic and a supersonic problem are treated by an upwind line-relaxation algorithm for the Navier-Stokes equations that uses inner iterations to accelerate convergence to the steady-state solution and thereby minimize CPU time. The inner iterative procedure is shown to mimic the quadratic convergence of the direct solver method in both test problems, although some of the nonquadratic inner iterative results proved more efficient than the quadratic ones. In the more successful, supersonic test case, inner iteration required only about 65 percent of the CPU time entailed by the line-relaxation method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sauthoff, Ned; Reiersen, Wayne; Berry, Jan
2013-09-12
US ITER Project Manager Ned Sauthoff, joined by Wayne Reiersen, Team Leader Magnet Systems, and Jan Berry, Team Leader Tokamak Cooling System, discuss the U.S.'s role in the ITER international collaboration.