Sample records for complex iterative process

  1. Not so Complex: Iteration in the Complex Plane

    ERIC Educational Resources Information Center

    O'Dell, Robin S.

    2014-01-01

    The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
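
    As a minimal illustration of the kind of linear iteration rule the article describes (the constant a, starting point, and step count below are hypothetical choices, not taken from the tasks themselves), the sketch iterates z -> a*z in the complex plane, where multiplying by a rotates and scales each point:

```python
# Minimal sketch: linear iteration in the complex plane, z_{n+1} = a * z_n.
# Multiplying by a = r * exp(i*theta) scales by r and rotates by theta,
# so repeated multiplication traces a spiral-like design when plotted.
import cmath

def iterate_linear(a, z0, n_steps):
    """Return the orbit of z0 under z -> a*z for n_steps iterations."""
    orbit = [z0]
    z = z0
    for _ in range(n_steps):
        z = a * z
        orbit.append(z)
    return orbit

# Example rule: rotate by 30 degrees and shrink slightly at each step.
a = 0.95 * cmath.exp(1j * cmath.pi / 6)
points = iterate_linear(a, z0=1 + 0j, n_steps=60)
for z in points[:5]:
    print(f"{z.real:+.3f} {z.imag:+.3f}")
```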

  2. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
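
    The "process-centric" idea of replacing open-ended iterative optimization with a fixed-depth pursuit network can be illustrated by unrolling plain ISTA for a fixed number of layers. The sketch below is a classical, non-learned stand-in: the matrices W and S are simply derived from a random dictionary rather than trained as in the paper, and all sizes and parameters are arbitrary assumptions.

```python
# Sketch of the "unrolled" view: a fixed number of ISTA-like proximal steps
# replaces open-ended iterative optimization. In the learned variant these
# matrices are trained; here they are derived from a dictionary D (plain ISTA).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(D, x, n_layers=10, lam=0.1):
    """Approximate sparse code of x in dictionary D with n_layers fixed steps."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    W = D.T / L                            # input filter (learnable in the paper's setting)
    S = np.eye(D.shape[1]) - D.T @ D / L   # recurrent filter (learnable likewise)
    z = soft_threshold(W @ x, lam / L)
    for _ in range(n_layers - 1):
        z = soft_threshold(S @ z + W @ x, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
z_true = np.zeros(60); z_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x = D @ z_true
z = unrolled_ista(D, x, n_layers=50)
print(np.argsort(-np.abs(z))[:5])          # largest coefficients; compare with support {3, 17, 42}
```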

  3. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
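
    A toy sketch of the underlying idea (not DeMAID itself): a genetic algorithm searches over process orderings, scoring each ordering by the number of feedback couplings it leaves, i.e. dependencies that point backwards in the sequence. The coupling list and GA settings below are made up for illustration.

```python
# Toy genetic algorithm over process orderings. Fitness = number of feedback
# couplings, i.e. dependencies whose source is scheduled after its target.
import random

def feedbacks(order, couplings):
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for src, dst in couplings if pos[src] > pos[dst])

def crossover(a, b):
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [p for p in b if p not in head]   # order-preserving crossover

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

def ga_order(n_proc, couplings, pop_size=40, generations=200):
    pop = [random.sample(range(n_proc), n_proc) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: feedbacks(o, couplings))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*random.sample(survivors, 2))
            if random.random() < 0.3:
                mutate(child)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: feedbacks(o, couplings))

couplings = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 3), (2, 3)]  # (source, target)
best = ga_order(5, couplings)
print(best, feedbacks(best, couplings))
```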

  4. The Primary Physical Education Curriculum Process: More Complex Than You Might Think!!

    ERIC Educational Resources Information Center

    Jess, Mike; Carse, Nicola; Keay, Jeanne

    2016-01-01

    In this paper, we present the curriculum development process as a complex, iterative and integrated phenomenon. Building on the early work of Stenhouse [1975, "An Introduction to Curriculum Research and Development". London: Heinemann Educational], we position the teacher at the heart of this process and extend his ideas by exploring how…

  5. Imaging complex objects using learning tomography

    NASA Astrophysics Data System (ADS)

    Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri

    2018-02-01

    Optical diffraction tomography (ODT) can be described using the scattering process through an inhomogeneous medium. An inherent nonlinearity exists relating the scattering medium and the scattered field due to multiple scattering. Multiple scattering is often assumed to be negligible in weakly scattering media. This assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions. This issue becomes very critical when we image a complex sample. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT combined with an iterative reconstruction scheme. The iterative error reduction scheme and the multi-layer structure of BPM are similar to neural networks. Therefore we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data of a biological cell.

  6. Iterated reaction graphs: simulating complex Maillard reaction pathways.

    PubMed

    Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W

    2001-01-01

    This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.

  7. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
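
    Value iteration, one of the dynamic-programming algorithms named above, can be sketched as follows on a small synthetic MDP; the transition and reward arrays are random placeholders, not the Web-service composition model of the paper.

```python
# Toy sketch of value iteration on a small synthetic MDP.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a, s, s'] = transition probabilities, R[a, s] = expected reward."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V            # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)   # state values and greedy policy
        V = V_new

rng = np.random.default_rng(1)
P = rng.random((3, 5, 5)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((3, 5))
V, policy = value_iteration(P, R)
print("policy:", policy)
```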

  8. DART system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.

    2005-08-01

    The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the ''once-through'' time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the ''once-through'' time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase ''inner loop'' iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.

  9. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on this threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
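
    The described loop can be sketched directly: compute Otsu's threshold, use the two class means to mark definite foreground and background, and re-run Otsu on the remaining to-be-determined (TBD) pixels until the threshold stabilizes. The plain histogram-based Otsu implementation and the synthetic test data below are illustrative assumptions, not the paper's code.

```python
# Sketch of the iterative triclass thresholding idea.
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Plain histogram-based Otsu threshold (maximizes between-class variance)."""
    hist, edges = np.histogram(values, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.cumsum()
    cum_mean = (hist * centers).cumsum()
    w0, w1 = w[:-1], w[-1] - w[:-1]
    mask = (w0 > 0) & (w1 > 0)
    mu0 = cum_mean[:-1][mask] / w0[mask]
    mu1 = (cum_mean[-1] - cum_mean[:-1][mask]) / w1[mask]
    between = w0[mask] * w1[mask] * (mu0 - mu1) ** 2
    return centers[:-1][mask][np.argmax(between)]

def iterative_triclass(image, eps=1e-3, max_iter=50):
    tbd = image.ravel().astype(float)      # region still to be determined
    t_prev = None
    for _ in range(max_iter):
        t = otsu_threshold(tbd)
        mu_low = tbd[tbd <= t].mean()      # mean of the lower class
        mu_high = tbd[tbd > t].mean()      # mean of the upper class
        # Pixels <= mu_low are definite background, >= mu_high definite foreground;
        # a full implementation would accumulate them for the final mask.
        tbd = tbd[(tbd > mu_low) & (tbd < mu_high)]
        if (t_prev is not None and abs(t - t_prev) < eps) or tbd.size < 2:
            return t
        t_prev = t
    return t

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 4000), rng.normal(160, 25, 2000)])
print(round(iterative_triclass(img), 1))
```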

  10. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247

  11. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

    Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node’s spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters, including a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines a priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is appropriate for any type of network, and includes some traditional centralities as special cases, such as degree, semi-local, and LeaderRank. The Ing process converges in strongly connected networks with a speed that depends on the two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
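
    A stripped-down sketch of the gathering scheme follows, with the adjacency matrix as the transformation matrix and node degrees as the a priori information; both are just one possible choice among those the paper studies. For a large number of iterations this repeated gathering tends toward eigenvector centrality, matching the limit case mentioned above.

```python
# Sketch of iterative neighbour-information gathering: scores are repeatedly
# updated by collecting neighbours' current scores through a transformation
# matrix (here, simply the adjacency matrix).
import numpy as np

def ing_scores(adjacency, prior=None, n_iter=3):
    A = np.asarray(adjacency, dtype=float)
    s = A.sum(axis=1) if prior is None else np.asarray(prior, dtype=float)
    for _ in range(n_iter):
        s = A @ s                 # each node gathers its neighbours' scores
        s /= np.linalg.norm(s)    # normalize to keep the iteration bounded
    return s

# Small undirected example graph.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0]])
scores = ing_scores(A)
print(np.argsort(-scores))        # node indices ranked by estimated influence
```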

  12. Reducing Design Cycle Time and Cost Through Process Resequencing

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.

  13. Single-agent parallel window search

    NASA Technical Reports Server (NTRS)

    Powley, Curt; Korf, Richard E.

    1991-01-01

    Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A(asterisk) (IDA-asterisk) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA-asterisk by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
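
    The serial IDA* loop that parallel window search distributes across processes can be sketched as follows; each pass of the outer loop corresponds to one cost-threshold "window". The toy graph and heuristic values are invented for illustration.

```python
# Sketch of the serial IDA* threshold-deepening loop: a depth-first search
# bounded by a threshold on f = g + h, with the threshold raised to the
# smallest f that exceeded it. In parallel window search, different processes
# run different thresholds simultaneously.
import math

graph = {            # node -> list of (neighbour, edge cost)
    "A": [("B", 2), ("C", 3)],
    "B": [("D", 4)],
    "C": [("D", 1)],
    "D": [("G", 5)],
    "G": [],
}
h = {"A": 6, "B": 5, "C": 4, "D": 3, "G": 0}   # admissible heuristic (made up)

def bounded_dfs(node, g, threshold, path):
    f = g + h[node]
    if f > threshold:
        return None, f                 # report the smallest f that exceeded the bound
    if node == "G":
        return path, f
    next_threshold = math.inf
    for nbr, cost in graph[node]:
        if nbr in path:
            continue
        found, t = bounded_dfs(nbr, g + cost, threshold, path + [nbr])
        if found:
            return found, t
        next_threshold = min(next_threshold, t)
    return None, next_threshold

def ida_star(start="A"):
    threshold = h[start]
    while True:                        # successive iterations with larger thresholds
        found, threshold = bounded_dfs(start, 0, threshold, [start])
        if found:
            return found

print(ida_star())                      # optimal path A-C-D-G for this toy graph
```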

  14. Models of expert assessments and their study in problems of choice and decision-making in management of motor transport processes

    NASA Astrophysics Data System (ADS)

    Belokurov, V. P.; Belokurov, S. V.; Korablev, R. A.; Shtepa, A. A.

    2018-05-01

    The article deals with decision making for transport tasks based on search iterations in the management of motor transport processes. Selection of the best option for specific situations is suggested for the management of complex multi-criteria transport processes.

  15. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain.

    PubMed

    Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh

    2017-04-01

    The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somehow independent from melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  16. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA has been proposed to mitigate these problems. The proposed method extracts more stable source signals with a valid order through an iterative reordering process of the extracted mixing matrix to reconstruct the finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem involving a complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.

  17. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

    2017-05-01

    In the classical photogrammetric processing pipeline, the automatic tie point extraction plays a key role in the quality of achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of calculated orientation parameters. Therefore, both relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is under the influence of several factors such as the multiplicity, the measurement precision and the distribution in 2D images as well as in 3D scenes. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited while only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of a first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated with multiple case studies, the proposed method shows its validity and its high potential for precision improvement.

  18. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results are also shown of applying the technique to the NASA Space Shuttle ground processing problem. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
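
    A generic sketch of iterative repair with simulated annealing on a toy slot-assignment problem follows; it is an illustration of the technique, not the Space Shuttle ground processing system, and all problem sizes and annealing parameters are invented.

```python
# Toy iterative repair with simulated annealing: start from a complete but
# conflicting schedule, repeatedly move a task to repair it, and occasionally
# accept a worse schedule so the search can escape local minima.
import math, random

def conflicts(schedule, resource_capacity=3):
    """Count slot overloads: tasks beyond the resource capacity in any slot."""
    load = {}
    for slot in schedule.values():
        load[slot] = load.get(slot, 0) + 1
    return sum(max(0, n - resource_capacity) for n in load.values())

def anneal(n_tasks=20, n_slots=8, t0=5.0, cooling=0.995, steps=5000):
    schedule = {task: random.randrange(n_slots) for task in range(n_tasks)}
    cost, temp = conflicts(schedule), t0
    for _ in range(steps):
        task = random.randrange(n_tasks)             # pick a task to repair
        old_slot = schedule[task]
        schedule[task] = random.randrange(n_slots)   # propose a new slot
        new_cost = conflicts(schedule)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                          # accept (possibly worse) repair
        else:
            schedule[task] = old_slot                # reject, undo the move
        temp *= cooling
    return schedule, cost

schedule, remaining = anneal()
print("remaining conflicts:", remaining)
```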

  19. Automated IMRT planning with regional optimization using planning scripts

    PubMed Central

    Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.

    2013-01-01

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting rings, adjust the beam geometry, select inverse planning objectives and associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally manually adjusted with a repeated trial and error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization has been implemented in an iterative way to reduce various hot or cold spots during the optimization process that begins with the definition and automatic segmentation of hot and cold spots, introduces new objectives and their relative weights into the inverse planning, and turns this into an iterative process with termination criteria. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393

  20. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    PubMed

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  1. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry

    NASA Astrophysics Data System (ADS)

    Bedggood, Phillip; Metha, Andrew

    2010-11-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  2. Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng

    2017-05-01

    Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter when a laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and ray tracing, based on Fermat's principle, have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is also cut into many slices, just like the existing approaches, but instead of paraxial approximation and split-step Fourier transform, a large quantity of sampled real rays are traced step by step through the system with changing refractive index and laser intensity by iteration. In this process a smooth treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration so as to solve the problem of unknown parameters in the material caused by the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications with lower time complexity, and has the calculation capacity for numerical simulation of the self-focusing process in systems including both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and light paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.

  3. DEM Calibration Approach: design of experiment

    NASA Astrophysics Data System (ADS)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of DEM model calibration is considered in this article. It is proposed to divide the model input parameters into those that require iterative calibration and those that are recommended to be measured directly. A new method for model calibration based on the design of the experiment for iteratively calibrated parameters is proposed. The experiment is conducted using a specially designed stand. The results are processed with machine vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.

  4. Optical Computing Based on Neuronal Models

    DTIC Science & Technology

    1988-05-01

    walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled...collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is...with architectures, implementations, and programming; and material research is called for. Our future research in neurodynamics will continue to

  5. Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.

    PubMed

    Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh

    2017-07-03

    Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predict other unexplored connections.

  6. Evaluating the iterative development of VR/AR human factors tools for manual work.

    PubMed

    Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna

    2012-01-01

    This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work applications - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project that is working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements reached throughout the design cycles, observable through the trending of the quantitative results from three successive trials of the applications and the investigation of the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence of the effectiveness of the use of the particular set of complementary evaluation methods incorporating a common inquiry structure used for the evaluation - particularly in facilitating triangulation of the data.

  7. Iterative reactions of transient boronic acids enable sequential C-C bond formation

    NASA Astrophysics Data System (ADS)

    Battilocchio, Claudio; Feist, Florian; Hafner, Andreas; Simon, Meike; Tran, Duc N.; Allwood, Daniel M.; Blakemore, David C.; Ley, Steven V.

    2016-04-01

    The ability to form multiple carbon-carbon bonds in a controlled sequence and thus rapidly build molecular complexity in an iterative fashion is an important goal in modern chemical synthesis. In recent times, transition-metal-catalysed coupling reactions have dominated in the development of C-C bond forming processes. A desire to reduce the reliance on precious metals and a need to obtain products with very low levels of metal impurities has brought a renewed focus on metal-free coupling processes. Here, we report the in situ preparation of reactive allylic and benzylic boronic acids, obtained by reacting flow-generated diazo compounds with boronic acids, and their application in controlled iterative C-C bond forming reactions is described. Thus far we have shown the formation of up to three C-C bonds in a sequence including the final trapping of a reactive boronic acid species with an aldehyde to generate a range of new chemical structures.

  8. Fractals in the Classroom

    ERIC Educational Resources Information Center

    Fraboni, Michael; Moller, Trisha

    2008-01-01

    Fractal geometry offers teachers great flexibility: It can be adapted to the level of the audience or to time constraints. Although easily explained, fractal geometry leads to rich and interesting mathematical complexities. In this article, the authors describe fractal geometry, explain the process of iteration, and provide a sample exercise.…

  9. A framework to observe and evaluate the sustainability of human-natural systems in a complex dynamic context.

    PubMed

    Satanarachchi, Niranji; Mino, Takashi

    2014-01-01

    This paper aims to explore the prominent implications of the process of observing complex dynamics linked to sustainability in human-natural systems and to propose a framework for sustainability evaluation by introducing the concept of sustainability boundaries. Arguing that both observing and evaluating sustainability should engage awareness of complex dynamics from the outset, we try to embody this idea in the framework by two complementary methods, namely, the layer view- and dimensional view-based methods, which support the understanding of a reflexive and iterative sustainability process. The framework enables the observation of complex dynamic sustainability contexts, which we call observation metastructures, and enables us to map the contexts to sustainability boundaries.

  10. Cold Test and Performance Evaluation of Prototype Cryoline-X

    NASA Astrophysics Data System (ADS)

    Shah, N.; Choukekar, K.; Kapoor, H.; Muralidhara, S.; Garg, A.; Kumar, U.; Jadon, M.; Dash, B.; Bhattachrya, R.; Badgujar, S.; Billot, V.; Bravais, P.; Cadeau, P.

    2017-12-01

    The multi-process pipe vacuum jacketed cryolines for the ITER project are probably the world’s most complex cryolines in terms of layout, load cases, quality, safety and regulatory requirements. As a risk mitigation plan, design, manufacturing and testing of a prototype cryoline (PTCL) was planned before the approval of the final design of the ITER cryolines. The 29-meter-long PTCL consists of 6 process pipes encased by a thermal shield inside an Outer Vacuum Jacket of DN 600 size and carries cold helium at 4.5 K and 80 K. The global heat load limit was defined as 1.2 W/m at 4.5 K and 4.5 W/m at 80 K. The PTCL-X (PTCL for Group-X cryolines) was specified in detail by ITER-India and designed as well as manufactured by Air Liquide. PTCL-X was installed and tested at cryogenic temperature at the ITER-India Cryogenic Laboratory in 2016. The heat load, estimated using the enthalpy difference method, was found to be approximately 0.8 W/m at 4.5 K and 4.2 W/m at 80 K, which is well within the defined limits. The thermal shield temperature profile was also found to be satisfactory. The paper summarizes the cold test results of PTCL-X.

  11. Cultural Emergence: Theorizing Culture in and from the Margins of Science Education

    ERIC Educational Resources Information Center

    Wood, Nathan Brent; Erichsen, Elizabeth Anne; Anicha, Cali L.

    2013-01-01

    This special issue of the Journal of Research in Science Teaching seeks to explore conceptualizations of culture that address contemporary challenges in science education. Toward this end, we unite two theoretical perspectives to advance a conceptualization of culture as a complex system, emerging from iterative processes of cultural bricolage,…

  12. Discrete Fourier Transform in a Complex Vector Space

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2015-01-01

    An image-based phase retrieval technique has been developed that can be used on board a space based iterative transformation system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form. By diagonal we mean that a transformation of basis is introduced by an application of the similarity transform of linear algebra. The current method exploits the diagonal structure of the DFT in a special way, particularly when parts of the calculation do not have to be repeated at each iteration to converge to an acceptable solution in order to focus an image.

  13. A comparison of multiprocessor scheduling methods for iterative data flow architectures

    NASA Technical Reports Server (NTRS)

    Storch, Matthew

    1993-01-01

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

  14. Dynamics of a new family of iterative processes for quadratic polynomials

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Hernández, M. A.; Romero, N.

    2010-03-01

    In this work we show the presence of the well-known Catalan numbers in the study of the convergence and the dynamical behavior of a family of iterative methods for solving nonlinear equations. In fact, we introduce a family of methods, depending on a parameter m. These methods reach the order of convergence m+2 when they are applied to quadratic polynomials with different roots. Newton's and Chebyshev's methods appear as particular choices of the family for m=0 and m=1, respectively. We make both analytical and graphical studies of these methods, which give rise to rational functions defined in the extended complex plane. Firstly, we prove that the coefficients of the aforementioned family of iterative processes can be written in terms of the Catalan numbers. Secondly, we make an incursion into its dynamical behavior. In fact, we show that the rational maps related to these methods can be written in terms of the entries of the Catalan triangle. Next we analyze its general convergence, by including some computer plots showing the intricate structure of the Universal Julia sets associated with the methods.
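
    The two named members of the family can be compared directly on a quadratic; the sketch below implements Newton's method (the m=0 case) and Chebyshev's method (m=1) and iterates both from the same complex starting point. The general m-parameter family and its Catalan-number coefficients are not reproduced here, and the starting point and polynomial are arbitrary.

```python
# Newton's method (second order) vs. Chebyshev's method (third order) on a
# quadratic polynomial, iterated in the complex plane.
import numpy as np

def newton_step(z, f, df, d2f):
    return z - f(z) / df(z)                  # d2f unused; kept for a common signature

def chebyshev_step(z, f, df, d2f):
    L = f(z) * d2f(z) / df(z) ** 2           # degree of logarithmic convexity
    return z - (1 + 0.5 * L) * f(z) / df(z)

# p(z) = z^2 - 1, roots +1 and -1
f = lambda z: z ** 2 - 1
df = lambda z: 2 * z
d2f = lambda z: 2 + 0 * z

for name, step in [("Newton", newton_step), ("Chebyshev", chebyshev_step)]:
    z = 0.4 + 0.9j                           # arbitrary starting point
    for _ in range(6):
        z = step(z, f, df, d2f)
    print(name, z)
```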

  15. Ground Truth Creation for Complex Clinical NLP Tasks - an Iterative Vetting Approach and Lessons Learned.

    PubMed

    Liang, Jennifer J; Tsou, Ching-Huei; Devarakonda, Murthy V

    2017-01-01

    Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the "ground truth" dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate potential inaccuracies of the cognitive challenges, we present an iterative vetting approach for creating the ground truth for complex NLP tasks. In this paper, we present the methodology, and report on its use for an automated problem list generation task, its effect on the ground truth quality and system accuracy, and lessons learned from the effort.

  16. Exploring a New Simulation Approach to Improve Clinical Reasoning Teaching and Assessment: Randomized Trial Protocol

    PubMed Central

    Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude

    2016-01-01

    Background Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education to assess clinical reasoning skills has rarely been reported. More specifically, it is unclear if clinical reasoning is better acquired if the instructor's input occurs entirely after or is integrated during the scenario. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions without providing feedback. Objective The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical approach of simulation in developing reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. Methods This will be a prospective exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete either one SID or one classical simulation: a 30-minute, audio-video-recorded, complex high-fidelity simulation covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. Results This study is in its preliminary stages and the results are expected to be made available by April, 2016. Conclusions This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing more closely reasoning processes throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students. PMID:26888076

  17. Exploring a New Simulation Approach to Improve Clinical Reasoning Teaching and Assessment: Randomized Trial Protocol.

    PubMed

    Pennaforte, Thomas; Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude

    2016-02-17

    Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education to assess clinical reasoning skills has rarely been reported. More specifically, it is unclear if clinical reasoning is better acquired if the instructor's input occurs entirely after or is integrated during the scenario. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions without providing feedback. The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical approach of simulation in developing reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. This will be a prospective exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete either one SID or one classical simulation: a 30-minute, audio-video-recorded, complex high-fidelity simulation covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. This study is in its preliminary stages and the results are expected to be made available by April, 2016. This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing more closely reasoning processes throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students.

  18. Structural materials by powder HIP for fusion reactors

    NASA Astrophysics Data System (ADS)

    Dellis, C.; Le Marois, G.; van Osch, E. V.

    1998-10-01

    Tokamak blankets have complex shapes and geometries with double curvature and embedded cooling channels. Usual manufacturing techniques such as forging, bending and welding generate very complex fabrication routes. Hot Isostatic Pressing (HIP) is a versatile and flexible fabrication technique that has a broad range of commercial applications. Powder HIP appears to be one of the most suitable techniques for the manufacturing of such complex shape components as fusion reactor modules. During the HIP cycle, consolidation of the powder is made and porosity in the material disappears. This involves a variation of 30% in volume of the component. These deformations are not isotropic due to temperature gradients in the part and the stiffness of the canister. This paper discusses the following points: (i) Availability of manufacturing process by powder HIP of 316LN stainless steel (ITER modules) and F82H martensitic steel (ITER Test Module and DEMO blanket) with properties equivalent to the forged one. (ii) Availability of powerful modelling techniques to simulate the densification of powder during the HIP cycle, and to control the deformation of components during consolidation by improving the canister design. (iii) Material data base needed for simulation of the HIP process, and the optimisation of canister geometry. (iv) Irradiation behaviour on powder HIP materials from preliminary results.

  19. Polynomiography and Chaos

    NASA Astrophysics Data System (ADS)

    Kalantari, Bahman

    Polynomiography is the algorithmic visualization of iterative systems for computing roots of a complex polynomial. It is well known that iterations of a rational function in the complex plane result in chaotic behavior near its Julia set. In one scheme of computing polynomiography for a given polynomial p(z), we select an individual member from the Basic Family, an infinite fundamental family of rational iteration functions that in particular include Newton's. Polynomiography is an excellent means for observing, understanding, and comparing chaotic behavior for a variety of iterative systems. Other iterative schemes in polynomiography are possible and result in chaotic behavior of different kinds. In another scheme, the Basic Family is collectively applied to p(z) and the iterates for any seed in the Voronoi cell of a root converge to that root. Polynomiography reveals chaotic behavior of another kind near the boundary of the Voronoi diagram of the roots. We also describe a novel Newton-Ellipsoid iterative system with its own chaos and exhibit images demonstrating polynomiographies of chaotic behavior of different kinds. Finally, we consider chaos for the more general case of polynomiography of complex analytic functions. On the one hand, polynomiography is a powerful medium capable of demonstrating chaos in different forms; it is educationally instructive to students and researchers, and it gives rise to numerous research problems. On the other hand, it is a medium resulting in images with enormous aesthetic appeal to general audiences.
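
    A minimal polynomiography-style sketch using a single member of the Basic Family, Newton's iteration, on p(z) = z^3 - 1: each grid point is coloured by the root it converges to, and the chaotic behaviour appears along the basin boundaries. The grid size, extent and iteration count are arbitrary choices.

```python
# Colour each grid point in the complex plane by the root of z^3 - 1 that
# Newton's iteration converges to (basins of attraction / Newton fractal).
import numpy as np

roots = np.array([1, -0.5 + 0.8660254j, -0.5 - 0.8660254j])  # cube roots of unity

def newton_basins(n=400, extent=1.5, n_iter=40):
    xs = np.linspace(-extent, extent, n)
    Z = xs[None, :] + 1j * xs[:, None]           # grid of starting points
    for _ in range(n_iter):
        Z = Z - (Z ** 3 - 1) / (3 * Z ** 2)      # Newton step for z^3 - 1
    return np.argmin(np.abs(Z[..., None] - roots), axis=-1)

basins = newton_basins()
print(np.bincount(basins.ravel()))                # pixels attracted to each root
```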

  20. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
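
    A generic preconditioned conjugate gradient sketch with a Jacobi (diagonal) preconditioner follows. Unlike the paper's iteration on data, which never forms the coefficient matrix explicitly, an explicit SPD matrix is built here purely for illustration.

```python
# Preconditioned conjugate gradient with a Jacobi (diagonal) preconditioner.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)              # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B @ B.T + 200 * np.eye(200)            # symmetric positive definite system
b = rng.standard_normal(200)
x, iters = pcg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```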

  1. Field validation of a free-agent cellular automata model of fire spread with fire–atmosphere coupling

    Treesearch

    Gary Achtemeier

    2012-01-01

    A cellular automata fire model represents ‘elements’ of fire by autonomous agents. A few simple algebraic expressions substituted for complex physical and meteorological processes and solved iteratively yield simulations for ‘super-diffusive’ fire spread and coupled surface-layer (2-m) fire–atmosphere processes. Pressure anomalies, which are integrals of the thermal...

  2. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received codes and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity, with the parity-check matrix adapted during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  3. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily for real-time testing, but rather for models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes including two widely used methods, namely, a modified version of the implicit Newmark with fixed number of iterations (iterative) and the operator-splitting (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark with fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure and the selection of time steps and fixed number of iterations are closely examined in pre-test simulations. The generated unbalanced force is used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
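
    The implicit Newmark scheme with a fixed number of Newton iterations per step can be sketched for a purely numerical nonlinear SDOF oscillator; the cubic spring, damping, and load below are invented stand-ins for the physical substructure and say nothing about the hybrid-test architecture itself.

```python
# Implicit Newmark (average acceleration) with a fixed number of Newton
# iterations per time step, for m*u'' + c*u' + fs(u) = p(t).
import numpy as np

m, c = 1.0, 0.05                              # mass, damping (assumed values)
fs  = lambda u: 10.0 * u + 50.0 * u ** 3      # nonlinear restoring force
kt  = lambda u: 10.0 + 150.0 * u ** 2         # tangent stiffness
p   = lambda t: np.sin(2.0 * t)               # external load

def newmark(dt=0.01, t_end=10.0, beta=0.25, gamma=0.5, n_newton=3):
    n = int(t_end / dt)
    u, v = 0.0, 0.0
    a = (p(0.0) - c * v - fs(u)) / m
    history = []
    for i in range(1, n + 1):
        t = i * dt
        u_new = u                                     # initial guess
        for _ in range(n_newton):                     # fixed number of iterations
            a_new = (u_new - u - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
            v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
            residual = m * a_new + c * v_new + fs(u_new) - p(t)
            k_eff = kt(u_new) + gamma * c / (beta * dt) + m / (beta * dt**2)
            u_new -= residual / k_eff                 # Newton update
        a_new = (u_new - u - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return np.array(history)

print(newmark()[-5:])                         # last few displacement samples
```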

  4. Complex Adaptive Systems and the Origins of Adaptive Structure: What Experiments Can Tell Us

    ERIC Educational Resources Information Center

    Cornish, Hannah; Tamariz, Monica; Kirby, Simon

    2009-01-01

    Language is a product of both biological and cultural evolution. Clues to the origins of key structural properties of language can be found in the process of cultural transmission between learners. Recent experiments have shown that iterated learning by human participants in the laboratory transforms an initially unstructured artificial language…

  5. Diagonalization of complex symmetric matrices: Generalized Householder reflections, iterative deflation and implicit shifts

    NASA Astrophysics Data System (ADS)

    Noble, J. H.; Lubasch, M.; Stevens, J.; Jentschura, U. D.

    2017-12-01

    We describe a matrix diagonalization algorithm for complex symmetric (not Hermitian) matrices, A = A^T, which is based on a two-step algorithm involving generalized Householder reflections based on the indefinite inner product ⟨u, v⟩* = Σ_i u_i v_i. This inner product is linear in both arguments and avoids complex conjugation. The complex symmetric input matrix is transformed to tridiagonal form using generalized Householder transformations (first step). An iterative, generalized QL decomposition of the tridiagonal matrix employing an implicit shift converges toward diagonal form (second step). The QL algorithm employs iterative deflation techniques when a machine-precision zero is encountered "prematurely" on the super-/sub-diagonal. The algorithm allows for a reliable and computationally efficient computation of resonance and antiresonance energies which emerge from complex-scaled Hamiltonians, and for the numerical determination of the real energy eigenvalues of pseudo-Hermitian and PT-symmetric Hamilton matrices. Numerical reference values are provided.
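    A minimal sketch of the indefinite (bilinear) inner product and a generalized Householder reflection built from it is given below; it illustrates only the first reduction step on a random complex symmetric matrix and ignores the quasi-null case ⟨w, w⟩* ≈ 0 that the deflation techniques in the paper are designed to handle.

```python
import numpy as np

def bilinear(u, v):
    """Indefinite inner product <u, v>_* = sum_i u_i v_i (no complex conjugation)."""
    return np.sum(u * v)

def generalized_householder(x):
    """Complex-orthogonal reflection H (H^T H = I) mapping x onto a multiple of e_1."""
    sigma = np.sqrt(bilinear(x, x) + 0j)     # complex square root
    w = x.astype(complex).copy()
    w[0] -= sigma                            # w = x - sigma * e_1
    return np.eye(len(x), dtype=complex) - 2.0 * np.outer(w, w) / bilinear(w, w)

# toy complex symmetric (non-Hermitian) matrix A = A^T
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = B + B.T
H = np.eye(4, dtype=complex)
H[1:, 1:] = generalized_householder(A[1:, 0])
A1 = H @ A @ H.T                             # similarity by a complex-orthogonal matrix
print(np.round(A1[2:, 0], 10))               # ~0: first step toward tridiagonal form
```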

  6. Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices

    PubMed Central

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2015-01-01

    We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
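    For the two-matrix case mentioned above, the closed form is the affine-invariant mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}; the following minimal sketch (SciPy, illustrative random matrices) computes it and checks one invariance property, whereas the set case with more than two matrices requires the iterative or AJD-based algorithms discussed in the paper.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean_2(A, B):
    """Closed-form affine-invariant (Fisher information) geometric mean of two SPD matrices."""
    A_half = sqrtm(A)
    A_half_inv = inv(A_half)
    return A_half @ sqrtm(A_half_inv @ B @ A_half_inv) @ A_half

# toy SPD matrices built from random factors
rng = np.random.default_rng(1)
X, Y = rng.normal(size=(2, 4, 4))
A = X @ X.T + 4 * np.eye(4)
B = Y @ Y.T + 4 * np.eye(4)
G = geometric_mean_2(A, B)

# congruence invariance: W G W^T equals the mean of (W A W^T, W B W^T)
W = rng.normal(size=(4, 4))
print(np.allclose(W @ G @ W.T, geometric_mean_2(W @ A @ W.T, W @ B @ W.T)))
```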

  7. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational cost of the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA and to increase the parallel processing ability and scalability of the system.
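    The iterative structure common to these registration methods can be shown with a minimal translation-only sketch in NumPy/SciPy; it is a generic gradient-based (LK-style) update loop, not the MP-I2A or KWA algorithms or their FPGA implementations, and the test image and shift are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def register_translation(ref, moving, n_iter=50, tol=1e-4):
    """Iteratively estimate the (row, col) shift that aligns `moving` to `ref`."""
    ref = ref.astype(float)
    moving = moving.astype(float)
    gy, gx = np.gradient(ref)
    # 2x2 Gauss-Newton normal matrix built once from the reference-image gradients
    A = np.array([[np.sum(gy * gy), np.sum(gy * gx)],
                  [np.sum(gx * gy), np.sum(gx * gx)]])
    p = np.zeros(2)
    for k in range(n_iter):
        warped = ndimage.shift(moving, p, order=1)
        err = warped - ref
        b = np.array([np.sum(gy * err), np.sum(gx * err)])
        dp = np.linalg.solve(A, b)           # incremental shift update
        p += dp
        if np.linalg.norm(dp) < tol:
            break
    return p, k + 1

# toy test: shift a smooth image by a known amount and recover the correcting shift
yy, xx = np.mgrid[0:64, 0:64]
ref = np.exp(-((yy - 32.0)**2 + (xx - 30.0)**2) / 60.0)
moving = ndimage.shift(ref, (2.3, -1.7), order=3)
shift_est, iters = register_translation(ref, moving)
print(np.round(shift_est, 2), iters)         # approx (-2.3, 1.7): the shift to apply to `moving`
```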

  8. Sometimes "Newton's Method" Always "Cycles"

    ERIC Educational Resources Information Center

    Latulippe, Joe; Switkes, Jennifer

    2012-01-01

    Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…
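    A minimal sketch of such a cycle, using the standard textbook example f(x) = x^3 - 2x + 2 (not necessarily one of the functions constructed in the article), shows the iterates alternating between 0 and 1:

```python
def newton(f, fprime, x0, n_steps=8):
    """Return the first n_steps Newton iterates starting from x0."""
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

f = lambda x: x**3 - 2 * x + 2
fprime = lambda x: 3 * x**2 - 2

print(newton(f, fprime, x0=0.0))   # 0, 1, 0, 1, ...: a two-cycle
print(newton(f, fprime, x0=2.0))   # a different start converges toward the real root near -1.77
```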

  9. Cyclic Game Dynamics Driven by Iterated Reasoning

    PubMed Central

    Frey, Seth; Goldstone, Robert L.

    2013-01-01

    Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning–what you think I think you think–will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems. PMID:23441191

  10. The ITER disruption mitigation trigger: developing its preliminary design

    NASA Astrophysics Data System (ADS)

    Pautasso, G.; de Vries, P. C.; Humphreys, D.; Lehnen, M.; Rapson, C.; Raupp, G.; Snipes, J. A.; Treutterer, W.; Vergara-Fernandez, A.; Zabeo, L.

    2018-03-01

    A concept for the generation of the trigger for the ITER disruption mitigation system is described in this paper. The issuing of the trigger will be the result of a complex decision process, taken by the plasma control system, or by the central interlock system, determining that the plasma is unavoidably going to disrupt—or has disrupted—and that a fast mitigated shut-down is required. Given the redundancy of the mitigation system, the plasma control system must also formulate an injection scheme and specify when and how the injectors of the mitigation system should be activated. The parameters and the conceptual algorithms required for the configuration and generation of the trigger are discussed.

  11. Nonlinear random response prediction using MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.

    1993-01-01

    An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.

  12. Comparisons of Observed Process Quality in German and American Infant/Toddler Programs

    ERIC Educational Resources Information Center

    Tietze, Wolfgang; Cryer, Debby

    2004-01-01

    Observed process quality in infant/toddler classrooms was compared in Germany (n = 75) and the USA (n = 219). Process quality was assessed with the Infant/Toddler Environment Rating Scale (ITERS) and parent attitudes about ITERS content with the ITERS Parent Questionnaire (ITERSPQ). The ITERS had comparable reliabilities in the two countries and…

  13. Context Matters: The Value of Analyzing Human Factors within Educational Contexts as a Way of Informing Technology-Related Decisions within Design Research

    ERIC Educational Resources Information Center

    MacKinnon, Kim

    2012-01-01

    While design research can be useful for designing effective technology integrations within complex social settings, it currently fails to provide concrete methodological guidelines for gathering and organizing information about the research context, or for determining how such analyses ought to guide the iterative design and innovation process. A…

  14. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis.

    PubMed

    Harris, Janet L; Booth, Andrew; Cargo, Margaret; Hannes, Karin; Harden, Angela; Flemming, Kate; Garside, Ruth; Pantoja, Tomas; Thomas, James; Noyes, Jane

    2018-05-01

    This paper updates previous Cochrane guidance on question formulation, searching, and protocol development, reflecting recent developments in methods for conducting qualitative evidence syntheses to inform Cochrane intervention reviews. Examples are used to illustrate how decisions about boundaries for a review are formed via an iterative process of constructing lines of inquiry and mapping the available information to ascertain whether evidence exists to answer questions related to effectiveness, implementation, feasibility, appropriateness, economic evidence, and equity. The process of question formulation allows reviewers to situate the topic in relation to how it informs and explains effectiveness, using the criterion of meaningfulness, appropriateness, feasibility, and implementation. Questions related to complex questions and interventions can be structured by drawing on an increasingly wide range of question frameworks. Logic models and theoretical frameworks are useful tools for conceptually mapping the literature to illustrate the complexity of the phenomenon of interest. Furthermore, protocol development may require iterative question formulation and searching. Consequently, the final protocol may function as a guide rather than a prescriptive route map, particularly in qualitative reviews that ask more exploratory and open-ended questions. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual conditions. Erosion models are complex because of uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing, such as erosion data. The main difficulty in artificial neural network training is determining the value of each network input parameter, i.e. the number of hidden layers, the learning rate, the momentum, and the RMS. This study tested the capability of an artificial neural network to predict erosion risk with these input parameters through multiple simulations to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared with the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors), or by the data dimensions; rather, it was determined by changes in the network parameters.
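    A minimal sketch of this kind of parameter study (scikit-learn on synthetic data, not the Serang watershed dataset; the configurations are illustrative assumptions) varies one hidden layer, the learning rate, the momentum, and the number of iterations, and reports training accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for an erosion-factor dataset
X, y = make_classification(n_samples=2000, n_features=8, n_informative=6,
                           n_classes=3, random_state=0)

configs = [
    {"hidden_layer_sizes": (10,), "learning_rate_init": 0.01,  "momentum": 0.5, "max_iter": 200},
    {"hidden_layer_sizes": (10,), "learning_rate_init": 0.01,  "momentum": 0.5, "max_iter": 2000},
    {"hidden_layer_sizes": (10,), "learning_rate_init": 0.001, "momentum": 0.9, "max_iter": 2000},
]

for cfg in configs:
    clf = MLPClassifier(solver="sgd", tol=1e-6, random_state=0, **cfg)
    clf.fit(X, y)
    print(cfg, "training accuracy:", round(clf.score(X, y), 4))
```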

  16. Noise models for low counting rate coherent diffraction imaging.

    PubMed

    Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John

    2012-11-05

    Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretations drawn from a CDI iterative technique require a detailed understanding of the relationship between the noise model and the used inversion method. We observe that iterative algorithms often assume implicitly a noise model. For low counting rates, each noise model behaves differently. Moreover, the used optimization strategy introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.

  17. Loads specification and embedded plate definition for the ITER cryoline system

    NASA Astrophysics Data System (ADS)

    Badgujar, S.; Benkheira, L.; Chalifour, M.; Forgeas, A.; Shah, N.; Vaghela, H.; Sarkar, B.

    2015-12-01

    ITER cryolines (CLs) are a complex network of vacuum-insulated multi- and single-process pipelines, distributed in three different areas at the ITER site. The CLs will support different operating loads during the machine lifetime, considered as nominal, occasional, or exceptional. The major loads that form the design basis (inertial, pressure, temperature, assembly, magnetic, snow, wind, and enforced relative displacement) are put together in the loads specification. Based on the defined load combinations, a conceptual estimation of reaction loads has been carried out for the lines located inside the Tokamak building. Adequate numbers of embedded plates (EPs) per line have been defined and integrated in the building design. The finalization of building EPs to support the lines, before the detailed design, is one of the major design challenges, as the usual logic of the design may be altered. At the ITER project level, it was important to finalize the EPs to allow adequate design and timely availability of the Tokamak building. The paper describes the single loads and load combinations considered in the loads specification and the approach for conceptual load estimation and selection of EPs for the Toroidal Field (TF) cryoline as an example, by grouping the load combinations into two main load categories: pressure and seismic.

  18. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    NASA Astrophysics Data System (ADS)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied in large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term based on GCPs is formulated by a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. Another term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to bring out satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and behaves excellently on aerial images.

  19. Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.

    PubMed

    Koch, S; Bosch, H; Giereth, M; Ertl, T

    2011-05-01

    Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.

  20. Using iterative learning to improve understanding during the informed consent process in a South African psychiatric genomics study.

    PubMed

    Campbell, Megan M; Susser, Ezra; Mall, Sumaya; Mqulwana, Sibonile G; Mndini, Michael M; Ntola, Odwa A; Nagdee, Mohamed; Zingela, Zukiswa; Van Wyk, Stephanus; Stein, Dan J

    2017-01-01

    Obtaining informed consent is a great challenge in global health research. There is a need for tools that can screen for and improve potential research participants' understanding of the research study at the time of recruitment. Limited empirical research has been conducted in low and middle income countries, evaluating informed consent processes in genomics research. We sought to investigate the quality of informed consent obtained in a South African psychiatric genomics study. A Xhosa language version of the University of California, San Diego Brief Assessment of Capacity to Consent Questionnaire (UBACC) was used to screen for capacity to consent and improve understanding through iterative learning in a sample of 528 Xhosa people with schizophrenia and 528 controls. We address two questions: firstly, whether research participants' understanding of the research study improved through iterative learning; and secondly, what were predictors for better understanding of the research study at the initial screening? During screening 290 (55%) cases and 172 (33%) controls scored below the 14.5 cut-off for acceptable understanding of the research study elements, however after iterative learning only 38 (7%) cases and 13 (2.5%) controls continued to score below this cut-off. Significant variables associated with increased understanding of the consent included the psychiatric nurse recruiter conducting the consent screening, higher participant level of education, and being a control. The UBACC proved an effective tool to improve understanding of research study elements during consent, for both cases and controls. The tool holds utility for complex studies such as those involving genomics, where iterative learning can be used to make significant improvements in understanding of research study elements. The UBACC may be particularly important in groups with severe mental illness and lower education levels. Study recruiters play a significant role in managing the quality of the informed consent process.

  1. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach and an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize the regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method has been demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  2. Adaptive iterative design (AID): a novel approach for evaluating the interactive effects of multiple stressors on aquatic organisms.

    PubMed

    Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R

    2012-08-15

    The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by the required amount of a priori knowledge, the complexity of experimental designs, the use of linear models, and the lack of optimal designs of nonlinear models to characterize complex interactions. Therefore, we developed AID, an adaptive-iterative design for eco-toxicologists to more accurately and efficiently examine complex multiple stressor interactions. AID incorporates the power of the general linear model and A-optimal criteria with an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best-fit model is identified and the direction and magnitude of stressor effects, individually and in all combinations (including complex interactions), are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems, three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness), to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found that, after the initial input of experimental data (literature values, e.g. EC values, may also be used) and only two iterations of AID, our dose-response model was stable. The model ln(Cd)*ln(Zn) was determined to be the best predictor of the Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater-than-additive (e.g. As*Cd) and less-than-additive (e.g. Cd*Zn) interactions. Interestingly, our study found only binary interactions to be significant, not higher-order interactions. We conclude that AID is more efficient and effective at assessing multiple stressor interactions than current methods. Other applications, including life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations are also presented.

  4. Characterizing Young Giant Planets with the Gemini Planet Imager: An Iterative Approach to Planet Characterization

    NASA Technical Reports Server (NTRS)

    Marley, Mark

    2015-01-01

    After discovery, the first task of exoplanet science is characterization. However, experience has shown that the limited spectral range and resolution of most directly imaged exoplanet data requires an iterative approach to spectral modeling. Simple, brown dwarf-like models must first be tested to ascertain if they are both adequate to reproduce the available data and consistent with additional constraints, including the age of the system and available limits on the planet's mass and luminosity, if any. When agreement is lacking, progressively more complex solutions must be considered, including non-solar composition, partial cloudiness, and disequilibrium chemistry. Such additional complexity must be balanced against an understanding of the limitations of the atmospheric models themselves. For example, while great strides have been made in improving the opacities of important molecules, particularly NH3 and CH4, at high temperatures, much more work is needed to understand the opacity of atomic Na and K. The highly pressure broadened fundamental band of Na and K in the optical stretches into the near-infrared, strongly influencing the spectral shape of Y and J spectral bands. Discerning gravity and atmospheric composition is difficult, if not impossible, without both good atomic opacities as well as an excellent understanding of the relevant atmospheric chemistry. I will present examples of the iterative process of directly imaged exoplanet characterization as applied to both known and potentially newly discovered exoplanets with a focus on constraints provided by GPI spectra. If a new GPI planet is lacking, as a case study I will discuss HR 8799 c and d and explain why some solutions, such as spatially inhomogeneous cloudiness, introduce their own additional layers of complexity. If spectra of new planets from GPI are available, I will explain the modeling process in the context of understanding these new worlds.

  5. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    PubMed

    Xie, Xianming

    2016-08-22

    A new phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust Bayesian methods in nonlinear signal processing to date, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which can simplify the complexity and difficulty of the pre-filtering procedure normally followed by the phase unwrapping procedure, and can even remove the pre-filtering procedure altogether. The robust phase gradient estimator is used to efficiently and accurately obtain the phase gradient information from interferometric fringes that is needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps wrapped pixels along the path from the high-quality area to the low-quality area of wrapped phase images, which can greatly improve the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method can obtain better solutions with an acceptable time consumption, with respect to some of the most used algorithms.

  6. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

    …iterations because of its added complexity compared to the WM. We recommend that the WM be used for a priori estimates of the number of MC iterations… Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC results… Report contents include: a priori estimation of the number of MC iterations; MC result accuracy; and using the percentage error of the mean to estimate the number of MC iterations.
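    A minimal sketch of using the percentage error of the mean from a pilot run to estimate the required number of MC iterations (a generic textbook estimate, not the WM or WSM procedures evaluated in the report; the pilot distribution and tolerances are illustrative):

```python
import numpy as np
from scipy import stats

def required_mc_iterations(pilot_samples, rel_error=0.01, confidence=0.95):
    """Number of MC iterations so the confidence-interval half-width is within
    rel_error (percentage error) of the estimated mean."""
    s = np.std(pilot_samples, ddof=1)
    xbar = np.mean(pilot_samples)
    z = stats.norm.ppf(0.5 + confidence / 2.0)
    return int(np.ceil((z * s / (rel_error * abs(xbar)))**2))

rng = np.random.default_rng(42)
pilot = rng.lognormal(mean=1.0, sigma=0.5, size=1000)     # pilot MC run
print(required_mc_iterations(pilot, rel_error=0.01))       # iterations for ~1% error of the mean
```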

  7. On differential operators generating iterative systems of linear ODEs of maximal symmetry algebra

    NASA Astrophysics Data System (ADS)

    Ndogmo, J. C.

    2017-06-01

    Although every iterative scalar linear ordinary differential equation is of maximal symmetry algebra, the situation is different and far more complex for systems of linear ordinary differential equations, and an iterative system of linear equations need not be of maximal symmetry algebra. We illustrate these facts by examples and derive families of vector differential operators whose iterations are all linear systems of equations of maximal symmetry algebra. Some consequences of these results are also discussed.

  8. Simultaneous gains tuning in boiler/turbine PID-based controller clusters using iterative feedback tuning methodology.

    PubMed

    Zhang, Shu; Taft, Cyrus W; Bentsman, Joseph; Hussey, Aaron; Petrus, Bryan

    2012-09-01

    Tuning a complex multi-loop PID based control system requires considerable experience. In today's power industry the number of available qualified tuners is dwindling and there is a great need for better tuning tools to maintain and improve the performance of complex multivariable processes. Multi-loop PID tuning is the procedure for the online tuning of a cluster of PID controllers operating in a closed loop with a multivariable process. This paper presents the first application of the simultaneous tuning technique to the multi-input-multi-output (MIMO) PID based nonlinear controller in the power plant control context, with the closed-loop system consisting of a MIMO nonlinear boiler/turbine model and a nonlinear cluster of six PID-type controllers. Although simplified, the dynamics and cross-coupling of the process and the PID cluster are similar to those used in a real power plant. The particular technique selected, iterative feedback tuning (IFT), utilizes the linearized version of the PID cluster for signal conditioning, but the data collection and tuning is carried out on the full nonlinear closed-loop system. Based on the figure of merit for the control system performance, the IFT is shown to deliver performance favorably comparable to that attained through the empirical tuning carried out by an experienced control engineer. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

    Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis). However it converges slower than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element model (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, two iteration methods might exhibit different behavior in the coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM because it requires the evaluation of a 19-point stencil matrix. The formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating the differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for coordinate-transformed FDM. However, this method involves the additional cost of taking an approximation at each Krylov iteration in the Newton-Krylov method. In this paper, we evaluated the efficiency and robustness of three iteration methods—the Picard, Newton, and Newton-Krylov methods—for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
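    The contrast between the two classical schemes can be seen in a minimal scalar sketch (a single-cell toy nonlinearity, not the three-dimensional coordinate-transformed Richards solver compared in the paper): Picard lags the nonlinear coefficient, while Newton uses the full derivative.

```python
# toy nonlinear "conductivity" K(h) and the scalar equation K(h) * h = b
K = lambda h: 1.0 / (1.0 + h**2)
b = 0.3

def picard(h0, n_iter=100, tol=1e-10):
    """Picard (fixed-point) iteration: lag the coefficient, solve the linearized equation."""
    h = h0
    for k in range(n_iter):
        h_new = b / K(h)                     # solve K(h_old) * h_new = b
        if abs(h_new - h) < tol:
            return h_new, k + 1
        h = h_new
    return h, n_iter

def newton(h0, n_iter=100, tol=1e-10):
    """Newton iteration on f(h) = K(h) * h - b with the analytic derivative."""
    f = lambda h: K(h) * h - b
    df = lambda h: (1.0 - h**2) / (1.0 + h**2)**2
    h = h0
    for k in range(n_iter):
        step = f(h) / df(h)
        h -= step
        if abs(step) < tol:
            return h, k + 1
    return h, n_iter

print("Picard:", picard(h0=0.0))             # converges linearly
print("Newton:", newton(h0=0.0))             # converges in fewer iterations (quadratically)
```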

  10. Design Features of the Neutral Particle Diagnostic System for the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Petrov, S. Ya.; Afanasyev, V. I.; Melnik, A. D.; Mironov, M. I.; Navolotsky, A. S.; Nesenevich, V. G.; Petrov, M. P.; Chernyshev, F. V.; Kedrov, I. V.; Kuzmin, E. G.; Lyublin, B. V.; Kozlovski, S. S.; Mokeev, A. N.

    2017-12-01

    The control of the deuterium-tritium (DT) fuel isotopic ratio has to ensure the best performance of the ITER thermonuclear fusion reactor. The diagnostic system described in this paper allows the measurement of this ratio analyzing the hydrogen isotope fluxes (performing neutral particle analysis (NPA)). The development and supply of the NPA diagnostics for ITER was delegated to the Russian Federation. The diagnostics is being developed at the Ioffe Institute. The system consists of two analyzers, viz., LENPA (Low Energy Neutral Particle Analyzer) with 10-200 keV energy range and HENPA (High Energy Neutral Particle Analyzer) with 0.1-4.0MeV energy range. Simultaneous operation of both analyzers in different energy ranges enables researchers to measure the DT fuel ratio both in the central burning plasma (thermonuclear burn zone) and at the edge as well. When developing the diagnostic complex, it was necessary to account for the impact of several factors: high levels of neutron and gamma radiation, the direct vacuum connection to the ITER vessel, implying high tritium containment, strict requirements on reliability of all units and mechanisms, and the limited space available for accommodation of the diagnostic hardware at the ITER tokamak. The paper describes the design of the diagnostic complex and the engineering solutions that make it possible to conduct measurements under tokamak reactor conditions. The proposed engineering solutions provide a safe—with respect to thermal and mechanical loads—common vacuum channel for hydrogen isotope atoms to pass to the analyzers; ensure efficient shielding of the analyzers from the ITER stray magnetic field (up to 1 kG); provide the remote control of the NPA diagnostic complex, in particular, connection/disconnection of the NPA vacuum beamline from the ITER vessel; meet the ITER radiation safety requirements; and ensure measurements of the fuel isotopic ratio under high levels of neutron and gamma radiation.

  11. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach.

    PubMed

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H

    2011-04-01

    A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.

  12. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.

  13. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.

  14. A complex guided spectral transform Lanczos method for studying quantum resonance states

    DOE PAGES

    Yu, Hua-Gen

    2014-12-28

    A complex guided spectral transform Lanczos (cGSTL) algorithm is proposed to compute both bound and resonance states including energies, widths and wavefunctions. The algorithm comprises two layers of complex-symmetric Lanczos iterations. A short inner layer iteration produces a set of complex formally orthogonal Lanczos (cFOL) polynomials. They are used to span the guided spectral transform function determined by a retarded Green operator. An outer layer iteration is then carried out with the transform function to compute the eigen-pairs of the system. The guided spectral transform function is designed to have the same wavefunctions as the eigenstates of the original Hamiltonian in the spectral range of interest. Therefore the energies and/or widths of bound or resonance states can be easily computed with their wavefunctions or by using a root-searching method from the guided spectral transform surface. The new cGSTL algorithm is applied to bound and resonance states of HO₂, and compared to previous calculations.

  15. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  16. Beyond dualism: leading out of oppression.

    PubMed

    Fletcher, Karen

    2006-01-01

    To reexamine our beliefs about our gender identity in order to identify new possibilities for leading in nursing. Leadership is complex. This article is the result of a lengthy iterative process of exploring the empowerment, image, leadership, feminist, and oppression literature. All of this was distilled in the context of the author's experience as a nurse and nurse leader. Moving beyond dualism creates new possibilities for leading nurses out of oppression.

  17. On Design Mining: Coevolution and Surrogate Models.

    PubMed

    Preen, Richard J; Bull, Larry

    2017-01-01

    Design mining is the use of computational intelligence techniques to iteratively search and model the attribute space of physical objects evaluated directly through rapid prototyping to meet given objectives. It enables the exploitation of novel materials and processes without formal models or complex simulation. In this article, we focus upon the coevolutionary nature of the design process when it is decomposed into concurrent sub-design-threads due to the overall complexity of the task. Using an abstract, tunable model of coevolution, we consider strategies to sample subthread designs for whole-system testing and how best to construct and use surrogate models within the coevolutionary scenario. Drawing on our findings, we then describe the effective design of an array of six heterogeneous vertical-axis wind turbines.

  18. A Tutorial Review on Fractal Spacetime and Fractional Calculus

    NASA Astrophysics Data System (ADS)

    He, Ji-Huan

    2014-11-01

    This tutorial review of fractal-Cantorian spacetime and fractional calculus begins with Leibniz's notation for derivative without limits which can be generalized to discontinuous media like fractal derivative and q-derivative of quantum calculus. Fractal spacetime is used to elucidate some basic properties of fractal which is the foundation of fractional calculus, and El Naschie's mass-energy equation for the dark energy. The variational iteration method is used to introduce the definition of fractional derivatives. Fractal derivative is explained geometrically and q-derivative is motivated by quantum mechanics. Some effective analytical approaches to fractional differential equations, e.g., the variational iteration method, the homotopy perturbation method, the exp-function method, the fractional complex transform, and Yang-Laplace transform, are outlined and the main solution processes are given.

  19. ITER Central Solenoid Module Fabrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, John

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA’s responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.

  20. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  1. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model in every target points hits the model-specific threshold value. Several iterations are required to achieve the convergence and the computation time increases with the increase of the required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as how the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment that OPC has carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was practiced by genetic algorithm, which is an artificially intelligent optimization method with a high probability to obtain global optimization. From preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shape layout.

  2. Novel Image Quality Control Systems(Add-On). Innovative Computational Methods for Inverse Problems in Optical and SAR Imaging

    DTIC Science & Technology

    2007-02-28

    Z. Mu, R. Plemmons, and P. Santago, "Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response," International Journal of Imaging Systems and…, 1767-1782, 2006. …rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies.

  3. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets are stainless-steel tubes containing high-enriched uranium, and the irradiated FPM tubes are intended to produce fission products. The fission material is widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can interfere with core performance; one source of disturbance is the change in flux or reactivity. It is therefore necessary to study a method for calculating safety margins for the ongoing configuration changes during the life of the reactor, and making the code faster becomes an absolute necessity. The neutron safety margin for the research reactor can be reused without modification in the calculation of the reactivity of the reactor, which is an advantage of using the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model requires complex computation. Several parallel algorithms with iterative methods have been developed for solving sparse, large matrix systems. The Black-Red Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation was developed as part of the safety analysis using parallel processing. The calculation can be done more quickly and efficiently by utilizing parallel processing on a multicore computer. The code was applied to the safety limit calculation of irradiated FPM targets with increasing uranium content.
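    A minimal sketch of the power iteration mentioned above (a small dense nonnegative matrix standing in for the multigroup operator; not the parallel code described in the abstract) returns the dominant eigenvalue and its eigenvector:

```python
import numpy as np

def power_iteration(A, n_iter=500, tol=1e-10):
    """Power iteration: dominant eigenvalue (criticality-like quantity) and eigenvector (flux shape)."""
    phi = np.ones(A.shape[0])
    k = 1.0
    for _ in range(n_iter):
        psi = A @ phi
        k_new = np.linalg.norm(psi) / np.linalg.norm(phi)
        psi = psi / np.linalg.norm(psi)
        if abs(k_new - k) < tol:
            return k_new, psi
        k, phi = k_new, psi
    return k, phi

# toy nonnegative operator standing in for a multigroup fission/diffusion matrix
A = np.array([[0.9, 0.3, 0.0],
              [0.2, 0.8, 0.3],
              [0.0, 0.2, 0.7]])
k_dom, flux = power_iteration(A)
print(round(k_dom, 6), np.round(flux, 4))
```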

  4. Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.

    PubMed

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner

    2011-09-26

    Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been recognized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of the numerical phase retrieval from experimental diffraction patterns, a fact which has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes; constraints are applied in both the object and the detector plane. While the detector-plane constraint employed in most phase retrieval methods requires the amplitude of the complex wave to be equal to the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
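
    For contrast with the proposed Fourier-domain constraint, the conventional iterative loop referred to above can be sketched in error-reduction form, assuming a known support mask; this is the standard scheme, not the authors' method.

```python
import numpy as np

def error_reduction(measured_intensity, support, n_iter=200, seed=0):
    """Minimal sketch of a conventional iterative phase-retrieval loop
    (error-reduction style).  The detector-plane constraint replaces the
    amplitude by the square root of the measured intensity; the object-plane
    constraint zeroes everything outside a known support mask."""
    rng = np.random.default_rng(seed)
    amplitude = np.sqrt(measured_intensity)
    # start from the measured amplitude with random phases
    field = amplitude * np.exp(2j * np.pi * rng.random(measured_intensity.shape))
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)            # propagate detector -> object plane
        obj = np.where(support, obj, 0.0)    # object-plane (support) constraint
        field = np.fft.fft2(obj)             # propagate object -> detector plane
        field = amplitude * np.exp(1j * np.angle(field))  # detector-plane constraint
    return np.fft.ifft2(field)
```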

  5. How to Compute Labile Metal-Ligand Equilibria

    ERIC Educational Resources Information Center

    de Levie, Robert

    2007-01-01

    The different methods for computing labile metal-ligand equilibria that are suitable for an iterative computer solution are illustrated. The ligand function has allowed students to relegate otherwise tedious iterations to a computer, while retaining complete control over what is calculated.
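
    A minimal example of the kind of iteration the article lets a computer take over: solving a single 1:1 metal-ligand equilibrium for the free ligand concentration by fixed-point iteration. The function name, the 1:1 stoichiometry, and the numerical values below are illustrative assumptions.

```python
def free_ligand(C_M, C_L, K, tol=1e-12, max_iter=200):
    """Iteratively solve the 1:1 equilibrium M + L <-> ML with stability
    constant K = [ML]/([M][L]) for the free ligand concentration [L], given
    total concentrations C_M and C_L.  Mass balance gives the fixed-point
    form [L] = C_L / (1 + K*C_M/(1 + K*[L]))."""
    L = C_L  # start from the no-complexation limit
    for _ in range(max_iter):
        L_new = C_L / (1.0 + K * C_M / (1.0 + K * L))
        if abs(L_new - L) < tol:
            return L_new
        L = L_new
    return L

# example: 1 mM total metal, 2 mM total ligand, log K = 5
print(free_ligand(1e-3, 2e-3, 1e5))
```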

  6. Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation

    NASA Astrophysics Data System (ADS)

    Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia

    2013-08-01

    Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
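
    As a small illustration of the iterative side of this comparison (not the authors' migration code), a banded complex system standing in for a single Padé-term solve can be handed to SciPy's BICGSTAB; the matrix values here are placeholders.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# Illustrative banded complex system A x = b, standing in for one term of the
# complex Pade expansion; the entries below are assumed, not from the paper.
n = 2000
main = (1.0 + 0.3j) * np.ones(n)              # assumed diagonal values
off = -0.25 * np.ones(n - 1)                  # assumed off-diagonal coupling
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr", dtype=complex)
b = np.ones(n, dtype=complex)

x, info = bicgstab(A, b, maxiter=500)
print("converged" if info == 0 else f"info = {info}")
```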

  7. When Homoplasy Is Not Homoplasy: Dissecting Trait Evolution by Contrasting Composite and Reductive Coding.

    PubMed

    Torres-Montúfar, Alejandro; Borsch, Thomas; Ochoterena, Helga

    2018-05-01

    The conceptualization and coding of characters is a difficult issue in phylogenetic systematics, no matter which inference method is used when reconstructing phylogenetic trees or whether the characters are simply mapped onto a specific tree. Complex characters are groups of features that can be divided into simpler hierarchical characters (reductive coding), although the implied hierarchical relational information may change depending on the type of coding (composite vs. reductive). Up to now, there is no common agreement on whether to code characters as complex or simple. Phylogeneticists have discussed which coding method is best but have not incorporated the heuristic process of reciprocal illumination to evaluate the coding. Composite coding makes it possible to test whether 1) several characters were linked, resulting in a structure described as a complex character or trait, or 2) independently evolving characters resulted in a configuration incorrectly interpreted as a complex character. We propose that complex characters or character states should be decomposed iteratively into simpler characters when the original homology hypothesis is not corroborated by a phylogenetic analysis and the character or character state is retrieved as homoplastic. We tested this approach using the case of fruit types within subfamily Cinchonoideae (Rubiaceae). The iterative reductive coding of characters associated with drupes allowed us to disentangle fruit evolution within Cinchonoideae. Our results show that drupes and berries are not homologous. As a consequence, a more precise ontology for the Cinchonoideae drupes is required.

  8. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and...was run for 150 iterations. The material properties for all examples were Young’s modulus E = 1 GPa, Poisson’s ratio ν = 0.25, and thermal expansion...the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a

  9. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  10. A UWB Radar Signal Processing Platform for Real-Time Human Respiratory Feature Extraction Based on Four-Segment Linear Waveform Model.

    PubMed

    Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao

    2016-02-01

    This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.

  11. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
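
    The report's quasi-minimal residual variant is not reproduced here, but the flavour of a conjugate gradient-type iteration specialized to complex symmetric matrices can be sketched with the COCG method: the only change from classical CG is that the Hermitian inner product is replaced by the unconjugated bilinear form.

```python
import numpy as np

def cocg(A, b, tol=1e-10, max_iter=500):
    """Conjugate orthogonal conjugate gradient (COCG) sketch for complex
    symmetric A (A = A^T, not Hermitian).  Identical to classical CG except
    that the Hermitian inner product u.conj() @ v is replaced by u @ v."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                      # unconjugated "inner product"
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x
```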

  12. The Iterative Design Process in Research and Development: A Work Experience Paper

    NASA Technical Reports Server (NTRS)

    Sullivan, George F. III

    2013-01-01

    The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.

  13. Predicting the evolution of spreading on complex networks

    PubMed Central

    Chen, Duan-Bing; Xiao, Rui; Zeng, An

    2014-01-01

    Due to the wide applications, spreading processes on complex networks have been intensively studied. However, one of the most fundamental problems has not yet been well addressed: predicting the evolution of spreading based on a given snapshot of the propagation on networks. With this problem solved, one can accelerate or slow down the spreading in advance if the predicted propagation result is narrower or wider than expected. In this paper, we propose an iterative algorithm to estimate the infection probability of the spreading process and then apply it to a mean-field approach to predict the spreading coverage. The validation of the method is performed in both artificial and real networks. The results show that our method is accurate in both infection probability estimation and spreading coverage prediction. PMID:25130862
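
    As a rough illustration of the mean-field side of such a prediction (not the authors' algorithm), node-infection probabilities can be iterated forward from a snapshot under an independent-cascade-style approximation; the infection probability p_infect, which the paper estimates iteratively from the snapshot, is taken as given here.

```python
import numpy as np

def mean_field_spread(adjacency, p_infect, infected_now, n_steps=10):
    """Illustrative mean-field prediction of spreading coverage: q[i] is the
    probability that node i is (or becomes) infected.  At each step a node
    escapes infection only if every probabilistically infected neighbour
    fails to transmit."""
    q = infected_now.astype(float)
    for _ in range(n_steps):
        # probability of not being infected by any neighbour this step
        escape = np.prod(1.0 - p_infect * adjacency * q[None, :], axis=1)
        q = 1.0 - (1.0 - q) * escape
    return q  # expected coverage is q.sum() / len(q)
```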

  14. Medical image segmentation based on SLIC superpixels model

    NASA Astrophysics Data System (ADS)

    Chen, Xiang-ting; Zhang, Fan; Zhang, Ruo-ya

    2017-01-01

    Medical imaging is widely used in clinical practice and is an important basis for medical experts to diagnose disease. However, medical images are affected by many unstable factors: the imaging mechanism is complex, target displacement causes reconstruction defects, and the partial volume effect and equipment wear introduce errors, all of which greatly increase the complexity of subsequent image processing. A segmentation algorithm based on SLIC (Simple Linear Iterative Clustering) superpixels is used in the preprocessing stage to suppress the influence of reconstruction defects and noise by exploiting feature similarity. At the same time, the good clustering behaviour greatly reduces the complexity of the algorithm, providing an effective basis for rapid diagnosis by experts.
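
    A minimal sketch of SLIC superpixel preprocessing with scikit-image is shown below; it is not the authors' pipeline, and an RGB demo image stands in for a medical slice.

```python
import numpy as np
from skimage import data
from skimage.segmentation import slic, mark_boundaries

# Illustrative SLIC preprocessing: an RGB demo image stands in for a slice.
image = data.astronaut()
segments = slic(image, n_segments=400, compactness=10, start_label=1)

# Replace each superpixel by its mean colour -- a simple way to suppress noise
# while preserving region boundaries before further segmentation.
smoothed = np.zeros_like(image, dtype=float)
for label in np.unique(segments):
    mask = segments == label
    smoothed[mask] = image[mask].mean(axis=0)

outlined = mark_boundaries(image, segments)
```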

  15. Why and how Mastering an Incremental and Iterative Software Development Process

    NASA Astrophysics Data System (ADS)

    Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe

    2004-06-01

    One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages: - It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed first to validate the architecture concept very early, without the details. - A software prototype is very quickly available. It improves the communication between system and software teams, as it enables the common understanding of the system requirements to be checked very early and efficiently. - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it greatly improves the learning curve of the software team. These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and raises a number of difficulties, such as: - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable? - How to distinguish stable/unstable and dimensioning/standard requirements? - How to plan the development of each increment? - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc... Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, both from a methodological and a technological point of view: - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way. - How the CMM approach can help by better formalizing the Requirements Management and Planning processes. - How Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle. The presentation then concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures for two similar projects: one with the classical waterfall process, the other with an iterative and incremental approach.

  16. Multicriteria hierarchical iterative interactive algorithm for organizing operational modes of large heat supply systems

    NASA Astrophysics Data System (ADS)

    Korotkova, T. I.; Popova, V. I.

    2017-11-01

    The generalized mathematical model of decision-making in the problem of planning and selecting operating modes that provide the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into the levels of main and distribution heating networks with intermediate control stages. Evaluation of the effectiveness, reliability and safety of such a complex system is carried out simultaneously according to several indicators, in particular pressure, flow and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. An agreed solution of the local problems provides a solution of the global multicriteria decision-making problem in the complex system. The choice of the optimal operating mode of a complex heat supply system is made on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments, if necessary, guaranteeing optimal safety, reliability and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the permitted deviation of the indoor air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and improves the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at sources and pumping stations.

  17. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    PubMed Central

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2011-01-01

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and∕or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913

  18. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali

    2011-04-15

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.

  19. Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation

    PubMed Central

    Butler, Blake E.; Trainor, Laurel J.

    2012-01-01

    Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836

  20. Tailored and Integrated Web-Based Tools for Improving Psychosocial Outcomes of Cancer Patients: The DoTTI Development Framework

    PubMed Central

    Bryant, Jamie; Sanson-Fisher, Rob; Tzelepis, Flora; Henskens, Frans; Paul, Christine; Stevenson, William

    2014-01-01

    Background Effective communication with cancer patients and their families about their disease, treatment options, and possible outcomes may improve psychosocial outcomes. However, traditional approaches to providing information to patients, including verbal information and written booklets, have a number of shortcomings centered on their limited ability to meet patient preferences and literacy levels. New-generation Web-based technologies offer an innovative and pragmatic solution for overcoming these limitations by providing a platform for interactive information seeking, information sharing, and user-centered tailoring. Objective The primary goal of this paper is to discuss the advantages of comprehensive and iterative Web-based technologies for health information provision and propose a four-phase framework for the development of Web-based information tools. Methods The proposed framework draws on our experience of constructing a Web-based information tool for hematological cancer patients and their families. The framework is based on principles for the development and evaluation of complex interventions and draws on the Agile methodology of software programming that emphasizes collaboration and iteration throughout the development process. Results The DoTTI framework provides a model for a comprehensive and iterative approach to the development of Web-based informational tools for patients. The process involves 4 phases of development: (1) Design and development, (2) Testing early iterations, (3) Testing for effectiveness, and (4) Integration and implementation. At each step, stakeholders (including researchers, clinicians, consumers, and programmers) are engaged in consultations to review progress, provide feedback on versions of the Web-based tool, and based on feedback, determine the appropriate next steps in development. Conclusions This 4-phase framework is evidence-informed and consumer-centered and could be applied widely to develop Web-based programs for a diverse range of diseases. PMID:24641991

  1. Tailored and integrated Web-based tools for improving psychosocial outcomes of cancer patients: the DoTTI development framework.

    PubMed

    Smits, Rochelle; Bryant, Jamie; Sanson-Fisher, Rob; Tzelepis, Flora; Henskens, Frans; Paul, Christine; Stevenson, William

    2014-03-14

    Effective communication with cancer patients and their families about their disease, treatment options, and possible outcomes may improve psychosocial outcomes. However, traditional approaches to providing information to patients, including verbal information and written booklets, have a number of shortcomings centered on their limited ability to meet patient preferences and literacy levels. New-generation Web-based technologies offer an innovative and pragmatic solution for overcoming these limitations by providing a platform for interactive information seeking, information sharing, and user-centered tailoring. The primary goal of this paper is to discuss the advantages of comprehensive and iterative Web-based technologies for health information provision and propose a four-phase framework for the development of Web-based information tools. The proposed framework draws on our experience of constructing a Web-based information tool for hematological cancer patients and their families. The framework is based on principles for the development and evaluation of complex interventions and draws on the Agile methodology of software programming that emphasizes collaboration and iteration throughout the development process. The DoTTI framework provides a model for a comprehensive and iterative approach to the development of Web-based informational tools for patients. The process involves 4 phases of development: (1) Design and development, (2) Testing early iterations, (3) Testing for effectiveness, and (4) Integration and implementation. At each step, stakeholders (including researchers, clinicians, consumers, and programmers) are engaged in consultations to review progress, provide feedback on versions of the Web-based tool, and based on feedback, determine the appropriate next steps in development. This 4-phase framework is evidence-informed and consumer-centered and could be applied widely to develop Web-based programs for a diverse range of diseases.

  2. Comparing the basins of attraction for several methods in the circular Sitnikov problem with spheroid primaries

    NASA Astrophysics Data System (ADS)

    Zotos, Euaggelos E.

    2018-06-01

    The circular Sitnikov problem, where the two primary bodies are prolate or oblate spheroids, is numerically investigated. In particular, the basins of convergence on the complex plane are revealed by using a large collection of numerical methods of several orders. We consider four cases, regarding the value of the oblateness coefficient, which determines the nature of the roots (attractors) of the system. For all cases we use the iterative schemes for performing a thorough and systematic classification of the nodes on the complex plane. The distribution of the required iterations as well as the probability, and their correlations with the corresponding basins of convergence, are also discussed. Our numerical computations indicate that most of the iterative schemes provide relatively similar convergence structures on the complex plane. However, there are some numerical methods for which the corresponding basins of attraction are extremely complicated, with highly fractal basin boundaries. Moreover, it is proved that the efficiency strongly varies between the numerical methods.
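
    The classification of complex-plane starting points by their attracting root can be illustrated with the simplest such scheme, Newton's method applied to a polynomial (here z^3 - 1, not the Sitnikov equations); each grid point is labelled by the root it converges to and by the number of iterations needed.

```python
import numpy as np

def newton_basins(f, fprime, roots, extent=1.5, n=400, max_iter=50, tol=1e-8):
    """Classify each starting point of a complex-plane grid by the root to
    which Newton's method converges (index into `roots`), and record the
    number of iterations needed.  -1 marks non-converged points."""
    xs = np.linspace(-extent, extent, n)
    z = xs[None, :] + 1j * xs[:, None]
    basin = -np.ones(z.shape, dtype=int)
    iters = np.zeros(z.shape, dtype=int)
    for k in range(max_iter):
        z = z - f(z) / fprime(z)                       # Newton step
        for idx, r in enumerate(roots):
            hit = (np.abs(z - r) < tol) & (basin < 0)
            basin[hit] = idx
            iters[hit] = k + 1
    return basin, iters

# example: the three cube roots of unity for f(z) = z**3 - 1
roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
basin, iters = newton_basins(lambda z: z**3 - 1, lambda z: 3 * z**2, roots)
```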

  3. Fast and Epsilon-Optimal Discretized Pursuit Learning Automata.

    PubMed

    Zhang, JunQi; Wang, Cheng; Zhou, MengChu

    2015-10-01

    Learning automata (LA) are powerful tools for reinforcement learning. A discretized pursuit LA is the most popular one among them. During an iteration its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probability. However, when the number of actions is large, the learning becomes extremely slow because there are too many updates to be made at each iteration. The increased updates are mostly from phases 1 and 3. A new fast discretized pursuit LA with assured ε -optimality is proposed to perform both phases 1 and 3 with the computational complexity independent of the number of actions. Apart from its low computational complexity, it achieves faster convergence speed than the classical one when operating in stationary environments. This paper can promote the applications of LA toward the large-scale-action oriented area that requires efficient reinforcement learning tools with assured ε -optimality, fast convergence speed, and low computational complexity for each iteration.

  4. Memory-induced nonlinear dynamics of excitation in cardiac diseases.

    PubMed

    Landaw, Julian; Qu, Zhilin

    2018-04-01

    Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.

  5. Memory-induced nonlinear dynamics of excitation in cardiac diseases

    NASA Astrophysics Data System (ADS)

    Landaw, Julian; Qu, Zhilin

    2018-04-01

    Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.

  6. A Model of Supervisor Decision-Making in the Accommodation of Workers with Low Back Pain.

    PubMed

    Williams-Whitt, Kelly; Kristman, Vicki; Shaw, William S; Soklaridis, Sophie; Reguly, Paula

    2016-09-01

    Purpose: To explore supervisors' perspectives and decision-making processes in the accommodation of back injured workers. Methods: Twenty-three semi-structured, in-depth interviews were conducted with supervisors from eleven Canadian organizations about their role in providing job accommodations. Supervisors were identified through an on-line survey and interviews were recorded, transcribed and entered into NVivo software. The initial analyses identified common units of meaning, which were used to develop a coding guide. Interviews were coded, and a model of supervisor decision-making was developed based on the themes, categories and connecting ideas identified in the data. Results: The decision-making model includes a process element that is described as iterative "trial and error" decision-making. Medical restrictions are compared to job demands, employee abilities and available alternatives. A feasible modification is identified through brainstorming and then implemented by the supervisor. Resources used for brainstorming include information, supervisor experience and autonomy, and organizational supports. The model also incorporates the experience of accommodation as a job demand that causes strain for the supervisor. Accommodation demands affect the supervisor's attitude, brainstorming and monitoring effort, and communication with returning employees. Resources and demands have a combined effect on accommodation decision complexity, which in turn affects the quality of the accommodation option selected. If the employee is unable to complete the tasks or is reinjured during the accommodation, the decision cycle repeats. More frequent iteration through the trial and error process reduces the likelihood of return to work success. Conclusion: A series of propositions is developed to illustrate the relationships among categories in the model. The model and propositions show: (a) the iterative, problem solving nature of the RTW process; (b) decision resources necessary for accommodation planning, and (c) the impact accommodation demands may have on supervisors and RTW quality.

  7. Thermo-mechanical analysis of ITER first mirrors and its use for the ITER equatorial visible∕infrared wide angle viewing system optical design.

    PubMed

    Joanny, M; Salasca, S; Dapena, M; Cantone, B; Travère, J M; Thellier, C; Fermé, J J; Marot, L; Buravand, O; Perrollaz, G; Zeile, C

    2012-10-01

    ITER first mirrors (FMs), as the first components of most ITER optical diagnostics, will be exposed to high plasma radiation flux and neutron load. To reduce the FMs' heating and the optical surface deformation induced during ITER operation, the use of suitable materials and a cooling system is foreseen. The calculations carried out on different materials and FM designs and geometries (100 mm and 200 mm) show that the use of CuCrZr and TZM, and of a complex integrated cooling system, can efficiently limit the FMs' heating and reduce their optical surface deformation under plasma radiation flux and neutron load. These investigations were used to evaluate, for the ITER equatorial port visible∕infrared wide angle viewing system, the impact of changes in the FMs' properties during operation on the instrument's main optical performance. The results obtained are presented and discussed.

  8. Preliminary consideration of CFETR ITER-like case diagnostic system.

    PubMed

    Li, G S; Yang, Y; Wang, Y M; Ming, T F; Han, X; Liu, S C; Wang, E H; Liu, Y K; Yang, W J; Li, G Q; Hu, Q S; Gao, X

    2016-11-01

    The Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case will be presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Versions B and C are both mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  9. Preliminary consideration of CFETR ITER-like case diagnostic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G. S.; Liu, Y. K.; Gao, X.

    2016-11-15

    The Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case will be presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Versions B and C are both mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c

    This study is mainly focused on iterative solutions with simple diagonal preconditioning to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to these problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods with other classic and popular iterative methods. The experimental results also indicate that application-specific preconditioners may be mandatory for accelerating convergence.

  11. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.

  12. Application of Conjugate Gradient methods to tidal simulation

    USGS Publications Warehouse

    Barragy, E.; Carey, G.F.; Walters, R.A.

    1993-01-01

    A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several Incomplete LU type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver. © 1993.

  13. FPGA implementation of low complexity LDPC iterative decoder

    NASA Astrophysics Data System (ADS)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained much importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture possesses high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3SD3400A device from the Spartan-3A DSP family.
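
    The message-passing idea can be illustrated in software with a plain min-sum sketch (not the article's reduced-complexity hardware architecture): check-node messages use only a sign product and the minimum incoming magnitude.

```python
import numpy as np

def min_sum_decode(H, llr_channel, max_iter=18):
    """Minimal min-sum LDPC decoding sketch for a parity-check matrix H
    (numpy array of 0/1) and channel log-likelihood ratios llr_channel.
    Messages are stored in an (m, n) array aligned with H's nonzeros."""
    m, n = H.shape
    mask = H.astype(bool)
    msg_cv = np.zeros((m, n))                    # check -> variable messages
    for _ in range(max_iter):
        # variable -> check: total LLR minus the incoming message on that edge
        total = llr_channel + msg_cv.sum(axis=0)
        msg_vc = np.where(mask, total[None, :] - msg_cv, 0.0)
        # check -> variable: sign product times minimum magnitude (min-sum)
        for i in range(m):
            idx = np.flatnonzero(mask[i])
            v = msg_vc[i, idx]
            signs = np.sign(v) + (v == 0)        # treat 0 as +1
            mags = np.abs(v)
            for k, j in enumerate(idx):
                others = np.delete(np.arange(len(idx)), k)
                msg_cv[i, j] = np.prod(signs[others]) * mags[others].min()
        hard = (llr_channel + msg_cv.sum(axis=0)) < 0
        if not np.any(H @ hard % 2):             # all parity checks satisfied
            break
    return hard.astype(int)
```

    For BPSK over an AWGN channel the input LLRs would be 2y/σ²; a hardware decoder replaces the inner Python loops with parallel check-node and variable-node units.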

  14. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors that suppress the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed in each iterative process can help to improve material removal accuracy. The removal function correction principle can effectively compensate the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases for long machining times, so removing only a small amount of material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which also generate machining errors and suppress the improvement of material removal accuracy. On this account, a measurement-free iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur flat is performed, which shows that, in a similar figuring time, three measurement-free iterative processes improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.

  15. Chimera states in networks of logistic maps with hierarchical connectivities

    NASA Astrophysics Data System (ADS)

    zur Bonsen, Alexander; Omelchenko, Iryna; Zakharova, Anna; Schöll, Eckehard

    2018-04-01

    Chimera states are complex spatiotemporal patterns consisting of coexisting domains of coherence and incoherence. We study networks of nonlocally coupled logistic maps and analyze systematically how the dilution of the network links influences the appearance of chimera patterns. The network connectivities are constructed using an iterative Cantor algorithm to generate fractal (hierarchical) connectivities. Increasing the hierarchical level of iteration, we compare the resulting spatiotemporal patterns. We demonstrate that a high clustering coefficient and symmetry of the base pattern promotes chimera states, and asymmetric connectivities result in complex nested chimera patterns.
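
    A rough sketch of the construction described above is given below, with assumed map and coupling parameters (a, sigma) and an illustrative way of turning the Cantor pattern into a set of coupled neighbour distances; it is not the authors' exact network definition.

```python
import numpy as np

def cantor_links(base=(1, 0, 1), levels=4):
    """Hierarchical (Cantor-like) 0/1 link pattern: at each level every 1 in
    the current pattern is replaced by the base pattern and every 0 by zeros."""
    pattern = np.array(base, dtype=int)
    for _ in range(levels - 1):
        pattern = np.concatenate([p * np.array(base, dtype=int) for p in pattern])
    return pattern

def iterate_network(x, pattern, a=3.8, sigma=0.3, steps=2000):
    """Iterate a ring of logistic maps x -> a*x*(1-x); each node is coupled to
    the neighbours at the distances where the hierarchical pattern has a 1."""
    offsets = np.flatnonzero(pattern) + 1        # coupled neighbour distances
    links = 2 * len(offsets)                     # symmetric coupling
    f = lambda z: a * z * (1.0 - z)
    for _ in range(steps):
        fx = f(x)
        acc = np.zeros_like(x)
        for d in offsets:
            acc += np.roll(fx, d) + np.roll(fx, -d)
        x = fx + sigma / links * (acc - links * fx)
    return x

pattern = cantor_links()                         # 3**4 = 81 entries, 2**4 ones
rng = np.random.default_rng(1)
x = iterate_network(rng.random(2 * len(pattern) + 1), pattern)
```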

  16. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    NASA Astrophysics Data System (ADS)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.

  17. Computing eigenfunctions and eigenvalues of boundary-value problems with the orthogonal spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter

    2018-03-01

    The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's and self-consistency) are (i) it allows the flexibility to choose large varieties of initial guesses without diverging, (ii) it is easy to implement especially at higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT -symmetric models.
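
    The four steps can be mimicked on a plain Hermitian matrix with a shifted fixed-point map, a renormalization after every application, and Gram-Schmidt deflation against already-found states; the toy below is in the spirit of the OSR recipe, not the OSR algorithm for nonlinear problems itself.

```python
import numpy as np

def osr_like_states(H, n_states=3, shift=None, tol=1e-10, max_iter=5000):
    """Toy illustration for a linear Hermitian matrix H: (i) recast H u = E u
    as the fixed-point map u <- (shift*I - H) u, (ii) renormalize after every
    application, (iii) Gram-Schmidt orthogonalize against already-converged
    states so the iteration does not fall back to them, (iv) iterate to a
    fixed point.  Returns eigenvalues and vectors, lowest first."""
    n = H.shape[0]
    if shift is None:
        shift = np.linalg.norm(H, 1) + 1.0   # ensure shift*I - H is pos. definite
    rng = np.random.default_rng(0)
    vecs, vals = [], []
    for _ in range(n_states):
        u = rng.standard_normal(n)
        for _ in range(max_iter):
            for v in vecs:                   # Gram-Schmidt against found states
                u -= (v @ u) * v
            u_new = shift * u - H @ u        # fixed-point map
            u_new /= np.linalg.norm(u_new)   # renormalization step
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        vals.append(u @ H @ u)               # Rayleigh-quotient eigenvalue
        vecs.append(u)
    return np.array(vals), np.array(vecs)

# quick check on a random symmetric matrix
A = np.random.default_rng(2).standard_normal((6, 6))
H = (A + A.T) / 2
print(osr_like_states(H)[0], np.sort(np.linalg.eigvalsh(H))[:3])
```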

  18. Eliciting design patterns for e-learning systems

    NASA Astrophysics Data System (ADS)

    Retalis, Symeon; Georgiakakis, Petros; Dimitriadis, Yannis

    2006-06-01

    Design pattern creation, especially in the e-learning domain, is a highly complex process that has not been sufficiently studied and formalized. In this paper, we propose a systematic pattern development cycle, whose most important aspects focus on reverse engineering of existing systems in order to elicit features that are cross-validated through the use of appropriate, authentic scenarios. However, an iterative pattern process is proposed that takes advantage of multiple data sources, thus emphasizing a holistic view of the teaching learning processes. The proposed schema of pattern mining has been extensively validated for Asynchronous Network Supported Collaborative Learning (ANSCL) systems, as well as for other types of tools in a variety of scenarios, with promising results.

  19. Conduction at the onset of chaos

    NASA Astrophysics Data System (ADS)

    Baldovin, Fulvio

    2017-02-01

    After a general discussion of the thermodynamics of conductive processes, we introduce specific observables enabling the connection of the diffusive transport properties with the microscopic dynamics. We solve the case of Brownian particles, both analytically and numerically, and then address whether aspects of the classic Onsager picture generalize to the non-local, non-reversible dynamics described by logistic map iterates. While in the chaotic case numerical evidence of a monotonic relaxation is found, at the onset of chaos complex relaxation patterns emerge.

  20. A Principled Approach to the Specification of System Architectures for Space Missions

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient to cope with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering model based design environment.

  1. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

    Iterative processing solutions, including multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. The remaining error sources are the measurement uncertainty and the repeatability of the material-removal process, including clamping errors. Owing to the absence of processing forces, process fluids and wear, pulsed-laser ablation has demonstrated high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. In this way, efficient iterative processing is enabled that is precise, applicable to all tool materials including diamond, and free of clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within a 2 μm diameter tolerance.

  2. Abstraction of information in repository performance assessments. Examples from the SKI project Site-94

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dverstorp, B.; Andersson, J.

    1995-12-01

    Performance assessment of a nuclear waste repository implies an analysis of a complex system with many interacting processes. Even if some of these processes are known in great detail, problems arise when all the information is combined, and means of abstracting information from complex detailed models into models that couple different processes are needed. Clearly, one of the major objectives of performance assessment, to calculate doses or other performance indicators, implies an enormous abstraction of information compared with all the information used as input. A further problem is that the knowledge of different parts or processes varies strongly, and adjustments and interpretations are needed when combining models from different disciplines. In addition, people as well as computers, even today, have a limited capacity to process information, and choices have to be made. However, because abstraction of information is clearly unavoidable in performance assessment, the validity of the choices made always needs to be scrutinized, and the judgements made need to be updated in an iterative process.

  3. The fractal geometry of Hartree-Fock

    NASA Astrophysics Data System (ADS)

    Theel, Friethjof; Karamatskou, Antonia; Santra, Robin

    2017-12-01

    The Hartree-Fock method is an important approximation for the ground-state electronic wave function of atoms and molecules, and its use is widespread in computational chemistry and physics. The Hartree-Fock method is an iterative procedure in which the electronic wave functions of the occupied orbitals are determined; the set of functions found in one step builds the basis for the next iteration step. In this work, we interpret the Hartree-Fock method as a dynamical system, i.e., an iteration whose steps represent the time development of the system, as encountered in the theory of fractals. The focus is put on the convergence behavior of the dynamical system as a function of a suitable control parameter; in our case, a complex parameter λ controls the strength of the electron-electron interaction. The convergence behavior as a function of λ is investigated for helium, neon, and argon. We observe fractal structures in the complex λ-plane, which resemble the well-known Mandelbrot set, determine their fractal dimension, and find that the fragmentation increases with increasing nuclear charge.
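
    The λ-plane picture can be illustrated with a toy iteration. The sketch below is not a Hartree-Fock calculation; it simply maps, over a grid of complex couplings λ, how quickly the Mandelbrot-type iteration z → z² + λ escapes, which is the kind of convergence map the abstract describes. The grid ranges and the iteration cap are assumptions.

        import numpy as np

        re = np.linspace(-2.0, 0.6, 400)
        im = np.linspace(-1.2, 1.2, 400)
        lam = re[None, :] + 1j * im[:, None]          # grid of complex control parameters

        z = np.zeros_like(lam)
        max_iter = 100
        steps = np.full(lam.shape, max_iter)          # max_iter means "still bounded"
        for n in range(max_iter):
            z = z * z + lam
            diverged = np.abs(z) > 2.0
            steps[diverged & (steps == max_iter)] = n # record the first escape time
            z[diverged] = 2.0                         # clamp to avoid overflow; already recorded

        print("fraction of lambda values with bounded iterates:",
              np.mean(steps == max_iter))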

  4. Parallel iterative solution for h and p approximations of the shallow water equations

    USGS Publications Warehouse

    Barragy, E.J.; Walters, R.A.

    1998-01-01

    A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation, and a complex momentum equation for the horizontal velocity. Both equations are nonlinear and the resulting system is solved using Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field scale problem where up to 512 processors are used. © 1998 Elsevier Science Ltd. All rights reserved.
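
    A minimal sketch of the outer/inner solver structure described above, applied to a toy 1-D nonlinear problem rather than the shallow water equations: Picard linearization in an outer loop, with each linearized subproblem solved by BiCGSTAB preconditioned by an incomplete LU factorization (SciPy's spilu standing in for the ILUT preconditioner). The problem size, forcing and tolerances are assumed.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        h = 1.0 / (n + 1)
        K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2   # 1-D Laplacian
        f = np.ones(n)

        u = np.zeros(n)
        for picard_it in range(30):
            A = (K + sp.diags(u**2)).tocsc()             # freeze the nonlinearity at the current iterate
            ilu = spla.spilu(A)                          # incomplete LU preconditioner
            M = spla.LinearOperator(A.shape, ilu.solve)
            u_new, info = spla.bicgstab(A, f, M=M)       # inner linear solve
            if np.linalg.norm(u_new - u) < 1e-10 * np.linalg.norm(u_new):
                u = u_new
                break
            u = u_new
        print("Picard iterations used:", picard_it + 1)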

  5. Pattern Formation and Complexity Emergence

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    2001-03-01

    Success of nonlinear modelling of pattern formation and self-organization encourages speculations on informational and number theoretical foundations of complexity emergence. The Pythagorean "unreasonable effectiveness of integers" in natural processes is perhaps extrapolatable even to universal emergence "out-of-nothing" (Leibniz, Wheeler). Because rational numbers (R = M/N) are everywhere dense on the real axis, any digital string (hence any "book" from the "Library of Babel" of J. L. Borges) is "recorded" infinitely many times in arbitrarily many rationals. Furthermore, within any arbitrarily small interval there are infinitely many Rs for which (either or both) integers (Ms and Ns) "carry" any given string of any given length. Because any iterational process (such as the generation of fractal features of the Mandelbrot set) can be approximated arbitrarily closely with rational numbers, the infinite pattern of integers expresses itself in the generation of the complexity of the world, as well as in the emergence of the world itself. This "tunnelling" from the Platonic World ("Platonia" of J. Barbour) to a real (physical) world is a modern recast of Leibniz's motto ("for deriving all from nothing there suffices a single principle").

  6. High-Dimensional Bayesian Geostatistics

    PubMed Central

    Banerjee, Sudipto

    2017-01-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920

  7. High-Dimensional Bayesian Geostatistics.

    PubMed

    Banerjee, Sudipto

    2017-06-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.

  8. Determination of an effective scoring function for RNA-RNA interactions with a physics-based double-iterative method.

    PubMed

    Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You

    2018-05-18

    RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determine their complex structures and understand the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvented the reference state problem in knowledge-based scoring functions by updating the potentials through iteration and also overcame the decoy-dependent limitation in previous iterative methods by constructing the decoys iteratively. The derived scoring function, which is referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. It was shown that for bound docking, our scoring function DITScoreRR achieved excellent success rates of 90% and 98.3% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved good success rates of 53.3% and 71.7% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for our ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.

  9. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    PubMed

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
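
    As a small aside on the reliability analysis mentioned above, the sketch below computes Cohen's kappa for two raters from first principles; the coder labels are made up for illustration and are not the study's data.

        from collections import Counter

        def cohens_kappa(labels_a, labels_b):
            # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
            assert len(labels_a) == len(labels_b)
            n = len(labels_a)
            observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
            freq_a, freq_b = Counter(labels_a), Counter(labels_b)
            expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / n**2
            return (observed - expected) / (1.0 - expected)

        coder1 = ["task", "patient", "task", "task", "patient", "task"]
        coder2 = ["task", "patient", "patient", "task", "patient", "task"]
        print(round(cohens_kappa(coder1, coder2), 3))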

  10. ITER Cryoplant Infrastructures

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Monneret, E.; Voigt, T.; Vincent, G.; Forgeas, A.; Simon, M.

    2017-02-01

    The ITER Tokamak requires an average of 75 kW of refrigeration power at 4.5 K and 600 kW of refrigeration power at 80 K to maintain the nominal operating conditions of the ITER thermal shields, superconducting magnets and cryopumps. This is produced by the ITER Cryoplant, a complex cluster of refrigeration systems including in particular three identical Liquid Helium Plants and two identical Liquid Nitrogen Plants. Beyond the equipment that is directly part of the Cryoplant, colossal infrastructures are required. These infrastructures account for a large part of the Cryoplant's layout, budget and engineering effort. It is the ITER Organization's responsibility to ensure that all infrastructures are adequately sized and designed to interface with the Cryoplant. This paper presents the overall architecture of the Cryoplant and provides orders of magnitude for the Cryoplant building and utilities: electricity, cooling water, heating, ventilation and air conditioning (HVAC).

  11. Automated quantitative muscle biopsy analysis system

    NASA Technical Reports Server (NTRS)

    Castleman, Kenneth R. (Inventor)

    1980-01-01

    An automated system to aid the diagnosis of neuromuscular diseases by producing fiber size histograms utilizing histochemically stained muscle biopsy tissue. Televised images of the microscopic fibers are processed electronically by a multi-microprocessor computer, which isolates, measures, and classifies the fibers and displays the fiber size distribution. The architecture of the multi-microprocessor computer, which is iterated to any required degree of complexity, features a series of individual microprocessors P.sub.n each receiving data from a shared memory M.sub.n-1 and outputting processed data to a separate shared memory M.sub.n+1 under control of a program stored in dedicated memory M.sub.n.

  12. LETTER TO THE EDITOR: Iteratively-coupled propagating exterior complex scaling method for electron hydrogen collisions

    NASA Astrophysics Data System (ADS)

    Bartlett, Philip L.; Stelbovics, Andris T.; Bray, Igor

    2004-02-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schrödinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources.

  13. How children perceive fractals: Hierarchical self-similarity and cognitive development

    PubMed Central

    Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh

    2014-01-01

    The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: Recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted for by second graders’ impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: While the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884

  14. Iteration and Prototyping in Creating Technical Specifications.

    ERIC Educational Resources Information Center

    Flynt, John P.

    1994-01-01

    Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)

  15. Improving Patient Experience and Primary Care Quality for Patients With Complex Chronic Disease Using the Electronic Patient-Reported Outcomes Tool: Adopting Qualitative Methods Into a User-Centered Design Approach.

    PubMed

    Steele Gray, Carolyn; Khan, Anum Irfan; Kuluski, Kerry; McKillop, Ian; Sharpe, Sarah; Bierman, Arlene S; Lyons, Renee F; Cott, Cheryl

    2016-02-18

    Many mHealth technologies do not meet the needs of patients with complex chronic disease and disabilities (CCDDs) who are among the highest users of health systems worldwide. Furthermore, many of the development methodologies used in the creation of mHealth and eHealth technologies lack the ability to embrace users with CCDD in the specification process. This paper describes how we adopted and modified development techniques to create the electronic Patient-Reported Outcomes (ePRO) tool, a patient-centered mHealth solution to help improve primary health care for patients experiencing CCDD. This paper describes the design and development approach, specifically the process of incorporating qualitative research methods into user-centered design approaches to create the ePRO tool. Key lessons learned are offered as a guide for other eHealth and mHealth research and technology developers working with complex patient populations and their primary health care providers. Guided by user-centered design principles, interpretive descriptive qualitative research methods were adopted to capture user experiences through interviews and working groups. Consistent with interpretive descriptive methods, an iterative analysis technique was used to generate findings, which were then organized in relation to the tool design and function to help systematically inform modifications to the tool. User feedback captured and analyzed through this method was used to challenge the design and inform the iterative development of the tool. Interviews with primary health care providers (n=7) and content experts (n=6), and four focus groups with patients and carers (n=14), along with a PICK analysis (Possible, Implementable, (to be) Challenged, (to be) Killed), guided development of the first prototype. The initial prototype was presented in three design working groups with patients/carers (n=5), providers (n=6), and experts (n=5). Working group findings were broken down into categories of what works and what does not work to inform modifications to the prototype. This latter phase led to a major shift in the purpose and design of the prototype, validating the importance of using iterative codesign processes. Interpretive descriptive methods allow for an understanding of user experiences of patients with CCDD, their carers, and primary care providers. Qualitative methods help to capture and interpret user needs, and identify contextual barriers and enablers to tool adoption, informing a redesign to better suit the needs of this diverse user group. This study illustrates the value of adopting interpretive descriptive methods into user-centered mHealth tool design and can also serve to inform the design of other eHealth technologies. Our approach is particularly useful in requirements determination when developing for a complex user group and their health care providers.

  16. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations with Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are computed as a rule by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M³ operations. In this work we consider alternatives to the S-matrix technique that are based on purely iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M², by applying iterative techniques is discussed. Numerical results are presented to discuss the validity and potential of the proposed approaches.

  17. Iterative color-multiplexed, electro-optical processor.

    PubMed

    Psaltis, D; Casasent, D; Carlotto, M

    1979-11-01

    A noncoherent optical vector-matrix multiplier using a linear LED source array and a linear P-I-N photodiode detector array has been combined with a 1-D adder in a feedback loop. The resultant iterative optical processor and its use in solving simultaneous linear equations are described. Operation on complex data is provided by a novel color-multiplexing system.

  18. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CSMRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
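
    A schematic sketch of a three-step loop in the spirit of the one described above, on a toy image: a stand-in soft-thresholding denoiser, a refinement step driven by the k-space data residual, and a Tikhonov-style weighted data-consistency step. The denoiser, the regularization weight and the sampling pattern are illustrative assumptions, not the published IFR-CS components.

        import numpy as np

        rng = np.random.default_rng(1)
        x_true = np.zeros((64, 64))
        x_true[16:48, 16:48] = 1.0                          # toy "anatomy"
        mask = rng.random((64, 64)) < 0.35                  # undersampling pattern (assumed)
        y = mask * np.fft.fft2(x_true)                      # measured k-space samples

        def denoise(img, t=0.05):                           # stand-in sparsity-promoting denoiser
            return np.sign(img) * np.maximum(np.abs(img) - t, 0.0)

        x = np.zeros_like(x_true)
        lam = 0.1                                           # Tikhonov-style weight (assumed)
        for _ in range(50):
            x = denoise(x)                                  # 1) sparsity-promoting denoising
            residual = mask * (y - np.fft.fft2(x))          # 2) feature refinement from the data residual
            x = x + np.real(np.fft.ifft2(residual))
            k = np.fft.fft2(x)                              # 3) regularized data consistency
            k[mask] = (k[mask] + lam * y[mask]) / (1.0 + lam)
            x = np.real(np.fft.ifft2(k))

        print("relative reconstruction error:",
              np.linalg.norm(x - x_true) / np.linalg.norm(x_true))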

  19. An adaptive moving finite volume scheme for modeling flood inundation over dry and complex topography

    NASA Astrophysics Data System (ADS)

    Zhou, Feng; Chen, Guoxian; Huang, Yuefei; Yang, Jerry Zhijian; Feng, Hui

    2013-04-01

    A new geometrical conservative interpolation on unstructured meshes is developed for preserving still water equilibrium and positivity of water depth at each iteration of mesh movement, leading to an adaptive moving finite volume (AMFV) scheme for modeling flood inundation over dry and complex topography. Unlike traditional schemes involving position-fixed meshes, the iteration process of the AMFV scheme adaptively moves fewer meshes in response to flow variables calculated in prior solutions and then simulates their posterior values on the new meshes. At each time step of the simulation, the AMFV scheme consists of three parts: an adaptive mesh movement to shift the vertex positions, a geometrical conservative interpolation to remap the flow variables by summing the total mass over old meshes to avoid the generation of spurious waves, and a partial differential equation (PDE) discretization to update the flow variables for a new time step. Five different test cases are presented to verify the computational advantages of the proposed scheme over nonadaptive methods. The results reveal three attractive features: (i) the AMFV scheme could preserve still water equilibrium and positivity of water depth within both mesh movement and PDE discretization steps; (ii) it improved the shock-capturing capability for handling topographic source terms and wet-dry interfaces by moving triangular meshes to approximate the spatial distribution of time-variant flood processes; (iii) it was able to solve the shallow water equations with relatively higher accuracy and spatial resolution at a lower computational cost.

  20. Analysis of Radiation Transport Due to Activated Coolant in the ITER Neutral Beam Injection Cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royston, Katherine; Wilson, Stephen C.; Risner, Joel M.

    Detailed spatial distributions of the biological dose rate due to a variety of sources are required for the design of the ITER tokamak facility to ensure that all radiological zoning limits are met. During operation, water in the Integrated loop of Blanket, Edge-localized mode and vertical stabilization coils, and Divertor (IBED) cooling system will be activated by plasma neutrons and will flow out of the bioshield through a complex system of pipes and heat exchangers. This paper discusses the methods used to characterize the biological dose rate outside the tokamak complex due to ¹⁶N gamma radiation emitted by the activated coolant in the Neutral Beam Injection (NBI) cell of the tokamak building. Activated coolant will enter the NBI cell through the IBED Primary Heat Transfer System (PHTS), and the NBI PHTS will also become activated due to radiation streaming through the NBI system. To properly characterize these gamma sources, the production of ¹⁶N, the decay of ¹⁶N, and the flow of activated water through the coolant loops were modeled. The impact of conservative approximations on the solution was also examined. Once the source due to activated coolant was calculated, the resulting biological dose rate outside the north wall of the NBI cell was determined through the use of sophisticated variance reduction techniques. The AutomateD VAriaNce reducTion Generator (ADVANTG) software implements methods developed specifically to provide highly effective variance reduction for complex radiation transport simulations such as those encountered with ITER. Using ADVANTG with the Monte Carlo N-particle (MCNP) radiation transport code, radiation responses were calculated on a fine spatial mesh with a high degree of statistical accuracy. Advanced visualization tools were also developed and used to determine pipe cell connectivity, to facilitate model checking, and to post-process the transport simulation results.

  1. Analysis of Radiation Transport Due to Activated Coolant in the ITER Neutral Beam Injection Cell

    DOE PAGES

    Royston, Katherine; Wilson, Stephen C.; Risner, Joel M.; ...

    2017-07-26

    Detailed spatial distributions of the biological dose rate due to a variety of sources are required for the design of the ITER tokamak facility to ensure that all radiological zoning limits are met. During operation, water in the Integrated loop of Blanket, Edge-localized mode and vertical stabilization coils, and Divertor (IBED) cooling system will be activated by plasma neutrons and will flow out of the bioshield through a complex system of pipes and heat exchangers. This paper discusses the methods used to characterize the biological dose rate outside the tokamak complex due to ¹⁶N gamma radiation emitted by the activated coolant in the Neutral Beam Injection (NBI) cell of the tokamak building. Activated coolant will enter the NBI cell through the IBED Primary Heat Transfer System (PHTS), and the NBI PHTS will also become activated due to radiation streaming through the NBI system. To properly characterize these gamma sources, the production of ¹⁶N, the decay of ¹⁶N, and the flow of activated water through the coolant loops were modeled. The impact of conservative approximations on the solution was also examined. Once the source due to activated coolant was calculated, the resulting biological dose rate outside the north wall of the NBI cell was determined through the use of sophisticated variance reduction techniques. The AutomateD VAriaNce reducTion Generator (ADVANTG) software implements methods developed specifically to provide highly effective variance reduction for complex radiation transport simulations such as those encountered with ITER. Using ADVANTG with the Monte Carlo N-particle (MCNP) radiation transport code, radiation responses were calculated on a fine spatial mesh with a high degree of statistical accuracy. Advanced visualization tools were also developed and used to determine pipe cell connectivity, to facilitate model checking, and to post-process the transport simulation results.

  2. Measurement of the complex transmittance of large optical elements with Ptychographical Iterative Engine.

    PubMed

    Wang, Hai-Yan; Liu, Cheng; Veetil, Suhas P; Pan, Xing-Chen; Zhu, Jian-Qiang

    2014-01-27

    Wavefront control is a significant parameter in inertial confinement fusion (ICF). The complex transmittance of large optical elements, which are often used in ICF, is obtained by computing the phase difference of the illuminating and transmitting fields using the Ptychographical Iterative Engine (PIE). This can accurately and effectively measure the transmittance of large optical elements with irregular surface profiles, which are otherwise not measurable using commonly used interferometric techniques due to the lack of a standard reference plate. Experiments are done with a continuous phase plate (CPP) to illustrate the feasibility of this method.

  3. A Systematic Review of Conceptual Frameworks of Medical Complexity and New Model Development.

    PubMed

    Zullig, Leah L; Whitson, Heather E; Hastings, Susan N; Beadles, Chris; Kravchenko, Julia; Akushevich, Igor; Maciejewski, Matthew L

    2016-03-01

    Patient complexity is often operationalized by counting multiple chronic conditions (MCC) without considering contextual factors that can affect patient risk for adverse outcomes. Our objective was to develop a conceptual model of complexity addressing gaps identified in a review of published conceptual models. We searched for English-language MEDLINE papers published between 1 January 2004 and 16 January 2014. Two reviewers independently evaluated abstracts and all authors contributed to the development of the conceptual model in an iterative process. From 1606 identified abstracts, six conceptual models were selected. One additional model was identified through reference review. Each model had strengths, but several constructs were not fully considered: 1) contextual factors; 2) dynamics of complexity; 3) patients' preferences; 4) acute health shocks; and 5) resilience. Our Cycle of Complexity model illustrates relationships between acute shocks and medical events, healthcare access and utilization, workload and capacity, and patient preferences in the context of interpersonal, organizational, and community factors. This model may inform studies on the etiology of and changes in complexity, the relationship between complexity and patient outcomes, and intervention development to improve modifiable elements of complex patients.

  4. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    NASA Astrophysics Data System (ADS)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    Radio Frequency Identification (RFID) systems offer multiple benefits that can improve the operational efficiency of an organization: the ability to record data systematically and quickly, to reduce human and system errors, and to update the database automatically and efficiently. Often, more than one reader is needed to install an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure that the RFID system works properly. This planning process is also considered an optimization and power-adjustment process, because the coordinates of each RFID reader have to be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, simulates quickly, and is easy to use and very practical. However, the PSO parameters must be adjusted correctly for robust and efficient use; failure to do so may degrade performance and yield poorer optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the iteration number. The study also recommends the most suitable settings for both parameters, namely 200 iterations and a swarm size of 800. These results will enable PSO to operate more efficiently when optimizing RFID network planning.
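
    A minimal PSO sketch with the two parameters studied above, swarm size and iteration number, exposed as arguments; the objective function is a stand-in quadratic rather than an RFID coverage model, and the inertia and acceleration coefficients are assumed values.

        import numpy as np

        def objective(p):                       # stand-in for the RFID coverage/power cost
            return np.sum((p - 3.0) ** 2, axis=-1)

        def pso(n_particles=800, n_iters=200, dim=2, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-10, 10, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), objective(x)
            gbest = pbest[np.argmin(pbest_val)]
            for _ in range(n_iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                val = objective(x)
                improved = val < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], val[improved]
                gbest = pbest[np.argmin(pbest_val)]
            return gbest, pbest_val.min()

        print(pso())    # with the recommended 800 particles and 200 iterations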

  5. An improved method for polarimetric image restoration in interferometry

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually as if they were independent scalar images. However, here we demonstrate for the case of the linear polarization P, this approach fails to properly account for the complex vector nature resulting in a process which is dependent on the axes under which the deconvolution is performed. We present here an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and lower computational cost, due to fewer iterations, than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys and in particular that the complex version of an SDI CLEAN should be used.
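
    The idea of cleaning P = Q + iU as a single complex quantity can be sketched with a toy Högbom-style loop on a synthetic image; the beam, source fluxes and loop gain are assumptions, and this is an illustration of the principle rather than the Generalized Complex CLEAN implementation.

        import numpy as np

        n = 64
        sources = np.zeros((n, n), dtype=complex)            # true complex (Q + iU) point sources
        sources[20, 30] = 1.0 + 0.5j
        sources[40, 10] = -0.3 + 0.8j

        yy, xx = np.mgrid[-n:n, -n:n]                        # simple Gaussian "dirty beam", peak-normalised
        beam = np.exp(-(xx**2 + yy**2) / (2.0 * 3.0**2))

        dirty = np.zeros((n, n), dtype=complex)              # dirty image = sources convolved with the beam
        for (i, j), flux in np.ndenumerate(sources):
            if flux != 0:
                dirty += flux * beam[n - i:2 * n - i, n - j:2 * n - j]

        components = np.zeros((n, n), dtype=complex)
        gain = 0.1
        for _ in range(500):
            i, j = np.unravel_index(np.argmax(np.abs(dirty)), dirty.shape)
            peak = dirty[i, j]
            if np.abs(peak) < 1e-3:                          # stop at the residual threshold
                break
            components[i, j] += gain * peak                  # one complex clean component per pass
            dirty -= gain * peak * beam[n - i:2 * n - i, n - j:2 * n - j]

        print("recovered:", components[20, 30], components[40, 10])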

  6. Cx-02 Program, workshop on modeling complex systems

    USGS Publications Warehouse

    Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.

    2003-01-01

    This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, geology and geophysics work has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking of complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years in the past. Not only do geologists think in the abstract by virtue of the vast time spans involved; most of the evidence is also out of sight. A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.

  7. Cupola Furnace Computer Process Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an ''Expert System'' to permit optimization in real time. The program has been combined with ''neural network'' programs to enable very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the ''Cupola Handbook'', Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  8. Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings

    NASA Astrophysics Data System (ADS)

    Hussein Maibed, Zena

    2018-05-01

    In this paper, we introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive and general extended mappings. The existence of common fixed points is also studied for these processes in Hilbert spaces.

  9. A COMSOL-GEMS interface for modeling coupled reactive-transport geochemical processes

    NASA Astrophysics Data System (ADS)

    Azad, Vahid Jafari; Li, Chang; Verba, Circe; Ideker, Jason H.; Isgor, O. Burkan

    2016-07-01

    An interface was developed between the COMSOL Multiphysics™ finite element analysis software and the (geo)chemical modeling platform GEMS for the reactive-transport modeling of (geo)chemical processes in variably saturated porous media. The two standalone software packages are managed from the interface, which uses a non-iterative operator splitting technique to couple the transport (COMSOL) and reaction (GEMS) processes. The interface allows modeling media with complex chemistry (e.g. cement) using GEMS thermodynamic database formats. Benchmark comparisons show that the developed interface can be used to predict a variety of reactive-transport processes accurately. The full functionality of the interface was demonstrated by modeling transport processes, governed by the extended Nernst-Planck equation, in Class H Portland cement samples in high-pressure, high-temperature autoclaves simulating systems that are used to store captured carbon dioxide (CO2) in geological reservoirs.
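
    A conceptual sketch of non-iterative (sequential) operator splitting on a toy 1-D problem: within each time step, a transport update is followed by a chemistry update. The explicit diffusion step stands in for the COMSOL transport solve and the first-order decay for a GEMS equilibrium call; none of the actual interface code is shown.

        import numpy as np

        n, dt, D = 100, 0.01, 1.0
        c = np.zeros(n)
        c[0] = 1.0                                       # concentration with a fixed inlet

        def transport_step(c):                           # explicit diffusion (stands in for COMSOL)
            lap = np.zeros_like(c)
            lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
            out = c + D * dt * lap
            out[0] = 1.0                                 # inlet boundary condition
            return out

        def chemistry_step(c):                           # first-order decay (stands in for a GEMS call)
            return c * np.exp(-0.5 * dt)

        for step in range(500):                          # sequential, non-iterative splitting
            c = transport_step(c)
            c = chemistry_step(c)

        print("mass remaining:", c.sum())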

  10. Rapid and low-cost prototyping of medical devices using 3D printed molds for liquid injection molding.

    PubMed

    Chung, Philip; Heller, J Alex; Etemadi, Mozziyar; Ottoson, Paige E; Liu, Jonathan A; Rand, Larry; Roy, Shuvo

    2014-06-27

    Biologically inert elastomers such as silicone are favorable materials for medical device fabrication, but forming and curing these elastomers using traditional liquid injection molding processes can be an expensive process due to tooling and equipment costs. As a result, it has traditionally been impractical to use liquid injection molding for low-cost, rapid prototyping applications. We have devised a method for rapid and low-cost production of liquid elastomer injection molded devices that utilizes fused deposition modeling 3D printers for mold design and a modified desiccator as an injection system. Low costs and rapid turnaround time in this technique lower the barrier to iteratively designing and prototyping complex elastomer devices. Furthermore, CAD models developed in this process can be later adapted for metal mold tooling design, enabling an easy transition to a traditional injection molding process. We have used this technique to manufacture intravaginal probes involving complex geometries, as well as overmolding over metal parts, using tools commonly available within an academic research laboratory. However, this technique can be easily adapted to create liquid injection molded devices for many other applications.

  11. Rapid production of hollow SS316 profiles by extrusion based additive manufacturing

    NASA Astrophysics Data System (ADS)

    Rane, Kedarnath; Cataldo, Salvatore; Parenti, Paolo; Sbaglia, Luca; Mussi, Valerio; Annoni, Massimiliano; Giberti, Hermes; Strano, Matteo

    2018-05-01

    Complex-shaped stainless steel tubes are often required for special-purpose biomedical equipment. Nevertheless, traditional manufacturing technologies, such as extrusion, cannot compete in a market of customized complex components because of the associated expense of tooling and extrusion presses. To rapidly manufacture a small number of such components at low cost and high precision, a new extrusion-based additive manufacturing (EAM) process is proposed in this paper, and, as an example, short complex-shaped and sectioned stainless steel 316L tubes were prepared by EAM. Several sample parts were produced using this process; the dimensional stability, surface roughness and chemical composition of sintered samples were investigated to demonstrate process competence. The results indicate that feedstock with a 316L particle content of 92.5 wt.% can be prepared by sigma-blade mixing, and its rheological behavior is suitable for EAM. The green samples have sufficient strength to be handled for subsequent treatments. The sintered samples shrank considerably towards the designed dimensions and have a homogeneous microstructure that imparts mechanical strength. Achieving the dimensional accuracy and chemical composition required for biomedical equipment still needs further iterations; a kinematic correction and a modification of the debinding cycle were proposed.

  12. Systematic development of technical textiles

    NASA Astrophysics Data System (ADS)

    Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.

    2016-07-01

    Technical textiles are used in various fields of application, ranging from small-scale (e.g. medical applications) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time consuming, due to multiple interacting parameters. These parameters are related to the production process as well as to the textile structure and the material used. A huge number of iteration steps is necessary to adjust the process parameters and finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected. The aim of the method is to support a development team during technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.

  13. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    The proposed method of iterative optical image and data processing overcomes limitations imposed by the loss of optical power after repeated passes through many optical elements, especially beam splitters. It involves a selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time, compensating for losses in optical iteration loops; the timing is such that amplification is turned on to regenerate the desired image and then turned off so as not to regenerate other, undesired images or spurious light propagating through the loops from unwanted reflections.

  14. An information transfer based novel framework for fault root cause tracing of complex electromechanical systems in the processing industry

    NASA Astrophysics Data System (ADS)

    Wang, Rongxi; Gao, Xu; Gao, Jianmin; Gao, Zhiyong; Kang, Jiani

    2018-02-01

    As one of the most important approaches for analyzing the mechanism of fault pervasion, fault root cause tracing is a powerful and useful tool for detecting the fundamental causes of faults so as to prevent any further propagation and amplification. Focusing on the problems arising from the lack of systematic and comprehensive integration, a novel information-transfer-based, data-driven framework for fault root cause tracing of complex electromechanical systems in the processing industry was proposed, taking into consideration the experience and qualitative analysis of conventional fault root cause tracing methods. Firstly, an improved symbolic transfer entropy method was presented to construct a directed, weighted information model for a specific complex electromechanical system based on the information flow. Secondly, considering the feedback mechanisms in complex electromechanical systems, a method for determining the threshold values of weights was developed to explore the regularities of fault propagation. Lastly, an iterative method was introduced to identify the fault development process. The fault root cause was traced by analyzing the changes in information transfer between the nodes along the fault propagation pathway. An actual fault root cause tracing application of a complex electromechanical system is used to verify the effectiveness of the proposed framework. A unique fault root cause is obtained regardless of the choice of the initial variable. Thus, the proposed framework can be flexibly and effectively used in fault root cause tracing for complex electromechanical systems in the processing industry, and it forms the foundation for system vulnerability analysis and condition prediction, as well as other engineering applications.
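
    The directed, weighted edges mentioned above are built from estimates of directed information transfer between variable pairs. The sketch below uses a plain binned transfer-entropy estimator on synthetic data, not the paper's improved symbolic transfer entropy; the bin count and the test signals are assumptions.

        import numpy as np

        def transfer_entropy(x, y, bins=4):
            """Estimate T(Y -> X) = I(X_{t+1}; Y_t | X_t) from two 1-D series (coarse binning)."""
            xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
            yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
            triples = np.stack([xd[1:], xd[:-1], yd[:-1]], axis=1)
            p_xyz, _ = np.histogramdd(triples, bins=[bins] * 3)
            p_xyz /= p_xyz.sum()
            p_x1x = p_xyz.sum(axis=2)       # p(x_{t+1}, x_t)
            p_xy = p_xyz.sum(axis=0)        # p(x_t, y_t)
            p_x = p_xyz.sum(axis=(0, 2))    # p(x_t)
            te = 0.0
            for i, j, k in np.argwhere(p_xyz > 0):
                te += p_xyz[i, j, k] * np.log2(
                    p_xyz[i, j, k] * p_x[j] / (p_x1x[i, j] * p_xy[j, k]))
            return te

        rng = np.random.default_rng(0)
        y = rng.standard_normal(5000)
        x = np.zeros(5000)
        for t in range(1, 5000):            # x is driven by y with one step of delay
            x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
        print("T(Y->X):", transfer_entropy(x, y), "  T(X->Y):", transfer_entropy(y, x))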

  15. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.

  16. Application of Intervention Mapping to the Development of a Complex Physical Therapist Intervention.

    PubMed

    Jones, Taryn M; Dear, Blake F; Hush, Julia M; Titov, Nickolai; Dean, Catherine M

    2016-12-01

    Physical therapist interventions, such as those designed to change physical activity behavior, are often complex and multifaceted. In order to facilitate rigorous evaluation and implementation of these complex interventions into clinical practice, the development process must be comprehensive, systematic, and transparent, with a sound theoretical basis. Intervention Mapping is designed to guide an iterative and problem-focused approach to the development of complex interventions. The purpose of this case report is to demonstrate the application of an Intervention Mapping approach to the development of a complex physical therapist intervention, a remote self-management program aimed at increasing physical activity after acquired brain injury. Intervention Mapping consists of 6 steps to guide the development of complex interventions: (1) needs assessment; (2) identification of outcomes, performance objectives, and change objectives; (3) selection of theory-based intervention methods and practical applications; (4) organization of methods and applications into an intervention program; (5) creation of an implementation plan; and (6) generation of an evaluation plan. The rationale and detailed description of this process are presented using an example of the development of a novel and complex physical therapist intervention, myMoves-a program designed to help individuals with an acquired brain injury to change their physical activity behavior. The Intervention Mapping framework may be useful in the development of complex physical therapist interventions, ensuring the development is comprehensive, systematic, and thorough, with a sound theoretical basis. This process facilitates translation into clinical practice and allows for greater confidence and transparency when the program efficacy is investigated. © 2016 American Physical Therapy Association.

  17. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  18. Language Evolution by Iterated Learning with Bayesian Agents

    ERIC Educational Resources Information Center

    Griffiths, Thomas L.; Kalish, Michael L.

    2007-01-01

    Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute…
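
    A minimal iterated-learning sketch with Bayesian agents on a two-language toy model (the prior, production probabilities and chain length are assumptions): each agent infers a hypothesis from the previous agent's output by Bayes' rule, samples a hypothesis from its posterior, and produces data for the next agent; over many generations the fraction of agents using each language approaches the prior.

        import numpy as np

        rng = np.random.default_rng(0)
        prior = np.array([0.7, 0.3])          # prior over two candidate "languages"
        p_token = np.array([0.9, 0.2])        # P(token = 1 | language)

        def learn_and_transmit(data, n_out=10):
            lik = p_token ** data.sum() * (1 - p_token) ** (len(data) - data.sum())
            posterior = prior * lik / np.sum(prior * lik)
            h = rng.choice(2, p=posterior)                    # "sampling" learner
            return h, (rng.random(n_out) < p_token[h]).astype(int)

        data = (rng.random(10) < 0.5).astype(int)             # arbitrary initial data
        chain = []
        for _ in range(2000):
            h, data = learn_and_transmit(data)
            chain.append(h)

        print("fraction using language 0:", np.mean(np.array(chain) == 0))  # approaches the prior, 0.7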

  19. Ambient Assisted Living spaces validation by services and devices simulation.

    PubMed

    Fernández-Llatas, Carlos; Mocholí, Juan Bautista; Sala, Pilar; Naranjo, Juan Carlos; Pileggi, Salvatore F; Guillén, Sergio; Traver, Vicente

    2011-01-01

    The design of Ambient Assisted Living (AAL) products is a very demanding challenge. AAL product creation is a complex iterative process that must satisfy exhaustive prerequisites regarding accessibility and usability. In this process, the early detection of errors is crucial for creating cost-effective systems. Computer-assisted tools can provide vital help to usability designers in avoiding design errors. Specifically, computer simulation of products in AAL environments can be used in all design phases to support validation. In this paper, a computer simulation tool for supporting usability designers in the creation of innovative AAL products is presented. This application will benefit their work by saving time and improving the final system's functionality.

  20. Risk Management in Biologics Technology Transfer.

    PubMed

    Toso, Robert; Tsang, Jonathan; Xie, Jasmina; Hohwald, Stephen; Bain, David; Willison-Parry, Derek

    Technology transfer of biological products is a complex process that is important for product commercialization. To achieve a successful technology transfer, the risks that arise from changes throughout the project must be managed. Iterative risk analysis and mitigation tools can be used to both evaluate and reduce risk. The technology transfer stage gate model is used as an example tool to help manage risks derived from both designed process change and unplanned changes that arise due to unforeseen circumstances. The strategy of risk assessment for a change can be tailored to the type of change. In addition, a cross-functional team and centralized documentation helps maximize risk management efficiency to achieve a successful technology transfer. © PDA, Inc. 2016.

  1. Accelerating spirocyclic polyketide synthesis using flow chemistry.

    PubMed

    Newton, Sean; Carter, Catherine F; Pearson, Colin M; de C Alves, Leandro; Lange, Heiko; Thansandote, Praew; Ley, Steven V

    2014-05-05

    Over the past decade, the integration of synthetic chemistry with flow processing has resulted in a powerful platform for molecular assembly that is making an impact throughout the chemical community. Herein, we demonstrate the extension of these tools to encompass complex natural product synthesis. We have developed a number of novel flow-through processes for reactions commonly encountered in natural product synthesis programs to achieve the first total synthesis of spirodienal A and the preparation of spirangien A methyl ester. Highlights of the synthetic route include an iridium-catalyzed hydrogenation, iterative Roush crotylations, gold-catalyzed spiroketalization and a late-stage cis-selective reduction. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Controlled iterative cross-coupling: on the way to the automation of organic synthesis.

    PubMed

    Wang, Congyang; Glorius, Frank

    2009-01-01

    Repetition does not hurt! New strategies for the modulation of the reactivity of difunctional building blocks are discussed, allowing the palladium-catalyzed controlled iterative cross-coupling and, thus, the efficient formation of complex molecules of defined size and structure (see scheme). As in peptide synthesis, this development will enable the automation of these reactions. M(PG)=protected metal, M(act)=metal.

  3. Drawing dynamical and parameters planes of iterative families and methods.

    PubMed

    Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide excellent schemes (or dreadful ones).

  4. Development and Evaluation of an Intuitive Operations Planning Process

    DTIC Science & Technology

    2006-03-01

    designed to be iterative and also prescribes the way in which iterations should occur. On the other hand, participants’ perceived level of trust and...

  5. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.

  6. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
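
    A heavily simplified sketch of the reliability-guided candidate generation described above is given below. It is a Chase-style illustration only: the (7,4) Hamming code, the brute-force nearest-codeword stand-in for the algebraic decoder, and the omission of the optimality test and trellis search are all simplifications, not the authors' scheme.

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(1)

      # Tiny (7,4) Hamming code in a standard systematic form (an assumption for this
      # illustration); with only 16 codewords we can "algebraically decode" by search.
      G = np.array([[1,0,0,0,1,1,0],
                    [0,1,0,0,1,0,1],
                    [0,0,1,0,0,1,1],
                    [0,0,0,1,1,1,1]])
      codebook = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

      def hard_decode(y):
          """Stand-in algebraic decoder: nearest codeword in Hamming distance."""
          d = (codebook != y).sum(axis=1)
          return codebook[d.argmin()]

      def soft_decode(r, num_flips=3):
          """Chase-like candidate generation: flip subsets of the least reliable bits,
          decode each test pattern, and keep the candidate with the best correlation
          metric.  (The paper adds an optimality test and a trellis search; omitted.)"""
          y = (r < 0).astype(int)                    # hard decisions (BPSK: 0 -> +1, 1 -> -1)
          weak = np.argsort(np.abs(r))[:num_flips]   # least reliable positions
          best, best_metric = None, -np.inf
          for flips in product([0, 1], repeat=num_flips):
              e = np.zeros(7, dtype=int)
              e[weak] = flips
              cand = hard_decode((y + e) % 2)
              metric = np.sum((1 - 2 * cand) * r)    # correlation with received soft values
              if metric > best_metric:
                  best, best_metric = cand, metric
          return best

      # Transmit the all-zero codeword over an AWGN channel and decode.
      r = np.ones(7) + 0.8 * rng.standard_normal(7)
      print("decoded codeword:", soft_decode(r))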

  7. A model of supervisor decision-making in the accommodation of workers with low back pain

    PubMed Central

    Williams-Whitt, Kelly; Kristman, Vicki; Shaw, William S.; Soklaridis, Sophie; Reguly, Paula

    2016-01-01

    PURPOSE To explore supervisors’ perspectives and decision-making processes in the accommodation of back injured workers. METHODS Twenty-three semi-structured, in-depth interviews were conducted with supervisors from eleven Canadian organizations about their role in providing job accommodations. Supervisors were identified through an on-line survey and interviews were recorded, transcribed and entered into NVivo software. The initial analyses identified common units of meaning, which were used to develop a coding guide. Interviews were coded, and a model of supervisor decision-making was developed based on the themes, categories and connecting ideas identified in the data. RESULTS The decision-making model includes a process element that is described as iterative “trial and error” decision-making. Medical restrictions are compared to job demands, employee abilities and available alternatives. A feasible modification is identified through brainstorming and then implemented by the supervisor. Resources used for brainstorming include information, supervisor experience and autonomy, and organizational supports. The model also incorporates the experience of accommodation as a job demand that causes strain for the supervisor. Accommodation demands affect the supervisor’s attitude, brainstorming and monitoring effort and communication with returning employees. Resources and demands have a combined effect on accommodation decision complexity, which in turn affects the quality of the accommodation option selected. If the employee is unable to complete the tasks or is reinjured during the accommodation, the decision cycle repeats. More frequent iteration through the trial and error process reduces the likelihood of return to work success. CONCLUSIONS A series of propositions is developed to illustrate the relationships among categories in the model. The model and propositions show: a) the iterative, problem solving nature of the RTW process; b) decision resources necessary for accommodation planning, and c) the impact accommodation demands may have on supervisors and RTW quality. PMID:26811170

  8. A power-efficient communication system between brain-implantable devices and external computers.

    PubMed

    Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui

    2007-01-01

    In this paper, we propose a power efficient communication system for linking a brain-implantable device to an external system. For battery powered implantable devices, the processor and the transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost for signal processing within the implantable device is greatly reduced by avoiding explicit source encoding. Raw data which is highly correlated is transmitted. At the receiver, a Markov chain source correlation model is utilized to approximate and capture the correlation of raw data. A turbo iterative receiver algorithm is designed which connects the Markov chain source model to the LDGM decoder in a turbo-iterative way. Simulation results show that the proposed system can save up to 1 to 2.5 dB on transmission power.

  9. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  10. An Improved Method to Control the Critical Parameters of a Multivariable Control System

    NASA Astrophysics Data System (ADS)

    Subha Hency Jims, P.; Dharmalingam, S.; Wessley, G. Jims John

    2017-10-01

    The role of control systems is to cope with process deficiencies and the undesirable effects of external disturbances. Most multivariable processes are highly iterative and complex in nature. Aircraft systems, modern power plants, refineries, and robotic systems are a few such complex systems that involve numerous critical parameters which need to be monitored and controlled. Control of these important parameters is not only tedious and cumbersome but also crucial from environmental, safety, and quality perspectives. In this paper, one such multivariable system, namely a utility boiler, is considered. A modern power plant is a complex arrangement of pipework and machinery with numerous interacting control loops and support systems. The calculation of controller parameters based on classical tuning concepts is presented. The controller parameters thus obtained were used to control the critical parameters of a boiler during fuel-switching disturbances. The proposed method can also be applied to control critical aircraft parameters such as the elevator, aileron, rudder, elevator trim, rudder and aileron trim, and flap control systems.

  11. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration plays a crucial role in several important application domains. As algorithms grow more complex and their computational requirements increase, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with a small-scale cascade and dedicated FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are performed. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
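
    For reference, the core Richardson-Lucy iteration that such a system accelerates can be written in a few lines of plain Python. This is a software sketch only, not the FPGA/DSP implementation; the Gaussian PSF and the test image are made up.

      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(0)

      def richardson_lucy(blurred, psf, iterations=30):
          """Plain Richardson-Lucy deconvolution loop (software reference)."""
          psf_mirror = psf[::-1, ::-1]
          estimate = np.full_like(blurred, 0.5)
          for _ in range(iterations):
              reblurred = fftconvolve(estimate, psf, mode='same')
              ratio = blurred / (reblurred + 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode='same')
          return estimate

      # 128x128 test image blurred by a small Gaussian PSF (sizes chosen to mirror the
      # 128x128 timing case mentioned in the abstract).
      img = np.zeros((128, 128)); img[40:88, 40:88] = 1.0
      yy, xx = np.mgrid[-7:8, -7:8]
      psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
      blurred = fftconvolve(img, psf, mode='same') + 1e-3 * rng.random(img.shape)
      restored = richardson_lucy(blurred, psf)
      print("mean absolute restoration error:", np.abs(restored - img).mean())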

  12. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method requires no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.

  13. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill are examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
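
    The appeal of JFNK described above, namely that only a residual function is needed while the Jacobian action is approximated inside the Krylov solver, can be illustrated with SciPy's newton_krylov on a toy coupled system. The "groundwater" and "stream" residual blocks below are invented for this sketch and are not the Modflow formulation.

      import numpy as np
      from scipy.optimize import newton_krylov

      # Two illustrative "process" residuals coupled through a shared exchange term:
      #   h : heads on a small 1-D grid,  s : stream stage,  q = c*(h[mid] - s).
      n, c, recharge = 20, 0.5, 0.1

      def residual(x):
          h, s = x[:n], x[n]
          r = np.empty(n + 1)
          # "Groundwater" block: discrete Laplacian + recharge, zero-head boundaries.
          lap = np.zeros(n)
          lap[1:-1] = h[:-2] - 2 * h[1:-1] + h[2:]
          lap[0] = -2 * h[0] + h[1]
          lap[-1] = h[-2] - 2 * h[-1]
          q = c * (h[n // 2] - s)        # exchange flux coupling the two blocks
          r[:n] = lap + recharge
          r[n // 2] -= q
          # "Stream" block: stage relaxes toward a base level of 1 plus the exchange.
          r[n] = q - (s - 1.0)
          return r

      # JFNK only needs this residual; the Jacobian-vector products are approximated
      # by finite-difference perturbations inside the inner Krylov (LGMRES) solver.
      sol = newton_krylov(residual, np.zeros(n + 1), method='lgmres', f_tol=1e-8)
      print("stream stage:", sol[n], " max residual:", np.abs(residual(sol)).max())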

  14. Computerization of Mental Health Integration Complexity Scores at Intermountain Healthcare

    PubMed Central

    Oniki, Thomas A.; Rodrigues, Drayton; Rahman, Noman; Patur, Saritha; Briot, Pascal; Taylor, David P.; Wilcox, Adam B.; Reiss-Brennan, Brenda; Cannon, Wayne H.

    2014-01-01

    Intermountain Healthcare’s Mental Health Integration (MHI) Care Process Model (CPM) contains formal scoring criteria for assessing a patient’s mental health complexity as “mild,” “medium,” or “high” based on patient data. The complexity score attempts to assist Primary Care Physicians in assessing the mental health needs of their patients and what resources will need to be brought to bear. We describe an effort to computerize the scoring. Informatics and MHI personnel collaboratively and iteratively refined the criteria to make them adequately explicit and reflective of MHI objectives. When tested on retrospective data of 540 patients, the clinician agreed with the computer’s conclusion in 52.8% of the cases (285/540). We considered the analysis sufficiently successful to begin piloting the computerized score in prospective clinical care. So far in the pilot, clinicians have agreed with the computer in 70.6% of the cases (24/34). PMID:25954401

  15. Nested Krylov methods and preserving the orthogonality

    NASA Technical Reports Server (NTRS)

    Desturler, Eric; Fokkema, Diederik R.

    1993-01-01

    Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski and Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may reenter in the inner iteration process. Therefore we propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We will discuss some important properties, and we will show by experiments that, in terms of matrix vector products, this modification (almost) always leads to better convergence. However, because we do more orthogonalizations, it does not always give an improved performance in CPU-time. Furthermore, we will discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course we can also use iteration schemes other than GMRES as the inner method; methods with short recurrences like Bi-CGSTAB are of interest.
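
    The inner-outer structure discussed above can be sketched as a GCR outer loop whose search directions come from an approximate inner solve of the residual equation. For brevity the inner GMRES is replaced here by a few minimal-residual steps, and the paper's orthogonality-preserving modification is not included; the test matrix is an arbitrary nonsymmetric tridiagonal example.

      import numpy as np
      from scipy.sparse import diags

      def inner_solve(A, r, steps=5):
          """Cheap inner iteration: a few minimal-residual steps on A u = r."""
          u, res = np.zeros_like(r), r.copy()
          for _ in range(steps):
              Ares = A @ res
              alpha = (Ares @ res) / (Ares @ Ares)
              u += alpha * res
              res -= alpha * Ares
          return u

      def gcr_outer(A, b, outer_iters=60, tol=1e-8):
          """GCR outer loop; each new direction is an approximate solution of A u = r."""
          x, r = np.zeros_like(b), b.copy()
          U, C = [], []                           # search directions u_k and c_k = A u_k
          bnorm = np.linalg.norm(b)
          for _ in range(outer_iters):
              u = inner_solve(A, r)
              c = A @ u
              for ui, ci in zip(U, C):            # classical GCR orthogonalization of c_k
                  beta = ci @ c
                  u -= beta * ui
                  c -= beta * ci
              nc = np.linalg.norm(c)
              u, c = u / nc, c / nc
              U.append(u); C.append(c)
              alpha = c @ r
              x += alpha * u
              r -= alpha * c
              if np.linalg.norm(r) < tol * bnorm:
                  break
          return x, np.linalg.norm(r) / bnorm

      n = 100
      A = diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n)).tocsr()
      x, rel_res = gcr_outer(A, np.ones(n))
      print("final relative residual:", rel_res)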

  16. A system dynamics evaluation model: implementation of health information exchange for public health reporting

    PubMed Central

    Merrill, Jacqueline A; Deegan, Michael; Wilson, Rosalind V; Kaushal, Rainu; Fredericks, Kimberly

    2013-01-01

    Objective To evaluate the complex dynamics involved in implementing electronic health information exchange (HIE) for public health reporting at a state health department, and to identify policy implications to inform similar implementations. Materials and methods Qualitative data were collected over 8 months from seven experts at New York State Department of Health who implemented web services and protocols for querying, receipt, and validation of electronic data supplied by regional health information organizations. Extensive project documentation was also collected. During group meetings experts described the implementation process and created reference modes and causal diagrams that the evaluation team used to build a preliminary model. System dynamics modeling techniques were applied iteratively to build causal loop diagrams representing the implementation. The diagrams were validated iteratively by individual experts followed by group review online, and through confirmatory review of documents and artifacts. Results Three causal loop diagrams captured well-recognized system dynamics: Sliding Goals, Project Rework, and Maturity of Resources. The findings were associated with specific policies that address funding, leadership, ensuring expertise, planning for rework, communication, and timeline management. Discussion This evaluation illustrates the value of a qualitative approach to system dynamics modeling. As a tool for strategic thinking on complicated and intense processes, qualitative models can be produced with fewer resources than a full simulation, yet still provide insights that are timely and relevant. Conclusions System dynamics techniques clarified endogenous and exogenous factors at play in a highly complex technology implementation, which may inform other states engaged in implementing HIE supported by federal Health Information Technology for Economic and Clinical Health (HITECH) legislation. PMID:23292910

  17. A system dynamics evaluation model: implementation of health information exchange for public health reporting.

    PubMed

    Merrill, Jacqueline A; Deegan, Michael; Wilson, Rosalind V; Kaushal, Rainu; Fredericks, Kimberly

    2013-06-01

    To evaluate the complex dynamics involved in implementing electronic health information exchange (HIE) for public health reporting at a state health department, and to identify policy implications to inform similar implementations. Qualitative data were collected over 8 months from seven experts at New York State Department of Health who implemented web services and protocols for querying, receipt, and validation of electronic data supplied by regional health information organizations. Extensive project documentation was also collected. During group meetings experts described the implementation process and created reference modes and causal diagrams that the evaluation team used to build a preliminary model. System dynamics modeling techniques were applied iteratively to build causal loop diagrams representing the implementation. The diagrams were validated iteratively by individual experts followed by group review online, and through confirmatory review of documents and artifacts. Three causal loop diagrams captured well-recognized system dynamics: Sliding Goals, Project Rework, and Maturity of Resources. The findings were associated with specific policies that address funding, leadership, ensuring expertise, planning for rework, communication, and timeline management. This evaluation illustrates the value of a qualitative approach to system dynamics modeling. As a tool for strategic thinking on complicated and intense processes, qualitative models can be produced with fewer resources than a full simulation, yet still provide insights that are timely and relevant. System dynamics techniques clarified endogenous and exogenous factors at play in a highly complex technology implementation, which may inform other states engaged in implementing HIE supported by federal Health Information Technology for Economic and Clinical Health (HITECH) legislation.

  18. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.
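
    In the usual formulation from the Chebyshev-iteration literature (my paraphrase of the standard construction, not a formula quoted from this record), the constrained approximation problem and the scaled-and-translated Chebyshev candidate read as follows, where the ellipse E has center c, foci c ± d, and the polynomial is normalized at a point z_0 outside E (typically z_0 = 0 for residual polynomials):

      \[
        \min_{\substack{p \in \Pi_n \\ p(z_0) = 1}} \; \max_{z \in E} |p(z)|,
        \qquad
        t_n(z) \;=\; \frac{T_n\!\left((z - c)/d\right)}{T_n\!\left((z_0 - c)/d\right)} .
      \]
      % T_n is the degree-n Chebyshev polynomial of the first kind; the paper's point is
      % that these normalized Chebyshev polynomials are very good, and in most cases
      % exactly optimal, solutions of the min-max problem on ellipses.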

  19. Complexity-Based Learning and Teaching: A Case Study in Higher Education

    ERIC Educational Resources Information Center

    Fabricatore, Carlo; López, María Ximena

    2014-01-01

    This paper presents a learning and teaching strategy based on complexity science and explores its impacts on a higher education game design course. The strategy aimed at generating conditions fostering individual and collective learning in educational complex adaptive systems, and led the design of the course through an iterative and adaptive…

  20. Developing Conceptual Understanding and Procedural Skill in Mathematics: An Iterative Process.

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Siegler, Robert S.; Alibali, Martha Wagner

    2001-01-01

    Proposes that conceptual and procedural knowledge develop in an iterative fashion and improved problem representation is one mechanism underlying the relations between them. Two experiments were conducted with 5th and 6th grade students learning about decimal fractions. Results indicate conceptual and procedural knowledge do develop, iteratively,…

  1. Radioactivity measurements of ITER materials using the TFTR D-T neutron field

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Abdou, M. A.; Barnes, C. W.; Kugel, H. W.

    1994-06-01

    The availability of high D-T fusion neutron yields at TFTR has provided a useful opportunity to directly measure D-T neutron-induced radioactivity in a realistic tokamak fusion reactor environment for materials of vital interest to ITER. These measurements are valuable for characterizing radioactivity in various ITER candidate materials, for validating complex neutron transport calculations, and for meeting fusion reactor licensing requirements. The radioactivity measurements at TFTR involve potential ITER materials including stainless steel 316, vanadium, titanium, chromium, silicon, iron, cobalt, nickel, molybdenum, aluminum, copper, zinc, zirconium, niobium, and tungsten. Small samples of these materials were irradiated close to the plasma and just outside the vacuum vessel wall of TFTR, locations of different neutron energy spectra. Saturation activities for both threshold and capture reactions were measured. Data from dosimetric reactions have been used to obtain preliminary neutron energy spectra. Spectra from the first wall were compared to calculations from ITER and to measurements from accelerator-based tests.

  2. Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation

    NASA Astrophysics Data System (ADS)

    Litaker, Eric T.

    1994-12-01

    The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
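
    The Gauss-Seidel relaxation used for the implicit time step can be sketched as below. Plain finite differences on the axisymmetric stencil are used here in place of the FVE discretization of the record, and the grid sizes, time step, source placement, and homogeneous Dirichlet boundaries are illustrative choices.

      import numpy as np

      # One backward-Euler step of  u_t = u_rr + (1/r) u_r + u_zz,  relaxed with
      # Gauss-Seidel sweeps until the update stalls.
      nr, nz = 40, 40
      R, Z, dt = 1.0, 1.0, 1e-3
      dr, dz = R / (nr + 1), Z / (nz + 1)
      r = dr * np.arange(1, nr + 1)          # radial nodes, the axis r = 0 excluded

      u_old = np.zeros((nr, nz))
      u_old[nr // 2, nz // 2] = 1.0          # point heat source at the previous step
      b = u_old / dt                         # right-hand side of (u/dt - L u) = u_old/dt
      u = u_old.copy()

      az = 1.0 / dz**2
      for sweep in range(500):
          max_change = 0.0
          for i in range(nr):
              aw = 1.0 / dr**2 - 1.0 / (2.0 * r[i] * dr)   # coefficient of u(i-1, j)
              ae = 1.0 / dr**2 + 1.0 / (2.0 * r[i] * dr)   # coefficient of u(i+1, j)
              diag = 1.0 / dt + aw + ae + 2.0 * az
              for j in range(nz):
                  uw = u[i - 1, j] if i > 0 else 0.0
                  ue = u[i + 1, j] if i < nr - 1 else 0.0
                  us = u[i, j - 1] if j > 0 else 0.0
                  un = u[i, j + 1] if j < nz - 1 else 0.0
                  new = (b[i, j] + aw * uw + ae * ue + az * (us + un)) / diag
                  max_change = max(max_change, abs(new - u[i, j]))
                  u[i, j] = new
          if max_change < 1e-12:
              break

      print("Gauss-Seidel sweeps used:", sweep + 1)
      print("temperature at the source node after one step:", u[nr // 2, nz // 2])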

  3. Steady-state configuration and tension calculations of marine cables under complex currents via separated particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Xu, Xue-song

    2014-12-01

    Under complex currents, the governing equations of motion of marine cables are complex and nonlinear, and the calculations of cable configuration and tension become difficult compared with those under uniform or simple currents. To obtain numerical results, the usual Newton-Raphson iteration is often adopted, but its stability depends on the initial guess for the solution of the governing equations. To improve the stability of the numerical calculation, this paper proposes a separated particle swarm optimization, in which the variables are separated into several groups and the dimension of the search space is reduced to facilitate the particle swarm optimization. With the separated particle swarm optimization, the governing nonlinear equations can be solved successfully from any initial solution, and the numerical calculation process is very stable. For the calculation of the configuration and tension of marine cables under complex currents, the proposed separated particle swarm optimization is more effective than other particle swarm optimizations.
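
    A block-wise ("separated") variant of PSO in the spirit of the abstract is sketched below. The Rosenbrock-type objective stands in for the cable equilibrium equations, and the grouping, swarm size, and PSO constants are illustrative choices, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(0)

      def objective(x):
          """Illustrative stand-in for the cable equilibrium residual (Rosenbrock-like)."""
          return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

      def block_pso(dim=12, group_size=4, particles=30, iters=200):
          """'Separated' PSO sketch: optimize one block of variables at a time with a
          standard PSO, holding the other blocks at their current best values."""
          best = rng.uniform(-2, 2, dim)
          for start in range(0, dim, group_size):
              idx = slice(start, start + group_size)
              pos = rng.uniform(-2, 2, (particles, group_size))
              vel = np.zeros_like(pos)
              pbest = pos.copy()

              def block_obj(block):
                  trial = best.copy()
                  trial[idx] = block
                  return objective(trial)

              pbest_val = np.array([block_obj(p) for p in pos])
              g = pbest[pbest_val.argmin()].copy()
              for _ in range(iters):
                  r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                  vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
                  pos = pos + vel
                  vals = np.array([block_obj(p) for p in pos])
                  improved = vals < pbest_val
                  pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                  g = pbest[pbest_val.argmin()].copy()
              best[idx] = g
          return best, objective(best)

      x, f = block_pso()
      print("objective after one block-wise pass:", f)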

  4. Drawing Dynamical and Parameters Planes of Iterative Families and Methods

    PubMed Central

    Chicharro, Francisco I.

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide excellent schemes (or dreadful ones). PMID:24376386

  5. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely-related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution.
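
    The acceleration idea, applying direct inversion in the iterative subspace (DIIS) to a self-consistent fixed-point update, can be sketched generically. The contractive linear map below stands in for the actual WHAM update, which couples free energies and histogram counts; the history length and test problem are arbitrary.

      import numpy as np

      def diis_fixed_point(g, x0, history=5, iters=100, tol=1e-12):
          """Accelerate x <- g(x) with DIIS: extrapolate from the last few iterates using
          coefficients that minimize the combined residual subject to summing to 1."""
          xs, es = [], []                       # stored updates g(x_k) and residuals e_k
          x = np.asarray(x0, dtype=float)
          for k in range(iters):
              gx = g(x)
              e = gx - x
              if np.linalg.norm(e) < tol:
                  return x, k
              xs.append(gx); es.append(e)
              xs, es = xs[-history:], es[-history:]
              m = len(es)
              # DIIS equations:  minimize ||sum_i c_i e_i||  with  sum_i c_i = 1.
              B = np.empty((m + 1, m + 1))
              B[:m, :m] = np.array([[ei @ ej for ej in es] for ei in es])
              B[m, :m] = B[:m, m] = 1.0
              B[m, m] = 0.0
              rhs = np.zeros(m + 1); rhs[m] = 1.0
              c = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]
              x = sum(ci * xi for ci, xi in zip(c, xs))
          return x, iters

      # Toy stand-in for the WHAM self-consistency loop: a contractive linear map.
      A = np.array([[0.6, 0.2], [0.1, 0.7]])
      b = np.array([1.0, -0.5])
      x, n_iter = diis_fixed_point(lambda x: A @ x + b, np.zeros(2))
      print("converged in", n_iter, "iterations to", x)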

  6. Improving Patient Experience and Primary Care Quality for Patients With Complex Chronic Disease Using the Electronic Patient-Reported Outcomes Tool: Adopting Qualitative Methods Into a User-Centered Design Approach

    PubMed Central

    Khan, Anum Irfan; Kuluski, Kerry; McKillop, Ian; Sharpe, Sarah; Bierman, Arlene S; Lyons, Renee F; Cott, Cheryl

    2016-01-01

    Background Many mHealth technologies do not meet the needs of patients with complex chronic disease and disabilities (CCDDs) who are among the highest users of health systems worldwide. Furthermore, many of the development methodologies used in the creation of mHealth and eHealth technologies lack the ability to embrace users with CCDD in the specification process. This paper describes how we adopted and modified development techniques to create the electronic Patient-Reported Outcomes (ePRO) tool, a patient-centered mHealth solution to help improve primary health care for patients experiencing CCDD. Objective This paper describes the design and development approach, specifically the process of incorporating qualitative research methods into user-centered design approaches to create the ePRO tool. Key lessons learned are offered as a guide for other eHealth and mHealth research and technology developers working with complex patient populations and their primary health care providers. Methods Guided by user-centered design principles, interpretive descriptive qualitative research methods were adopted to capture user experiences through interviews and working groups. Consistent with interpretive descriptive methods, an iterative analysis technique was used to generate findings, which were then organized in relation to the tool design and function to help systematically inform modifications to the tool. User feedback captured and analyzed through this method was used to challenge the design and inform the iterative development of the tool. Results Interviews with primary health care providers (n=7) and content experts (n=6), and four focus groups with patients and carers (n=14) along with a PICK analysis—Possible, Implementable, (to be) Challenged, (to be) Killed—guided development of the first prototype. The initial prototype was presented in three design working groups with patients/carers (n=5), providers (n=6), and experts (n=5). Working group findings were broken down into categories of what works and what does not work to inform modifications to the prototype. This latter phase led to a major shift in the purpose and design of the prototype, validating the importance of using iterative codesign processes. Conclusions Interpretive descriptive methods allow for an understanding of user experiences of patients with CCDD, their carers, and primary care providers. Qualitative methods help to capture and interpret user needs, and identify contextual barriers and enablers to tool adoption, informing a redesign to better suit the needs of this diverse user group. This study illustrates the value of adopting interpretive descriptive methods into user-centered mHealth tool design and can also serve to inform the design of other eHealth technologies. Our approach is particularly useful in requirements determination when developing for a complex user group and their health care providers. PMID:26892952

  7. Robust lateral blended-wing-body aircraft feedback control design using a parameterized LFR model and DGK-iteration

    NASA Astrophysics Data System (ADS)

    Schirrer, A.; Westermayer, C.; Hemedi, M.; Kozek, M.

    2013-12-01

    This paper shows control design results, performance, and limitations of robust lateral control law designs based on the DGK-iteration mixed-μ-synthesis procedure for a large, flexible blended wing body (BWB) passenger aircraft. The aircraft dynamics is preshaped by a low-complexity inner loop control law providing stabilization, basic response shaping, and flexible mode damping. The μ controllers are designed to further improve vibration damping of the main flexible modes by exploiting the structure of the arising significant parameter-dependent plant variations. This is achieved by utilizing parameterized Linear Fractional Representations (LFR) of the aircraft rigid and flexible dynamics. Designs with various levels of LFR complexity are carried out and discussed, showing the achieved performance improvement over the initial controller and their robustness and complexity properties.

  8. A conceptual model for the development process of confirmatory adaptive clinical trials within an emergency research network.

    PubMed

    Mawocha, Samkeliso C; Fetters, Michael D; Legocki, Laurie J; Guetterman, Timothy C; Frederiksen, Shirley; Barsan, William G; Lewis, Roger J; Berry, Donald A; Meurer, William J

    2017-06-01

    Adaptive clinical trials use accumulating data from enrolled subjects to alter trial conduct in pre-specified ways based on quantitative decision rules. In this research, we sought to characterize the perspectives of key stakeholders during the development process of confirmatory-phase adaptive clinical trials within an emergency clinical trials network and to build a model to guide future development of adaptive clinical trials. We used an ethnographic, qualitative approach to evaluate key stakeholders' views about the adaptive clinical trial development process. Stakeholders participated in a series of multidisciplinary meetings during the development of five adaptive clinical trials and completed a Strengths-Weaknesses-Opportunities-Threats questionnaire. In the analysis, we elucidated overarching themes across the stakeholders' responses to develop a conceptual model. Four major overarching themes emerged during the analysis of stakeholders' responses to questioning: the perceived statistical complexity of adaptive clinical trials and the roles of collaboration, communication, and time during the development process. Frequent and open communication and collaboration were viewed by stakeholders as critical during the development process, as were the careful management of time and logistical issues related to the complexity of planning adaptive clinical trials. The Adaptive Design Development Model illustrates how statistical complexity, time, communication, and collaboration are moderating factors in the adaptive design development process. The intensity and iterative nature of this process underscores the need for funding mechanisms for the development of novel trial proposals in academic settings.

  9. Quantifying the buildup in extent and complexity of free exploration in mice

    PubMed Central

    Benjamini, Yoav; Fonio, Ehud; Galili, Tal; Havkin, Gregor Z.; Golani, Ilan

    2011-01-01

    To obtain a perspective on an animal's own functional world, we study its behavior in situations that allow the animal to regulate the growth rate of its behavior and provide us with the opportunity to quantify its moment-by-moment developmental dynamics. Thus, we are able to show that mouse exploratory behavior consists of sequences of repeated motion: iterative processes that increase in extent and complexity, whose presumed function is a systematic active management of input acquired during the exploration of a novel environment. We use this study to demonstrate our approach to quantifying behavior: targeting aspects of behavior that are shown to be actively managed by the animal, and using measures that are discriminative across strains and treatments and replicable across laboratories. PMID:21383149

  10. The Structural Enzymology of Iterative Aromatic Polyketide Synthases: A Critical Comparison with Fatty Acid Synthases.

    PubMed

    Tsai, Shiou-Chuan Sheryl

    2018-06-20

    Polyketides are a large family of structurally complex natural products including compounds with important bioactivities. Polyketides are biosynthesized by polyketide synthases (PKSs), multienzyme complexes derived evolutionarily from fatty acid synthases (FASs). The focus of this review is to critically compare the properties of FASs with iterative aromatic PKSs, including type II PKSs and fungal type I nonreducing PKSs whose chemical logic is distinct from that of modular PKSs. This review focuses on structural and enzymological studies that reveal both similarities and striking differences between FASs and aromatic PKSs. The potential application of FAS and aromatic PKS structures for bioengineering future drugs and biofuels is highlighted.

  11. Status of the ITER Cryodistribution

    NASA Astrophysics Data System (ADS)

    Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.

    2017-12-01

    Since the conceptual design of the ITER Cryodistribution, many modifications have been applied owing to both system optimization and improved knowledge of the clients’ requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; the removal of one cold box also became possible through the standardization of component arrangements. Another cold box, planned for redundancy, has been removed owing to the modification of the Tokamak in-Cryostat piping layout. In this proceeding we summarize the present design status and component configuration of the ITER Cryodistribution, with all implemented changes aimed at process optimization and simplification as well as operational reliability, stability, and flexibility.

  12. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Transfer in a GO2/GH2 Single Element Injector Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were obtained with three variations of the k-omega turbulence model.

  13. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Fluxes in a GO2/GH2 Single Element Injector Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were also obtained with three variations of the k-omega turbulence model.

  14. Exploiting parallel computing with limited program changes using a network of microcomputers

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.

    1985-01-01

    Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.

  15. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, a small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
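
    The ordered-subset EM update underlying such reconstructions can be sketched with a toy system matrix. The random projector, image size, and subset count below are illustrative and bear no relation to the dental-CT geometry of the record.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy ordered-subset EM (OS-EM) reconstruction with a random nonnegative system
      # matrix standing in for the CT projector.
      n_pix, n_proj, n_subsets = 64, 256, 4
      A = rng.random((n_proj, n_pix))
      x_true = rng.random(n_pix)
      y = A @ x_true                              # noiseless projections for simplicity

      x = np.ones(n_pix)                          # uniform initial image
      subsets = np.array_split(rng.permutation(n_proj), n_subsets)
      for iteration in range(20):
          for s in subsets:                       # one EM update per projection subset
              As = A[s]
              ratio = y[s] / (As @ x + 1e-12)     # measured / estimated projections
              x *= (As.T @ ratio) / (As.T @ np.ones(len(s)) + 1e-12)

      print("relative reconstruction error:",
            np.linalg.norm(x - x_true) / np.linalg.norm(x_true))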

  16. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with perfect real-time characteristics and stability. However, as the number of sub-apertures in the wavefront sensor and the number of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration arithmetic, which gains great advantages in calculation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n^2) ~ O(n^3) for the direct gradient wavefront control algorithm, while the computational complexity estimate for the iterative wavefront control algorithm is about O(n) ~ O(n^(3/2)), in which n is the number of actuators of the AO system. And the larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
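
    The complexity trade-off can be illustrated by contrasting a precomputed least-squares reconstructor (expensive to build and to apply densely) with an iterative solve of the normal equations that uses only sparse matrix-vector products per step. The sparse influence matrix and the sizes below are invented for this sketch; the paper's specific iterative scheme is not reproduced.

      import numpy as np
      from scipy.sparse import random as sprandom
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(0)

      # Sparse "influence matrix" D mapping actuator voltages to wavefront slopes
      # (a stand-in for the pre-measured sensor/deformable-mirror relation).
      n_slopes, n_act = 800, 400
      D = sprandom(n_slopes, n_act, density=0.02, random_state=0, format='csr')
      v_true = rng.standard_normal(n_act)
      s = D @ v_true                               # measured slopes

      # Direct gradient control: apply a precomputed least-squares reconstructor.
      R = np.linalg.pinv(D.toarray())              # cubic cost to build, quadratic per frame
      v_direct = R @ s

      # Iterative control: a few CG steps on the normal equations D^T D v = D^T s;
      # each step costs only sparse matrix-vector products.
      DtD = (D.T @ D).tocsr()
      v_iter, _ = cg(DtD, D.T @ s, maxiter=50)

      print("direct residual   :", np.linalg.norm(D @ v_direct - s))
      print("iterative residual:", np.linalg.norm(D @ v_iter - s))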

  17. Experimental demonstration of non-iterative interpolation-based partial ICI compensation in 100G RGI-DP-CO-OFDM transport systems.

    PubMed

    Mousa-Pasandi, Mohammad E; Zhuge, Qunbi; Xu, Xian; Osman, Mohamed M; El-Sahn, Ziad A; Chagnon, Mathieu; Plant, David V

    2012-07-02

    We experimentally investigate the performance of a low-complexity non-iterative phase noise induced inter-carrier interference (ICI) compensation algorithm in reduced-guard-interval dual-polarization coherent-optical orthogonal-frequency-division-multiplexing (RGI-DP-CO-OFDM) transport systems. This interpolation-based ICI compensator estimates the time-domain phase noise samples by a linear interpolation between the CPE estimates of the consecutive OFDM symbols. We experimentally study the performance of this scheme for a 28 Gbaud QPSK RGI-DP-CO-OFDM employing a low cost distributed feedback (DFB) laser. Experimental results using a DFB laser with the linewidth of 2.6 MHz demonstrate 24% and 13% improvement in transmission reach with respect to the conventional equalizer (CE) in presence of weak and strong dispersion-enhanced-phase-noise (DEPN), respectively. A brief analysis of the computational complexity of this scheme in terms of the number of required complex multiplications is provided. This practical approach does not suffer from error propagation while enjoying low computational complexity.
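
    The interpolation step itself is simple enough to sketch: one CPE estimate per OFDM symbol is linearly interpolated to sample level and removed. The Wiener phase-noise model, symbol sizes, and the use of the true mean phase as a stand-in for the pilot-based CPE estimate are all assumptions of this illustration, not the experimental setup of the record.

      import numpy as np

      rng = np.random.default_rng(0)

      n_sym, n_fft = 50, 256
      step_std = 0.01                                    # phase increment std (rad/sample)
      phase = np.cumsum(step_std * rng.standard_normal(n_sym * n_fft))  # Wiener phase noise

      # One CPE estimate per symbol: here the true mean phase over the symbol.
      cpe = phase.reshape(n_sym, n_fft).mean(axis=1)
      cpe_times = n_fft * (np.arange(n_sym) + 0.5)       # symbol-centre sample indices

      # Linear interpolation between consecutive CPE estimates gives a sample-level
      # phase trajectory; removing it compensates part of the ICI, not just the CPE.
      t = np.arange(n_sym * n_fft)
      phase_hat = np.interp(t, cpe_times, cpe)

      print("rms phase error, CPE only        :",
            np.sqrt(np.mean((phase - np.repeat(cpe, n_fft))**2)))
      print("rms phase error, interpolated CPE:",
            np.sqrt(np.mean((phase - phase_hat)**2)))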

  18. The current state of drug discovery and a potential role for NMR metabolomics.

    PubMed

    Powers, Robert

    2014-07-24

    The pharmaceutical industry has significantly contributed to improving human health. Drugs have been attributed to both increasing life expectancy and decreasing health care costs. Unfortunately, there has been a recent decline in the creativity and productivity of the pharmaceutical industry. This is a complex issue with many contributing factors resulting from the numerous mergers, increase in out-sourcing, and the heavy dependency on high-throughput screening (HTS). While a simple solution to such a complex problem is unrealistic and highly unlikely, the inclusion of metabolomics as a routine component of the drug discovery process may provide some solutions to these problems. Specifically, as the binding affinity of a chemical lead is evolved during the iterative structure-based drug design process, metabolomics can provide feedback on the selectivity and the in vivo mechanism of action. Similarly, metabolomics can be used to evaluate and validate HTS leads. In effect, metabolomics can be used to eliminate compounds with potential efficacy and side effect problems while prioritizing well-behaved leads with druglike characteristics.

  19. Complete denture tooth arrangement technology driven by a reconfigurable rule.

    PubMed

    Dai, Ning; Yu, Xiaoling; Fan, Qilei; Yuan, Fulai; Liu, Lele; Sun, Yuchun

    2018-01-01

    The conventional technique for the fabrication of complete dentures is complex, with a long fabrication process and difficult-to-control restoration quality. In recent years, digital complete denture design has become a research focus. Digital complete denture tooth arrangement is a challenging issue that is difficult to efficiently implement under the constraints of complex tooth arrangement rules and the patient's individualized functional aesthetics. The present study proposes a complete denture automatic tooth arrangement method driven by a reconfigurable rule; it uses four typical operators, including a position operator, a scaling operator, a posture operator, and a contact operator, to establish the constraint mapping association between the teeth and the constraint set of the individual patient. By using the process reorganization of different constraint operators, this method can flexibly implement different clinical tooth arrangement rules. When combined with a virtual occlusion algorithm based on progressive iterative Laplacian deformation, the proposed method can achieve automatic and individual tooth arrangement. Finally, the experimental results verify that the proposed method is flexible and efficient.

  20. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
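
    The surrogate-in-the-loop idea, fitting a Gaussian process and then spending the expensive calculation on the most uncertain entry, can be sketched as below. The descriptor vectors, the stand-in "DFT" function, and the maximum-uncertainty selection rule are illustrative; the paper additionally uses group-additivity fingerprints, scaling relations, and a rate-limiting-step classifier that are not reproduced here.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)

      def expensive_dft(x):
          """Stand-in for a DFT calculation of one reaction energy."""
          return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

      candidates = rng.uniform(-1, 1, size=(200, 2))    # fingerprints of unexplored steps
      X, y = [candidates[0]], [expensive_dft(candidates[0])]

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
      for step in range(15):
          gp.fit(np.array(X), np.array(y))
          mean, std = gp.predict(candidates, return_std=True)
          pick = int(np.argmax(std))                     # most uncertain reaction step next
          X.append(candidates[pick])
          y.append(expensive_dft(candidates[pick]))

      print("explicitly calculated steps:", len(y), "of", len(candidates))
      print("largest remaining predictive std:", std.max())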

  1. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE PAGES

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; ...

    2017-03-06

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.

  2. The ITER project construction status

    NASA Astrophysics Data System (ADS)

    Motojima, O.

    2015-10-01

    The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage its procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made addressing several outstanding physics issues including disruption load characterization, prediction, avoidance, and mitigation, first wall and divertor shaping, edge pedestal and SOL plasma stability, fuelling and plasma behaviour during confinement transients and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for 1st plasma and subsequent phases of ITER operation, as well as the major plasma commissioning activities and the needs of the R&D program accompanying ITER construction by the ITER parties.

  3. The next generation

    NASA Technical Reports Server (NTRS)

    Yudkin, Howard

    1988-01-01

    The next generation of computer systems is studied by examining processes and methodologies. The present generation is adequate for small projects, but not for large ones. Current approaches do not address the iterative nature of requirements, resolution, and implementation. They do not address the complexity issues of requirements stabilization. They do not explicitly address reuse opportunities, and they do not help with people shortages. Therefore, there is a need to define and automate improved software engineering processes. Some help may be gained by reuse and prototyping, which are two sides of the same coin. Reuse library parts are used to generate good approximations to desired solutions, i.e., prototypes. And rapid prototype composition implies use of preexistent parts, i.e., reusable parts.

  4. Low-Cost 3-D Flow Estimation of Blood With Clutter.

    PubMed

    Wei, Siyuan; Yang, Ming; Zhou, Jian; Sampson, Richard; Kripfgans, Oliver D; Fowlkes, J Brian; Wenisch, Thomas F; Chakrabarti, Chaitali

    2017-05-01

    Volumetric flow rate estimation is an important ultrasound medical imaging modality that is used for diagnosing cardiovascular diseases. Flow rates are obtained by integrating velocity estimates over a cross-sectional plane. Speckle tracking is a promising approach that overcomes the angle dependency of traditional Doppler methods, but suffers from poor lateral resolution. Recent work improves lateral velocity estimation accuracy by reconstructing a synthetic lateral phase (SLP) signal. However, the estimation accuracy of such approaches is compromised by the presence of clutter. Eigen-based clutter filtering has been shown to be effective in removing the clutter signal, but it is computationally expensive, precluding its use at high volume rates. In this paper, we propose low-complexity schemes for both velocity estimation and clutter filtering. We use a two-tiered motion estimation scheme to combine the low-complexity sum-of-absolute-difference and SLP methods to achieve subpixel lateral accuracy. We reduce the complexity of eigen-based clutter filtering by processing in subgroups and replacing singular value decomposition with less compute-intensive power iteration and subspace iteration methods. Finally, to improve flow rate estimation accuracy, we use kernel power weighting when integrating the velocity estimates. We evaluate our method for fast- and slow-moving clutter for beam-to-flow angles of 90° and 60° using Field II simulations, demonstrating high estimation accuracy across scenarios. For instance, for a beam-to-flow angle of 90° and fast-moving clutter, our estimation method provides a bias of -8.8% and standard deviation of 3.1% relative to the actual flow rate.
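
    As context for the clutter-filtering step described above, the following sketch estimates the dominant (clutter) subspace of the slow-time ensemble with block power (subspace) iteration instead of a full singular value decomposition, then projects it out. The data model, array names and parameters are illustrative assumptions, not the paper's implementation.

# Sketch of eigen-based clutter rejection using subspace iteration instead of a full SVD.
# X holds slow-time ultrasound samples (ensemble length x samples); names are illustrative.
import numpy as np

def clutter_filter(X, rank=1, iters=20, seed=0):
    """Remove the dominant (clutter) subspace of X estimated by subspace iteration."""
    rng = np.random.default_rng(seed)
    R = X @ X.conj().T                      # slow-time covariance (small: ensemble x ensemble)
    Q = rng.standard_normal((R.shape[0], rank))
    for _ in range(iters):                  # block power (subspace) iteration
        Q, _ = np.linalg.qr(R @ Q)          # orthonormalize to keep the basis stable
    return X - Q @ (Q.conj().T @ X)         # project out the estimated clutter subspace

# Tiny demo: strong near-DC clutter plus a weak moving-blood component.
n_ens, n_samp = 12, 256
t = np.arange(n_ens)[:, None]
clutter = 50.0 * np.exp(1j * 2 * np.pi * 0.01 * t) * np.ones((1, n_samp))
blood = 1.0 * np.exp(1j * 2 * np.pi * 0.25 * t) * np.random.default_rng(1).standard_normal((1, n_samp))
filtered = clutter_filter(clutter + blood, rank=1)
print("mean |clutter| before:", np.abs(clutter).mean(),
      " residual after filtering:", np.abs(filtered - blood).mean())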

  5. A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases

    NASA Astrophysics Data System (ADS)

    Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand

    2017-11-01

    The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.

  6. Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State

    NASA Astrophysics Data System (ADS)

    Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.

    2017-06-01

    The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.

  7. An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring

    NASA Astrophysics Data System (ADS)

    Li, J. Y.; Kitanidis, P. K.

    2013-12-01

    Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
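
    For reference, the classical stochastic ensemble Kalman filter analysis step mentioned above (Evensen-style, with perturbed observations) can be written compactly as below; this is the baseline the iterative quasi-linear approach is compared against, not the new algorithm itself, and all array names are illustrative.

# Classical stochastic EnKF analysis step (Evensen-style), shown as context for the
# iterative quasi-linear variant discussed above; all array names are illustrative.
import numpy as np

def enkf_update(X, y, H, R, rng):
    """X: state ensemble (n_state x n_ens); y: observations; H: obs operator; R: obs covariance."""
    n_state, n_ens = X.shape
    Xm = X - X.mean(axis=1, keepdims=True)               # ensemble anomalies
    HX = H @ X
    HXm = HX - HX.mean(axis=1, keepdims=True)
    Pf_Ht = Xm @ HXm.T / (n_ens - 1)                     # sample cross-covariance P_f H^T
    S = HXm @ HXm.T / (n_ens - 1) + R                    # innovation covariance
    K = Pf_Ht @ np.linalg.inv(S)                         # Kalman gain (no explicit Jacobian)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
    return X + K @ (Y - HX)                              # analysis ensemble

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 100))                           # 50 unknowns, 100 realizations
H = np.zeros((5, 50)); H[np.arange(5), np.arange(5)] = 1.0
R = 0.1 * np.eye(5)
y = rng.normal(size=5)
Xa = enkf_update(X, y, H, R, rng)
print(Xa.shape)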

  8. How to build a course in mathematical-biological modeling: content and processes for knowledge and skill.

    PubMed

    Hoskinson, Anne-Marie

    2010-01-01

    Biological problems in the twenty-first century are complex and require mathematical insight, often resulting in mathematical models of biological systems. Building mathematical-biological models requires cooperation among biologists and mathematicians, and mastery of building models. A new course in mathematical modeling presented the opportunity to build both content and process learning of mathematical models, the modeling process, and the cooperative process. There was little guidance from the literature on how to build such a course. Here, I describe the iterative process of developing such a course, beginning with objectives and choosing content and process competencies to fulfill the objectives. I include some inductive heuristics for instructors seeking guidance in planning and developing their own courses, and I illustrate with a description of one instructional model cycle. Students completing this class reported gains in learning of modeling content, the modeling process, and cooperative skills. Student content and process mastery increased, as assessed on several objective-driven metrics in many types of assessments.

  9. How to Build a Course in Mathematical–Biological Modeling: Content and Processes for Knowledge and Skill

    PubMed Central

    2010-01-01

    Biological problems in the twenty-first century are complex and require mathematical insight, often resulting in mathematical models of biological systems. Building mathematical–biological models requires cooperation among biologists and mathematicians, and mastery of building models. A new course in mathematical modeling presented the opportunity to build both content and process learning of mathematical models, the modeling process, and the cooperative process. There was little guidance from the literature on how to build such a course. Here, I describe the iterative process of developing such a course, beginning with objectives and choosing content and process competencies to fulfill the objectives. I include some inductive heuristics for instructors seeking guidance in planning and developing their own courses, and I illustrate with a description of one instructional model cycle. Students completing this class reported gains in learning of modeling content, the modeling process, and cooperative skills. Student content and process mastery increased, as assessed on several objective-driven metrics in many types of assessments. PMID:20810966

  10. Simultaneous and iterative weighted regression analysis of toxicity tests using a microplate reader.

    PubMed

    Galgani, F; Cadiou, Y; Gilbert, F

    1992-04-01

    A system is described for determination of LC50 or IC50 by an iterative process based on data obtained from a plate reader using a marine unicellular alga as a target species. The esterase activity of Tetraselmis suecica on fluorescein diacetate as a substrate was measured using a fluorescence titer plate. Simultaneous analysis of results was performed using an iterative process adopting the sigmoid function Y = y / (1 + (dose of toxicant / IC50)^slope) for dose-response relationships. IC50 (+/- SEM) was estimated (P < 0.05). An application with phosalone as a toxicant is presented.
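
    A minimal reproduction of that kind of dose-response fit, assuming the reconstructed sigmoid form Y = y / (1 + (dose/IC50)^slope) and weighted nonlinear least squares via SciPy; the dose and response values below are synthetic, not the phosalone data of the study.

# Fitting the sigmoid Y = y / (1 + (dose/IC50)**slope) to dose-response data with
# weighted nonlinear least squares; the data points are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def response(dose, y_max, ic50, slope):
    return y_max / (1.0 + (dose / ic50) ** slope)

dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])     # toxicant concentrations
obs = np.array([0.98, 0.95, 0.84, 0.55, 0.24, 0.08, 0.03])   # normalized esterase activity
sigma = 0.03 * np.ones_like(obs)                              # measurement SD used as weights

popt, pcov = curve_fit(response, dose, obs, p0=[1.0, 3.0, 1.0],
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))                                  # standard errors (SEM-like)
print(f"IC50 = {popt[1]:.2f} +/- {perr[1]:.2f}")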

  11. C–IBI: Targeting cumulative coordination within an iterative protocol to derive coarse-grained models of (multi-component) complex fluids

    DOE PAGES

    de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; ...

    2016-05-03

    We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI)-like protocol. We name this method coordination iterative Boltzmann inversion (C–IBI). While the underlying coarse-grained model is still structure-based and, thus, preserves pair-wise solution structure, our method also reproduces the solvation thermodynamics of binary and/or ternary mixtures. In addition, we observe much faster convergence within C–IBI compared to IBI. To validate the robustness, we apply C–IBI to test cases of solvation thermodynamics of aqueous urea and of triglycine solvation in aqueous urea.
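
    For context, the standard IBI potential update, and the cumulative coordination number that C-IBI targets in place of the radial distribution function, can be written in the generic textbook form below (this is not necessarily the exact expression used in the paper):

        V_{i+1}(r) = V_i(r) + k_B T \,\ln\frac{g_i(r)}{g_{\mathrm{target}}(r)},
        \qquad
        \mathcal{C}(r) = 4\pi\rho \int_0^{r} g(r')\, r'^{2}\, dr'

    where the C-IBI scheme drives the simulated cumulative coordination toward its target rather than matching g(r) directly.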

  12. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application

    NASA Astrophysics Data System (ADS)

    Mazon, D.; Liegeard, C.; Jardin, A.; Barnsley, R.; Walsh, M.; O'Mullane, M.; Sirinelli, A.; Dorchies, F.

    2016-11-01

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  13. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application.

    PubMed

    Mazon, D; Liegeard, C; Jardin, A; Barnsley, R; Walsh, M; O'Mullane, M; Sirinelli, A; Dorchies, F

    2016-11-01

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  14. Experiments on water detritiation and cryogenic distillation at TLK; Impact on ITER fuel cycle subsystems interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristescu, I.; Cristescu, I. R.; Doerr, L.

    2008-07-15

    The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performance of various components for WDS and ISS processes under various working conditions and configurations as needed for ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data related to the relevant ITER operating modes. The operational availability and performance of the ISS-WDS have an impact on ITER fuel cycle subsystems, with consequences for the design integration. The preliminary experimental data on the TRENTA facility are presented. (authors)

  15. Exploiting the User: Adapting Personas for Use in Security Visualization Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoll, Jennifer C.; McColgin, David W.; Gregory, Michelle L.

    It has long been noted that visual representations of complex information can facilitate rapid understanding of data [citation], even with respect to ComSec applications [citation]. Recognizing that visualizations can increase usability in ComSec applications, [Zurko, Sasse] have argued that there is a need to create more usable security visualizations (VisSec). However, usability of applications generally falls into the domain of Human Computer Interaction (HCI), which generally relies on heavy-weight user-centered design (UCD) processes. For example, the UCD process can involve many prototype iterations, or an ethnographic field study that can take months to complete. The problem is that VisSec projects generally do not have the resources to perform ethnographic field studies or to employ complex UCD methods. They are often running on tight deadlines and budgets that cannot afford standard UCD methods. In order to help resolve the conflict of needing more usable designs in ComSec but not having the resources to employ complex UCD methods, in this paper we offer a stripped-down, lighter-weight version of a UCD process which can help with capturing user requirements. The approach we use is personas, which is a user-requirements capturing method arising out of the Participatory Design philosophy [Grudin02].

  16. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    PubMed

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.

  17. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero during the recovery process. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate resembles that of a reference flat.
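
    The inner loop described above can be illustrated with a compact focus-diverse Gerchberg-Saxton/Misell-style sketch: each defocused image constrains the amplitude in its image plane, the known diversity phase is added and removed around the transform, and the per-image phase estimates are averaged every iteration. The adaptive outer-loop update of the diversity values, which is the novel part of the algorithm, is deliberately not reproduced here, and all parameters below are illustrative.

# Compact focus-diverse iterative-transform (Gerchberg-Saxton/Misell-style) sketch of
# the inner loop described above. The adaptive outer-loop diversity update is not shown.
import numpy as np

def inner_loop(pupil_amp, images, defocus_phases, n_iter=50):
    """pupil_amp: known aperture amplitude; images: measured intensities (one per defocus);
    defocus_phases: known diversity phase maps. All arrays share the same 2-D shape."""
    phase = np.zeros_like(pupil_amp)                       # initial phase estimate
    for _ in range(n_iter):
        estimates = []
        for img, div in zip(images, defocus_phases):
            field = pupil_amp * np.exp(1j * (phase + div)) # add known diversity defocus
            focal = np.fft.fft2(field)
            focal = np.sqrt(img) * np.exp(1j * np.angle(focal))   # impose measured amplitude
            back = np.fft.ifft2(focal)
            estimates.append(np.angle(back) - div)         # remove diversity, keep phase
        phase = np.angle(np.mean(np.exp(1j * np.array(estimates)), axis=0))  # average analogue
    return phase

# Synthetic example: a small aberrated pupil observed at two defocus settings.
n = 64
yy, xx = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil = (xx**2 + yy**2 <= 1).astype(float)
true_phase = 0.5 * (xx**2 - yy**2) * pupil
divs = [d * (xx**2 + yy**2) * pupil for d in (2.0, -2.0)]  # +/- defocus diversity
imgs = [np.abs(np.fft.fft2(pupil * np.exp(1j * (true_phase + d))))**2 for d in divs]
est = inner_loop(pupil, imgs, divs)
print("rms phase error:", np.sqrt(np.mean(((est - true_phase) * pupil)**2)))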

  18. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    PubMed

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.

  19. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  20. A Fast, Open EEG Classification Framework Based on Feature Compression and Channel Ranking

    PubMed Central

    Han, Jiuqi; Zhao, Yuwei; Sun, Hongji; Chen, Jiayun; Ke, Ang; Xu, Gesen; Zhang, Hualiang; Zhou, Jin; Wang, Changyong

    2018-01-01

    Superior feature extraction, channel selection and classification methods are essential for designing electroencephalography (EEG) classification frameworks. However, the performance of most frameworks is limited by their improper channel selection methods and overly specific designs, leading to high computational complexity, non-convergent procedures and limited extensibility. In this paper, to remedy these drawbacks, we propose a fast, open EEG classification framework centered on EEG feature compression, low-dimensional representation, and convergent iterative channel ranking. First, to reduce the complexity, we use data clustering to compress the EEG features channel-wise, packing the high-dimensional EEG signal, and endowing them with numerical signatures. Second, to provide easy access to alternative superior methods, we structurally represent each EEG trial in a feature vector with its corresponding numerical signature. Thus, the recorded signals of many trials shrink to a low-dimensional structural matrix compatible with most pattern recognition methods. Third, a series of effective iterative feature selection approaches with theoretical convergence is introduced to rank the EEG channels and remove redundant ones, further accelerating the EEG classification process and ensuring its stability. Finally, a classical linear discriminant analysis (LDA) model is employed to classify a single EEG trial with selected channels. Experimental results on two real-world brain-computer interface (BCI) competition datasets demonstrate the promising performance of the proposed framework over state-of-the-art methods. PMID:29713262
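
    A heavily simplified sketch of the pipeline above, under stated substitutions: per-channel cluster centers stand in for the numerical signatures, a one-shot Fisher-score ranking replaces the convergent iterative channel ranking, and the EEG trials are synthetic. It is meant only to show how compression, channel selection and LDA chain together.

# Simplified EEG pipeline sketch: cluster-based per-channel compression, a stand-in
# Fisher-score channel ranking, and LDA on the kept channels. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 120, 16, 250
X = rng.normal(size=(n_trials, n_channels, n_samples))     # synthetic EEG trials
y = rng.integers(0, 2, size=n_trials)                      # binary labels
X[y == 1, :4] += 0.5                                        # make the first 4 channels informative

# 1. Compress each channel of each trial to a small numerical signature via clustering.
def signature(trace, k=4):
    km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(trace.reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel())             # k cluster centers per channel

F = np.array([[signature(X[i, c]) for c in range(n_channels)] for i in range(n_trials)])

# 2. Rank channels by a Fisher-style separability score on their signatures.
mu0, mu1 = F[y == 0].mean(0), F[y == 1].mean(0)
var = F[y == 0].var(0) + F[y == 1].var(0) + 1e-9
score = ((mu0 - mu1) ** 2 / var).sum(axis=1)                # one score per channel
keep = np.argsort(score)[::-1][:6]                          # keep the 6 best channels

# 3. Classify single trials with LDA on the selected channels.
Z = F[:, keep, :].reshape(n_trials, -1)
lda = LinearDiscriminantAnalysis().fit(Z[:100], y[:100])
print("held-out accuracy:", lda.score(Z[100:], y[100:]))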

  1. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that signifcantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not disucss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses articfacts compared to the perviously presented MBIR approach.

  2. A novel technique to solve nonlinear higher-index Hessenberg differential-algebraic equations by Adomian decomposition method.

    PubMed

    Benhammouda, Brahim

    2016-01-01

    Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple powerful tool that applies directly to solve different kinds of nonlinear equations including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by some transformations like index reductions before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantages of this technique are that, first, it avoids complex transformations like index reductions and leads to a simple general algorithm. Second, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
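
    For readers unfamiliar with the ADM, the generic decomposition it relies on (for an equation split as Lu + Ru + Nu = g, with L an easily invertible linear operator) is sketched below; the paper's contribution is applying this recursion directly to the Hessenberg DAE structure, which is not reproduced here:

        u = \sum_{n=0}^{\infty} u_n, \qquad
        N(u) = \sum_{n=0}^{\infty} A_n, \qquad
        A_n = \frac{1}{n!}\left[\frac{d^n}{d\lambda^n}\, N\!\Big(\sum_{k=0}^{\infty}\lambda^k u_k\Big)\right]_{\lambda=0},

        u_0 = \Phi + L^{-1}g, \qquad u_{n+1} = -L^{-1}(R\,u_n) - L^{-1}(A_n),

    where \Phi collects the initial/boundary data and the A_n are the Adomian polynomials of the nonlinearity.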

  3. Rapid and Low-cost Prototyping of Medical Devices Using 3D Printed Molds for Liquid Injection Molding

    PubMed Central

    Chung, Philip; Heller, J. Alex; Etemadi, Mozziyar; Ottoson, Paige E.; Liu, Jonathan A.; Rand, Larry; Roy, Shuvo

    2014-01-01

    Biologically inert elastomers such as silicone are favorable materials for medical device fabrication, but forming and curing these elastomers using traditional liquid injection molding processes can be an expensive process due to tooling and equipment costs. As a result, it has traditionally been impractical to use liquid injection molding for low-cost, rapid prototyping applications. We have devised a method for rapid and low-cost production of liquid elastomer injection molded devices that utilizes fused deposition modeling 3D printers for mold design and a modified desiccator as an injection system. Low costs and rapid turnaround time in this technique lower the barrier to iteratively designing and prototyping complex elastomer devices. Furthermore, CAD models developed in this process can be later adapted for metal mold tooling design, enabling an easy transition to a traditional injection molding process. We have used this technique to manufacture intravaginal probes involving complex geometries, as well as overmolding over metal parts, using tools commonly available within an academic research laboratory. However, this technique can be easily adapted to create liquid injection molded devices for many other applications. PMID:24998993

  4. 'The biggest thing is trying to live for two people': Spousal experiences of supporting decision-making participation for partners with TBI.

    PubMed

    Knox, Lucy; Douglas, Jacinta M; Bigby, Christine

    2015-01-01

    To understand how the spouses of individuals with severe TBI experience the process of supporting their partners with decision-making. This study adopted a constructivist grounded theory approach, with data consisting of in-depth interviews conducted with spouses over a 12-month period. Data were analysed through an iterative process of open and focused coding, identification of emergent categories and exploration of relationships between categories. Participants were four spouses of individuals with severe TBI (with moderate-severe disability). Spouses had shared committed relationships (marriage or domestic partnerships) for at least 4 years at initial interview. Three spouses were in relationships that had commenced following injury. Two main themes emerged from the data. The first identified the saliency of the relational space in which decision-making took place. The second revealed the complex nature of decision-making within the spousal relationship. Spouses experience decision-making as a complex multi-stage process underpinned by a number of relational factors. Increased understanding of this process can guide health professionals in their provision of support for couples in exploring decision-making participation after injury.

  5. Can SNOMED CT be squeezed without losing its shape?

    PubMed

    López-García, Pablo; Schulz, Stefan

    2016-09-21

    In biomedical applications where the size and complexity of SNOMED CT become problematic, using a smaller subset that can act as a reasonable substitute is usually preferred. In a special class of use cases, such as ontology-based quality assurance or scaling experiments for real-time performance, it is essential that modules show a shape similar to SNOMED CT in terms of concept distribution per sub-hierarchy. Exactly how to extract such balanced modules remains unclear, as most previous work on ontology modularization has focused on other problems. In this study, we investigate to what extent extracting balanced modules that preserve the original shape of SNOMED CT is possible, by presenting and evaluating an iterative algorithm. We used a graph-traversal modularization approach based on an input signature. To conform to our definition of a balanced module, we implemented an iterative algorithm that carefully bootstrapped and dynamically adjusted the signature at each step. We measured the error for each sub-hierarchy and defined convergence as a residual sum of squares <1. Using 2000 concepts as an initial signature, our algorithm converged after seven iterations and extracted a module 4.7 % the size of SNOMED CT. Seven sub-hierarchies were either over- or under-represented within a range of 1-8 %. Our study shows that balanced modules from large terminologies can be extracted using ontology graph-traversal modularization techniques under certain conditions: that the process is repeated a number of times, the input signature is dynamically adjusted in each iteration, and a moderate under/over-representation of some hierarchies is tolerated. In the case of SNOMED CT, our results conclusively show that it can be squeezed to less than 5 % of its size without any sub-hierarchy losing its shape by more than 8 %, which is likely sufficient in most use cases.
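
    The balancing idea can be sketched as follows, with a hypothetical ontology access layer (parents, hierarchy_of, all_concepts) and a simple re-weighting rule standing in for the paper's bootstrap and adjustment details; only the overall loop (sample a signature, traverse, measure shape error, stop when the residual sum of squares falls below 1) follows the description above.

# Hedged sketch of iterative balanced-module extraction. The ontology access layer
# (parents, hierarchy_of, all_concepts, target_share) is hypothetical.
import random
from collections import Counter

def extract_module(signature, parents):
    """Upward graph-traversal module: signature concepts plus all their ancestors."""
    module, stack = set(), list(signature)
    while stack:
        c = stack.pop()
        if c not in module:
            module.add(c)
            stack.extend(parents.get(c, ()))
    return module

def balanced_module(all_concepts, parents, hierarchy_of, target_share, size=2000, max_iter=20):
    """target_share: desired percentage of the module per sub-hierarchy."""
    weights = {h: 1.0 for h in target_share}                  # sampling weight per hierarchy
    module = set()
    for _ in range(max_iter):
        pool = list(all_concepts)
        probs = [weights.get(hierarchy_of[c], 1.0) for c in pool]
        signature = random.choices(pool, weights=probs, k=size)
        module = extract_module(signature, parents)
        counts = Counter(hierarchy_of[c] for c in module)
        share = {h: 100.0 * counts.get(h, 0) / len(module) for h in target_share}
        rss = sum((share[h] - target_share[h]) ** 2 for h in target_share)
        if rss < 1.0:                                         # convergence criterion from the paper
            break
        for h in target_share:                                # boost under-represented hierarchies
            weights[h] *= max(0.1, target_share[h] / max(share[h], 1e-6))
    return module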

  6. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.

  7. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Inviscid Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options, with the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.

  8. Devil is in the details: Using logic models to investigate program process.

    PubMed

    Peyton, David J; Scicchitano, Michael

    2017-12-01

    Theory-based logic models are commonly developed as part of requirements for grant funding. As a tool to communicate complex social programs, theory-based logic models are an effective form of visual communication. However, after initial development, theory-based logic models are often abandoned and remain in their initial form despite changes in the program process. This paper examines the potential benefits of committing time and resources to revising the initial theory-driven logic model and developing detailed logic models that describe key activities to accurately reflect the program and assist in effective program management. The authors use a funded special education teacher preparation program to exemplify the utility of drill-down logic models. The paper concludes with lessons learned from the iterative revision process and suggests how the process can lead to more flexible and calibrated program management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

    The ever increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding has stimulated the development of increasingly complex physics based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.

  10. US NDC Modernization Iteration E1 Prototyping Report: Processing Control Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Ryan; Hamlet, Benjamin R.

    2014-12-01

    During the first iteration of the US NDC Modernization Elaboration phase (E1), the SNL US NDC modernization project team developed an initial survey of applicable COTS solutions, and established exploratory prototyping related to the processing control framework in support of system architecture definition. This report summarizes these activities and discusses planned follow-on work.

  11. Foucauldian Iterative Learning Conversations--An Example of Organisational Change: Developing Conjoint-Work between EPS and Social Workers

    ERIC Educational Resources Information Center

    Apter, Brian

    2014-01-01

    An organisational change-process in a UK local authority (LA) over two years is examined using transcribed excerpts from three meetings. The change-process is analysed using a Foucauldian analytical tool--Iterative Learning Conversations (ILCS). An Educational Psychology Service was changed from being primarily an education-focussed…

  12. Biomolecular Interaction Analysis Using an Optical Surface Plasmon Resonance Biosensor: The Marquardt Algorithm vs Newton Iteration Algorithm

    PubMed Central

    Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan

    2015-01-01

    Kinetic analysis of biomolecular interactions is widely used to quantify the binding kinetic constants for the determination of a complex formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach to the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interaction processes. The binding affinity of the antibody to the antigen (or the receptor to the ligand) reflects the biological activities of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and dissociation rate of the receptor to the ligand are substantial parameters for the study of signal transmission between cells. Experimental data may lead to complicated real-time curves that do not fit well to the kinetic model. This paper presents an analysis approach to biomolecular interactions based on the Marquardt algorithm. The algorithm was implemented in a homemade bioanalyzer to perform nonlinear curve-fitting of the association and dissociation process of the receptor to the ligand. Compared with the results from the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial value, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally obtained as 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹ and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of the HBsAg solution at a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly using the data obtained from the curve-fitting results. PMID:26147997
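
    A generic 1:1 Langmuir binding fit with the Levenberg-Marquardt algorithm (SciPy's least_squares with method='lm') is sketched below to make the curve-fitting step concrete; the model, constants and noise are illustrative and are not the HBsAg data or the authors' bioanalyzer code.

# Generic 1:1 binding-kinetics fit using the Levenberg-Marquardt algorithm; the model
# and all numbers are illustrative placeholders, not the paper's data.
import numpy as np
from scipy.optimize import least_squares

def sensorgram(t, ka, kd, rmax, conc, t_off):
    """Association for t < t_off, dissociation afterwards (simple 1:1 Langmuir model)."""
    kobs = ka * conc + kd
    r_assoc = rmax * ka * conc / kobs * (1 - np.exp(-kobs * t))
    r_end = rmax * ka * conc / kobs * (1 - np.exp(-kobs * t_off))
    return np.where(t < t_off, r_assoc, r_end * np.exp(-kd * (t - t_off)))

t = np.linspace(0, 600, 601)
true = sensorgram(t, ka=7e5, kd=7e-4, rmax=120.0, conc=1e-9, t_off=300.0)
data = true + np.random.default_rng(0).normal(0, 0.5, t.size)

def residuals(p):
    ka, kd, rmax = p
    return sensorgram(t, ka, kd, rmax, 1e-9, 300.0) - data

fit = least_squares(residuals, x0=[1e5, 1e-3, 100.0], method='lm',
                    x_scale=[1e5, 1e-3, 100.0])              # Marquardt-style scaling
print("ka, kd, Rmax =", fit.x, " KD =", fit.x[1] / fit.x[0])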

  13. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full-FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. The image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.

  14. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer should be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer's application in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
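
    One way to realize an inversion-free, warm-started minimum-variance weight update is a projected-gradient iteration on w^H R w subject to w^H a = 1, reminiscent of Frost's constrained LMS; the sketch below follows that idea and is not necessarily the authors' exact iteration. The covariance, steering vector and sizes are synthetic.

# Matrix-inversion-free minimum-variance weights via projected gradient descent,
# warm-started from the previous imaging point; a sketch in the spirit of the idea above.
import numpy as np

def mv_weights(R, a, w0=None, mu=None, iters=50):
    """Minimize w^H R w subject to w^H a = 1 without explicitly inverting R."""
    a = a.astype(complex)
    w = a / (a.conj() @ a) if w0 is None else w0.copy()      # feasible start (DAS or warm start)
    if mu is None:
        mu = 0.5 / np.trace(R).real                          # conservative step size
    for _ in range(iters):
        w = w - mu * (R @ w)                                 # gradient step on w^H R w
        w = w + a * (1 - a.conj() @ w) / (a.conj() @ a)      # project back onto w^H a = 1
    return w

rng = np.random.default_rng(0)
L = 16                                                       # subaperture length
a = np.ones(L, dtype=complex)                                # steering vector after delays
snap = rng.normal(size=(L, 64)) + 1j * rng.normal(size=(L, 64))
R = snap @ snap.conj().T / 64 + 1e-2 * np.eye(L)             # sample covariance + diagonal loading
w_prev = mv_weights(R, a)                                    # first imaging point
w_next = mv_weights(R, a, w0=w_prev, iters=5)                # neighbor: few iterations suffice
print(abs(a.conj() @ w_next))                                # distortionless constraint ~ 1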

  15. Evolution of weighted complex bus transit networks with flow

    NASA Astrophysics Data System (ADS)

    Huang, Ailing; Xiong, Jie; Shen, Jinsheng; Guan, Wei

    2016-02-01

    Study on the intrinsic properties and evolutional mechanism of urban public transit networks (PTNs) has great significance for transit planning and control, particularly considering passengers’ dynamic behaviors. This paper presents an empirical analysis for exploring the complex properties of Beijing’s weighted bus transit network (BTN) based on passenger flow in L-space, and proposes a bi-level evolution model to simulate the development of transit routes from the view of complex network. The model is an iterative process that is driven by passengers’ travel demands and dual-controlled interest mechanism, which is composed of passengers’ spatio-temporal requirements and cost constraint of transit agencies. Also, the flow’s dynamic behaviors, including the evolutions of travel demand, sectional flow attracted by a new link and flow perturbation triggered in nearby routes, are taken into consideration in the evolutional process. We present the numerical experiment to validate the model, where the main parameters are estimated by using distribution functions that are deduced from real-world data. The results obtained have proven that our model can generate a BTN with complex properties, such as the scale-free behavior or small-world phenomenon, which shows an agreement with our empirical results. Our study’s results can be exploited to optimize the real BTN’s structure and improve the network’s robustness.

  16. Laser simulation applying Fox-Li iteration: investigation of reason for non-convergence

    NASA Astrophysics Data System (ADS)

    Paxton, Alan H.; Yang, Chi

    2017-02-01

    Fox-Li iteration is often used to numerically simulate lasers. If a solution is found, the complex field amplitude is a good indication of the laser mode. The case of a semiconductor laser, for which the medium possesses a self-focusing nonlinearity, was investigated. For a case of interest, the iterations did not yield a converged solution. Another approach was needed to explore the properties of the laser mode. The laser was treated (unphysically) as a regenerative amplifier. As the input to the amplifier, we required a smooth complex field distribution that matched the laser resonator. To obtain such a field, we found what would be the solution for the laser field if the strength of the self focusing nonlinearity were α = 0. This was used as the input to the laser, treated as an amplifier. Because the beam deteriorated as it propagated multiple passes in the resonator and through the gain medium (for α = 2.7), we concluded that a mode with good beam quality could not exist in the laser.

  17. Diverse Power Iteration Embeddings and Its Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, H.; Yoo, S.; Yu, D.

    2014-12-14

    Spectral embedding is one of the most effective dimension-reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings that are more efficient but far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g., clustering, anomaly detection and feature selection) and evaluating the performance improvements. The experimental results show that the proposed DPIE is more effective than popular spectral approximation methods and achieves quality similar to that of classic spectral embedding derived from eigen-decompositions. Moreover, it is extremely fast on big-data applications: in terms of clustering results, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets while being over 4000 times faster in a limited-memory environment.
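
    The sketch below shows the basic power-iteration embedding that DPIE builds on (in the spirit of power iteration clustering): a single early-stopped embedding vector is obtained from a row-normalized affinity matrix. It is not the DPIE algorithm itself; per the abstract, DPIE's contribution is to produce several such vectors that are diverse rather than redundant.

```python
import numpy as np

def power_iteration_embedding(A, n_iter=100, tol=1e-6, rng=None):
    """One-dimensional power-iteration embedding of an affinity matrix A.

    Iterates the row-normalized affinity matrix and stops early, before full
    convergence to the trivial dominant eigenvector, so that the intermediate
    vector still separates clusters (as in power iteration clustering).
    """
    rng = np.random.default_rng() if rng is None else rng
    W = A / A.sum(axis=1, keepdims=True)        # row-normalized affinities
    v = rng.random(A.shape[0])
    v /= np.abs(v).sum()
    delta_prev = np.inf
    for _ in range(n_iter):
        v_new = W @ v
        v_new /= np.abs(v_new).sum()
        delta = np.abs(v_new - v).max()
        if abs(delta - delta_prev) < tol:       # change per step has stabilized: stop early
            return v_new
        v, delta_prev = v_new, delta
    return v

# Illustrative use on a Gaussian affinity matrix built from random 2-D points.
rng = np.random.default_rng(0)
points = rng.random((30, 2))
dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
A = np.exp(-dist**2)
embedding = power_iteration_embedding(A, rng=rng)
```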

  18. Development of a pressure based multigrid solution method for complex fluid flows

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1991-01-01

    In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
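
    As a minimal illustration of the multigrid idea (not the FMG/FAS pressure-correction scheme of the report), the sketch below applies two-grid correction cycles to the 1D Poisson problem: smoothing on the fine grid, restriction of the residual, a direct coarse-grid solve, and prolongation of the correction. Grid sizes and smoothing parameters are illustrative.

```python
import numpy as np

def jacobi_smooth(u, f, h, n_sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(n_sweeps):
        u_pad = np.concatenate(([0.0], u, [0.0]))
        u = (1 - omega) * u + omega * 0.5 * (u_pad[:-2] + u_pad[2:] + h * h * f)
    return u

def residual(u, f, h):
    u_pad = np.concatenate(([0.0], u, [0.0]))
    return f - (2 * u - u_pad[:-2] - u_pad[2:]) / (h * h)

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual, solve the coarse problem, prolong, smooth."""
    u = jacobi_smooth(u, f, h)
    r = residual(u, f, h)
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]   # full-weighting restriction
    hc = 2 * h
    nc = rc.size
    Ac = (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / (hc * hc)
    ec = np.linalg.solve(Ac, rc)                               # direct coarse-grid solve
    e = np.zeros_like(u)
    e[1:-1:2] = ec                                             # coarse points carried over
    ec_pad = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (ec_pad[:-1] + ec_pad[1:])                 # linear interpolation
    return jacobi_smooth(u + e, f, h)

n = 63                                    # fine-grid interior points (2^k - 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)          # exact solution is sin(pi * x)
u = np.zeros(n)
for _ in range(10):                       # a few cycles reduce the error rapidly
    u = two_grid_cycle(u, f, h)
```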

  19. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. The conclusions indicate how further efficiency gains could potentially be accrued by combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
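
    For context, a plain ordinary Kriging prediction at a single target point looks roughly like the sketch below; solving the dense (n+1)×(n+1) system is the O(n³) step that the incremental and recursive strategies aim to avoid repeating from scratch. The exponential covariance model and its parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def exp_cov(d, sill=1.0, range_=10.0):
    """Exponential covariance model (illustrative parameter values)."""
    return sill * np.exp(-d / range_)

def ordinary_kriging(points, values, target):
    """Ordinary Kriging prediction and variance at one target location."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = exp_cov(d)
    K[n, :n] = K[:n, n] = 1.0          # unbiasedness constraint: weights sum to one
    K[n, n] = 0.0
    rhs = np.empty(n + 1)
    rhs[:n] = exp_cov(np.linalg.norm(points - target, axis=-1))
    rhs[n] = 1.0
    sol = np.linalg.solve(K, rhs)      # the O(n^3) dense solve
    weights, lagrange = sol[:n], sol[n]
    estimate = weights @ values
    variance = exp_cov(0.0) - weights @ rhs[:n] - lagrange
    return estimate, variance

# Illustrative use with static sensor locations and noisy observations.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(50, 2))
vals = np.sin(pts[:, 0] / 20) + rng.normal(0, 0.05, 50)
z_hat, z_var = ordinary_kriging(pts, vals, np.array([50.0, 50.0]))
```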

  20. Label-free and amplified quantitation of proteins in complex mixtures using diffractive optics technology.

    PubMed

    Cleverley, Steve; Chen, Irene; Houle, Jean-François

    2010-01-15

    Immunoaffinity approaches remain invaluable tools for characterization and quantitation of biopolymers. Their application in separation science is often limited due to the challenges of immunoassay development. Typical end-point immunoassays require time-consuming and labor-intensive approaches for optimization. Real-time label-free analysis using diffractive optics technology (dot) helps guide a very effective iterative process for rapid immunoassay development. Both label-free and amplified approaches can be used throughout feasibility testing and ultimately in the final assay, providing a robust platform for biopolymer analysis over a very broad dynamic range. We demonstrate the use of dot in rapidly developing assays for quantitating (1) human IgG in complex media, (2) a fusion protein in production media and (3) protein A contamination in purified immunoglobulin preparations. 2009 Elsevier B.V. All rights reserved.

  1. From Intent to Action: An Iterative Engineering Process

    ERIC Educational Resources Information Center

    Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain

    2015-01-01

    Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…

  2. Iterated learning and the evolution of language.

    PubMed

    Kirby, Simon; Griffiths, Tom; Smith, Kenny

    2014-10-01

    Iterated learning describes the process whereby an individual learns a behaviour by observing it in another individual, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Stimulating Scientific Reasoning with Drawing-Based Modeling

    NASA Astrophysics Data System (ADS)

    Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank

    2018-02-01

    We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each iteration, the user interface and instructions were adjusted based on students' remarks and the teacher's observations. Students' conversations were analyzed for reasoning complexity as a measure of the efficacy of the modeling tool and the instructions. These findings were also used to compose a set of recommendations for teachers and curriculum designers for using and constructing models in the classroom. Our findings suggest that to stimulate scientific reasoning in students working with a drawing-based modeling tool, instruction about the tool and the domain should be integrated. In creating models, a sufficient level of scaffolding is necessary. Without appropriate scaffolds, students are not able to create the model. With too much scaffolding, students may show reasoning that incorrectly assigns external causes to behavior in the model.

  4. Probabilistic Cellular Automata

    PubMed Central

    Agapie, Alexandru; Giuclea, Marius

    2014-01-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata, otherwise they are called deterministic. With either type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case—connecting the probability of a configuration in the stationary distribution to its number of zero-one borders—the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata. PMID:24999557
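
    A minimal sketch of a synchronous probabilistic cellular automaton in this spirit is given below; the three-cell neighborhood, periodic boundaries and the particular probability vector are illustrative assumptions, not taken from the article.

```python
import numpy as np

def pca_step(config, p, rng):
    """One synchronous update of a 1-D probabilistic cellular automaton.

    Each cell counts the ones in its three-cell neighborhood (periodic
    boundaries) and becomes one with probability p[count], zero otherwise.
    A 0/1-valued p, e.g. p = [0, 0, 1, 1], recovers a deterministic rule.
    """
    ones = np.roll(config, 1) + config + np.roll(config, -1)   # counts in {0,1,2,3}
    return (rng.random(config.size) < p[ones]).astype(int)

# Long-run behaviour approximates the stationary distribution of the Markov chain.
rng = np.random.default_rng(0)
config = rng.integers(0, 2, size=100)          # arbitrary initial configuration
p = np.array([0.05, 0.3, 0.7, 0.95])           # probabilistic transition rule
for _ in range(200):
    config = pca_step(config, p, rng)
```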

  5. Probabilistic cellular automata.

    PubMed

    Agapie, Alexandru; Andreica, Anca; Giuclea, Marius

    2014-09-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata, otherwise they are called deterministic. With either type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case-connecting the probability of a configuration in the stationary distribution to its number of zero-one borders-the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.

  6. An implementation of the QMR method based on coupled two-term recurrences

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noël M.

    1992-01-01

    The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.

  7. Digital adaptive optics confocal microscopy based on iterative retrieval of optical aberration from a guidestar hologram

    PubMed Central

    Liu, Changgeng; Thapa, Damber; Yao, Xincheng

    2017-01-01

    Guidestar hologram based digital adaptive optics (DAO) is one recently emerging active imaging modality. It records each complex distorted line field reflected or scattered from the sample by an off-axis digital hologram, measures the optical aberration from a separate off-axis digital guidestar hologram, and removes the optical aberration from the distorted line fields by numerical processing. In previously demonstrated DAO systems, the optical aberration was directly retrieved from the guidestar hologram by taking its Fourier transform and extracting the phase term. For the direct retrieval method (DRM), when the sample is not coincident with the guidestar focal plane, the accuracy of the optical aberration retrieved by DRM undergoes a fast decay, leading to quality deterioration of corrected images. To tackle this problem, we explore here an image metrics-based iterative method (MIM) to retrieve the optical aberration from the guidestar hologram. Using an aberrated objective lens and scattering samples, we demonstrate that MIM can improve the accuracy of the retrieved aberrations from both focused and defocused guidestar holograms, compared to DRM, to improve the robustness of the DAO. PMID:28380937

  8. Ion beam figuring of Φ520mm convex hyperbolic secondary mirror

    NASA Astrophysics Data System (ADS)

    Meng, Xiaohui; Wang, Yonggang; Li, Ang; Li, Wenqing

    2016-10-01

    The convex hyperbolic secondary mirror is a Φ520-mm Zerodur lightweight hyperbolic convex mirror. Conventional methods such as CCOS and stressed-lap polishing are typically used to manufacture such a secondary mirror. However, the required surface accuracy cannot be achieved with conventional polishing methods because of the unpredictable behavior of the polishing tools, which leads to an unstable removal rate. Ion beam figuring is an optical fabrication method that provides highly controlled correction of figure error on previously polished surfaces, using a directed, inert, neutralized ion beam to physically sputter material from the optic surface. Several iterations with different ion beam sizes are selected and optimized to fit different stages of surface figure error and different spatial frequency components. Before ion beam figuring, the surface figure error of the secondary mirror was 2.5λ p-v, 0.23λ rms; it was improved to 0.12λ p-v, 0.014λ rms in several process iterations. The demonstration clearly shows that ion beam figuring can be used not only for the final correction of aspheric surfaces but also for polishing the coarse surface of a large, complex mirror.

  9. 2.5D transient electromagnetic inversion with OCCAM method

    NASA Astrophysics Data System (ADS)

    Li, R.; Hu, X.

    2016-12-01

    In the application of the time-domain electromagnetic method (TEM), multidimensional inversion schemes have been applied over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, with a finite-difference time-domain (FDTD) forward method, is mainly implemented with the Nonlinear Conjugate Gradient (NLCG) method. However, the convergence rate of NLCG depends heavily on the Lagrange multiplier, and the iteration may fail to converge. We use the OCCAM inversion method to avoid this weakness. OCCAM inversion has proven to be a more stable and reliable method for imaging the 2.5D electrical conductivity of the subsurface. First, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Second, we use the OCCAM inversion scheme, with an appropriate objective error functional, to image the 2.5D structure; a data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme is also given in this paper. The sensitivity matrix is calculated with the method of time-integrated back-propagated fields. The imaging results for the example model shown in Fig. 1 demonstrate that the OCCAM scheme is an efficient inversion method for TEM with the FDTD forward model, and the inversion iterations converge within only a few steps. Summarizing the imaging process, we draw the following conclusions. First, the 2.5D imaging in the FDTD system with OCCAM inversion shows that the desired imaging results for the resistivity structure in a homogeneous half-space can be obtained. Second, the imaging results usually do not depend strongly on the initial model, but the number of iterations can be reduced distinctly if the background resistivity of the initial model is close to the true model, so it is better to set the initial model using other geologic information in practice. When the background resistivity fits the true model well, imaging the anomalous body requires only a few iteration steps. Finally, vertical boundaries are imaged more slowly than horizontal boundaries.
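
    The sketch below shows one generic Occam-style (regularized Gauss-Newton) update for a nonlinear inverse problem; the forward model, Jacobian, data weighting and roughening operator are placeholders, and the FDTD forward modelling and sensitivity calculation of the study are not reproduced here.

```python
import numpy as np

def occam_update(m, d_obs, forward, jacobian, W, R, mus):
    """One Occam-style (regularized Gauss-Newton) iteration.

    forward(m) and jacobian(m) stand in for the expensive forward model and
    its sensitivities; W is a data-weighting matrix and R a roughening
    (e.g. first-difference) operator. The regularization parameter mu is
    scanned and the best-fitting candidate model is returned.
    """
    J = jacobian(m)
    d_hat = d_obs - forward(m) + J @ m          # linearized data
    WJ = W @ J
    best = None
    for mu in mus:
        lhs = mu * (R.T @ R) + WJ.T @ WJ
        m_new = np.linalg.solve(lhs, WJ.T @ (W @ d_hat))
        misfit = np.linalg.norm(W @ (d_obs - forward(m_new)))
        if best is None or misfit < best[0]:
            best = (misfit, m_new)
    return best[1], best[0]

# Tiny synthetic linear example (forward model G @ m, identity data weighting).
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
d = G @ np.array([1.0, 2.0])
R1 = np.array([[1.0, -1.0]])                    # first-difference roughening
m_est, fit = occam_update(np.zeros(2), d, lambda m: G @ m, lambda m: G,
                          np.eye(3), R1, mus=np.logspace(-6, 2, 17))
```

    In the Occam scheme as usually described, once the target misfit becomes attainable, mu is instead chosen as the largest value that still reaches it, so that the smoothest acceptable model is kept.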

  10. Reconsidering 'ethics' and 'quality' in healthcare research: the case for an iterative ethical paradigm.

    PubMed

    Stevenson, Fiona A; Gibson, William; Pelletier, Caroline; Chrysikou, Vasiliki; Park, Sophie

    2015-05-08

    UK-based research conducted within a healthcare setting generally requires approval from the National Research Ethics Service. Research ethics committees are required to assess a vast range of proposals, differing in both their topic and methodology. We argue the methodological benchmarks with which research ethics committees are generally familiar and which form the basis of assessments of quality do not fit with the aims and objectives of many forms of qualitative inquiry and their more iterative goals of describing social processes/mechanisms and making visible the complexities of social practices. We review current debates in the literature related to ethical review and social research, and illustrate the importance of re-visiting the notion of ethics in healthcare research. We present an analysis of two contrasting paradigms of ethics. We argue that the first of these is characteristic of the ways that NHS ethics boards currently tend to operate, and the second is an alternative paradigm, that we have labelled the 'iterative' paradigm, which draws explicitly on methodological issues in qualitative research to produce an alternative vision of ethics. We suggest that there is an urgent need to re-think the ways that ethical issues are conceptualised in NHS ethical procedures. In particular, we argue that embedded in the current paradigm is a restricted notion of 'quality', which frames how ethics are developed and worked through. Specific, pre-defined outcome measures are generally seen as the traditional marker of quality, which means that research questions that focus on processes rather than on 'outcomes' may be regarded as problematic. We show that the alternative 'iterative' paradigm offers a useful starting point for moving beyond these limited views. We conclude that a 'one size fits all' standardisation of ethical procedures and approach to ethical review acts against the production of knowledge about healthcare and dramatically restricts what can be known about the social practices and conditions of healthcare. Our central argument is that assessment of ethical implications is important, but that the current paradigm does not facilitate an adequate understanding of the very issues it aims to invigilate.

  11. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. The integration process can therefore proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) yield a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
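
    To illustrate the basic predict-then-correct iteration that PIRK-type methods parallelize, the sketch below uses an explicit Euler predictor followed by fixed-point corrections of the implicit trapezoidal rule on a scalar test problem; it is not the twostep-by-twostep PIRKC scheme itself, and the step size and correction count are illustrative.

```python
import numpy as np

def pc_step(f, t, y, h, n_corrections=3):
    """One predictor-corrector step: explicit Euler predictor followed by
    fixed-point iterations of the implicit trapezoidal corrector."""
    f_n = f(t, y)
    y_corr = y + h * f_n                          # predictor
    for _ in range(n_corrections):                # corrector iterations
        y_corr = y + 0.5 * h * (f_n + f(t + h, y_corr))
    return y_corr

# Illustrative use on y' = -y, y(0) = 1.
f = lambda t, y: -y
t, y, h = 0.0, np.array([1.0]), 0.1
for _ in range(50):
    y = pc_step(f, t, y, h)
    t += h
```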

  12. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazon, D., E-mail: Didier.Mazon@cea.fr; Jardin, A.; Liegeard, C.

    2016-11-15

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  13. Multidisciplinary systems optimization by linear decomposition

    NASA Technical Reports Server (NTRS)

    Sobieski, J.

    1984-01-01

    In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.

  14. Integrating Low-Cost Rapid Usability Testing into Agile System Development of Healthcare IT: A Methodological Perspective.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    The development of more usable and effective healthcare information systems has become a critical issue. In the software industry, methodologies such as agile and iterative development processes have emerged that lead to more effective and usable systems. These approaches emphasize focusing on user needs and promoting iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and of iterative processes for system design and re-design. However, the issue of how to effectively integrate usability testing methods into rapid and flexible agile design cycles has yet to be fully explored. In this paper we describe our application of an approach known as low-cost rapid usability testing as it has been applied within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.

  15. Generative Representations for Automated Design of Robots

    NASA Technical Reports Server (NTRS)

    Homby, Gregory S.; Lipson, Hod; Pollack, Jordan B.

    2007-01-01

    A method of automated design of complex, modular robots involves an evolutionary process in which generative representations of designs are used. The term generative representations as used here signifies, loosely, representations that consist of or include algorithms, computer programs, and the like, wherein encoded designs can reuse elements of their encoding and thereby evolve toward greater complexity. Automated design of robots through synthetic evolutionary processes has already been demonstrated, but it is not clear whether genetically inspired search algorithms can yield designs that are sufficiently complex for practical engineering. The ultimate success of such algorithms as tools for automation of design depends on the scaling properties of representations of designs. A nongenerative representation (one in which each element of the encoded design is used at most once in translating to the design) scales linearly with the number of elements. Search algorithms that use nongenerative representations quickly become intractable (search times vary approximately exponentially with numbers of design elements), and thus are not amenable to scaling to complex designs. Generative representations are compact representations and were devised as a means to circumvent the above-mentioned fundamental restriction on scalability. In the present method, a robot is defined by a compact programmatic form (its generative representation) and the evolutionary variation takes place on this form. The evolutionary process is an iterative one, wherein each cycle consists of the following steps: 1. Generative representations are generated in an evolutionary subprocess. 2. Each generative representation is a program that, when compiled, produces an assembly procedure. 3. In a computational simulation, a constructor executes an assembly procedure to generate a robot. 4. A physical-simulation program tests the performance of a simulated constructed robot, evaluating the performance according to a fitness criterion to yield a figure of merit that is fed back into the evolutionary subprocess of the next iteration. In comparison with prior approaches to automated evolutionary design of robots, the use of generative representations offers two advantages: First, a generative representation enables the reuse of components in regular and hierarchical ways and thereby serves as a systematic means of creating more complex modules out of simpler ones. Second, the evolved generative representation may capture intrinsic properties of the design problem, so that variations in the representations move through the design space more effectively than do equivalent variations in a nongenerative representation. This method has been demonstrated by using it to design some robots that move, variously, by walking, rolling, or sliding. Some of the robots were built (see figure). Although these robots are very simple, in comparison with robots designed by humans, their structures are more regular, modular, hierarchical, and complex than are those of evolved designs of comparable functionality synthesized by use of nongenerative representations.

  16. Modified CTAB and TRIzol protocols improve RNA extraction from chemically complex Embryophyta1

    PubMed Central

    Jordon-Thaden, Ingrid E.; Chanderbali, Andre S.; Gitzendanner, Matthew A.; Soltis, Douglas E.

    2015-01-01

    Premise of the study: Here we present a series of protocols for RNA extraction across a diverse array of plants; we focus on woody, aromatic, aquatic, and other chemically complex taxa. Methods and Results: Ninety-one taxa were subjected to RNA extraction with three methods presented here: (1) TRIzol/TURBO DNA-free kits using the manufacturer’s protocol with the addition of sarkosyl; (2) a combination method using cetyltrimethylammonium bromide (CTAB) and TRIzol/sarkosyl/TURBO DNA-free; and (3) a combination of CTAB and QIAGEN RNeasy Plant Mini Kit. Bench-ready protocols are given. Conclusions: After an iterative process of working with chemically complex taxa, we conclude that the use of TRIzol supplemented with sarkosyl and the TURBO DNA-free kit is an effective, efficient, and robust method for obtaining RNA from 100 mg of leaf tissue of land plant species (Embryophyta) examined. Our protocols can be used to provide RNA of suitable stability, quantity, and quality for transcriptome sequencing. PMID:25995975

  17. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented for achieving consensus tracking over a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.

  18. The TRIDEC Virtual Tsunami Atlas - customized value-added simulation data products for Tsunami Early Warning generated on compute clusters

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.

    2012-04-01

    The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of the spatio-temporal spreading of tsunami waves, both recorded from past events and for hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. The simulation results must therefore be absolutely trustworthy, in the sense that the quality of these datasets is assured. This is a prerequisite, as solid decision making during a crisis event and the dissemination of dependable warning messages to communities at risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, which is a value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important. The eye of the human observer remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates changes to many variables, including simulation end-parameters. Whenever new, improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami results within a model iteration in little time. This is a significant improvement over sequential processing on dedicated desktop machines or servers. It allows for accelerated and improved visual quality-checking iterations, which in turn feed back positively into the overall model improvement. An approach to setting up and utilizing the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC), funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The addressed challenges include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step, the implemented CCE services for generating derived and customized simulation products are foreseen to be provided via an EDA service for on-demand processing for specific threat parameters and to accommodate model improvements.

  19. Panel discussion summary: do we need a revolution in design and process integration to enable sub-100-nm technology nodes?

    NASA Astrophysics Data System (ADS)

    Grobman, Warren D.

    2002-07-01

    Dramatically increasing mask set costs, long-loop design-fabrication iterations, and lithography of unprecedented complexity and cost threaten to disrupt time-accepted IC industry progression as described by Moore's Law. Practical and cost-effective IC manufacturing below the 100nm technology node presents significant and unique new challenges spanning multiple disciplines and overlapping traditionally separable components of the design-through-chip manufacturing flow. Lithographic and other process complexity is compounded by design, mask, and infrastructure technologies, which do not sufficiently account for increasingly stringent and complex manufacturing issues. Deep subwavelength and atomic-scale process and device physics effects increasingly invade and impact the design flow strongly at a time when the pressures for increased design productivity are escalating at a superlinear rate. Productivity gaps, both upstream in design and downstream in fabrication, are anticipated by many to increase due to dramatic increases in inherent complexity of the design-to-chip equation. Furthermore, the cost of lithographic equipment is increasing at an aggressive compound growth rate so large that we can no longer economically derive the benefit of the increased number of circuits per unit area unless we extend the life of lithographic equipment for more generations, and deeper into the subwavelength regime. Do these trends unambiguously lead to the conclusion that we need a revolution in design and design-process integration to enable the sub-100nm nodes? Or is such a premise similar to other well-known predictions of technology brick walls that never came true?

  20. Reconceptualizing children's complex discharge with health systems theory: novel integrative review with embedded expert consultation and theory development.

    PubMed

    Noyes, Jane; Brenner, Maria; Fox, Patricia; Guerin, Ashleigh

    2014-05-01

    To report a novel review to develop a health systems model of successful transition of children with complex healthcare needs from hospital to home. Children with complex healthcare needs commonly experience an expensive, ineffectual and prolonged nurse-led discharge process. Children gain no benefit from prolonged hospitalization and are exposed to significant harm. Research to enable intervention development and process evaluation across the entire health system is lacking. Novel mixed-method integrative review informed by health systems theory. Data sources: CINAHL, PsychInfo, EMBASE, PubMed, citation searching, personal contact. Review methods: Informed by consultation with experts. English language studies, opinion/discussion papers reporting research, best practice and experiences of children, parents and healthcare professionals and purposively selected policies/guidelines from 2002-December 2012 were abstracted using Framework synthesis, followed by iterative theory development. Seven critical factors derived from thirty-four sources across five health system levels explained successful discharge (new programme theory). All seven factors are required in an integrated care pathway, with a dynamic communication loop to facilitate effective discharge (new programme logic). Current health system responses were frequently static and critical success factors were commonly absent, thereby explaining ineffectual discharge. The novel evidence-based model, which reconceptualizes 'discharge' as a highly complex longitudinal health system intervention, makes a significant contribution to global knowledge to drive practice development. Research is required to develop process and outcome measures at different time points in the discharge process and future trials are needed to determine the effectiveness of integrated health system discharge models. © 2013 John Wiley & Sons Ltd.

  1. Multiple-image authentication with a cascaded multilevel architecture based on amplitude field random sampling and phase information multiplexing.

    PubMed

    Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2015-04-10

    A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.

  2. Dynamic simulation of relief line during loss of insulation vacuum of the ITER cryoline

    NASA Astrophysics Data System (ADS)

    Badgujar, S.; Kosek, J.; Grillot, D.; Forgeas, A.; Sarkar, B.; Shah, N.; Choukekar, K.; Chang, H.-S.

    2017-12-01

    The ITER cryoline (CL) system consists of 37 types of vacuum-jacketed transfer lines which form a complex structured network with a total length of about 5 km, spread inside the Tokamak building, on a dedicated plant bridge and in the Cryoplant building/area. One of them, the low-pressure relief line (RL), recovers helium discharged from the process safety relief valves of the different cryogenic users and sends it back to the Cryoplant via the heater and recovery system. The process pipe diameters of the RL vary from DN 50 to DN 200 and the length is more than 1500 m. Loss of insulation vacuum (LIV) of a CL is one of the worst scenarios apart from LIV in the Auxiliary Cold Boxes (ACBs). The Torus and Cryostat CL is chosen to simulate a virtual LIV and to study the anticipated behavior of the RL. Both helium LIV (due to a leak in a helium pipe) and air LIV (due to air ingress into the outer vacuum jacket of the cryoline), with and without fire, have been simulated in this study. After a brief description of the CL system, the paper describes the EcosimPro® model prepared for the dynamic study. The paper also describes results such as the minimum temperature, mass flow and maximum pressure in the RL, which are used to choose the type and location of the safety relief devices that protect the CL process pipes.

  3. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  4. Small Modifications of Curvilinear Coordinates and Successive Approximations Applied in Geopotential Determination

    NASA Astrophysics Data System (ADS)

    Holota, P.; Nesvadba, O.

    2016-12-01

    The mathematical apparatus currently applied for geopotential determination is undoubtedly quite developed. This concerns numerical methods as well as methods based on classical analysis, and classical as well as weak solution concepts. Nevertheless, the nature of the real surface of the Earth has its specific features and is still rather complex. The aim of this paper is to consider these limits and to seek a balance between the performance of an apparatus developed for a surface of the Earth smoothed (or simplified) up to a certain degree and an iteration procedure used to bridge the difference between the real and the smoothed topography. The approach is applied to the solution of the linear gravimetric boundary value problem in geopotential determination. As in other branches of engineering and mathematical physics, a transformation of coordinates is used that offers the possibility of trading the complexity of the boundary against the complexity of the coefficients of the partial differential equation governing the solution. As examples, the use of modified spherical and modified ellipsoidal coordinates for the transformation of the solution domain is discussed. However, the complexity of the boundary is then reflected in the structure of Laplace's operator. This effect is taken into account by means of successive approximations. The structure of the respective iteration steps is derived and analyzed. At the level of individual iteration steps, attention is paid to the representation of the solution in terms of function bases or in terms of Green's functions. The convergence of the procedure and the efficiency of its use for geopotential determination are discussed.

  5. The development of the Final Approach Spacing Tool (FAST): A cooperative controller-engineer design approach

    NASA Technical Reports Server (NTRS)

    Lee, Katharine K.; Davis, Thomas J.

    1995-01-01

    Historically, the development of advanced automation for air traffic control in the United States has excluded the input of the air traffic controller until the end of the development process. In contrast, the development of the Final Approach Spacing Tool (FAST), for the terminal area controller, has incorporated the end-user in early, iterative testing. This paper describes a cooperative effort between the controller and the developer to create a tool that incorporates the complexity of the air traffic controller's job. This approach to software development has enhanced the usability of FAST and has helped smooth the introduction of FAST into the operational environment.

  6. Signalling networks and dynamics of allosteric transitions in bacterial chaperonin GroEL: implications for iterative annealing of misfolded proteins.

    PubMed

    Thirumalai, D; Hyeon, Changbong

    2018-06-19

    Signal transmission at the molecular level in many biological complexes occurs through allosteric transitions. Allostery describes the responses of a complex to binding of ligands at sites that are spatially well separated from the binding region. We describe the structural perturbation method, based on phonon propagation in solids, which can be used to determine the signal-transmitting allostery wiring diagram (AWD) in large but finite-sized biological complexes. Application to the bacterial chaperonin GroEL-GroES complex shows that the AWD determined from structures also drives the allosteric transitions dynamically. From both a structural and dynamical perspective these transitions are largely determined by formation and rupture of salt-bridges. The molecular description of allostery in GroEL provides insights into its function, which is quantitatively described by the iterative annealing mechanism. Remarkably, in this complex molecular machine, a deep connection is established between the structures, reaction cycle during which GroEL undergoes a sequence of allosteric transitions, and function, in a self-consistent manner. This article is part of a discussion meeting issue 'Allostery and molecular machines'. © 2018 The Author(s).

  7. Discrete Fourier Transform Analysis in a Complex Vector Space

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2009-01-01

    Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations than its standard counterpart. The software decreases the run time of the DFT in applications, such as phase retrieval, that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations, versus the N log(N) operations of the equivalent Fast Fourier Transform (FFT) calculation.
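
    The starting point, viewing the DFT as a linear transformation of a complex vector space, can be written down directly; the snippet below contrasts the O(N²) matrix form with NumPy's FFT and is purely illustrative, not the reduced-operation algorithm described above.

```python
import numpy as np

n = 8
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)    # the DFT as a linear map on C^n
x = np.random.default_rng(0).standard_normal(n)
X_matrix = F @ x                                # O(N^2) matrix-vector product
X_fft = np.fft.fft(x)                           # O(N log N) FFT
assert np.allclose(X_matrix, X_fft)             # same transform, different operation count
```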

  8. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554

  9. Beamforming Based Full-Duplex for Millimeter-Wave Communication

    PubMed Central

    Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen

    2016-01-01

    In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256

  10. Toward Generalization of Iterative Small Molecule Synthesis

    PubMed Central

    Lehmann, Jonathan W.; Blair, Daniel J.; Burke, Martin D.

    2018-01-01

    Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the “building block approach”, i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach. PMID:29696152

  11. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.

    PubMed

    Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter

    2017-09-01

    An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
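
    A rough sketch of the two ingredients described above, estimating the temporal subspace from k-space locations fully sampled in time and projecting undersampled temporal signals onto it by matrix multiplication, is given below. Array sizes, the chosen rank and the random data are illustrative, and the data-consistency step that completes each iteration is omitted.

```python
import numpy as np

def temporal_subspace(calib, rank):
    """Estimate a low-dimensional temporal subspace from k-space locations
    that are fully sampled along the time (fingerprint) dimension.

    calib: (n_calib, n_timepoints) complex array.
    Returns an orthonormal basis V of shape (n_timepoints, rank).
    """
    _, _, Vh = np.linalg.svd(calib, full_matrices=False)
    return Vh[:rank].conj().T

def project_onto_subspace(kspace_t, V):
    """Project temporal k-space signals onto the subspace: a plain matrix
    multiplication with V V^H, in place of singular-value thresholding."""
    return (kspace_t @ V) @ V.conj().T

# Illustrative use with synthetic complex data.
rng = np.random.default_rng(0)
n_calib, n_t, rank = 64, 500, 5
calib = rng.standard_normal((n_calib, n_t)) + 1j * rng.standard_normal((n_calib, n_t))
V = temporal_subspace(calib, rank)
undersampled = rng.standard_normal((1024, n_t)) + 1j * rng.standard_normal((1024, n_t))
consistent = project_onto_subspace(undersampled, V)   # one projection step of the iteration
```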

  12. Region of interest processing for iterative reconstruction in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.

    2015-03-01

    Recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging, yet for some clinical procedures, such as cardiac CT, only a ROI is needed for diagnosis. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort and results in a reconstruction slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. In particular, improvements for the equalization between the regions inside and outside of a ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.

  13. A 2D systems approach to iterative learning control for discrete linear processes with zero Markov parameters

    NASA Astrophysics Data System (ADS)

    Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.

    2011-07-01

    In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.

  14. Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Kelley, C. T.; Slattery, Stuart R

    A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single physics applications. This solution approach is appealing due to simplicity of implementation and the ability to leverage existing software packages to accurately solve single physics applications. However, there are several drawbacks in the convergence behavior of this method; namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and fast converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
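
    As a point of reference for the comparison described above, the sketch below implements Picard iteration and a windowed Anderson acceleration for a generic fixed-point map x = g(x). The map, window size, and tolerances are illustrative assumptions; the coupled neutronics/thermal-hydraulics model in the paper is not reproduced here.

```python
import numpy as np

def picard(g, x0, tol=1e-10, max_iter=500):
    """Plain Picard (fixed-point) iteration: x_{k+1} = g(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def anderson(g, x0, m=5, tol=1e-10, max_iter=500):
    """Anderson acceleration of the fixed-point iteration x = g(x).

    Keeps a window of the last m iterates and combines their g-values with
    least-squares weights (summing to one) that minimize the combined residual.
    """
    x = np.asarray(x0, dtype=float)
    X, F = [], []                           # histories of iterates and g-evaluations
    for k in range(max_iter):
        gx = g(x)
        X.append(x.copy())
        F.append(gx.copy())
        if len(X) > m:
            X.pop(0); F.pop(0)
        R = np.column_stack([f - xx for f, xx in zip(F, X)])   # residual history
        if np.linalg.norm(R[:, -1]) < tol:
            return gx, k + 1
        n = R.shape[1]
        if n > 1:
            # minimize ||r_last + sum_i alpha_i (r_i - r_last)|| over the older weights
            dR = R[:, :-1] - R[:, [-1]]
            gamma, *_ = np.linalg.lstsq(dR, R[:, -1], rcond=None)
            alpha = np.zeros(n)
            alpha[:-1] = -gamma
            alpha[-1] = 1.0 - alpha[:-1].sum()
        else:
            alpha = np.array([1.0])
        x = sum(a * f for a, f in zip(alpha, F))   # new iterate from mixed g-values
    return x, max_iter

# Toy contractive map: both methods converge; Anderson typically needs fewer steps.
g = lambda x: np.cos(x)                     # fixed point near 0.739
x_p, it_p = picard(g, np.array([1.0]))
x_a, it_a = anderson(g, np.array([1.0]))
print(f"Picard: {it_p} iters, Anderson: {it_a} iters")
```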

  15. Modeling the Spatial Dynamics of International Tuna Fleets

    PubMed Central

    2016-01-01

    We developed an iterative sequential random utility model to investigate the social and environmental determinants of the spatiotemporal decision process of tuna purse-seine fishery fishing effort in the eastern Pacific Ocean. Operations of the fishing gear mark checkpoints in a continuous complex decision-making process. Individual fisher behavior is modeled by identifying diversified choices over decision-space for an entire fishing trip, which allows inclusion of prior and current vessel locations and conditions among the explanatory variables. Among these factors are vessel capacity; departure and arrival port; duration of the fishing trip; daily and cumulative distance travelled, which provides a proxy for operation costs; expected revenue; oceanographic conditions; and tons of fish on board. The model uses a two-step decision process to capture the probability of a vessel choosing a specific fishing region for the first set and the probability of switching to (or staying in) a specific region to fish before returning to its landing port. The model provides a means to anticipate the success of marine resource management, and it can be used to evaluate fleet diversity in fisher behavior, the impact of climate variability, and the stability and resilience of complex coupled human and natural systems. PMID:27537545

  16. DyNAMiC Workbench: an integrated development environment for dynamic DNA nanotechnology

    PubMed Central

    Grun, Casey; Werfel, Justin; Zhang, David Yu; Yin, Peng

    2015-01-01

    Dynamic DNA nanotechnology provides a promising avenue for implementing sophisticated assembly processes, mechanical behaviours, sensing and computation at the nanoscale. However, design of these systems is complex and error-prone, because the need to control the kinetic pathway of a system greatly increases the number of design constraints and possible failure modes for the system. Previous tools have automated some parts of the design workflow, but an integrated solution is lacking. Here, we present software implementing a three ‘tier’ design process: a high-level visual programming language is used to describe systems, a molecular compiler builds a DNA implementation and nucleotide sequences are generated and optimized. Additionally, our software includes tools for analysing and ‘debugging’ the designs in silico, and for importing/exporting designs to other commonly used software systems. The software we present is built on many existing pieces of software, but is integrated into a single package—accessible using a Web-based interface at http://molecular-systems.net/workbench. We hope that the deep integration between tools and the flexibility of this design process will lead to better experimental results, fewer experimental design iterations and the development of more complex DNA nanosystems. PMID:26423437

  17. A Technique for Transient Thermal Testing of Thick Structures

    NASA Technical Reports Server (NTRS)

    Horn, Thomas J.; Richards, W. Lance; Gong, Leslie

    1997-01-01

    A new open-loop heat flux control technique has been developed to conduct transient thermal testing of thick, thermally-conductive aerospace structures. This technique uses calibration of the radiant heater system power level as a function of heat flux, predicted aerodynamic heat flux, and the properties of an instrumented test article. An iterative process was used to generate open-loop heater power profiles prior to each transient thermal test. Differences between the measured and predicted surface temperatures were used to refine the heater power level command profiles through the iteration process. This iteration process has reduced the effects of environmental and test system design factors, which are normally compensated for by closed-loop temperature control, to acceptable levels. The final revised heater power profiles resulted in measured temperature time histories which deviated less than 25 F from the predicted surface temperatures.

  18. A Real-Time Data Acquisition and Processing Framework Based on FlexRIO FPGA and ITER Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.

    2016-06-01

    Measurement and control of the plasma in real-time are critical for advanced Tokamak operation, requiring high-speed, real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At the J-TEXT Tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time system and FPGA-based FlexRIO devices. With FlexRIO devices, data can be processed by the FPGA in real-time before being passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuring, making the framework conform to ITER FPSC standard technology. With this framework, any kind of FlexRIO FPGA data acquisition and processing program can be configured with an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application extracts phase-shift information from the intermediate frequency signal produced by the polarimeter-interferometer diagnostic system and calculates the plasma density profile in real-time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.

  19. Depression as a systemic syndrome: mapping the feedback loops of major depressive disorder.

    PubMed

    Wittenborn, A K; Rahmandad, H; Rick, J; Hosseinichimeh, N

    2016-02-01

    Depression is a complex public health problem with considerable variation in treatment response. The systemic complexity of depression, or the feedback processes among diverse drivers of the disorder, contribute to the persistence of depression. This paper extends prior attempts to understand the complex causal feedback mechanisms that underlie depression by presenting the first broad boundary causal loop diagram of depression dynamics. We applied qualitative system dynamics methods to map the broad feedback mechanisms of depression. We used a structured approach to identify candidate causal mechanisms of depression in the literature. We assessed the strength of empirical support for each mechanism and prioritized those with support from validation studies. Through an iterative process, we synthesized the empirical literature and created a conceptual model of major depressive disorder. The literature review and synthesis resulted in the development of the first causal loop diagram of reinforcing feedback processes of depression. It proposes candidate drivers of illness, or inertial factors, and their temporal functioning, as well as the interactions among drivers of depression. The final causal loop diagram defines 13 key reinforcing feedback loops that involve nine candidate drivers of depression. Future research is needed to expand upon this initial model of depression dynamics. Quantitative extensions may result in a better understanding of the systemic syndrome of depression and contribute to personalized methods of evaluation, prevention and intervention.

  20. NETIMIS: Dynamic Simulation of Health Economics Outcomes Using Big Data.

    PubMed

    Johnson, Owen A; Hall, Peter S; Hulme, Claire

    2016-02-01

    Many healthcare organizations are now making good use of electronic health record (EHR) systems to record clinical information about their patients and the details of their healthcare. Electronic data in EHRs are generated by people engaged in complex processes within complex environments, and their human input, albeit shaped by computer systems, is compromised by many human factors. These data are potentially valuable to health economists and outcomes researchers but are sufficiently large and complex to be considered part of the new frontier of 'big data'. This paper describes emerging methods that draw together data mining, process modelling, activity-based costing and dynamic simulation models. Our research infrastructure includes safe links to Leeds hospital's EHRs covering 3 million secondary and tertiary care patients. We created a multidisciplinary team of health economists, clinical specialists, and data and computer scientists, and developed a dynamic simulation tool called NETIMIS (Network Tools for Intervention Modelling with Intelligent Simulation; http://www.netimis.com ) suitable for visualization of both human-designed and data-mined processes, which can then be used for 'what-if' analysis by stakeholders interested in costing, designing and evaluating healthcare interventions. We present two examples of model development to illustrate how dynamic simulation can be informed by big data from an EHR. We found that the tool provided a focal point for the multidisciplinary team, helping it iteratively and collaboratively 'deep dive' into big data.

  1. Depression as a systemic syndrome: mapping the feedback loops of major depressive disorder

    PubMed Central

    Wittenborn, A. K.; Rahmandad, H.; Rick, J.; Hosseinichimeh, N.

    2016-01-01

    Background Depression is a complex public health problem with considerable variation in treatment response. The systemic complexity of depression, or the feedback processes among diverse drivers of the disorder, contribute to the persistence of depression. This paper extends prior attempts to understand the complex causal feedback mechanisms that underlie depression by presenting the first broad boundary causal loop diagram of depression dynamics. Method We applied qualitative system dynamics methods to map the broad feedback mechanisms of depression. We used a structured approach to identify candidate causal mechanisms of depression in the literature. We assessed the strength of empirical support for each mechanism and prioritized those with support from validation studies. Through an iterative process, we synthesized the empirical literature and created a conceptual model of major depressive disorder. Results The literature review and synthesis resulted in the development of the first causal loop diagram of reinforcing feedback processes of depression. It proposes candidate drivers of illness, or inertial factors, and their temporal functioning, as well as the interactions among drivers of depression. The final causal loop diagram defines 13 key reinforcing feedback loops that involve nine candidate drivers of depression. Conclusions Future research is needed to expand upon this initial model of depression dynamics. Quantitative extensions may result in a better understanding of the systemic syndrome of depression and contribute to personalized methods of evaluation, prevention and intervention. PMID:26621339

  2. Rater variables associated with ITER ratings.

    PubMed

    Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin

    2013-10-01

    Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4 % of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log transformed time taken to complete the ITER [β = -0.06, 95 % confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.

  3. eNOSHA, a Free, Open and Flexible Learning Object Repository--An Iterative Development Process for Global User-Friendliness

    ERIC Educational Resources Information Center

    Mozelius, Peter; Hettiarachchi, Enosha

    2012-01-01

    This paper describes the iterative development process of a Learning Object Repository (LOR), named eNOSHA. Discussions on a project for a LOR started at the e-Learning Centre (eLC) at The University of Colombo, School of Computing (UCSC) in 2007. The eLC has during the last decade been developing learning content for a nationwide e-learning…

  4. Pressure-induced silica quartz amorphization studied by iterative stochastic surface walking reaction sampling.

    PubMed

    Zhang, Xiao-Jie; Shang, Cheng; Liu, Zhi-Pan

    2017-02-08

    The crystal to amorphous transformation is a common phenomenon in Nature and has important impacts on material properties. Our current knowledge on such complex solid transformation processes is, however, limited because of their slow kinetics and the lack of long-range ordering in amorphous structures. To reveal the kinetics in the amorphization of solids, this work, by developing iterative reaction sampling based on the stochastic surface walking global optimization method, investigates the well-known crystal to amorphous transformation of silica (SiO2) under external pressures, the mechanism of which has long been debated for its non-equilibrium, pressure-sensitive kinetics and complex product components. Here we report for the first time the global potential energy surface (PES) and the lowest energy pathways for α-quartz amorphization from first principles. We show that the pressurization at 15 GPa, the reaction condition, can lift the quartz phase energetically close to the amorphous zone, which thermodynamically initializes the amorphization. More importantly, the large flexibility of Si cation coordination (including four, five and six coordination) results in many kinetically competing routes to more stable dense forms, including the known MI, stishovite, newly-identified MII and TI phases. All these pathways have high barriers due to the local Si-O bond breaking and are mediated by amorphous structures with five-fold Si. This causes simultaneous crystal-to-crystal and crystal-to-amorphous transitions. The high barrier and the reconstructive nature of the phase transition are the key kinetics origin for silica amorphization under pressures.

  5. Evolutionary Software Development (Developpement Evolutionnaire de Logiciels)

    DTIC Science & Technology

    2008-08-01

    development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO/IEC 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as

  6. Evolutionary Software Development (Developpement evolutionnaire de logiciels)

    DTIC Science & Technology

    2008-08-01

    development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO/IEC 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as

  7. Quickprop method to speed up learning process of Artificial Neural Network in money's nominal value recognition case

    NASA Astrophysics Data System (ADS)

    Swastika, Windra

    2017-03-01

    A system for recognizing money nominal values has been developed using an Artificial Neural Network (ANN). ANN with Back Propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the number of iterations, weights, and samples is large. One way to speed up the learning process is the Quickprop method. Quickprop is based on Newton's method and speeds up learning by assuming that the error (E) is approximately a parabolic function of each weight; the goal is to drive the error gradient (E') to zero. In our system, we use 5 nominal values, i.e., 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each nominal was scanned and digitally processed, yielding 40 patterns used as the training set for the ANN. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy in predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the Back Propagation method: for 40 input patterns, Quickprop reached an error below 0.1 in only 20 iterations, while Back Propagation required 2000 iterations. The prediction accuracy for both methods is higher than 90%.
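
    A minimal sketch of the Quickprop weight update is given below, applied to a toy quadratic error rather than the banknote-recognition network in the paper; the learning rate, growth cap, and gradient function are illustrative assumptions.

```python
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop update for a weight vector.

    Approximates the error as a parabola in each weight, using the current
    and previous gradients, and jumps toward the parabola's minimum:
        step_i = prev_step_i * grad_i / (prev_grad_i - grad_i)
    The step is capped at mu * |prev_step|, and plain gradient descent is
    used where no previous step exists.
    """
    step = np.zeros_like(w)
    denom = prev_grad - grad
    for i in range(w.size):
        if prev_step[i] != 0.0 and denom[i] != 0.0:
            s = prev_step[i] * grad[i] / denom[i]
            # cap the growth of the step to keep the jump stable
            s = np.clip(s, -mu * abs(prev_step[i]), mu * abs(prev_step[i]))
        else:
            s = -lr * grad[i]              # fall back to gradient descent
        step[i] = s
    return w + step, step

# Toy example: minimize the quadratic error E(w) = 0.5 * ||w - target||^2.
target = np.array([2.0, -1.0])
grad_fn = lambda w: w - target             # dE/dw

w = np.array([0.0, 0.0])
prev_grad = grad_fn(w)
w, prev_step = w - 0.1 * prev_grad, -0.1 * prev_grad   # first step: gradient descent
for epoch in range(20):
    grad = grad_fn(w)
    w, prev_step = quickprop_step(w, grad, prev_grad, prev_step)
    prev_grad = grad
    if np.linalg.norm(grad) < 1e-8:
        break
print("converged weights:", w, "after", epoch + 1, "epochs")
```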

  8. Mixed Material Plasma-Surface Interactions in ITER: Recent Results from the PISCES Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tynan, George R.; Baldwin, Matthew; Doerner, Russell

    This paper summarizes recent PISCES studies focused on the effects associated with mixed species plasmas that are similar in composition to what one might expect in ITER. Formation of nanometer scale whiskerlike features occurs in W surfaces exposed to pure He and mixed D/He plasmas and appears to be associated with the formation of He nanometer-scaled bubbles in the W surface. Studies of Be-W alloy formation in Be-seeded D plasmas suggest that this process may be important in ITER all metal wall operational scenarios. Studies also suggest that BeD formation via chemical sputtering of Be walls may be an important first wall erosion mechanism. D retention in ITER mixed materials has also been studied. The D release behavior from beryllium co-deposits does not appear to be a diffusion dominated process, but instead is consistent with thermal release from a number of variable trapping energy sites. As a result, the amount of tritium remaining in codeposits in ITER after baking will be determined by the maximum temperature achieved, rather than by the duration of the baking cycle.

  9. Six sigma: process of understanding the control and capability of ranitidine hydrochloride tablet.

    PubMed

    Chabukswar, AR; Jagdale, SC; Kuchekar, BS; Joshi, VD; Deshmukh, GR; Kothawade, HS; Kuckekar, AB; Lokhande, PD

    2011-01-01

    The process of understanding the control and capability (PUCC) is an iterative closed-loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of process understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of this six sigma study of Ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process and finding ways to improve and modify it, so that it yields tablets that are defect-free and give greater customer satisfaction. The application of six sigma led to an improved process capability (the sigma level of the process rose from 1.5 to 4); a higher yield, due to reduced variation and a reduction in thick tablets; a reduction in packing line stoppages; a 50% reduction in re-work; a more standardized process with smooth flow and a change in the coating suspension reconstitution level (8% w/w); a large cost reduction of approximately Rs. 90 to 95 lakhs per annum; an improvement in overall efficiency of approximately 30%; and improved overall quality of the product.

  10. Six Sigma: Process of Understanding the Control and Capability of Ranitidine Hydrochloride Tablet

    PubMed Central

    Chabukswar, AR; Jagdale, SC; Kuchekar, BS; Joshi, VD; Deshmukh, GR; Kothawade, HS; Kuckekar, AB; Lokhande, PD

    2011-01-01

    The process of understanding the control and capability (PUCC) is an iterative closed-loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of process understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of this six sigma study of Ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process and finding ways to improve and modify it, so that it yields tablets that are defect-free and give greater customer satisfaction. The application of six sigma led to an improved process capability (the sigma level of the process rose from 1.5 to 4); a higher yield, due to reduced variation and a reduction in thick tablets; a reduction in packing line stoppages; a 50% reduction in re-work; a more standardized process with smooth flow and a change in the coating suspension reconstitution level (8% w/w); a large cost reduction of approximately Rs. 90 to 95 lakhs per annum; an improvement in overall efficiency of approximately 30%; and improved overall quality of the product. PMID:21607050

  11. Label propagation algorithm for community detection based on node importance and label influence

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-Kun; Ren, Jing; Song, Chen; Jia, Jia; Zhang, Qian

    2017-09-01

    Recently, the detection of high-quality communities has become a hot spot in social network research. The label propagation algorithm (LPA) has attracted wide attention because it has linear time complexity and does not require an objective function or the number of communities to be defined in advance. However, LPA suffers from uncertainty and randomness in the label propagation process, which affects the accuracy and stability of the detected communities. For large-scale social networks, this paper proposes a novel label propagation algorithm for community detection based on node importance and label influence (LPA_NI). Experiments with comparative algorithms on real-world and synthetic networks have shown that LPA_NI can significantly improve the quality of community detection and shorten the iteration period, while also achieving better accuracy and stability at similar complexity.
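
    For reference, the sketch below implements plain label propagation on a small toy graph; it omits the node-importance and label-influence weighting that distinguishes LPA_NI, so it only illustrates the baseline the paper improves upon. The adjacency structure and tie-breaking rule are assumptions.

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """Basic asynchronous label propagation for community detection.

    adj: dict mapping node -> iterable of neighbor nodes.
    Each node starts in its own community; at every sweep, nodes (in random
    order) adopt the label most common among their neighbors, breaking ties
    at random. Stops when no label changes, or after max_iter sweeps.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}            # every node starts as its own label
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            candidates = [lab for lab, c in counts.items() if c == best]
            new_label = rng.choice(candidates)
            if new_label != labels[v]:
                labels[v] = new_label
                changed = True
        if not changed:
            break
    return labels

# Two triangles joined by a single bridge edge: usually split into two communities.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
print(label_propagation(adj))
```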

  12. Luring anglers to enhance fisheries

    USGS Publications Warehouse

    Martin, Dustin R.; Pope, Kevin L.

    2011-01-01

    Current fisheries management is, unfortunately, reactive rather than proactive to changes in fishery characteristics. Furthermore, anglers do not act independently on waterbodies, and thus, fisheries are complex socio-ecological systems. Proactive management of these complex systems necessitates an approach-adaptive fisheries management-that allows learning to occur simultaneously with management. A promising area for implementation of adaptive fisheries management is the study of luring anglers to or from specific waterbodies to meet management goals. Purposeful manipulation of anglers, and its associated field of study, is nonexistent in past management. Evaluation of different management practices (i.e., hypotheses) through an iterative adaptive management process should include both a biological and sociological survey to address changes in fish populations and changes in angler satisfaction related to changes in management. We believe adaptive management is ideal for development and assessment of management strategies targeted at angler participation. Moreover these concepts and understandings should be applicable to other natural resource users such as hunters and hikers.

  13. Combining density functional theory (DFT) and pair distribution function (PDF) analysis to solve the structure of metastable materials: the case of metakaolin.

    PubMed

    White, Claire E; Provis, John L; Proffen, Thomas; Riley, Daniel P; van Deventer, Jannie S J

    2010-04-07

    Understanding the atomic structure of complex metastable (including glassy) materials is of great importance in research and industry; however, such materials resist solution by most standard techniques. Here, a novel technique combining thermodynamics and local structure is presented to solve the structure of the metastable aluminosilicate material metakaolin (calcined kaolinite) without the use of chemical constraints. The structure is elucidated by iterating between least-squares real-space refinement using neutron pair distribution function data and geometry optimisation using density functional modelling. The resulting structural representation is both energetically feasible and in excellent agreement with experimental data. This accurate structural representation of metakaolin provides new insight into the local environment of the aluminium atoms, with evidence for the existence of tri-coordinated aluminium. With this detailed, chemically feasible atomic description available, obtained without artificially imposing constraints during the refinement process, it becomes possible to tailor chemical and mechanical processes involving metakaolin and other complex metastable materials at the atomic level to obtain optimal performance at the macro-scale.

  14. Salient contour extraction from complex natural scene in night vision image

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa

    2014-03-01

    The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The central idea is that multi-feature analysis can recognize the inhomogeneity in the modulatory coverage more accurately, and that center-surround pairs whose grouping structure satisfies the Gestalt rules should be assigned a high connection probability. Computationally, a multi-feature contrast weighted inhibition model is presented to suppress the background and lower the mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to enhance contour response, connect discontinuous contours, and further eliminate randomly distributed noise and texture; and a multi-scale iterative attention method is designed to carry out the dynamic modulation process and extract contours of targets of multiple sizes. This work provides a series of biologically motivated, high-performance computational visual models for contour detection in cluttered night vision scenes.

  15. The Current State of Drug Discovery and a Potential Role for NMR Metabolomics

    PubMed Central

    2015-01-01

    The pharmaceutical industry has significantly contributed to improving human health. Drugs have been attributed to both increasing life expectancy and decreasing health care costs. Unfortunately, there has been a recent decline in the creativity and productivity of the pharmaceutical industry. This is a complex issue with many contributing factors resulting from the numerous mergers, increase in out-sourcing, and the heavy dependency on high-throughput screening (HTS). While a simple solution to such a complex problem is unrealistic and highly unlikely, the inclusion of metabolomics as a routine component of the drug discovery process may provide some solutions to these problems. Specifically, as the binding affinity of a chemical lead is evolved during the iterative structure-based drug design process, metabolomics can provide feedback on the selectivity and the in vivo mechanism of action. Similarly, metabolomics can be used to evaluate and validate HTS leads. In effect, metabolomics can be used to eliminate compounds with potential efficacy and side effect problems while prioritizing well-behaved leads with druglike characteristics. PMID:24588729

  16. A new analytical method for characterizing nonlinear visual processes with stimuli of arbitrary distribution: Theory and applications.

    PubMed

    Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya

    2017-06-01

    Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms and its calculation costs become very expensive when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also not requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.

  17. Distant Supervision with Transductive Learning for Adverse Drug Reaction Identification from Electronic Medical Records

    PubMed Central

    Ikeda, Mitsuru

    2017-01-01

    Information extraction and knowledge discovery regarding adverse drug reactions (ADRs) from large-scale clinical texts are useful and much-needed processes. Two major difficulties of this task are the lack of domain experts for labeling examples and the intractable processing of unstructured clinical texts. Although most previous work has addressed these issues by applying semisupervised learning to the former and a word-based approach to the latter, such approaches still face difficulty in acquiring initial labeled data and ignore the structured sequence of natural language. In this study, we propose automatic data labeling by distant supervision, in which knowledge bases are exploited to assign an entity-level relation label to each drug-event pair in the texts, and we then use patterns to characterize the ADR relation. Multiple-instance learning with expectation-maximization is employed to estimate the model parameters, and transductive learning is applied to iteratively reassign the probabilities of unknown drug-event pairs at training time. In experiments with 50,998 discharge summaries, we evaluate our method by varying a large number of parameters, that is, pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. Based on these evaluations, our proposed method outperforms the word-based feature for NB-EM (iEM), MILR, and TSVM, with F1-score improvements of 11.3%, 9.3%, and 6.5%, respectively. PMID:29090077

  18. Learning multimodal dictionaries.

    PubMed

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

    Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans routinely integrate, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when the signals are considered independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted to all positions in the signal is also proposed. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips makes it possible to effectively localize the sound source in the video in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.

  19. DeMAID: A Design Manager's Aide for Intelligent Decomposition user's guide

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1989-01-01

    A design problem is viewed as a complex system divisible into modules. Before the design of a complex system can begin, the couplings among modules and the presence of iterative loops are determined. This is important because the design manager must know how to group the modules into subsystems and how to assign subsystems to design teams so that changes in one subsystem will have predictable effects on other subsystems. Determining these subsystems is not an easy, straightforward process, and important couplings are often overlooked. Moreover, the planning task must be repeated as new information becomes available or as the design specifications change. The purpose of this research is to develop a knowledge-based tool called the Design Manager's Aide for Intelligent Decomposition (DeMAID) to act as an intelligent advisor for the design manager. DeMAID identifies the subsystems of a complex design problem, orders them into a well-structured format, and marks the couplings among the subsystems to facilitate the use of multilevel tools. DeMAID also provides the design manager with the capability of examining the trade-offs between sequential and parallel processing. This type of approach could lead to substantial savings by organizing and displaying a complex problem as a sequence of subsystems easily divisible among design teams. This report serves as a User's Guide for the program.

  20. Design, fabrication and control of origami robots

    NASA Astrophysics Data System (ADS)

    Rus, Daniela; Tolley, Michael T.

    2018-06-01

    Origami robots are created using folding processes, which provide a simple approach to fabricating a wide range of robot morphologies. Inspired by biological systems, engineers have started to explore origami folding in combination with smart material actuators to enable intrinsic actuation as a means to decouple design from fabrication complexity. The built-in crease structure of origami bodies has the potential to yield compliance and exhibit many soft body properties. Conventional fabrication of robots is generally a bottom-up assembly process with multiple low-level steps for creating subsystems that include manual operations and often multiple iterations. By contrast, natural systems achieve elegant designs and complex functionalities using top-down parallel transformation approaches such as folding. Folding in nature creates a wide spectrum of complex morpho-functional structures such as proteins and intestines and enables the development of structures such as flowers, leaves and insect wings. Inspired by nature, engineers have started to explore folding powered by embedded smart material actuators to create origami robots. The design and fabrication of origami robots exploits top-down, parallel transformation approaches to achieve elegant designs and complex functionalities. In this Review, we first introduce the concept of origami robotics and then highlight advances in design principles, fabrication methods, actuation, smart materials and control algorithms. Applications of origami robots for a variety of devices are investigated, and future directions of the field are discussed, examining both challenges and opportunities.

  1. P-Tether-Mediated, Iterative SN2'-Cuprate Alkylation Strategy to Skipped Polyol Stereotetrads: Utility of an Oxidative "Function Switch" with Phosphite-Borane Tethers.

    PubMed

    Markley, Jana L; Hanson, Paul R

    2017-05-19

    The development of a P-tether-mediated, iterative SN2'-cuprate alkylation protocol for the formation of 1,3-skipped polyol stereotetrads is reported. This two-directional synthetic strategy builds molecular complexity from simple, readily prepared C2-symmetric dienediols and unites the chemistry of both temporary phosphite-borane tethers and temporary phosphate tethers, through an oxidative "function switch" of the P-tether itself, to generate intermediates that were previously inaccessible via either method alone.

  2. Challenges and status of ITER conductor production

    NASA Astrophysics Data System (ADS)

    Devred, A.; Backbier, I.; Bessette, D.; Bevillard, G.; Gardner, M.; Jong, C.; Lillaz, F.; Mitchell, N.; Romano, G.; Vostner, A.

    2014-04-01

    Taking over from the Large Hadron Collider (LHC) at CERN, ITER has become the largest project in applied superconductivity. In addition to its technical complexity, ITER is also a management challenge, as it relies on an unprecedented collaboration of seven partners, representing more than half of the world's population, who provide 90% of the components as in-kind contributions. The ITER magnet system is one of the most sophisticated superconducting magnet systems ever designed, with an enormous stored energy of 51 GJ. It involves six of the ITER partners. The coils are wound from cable-in-conduit conductors (CICCs) made up of superconducting and copper strands assembled into a multistage cable, inserted into a conduit of butt-welded austenitic steel tubes. The conductors for the toroidal field (TF) and central solenoid (CS) coils require about 600 t of Nb3Sn strands, while the poloidal field (PF), correction coil (CC) and busbar conductors need around 275 t of Nb-Ti strands. The required amount of Nb3Sn strand far exceeds pre-existing industrial capacity and has called for a significant worldwide production scale-up. The TF conductors are the first ITER components to be mass produced and are more than 50% complete. During its lifetime, the CS coil will have to sustain several tens of thousands of electromagnetic (EM) cycles to high current and field conditions, beyond anything a large Nb3Sn coil has ever experienced. Following a comprehensive R&D program, a technical solution has been found for the CS conductor which ensures stable performance under EM and thermal cycling. Production of the PF, CC and busbar conductors is also underway. After an introduction to the ITER project and magnet system, we describe the ITER conductor procurements and the quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers. Then we provide examples of technical challenges that have been encountered and present the status of ITER conductor production worldwide.

  3. Improvements in surface singularity analysis and design methods. [applicable to airfoils

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.

  4. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks.

    PubMed

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-07-29

    Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes, however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, posing limits to its applications in the practical systems. In this paper, a novel IA algorithm, called directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between the two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs the line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as that of AMIL algorithm with far less iterations and execution time.
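
    The analytical step-size computation mentioned above can be illustrated independently of the interference-alignment details: a quartic in the step size is minimized exactly by taking the real roots of its cubic derivative and keeping the best one. The quartic coefficients below are arbitrary placeholders, not the interference-leakage expression from the paper.

```python
import numpy as np

def minimize_quartic(coeffs):
    """Return the step size minimizing a quartic f(t) = c4 t^4 + ... + c0.

    Any minimizer is a stationary point, so we take the real roots of the
    cubic derivative f'(t) and keep the one with the smallest f value.
    """
    c4, c3, c2, c1, c0 = coeffs
    dcoeffs = [4 * c4, 3 * c3, 2 * c2, c1]          # derivative coefficients
    roots = np.roots(dcoeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-8].real
    f = np.poly1d(coeffs)
    return min(real_roots, key=f)

# Example: a quartic with positive leading coefficient, so a global minimum exists.
coeffs = (1.0, -2.0, -3.0, 4.0, 5.0)
t_star = minimize_quartic(coeffs)
print("optimal step:", t_star, "value:", np.poly1d(coeffs)(t_star))
```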

  5. Preliminary Climate Uncertainty Quantification Study on Model-Observation Test Beds at Earth Systems Grid Federation Repository

    NASA Astrophysics Data System (ADS)

    Lin, G.; Stephan, E.; Elsethagen, T.; Meng, D.; Riihimaki, L. D.; McFarlane, S. A.

    2012-12-01

    Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in applications. It determines how likely certain outcomes are if some aspects of the system are not exactly known. Datasets in UQ studies, such as atmospheric datasets, have greatly increased in size and complexity because they now comprise additional complex iterative steps, involve numerous simulation runs, and can include additional analytical products such as charts, reports, and visualizations to explain levels of uncertainty. These new requirements greatly expand the need for metadata support beyond the NetCDF convention and vocabulary and, as a result, an additional formal data provenance ontology is required to provide a historical explanation of the origin of the dataset that includes references between the explanations and components within the dataset. This work shares a climate observation data UQ science use case and illustrates how to reduce climate observation data uncertainty and how to use a linked science application called Provenance Environment (ProvEn) to enable scientific teams to publish, share, link, and discover knowledge about the UQ research results. UQ results include terascale datasets that are published to an Earth Systems Grid Federation (ESGF) repository. Uncertainty exists in observation datasets due to sensor data processing (such as time averaging), sensor failure in extreme weather conditions, sensor manufacturing error, etc. To reduce the uncertainty in the observation datasets, a method based on Principal Component Analysis (PCA) was proposed to recover the missing values in observation data. Several large principal components (PCs) of data with missing values are computed from the available values using an iterative method. The computed PCs can approximate the true PCs with high accuracy provided that a condition on the missing values is met; the iterative method greatly improves the computational efficiency of computing the PCs. Moreover, noise removal is done at the same time during the process of computing missing values by using only several large PCs. The uncertainty quantification is done through statistical analysis of the distribution of the different PCs. To record the above UQ process and provide an explanation of the uncertainty before and after the UQ process on the observation datasets, an additional data provenance ontology, such as ProvEn, is necessary. In this study, we demonstrate how to reduce observation data uncertainty on climate model-observation test beds and how to use ProvEn to record the UQ process on ESGF. ProvEn demonstrates how a scientific team conducting UQ studies can discover dataset links using its domain knowledgebase, allowing them to better understand and convey the UQ study research objectives, the experimental protocol used, the resulting dataset lineage, related analytical findings, ancillary literature citations, and the social network of scientists associated with the study. Climate scientists will benefit not only from understanding a particular dataset within a knowledge context, but also from the cross-referencing of knowledge among the numerous UQ studies stored in ESGF.
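
    The PCA-based recovery of missing observations described above can be sketched as an iterative low-rank imputation: initialize the gaps, reconstruct the data from a few leading principal components, overwrite only the missing entries, and repeat. The snippet below illustrates this on synthetic data; the rank, iteration count, and initialization are assumptions rather than the authors' exact procedure.

```python
import numpy as np

def pca_impute(X, n_components=2, n_iter=50, tol=1e-8):
    """Fill missing entries of X (NaNs) by iterative low-rank PCA reconstruction.

    Missing values are initialized with column means; at each iteration the
    data are reconstructed from the leading principal components and the
    reconstruction overwrites only the missing entries. Keeping only a few
    large PCs also acts as a noise filter on the filled values.
    """
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])   # initial guess
    for _ in range(n_iter):
        mean = X.mean(axis=0)
        Xc = X - mean
        # leading principal components via truncated SVD
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        low_rank = U[:, :n_components] * s[:n_components] @ Vt[:n_components]
        X_new = X.copy()
        X_new[missing] = (low_rank + mean)[missing]          # update only the gaps
        if np.linalg.norm(X_new - X) < tol:
            X = X_new
            break
        X = X_new
    return X

# Toy example: nearly rank-1 data with a few missing observations.
t = np.linspace(0, 1, 20)
data = np.outer(t, [1.0, 2.0, -1.0]) + 0.01 * np.random.default_rng(0).normal(size=(20, 3))
data_missing = data.copy()
data_missing[[3, 7, 15], [0, 2, 1]] = np.nan
filled = pca_impute(data_missing, n_components=1)
print("max abs error on filled entries:",
      np.max(np.abs(filled[np.isnan(data_missing)] - data[np.isnan(data_missing)])))
```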

  6. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE PAGES

    An, Ke; Yuan, Lang; Dial, Laura; ...

    2017-09-11

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data is currently available for both iterating through process conditions and design, and in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite element based numerical model. The simulated and measured stresses were found in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  7. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ke; Yuan, Lang; Dial, Laura

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data is currently available for both iterating through process conditions and design, and in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite element based numerical model. The simulated and measured stresses were found in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  8. Analyzing developmental processes on an individual level using nonstationary time series modeling.

    PubMed

    Molenaar, Peter C M; Sinclair, Katerina O; Rovine, Michael J; Ram, Nilam; Corneal, Sherry E

    2009-01-01

    Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in developmental processes over time using a multivariate nonstationary time series model. They apply this model to describe the changing relationships between a biological son and father and a stepson and stepfather at the individual level. The authors also explain how to use an extended Kalman filter with iteration and smoothing estimator to capture how dynamics change over time. Finally, they suggest further applications of the multivariate nonstationary time series model and detail the next steps in the development of statistical models used to analyze individual-level data.

  9. A Fractal Excursion.

    ERIC Educational Resources Information Center

    Camp, Dane R.

    1991-01-01

    After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
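
    A comparable sketch in Python (in place of the article's PASCAL program) generates the two-dimensional Koch curve by recursive subdivision; the point representation and recursion depths are illustrative choices.

```python
import math

def koch_segment(p0, p1, depth):
    """Recursively subdivide the segment p0 -> p1 into the Koch motif.

    Each recursion replaces a segment with four segments of one-third the
    length, the middle two forming the peak of an equilateral triangle.
    Returns the list of points excluding the final endpoint so that
    segments can be concatenated without duplicates.
    """
    if depth == 0:
        return [p0]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)                      # 1/3 point
    b = (x0 + 2 * dx, y0 + 2 * dy)              # 2/3 point
    # apex: rotate the middle-third vector by +60 degrees about point a
    ang = math.radians(60)
    apex = (a[0] + dx * math.cos(ang) - dy * math.sin(ang),
            a[1] + dx * math.sin(ang) + dy * math.cos(ang))
    pts = []
    for s, e in [(p0, a), (a, apex), (apex, b), (b, p1)]:
        pts.extend(koch_segment(s, e, depth - 1))
    return pts

def koch_curve(depth):
    """Full Koch curve on the unit segment at the given iteration depth."""
    return koch_segment((0.0, 0.0), (1.0, 0.0), depth) + [(1.0, 0.0)]

# The number of segments grows as 4**depth: 1, 4, 16, 64, ...
for d in range(4):
    print(d, len(koch_curve(d)) - 1)
```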

  10. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
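
    The contraction argument can be illustrated numerically with a generic fixed-point iteration: if the operator is a contraction, successive step sizes shrink geometrically and their ratio estimates the contraction factor. The linear operator below is only a stand-in for a scattering-type operator equation, not the spectral-domain formulation in the paper.

```python
import numpy as np

def fixed_point_iterate(T, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = T(x_k) and report an empirical contraction factor.

    If T is a contraction (Lipschitz constant q < 1), the Banach fixed-point
    theorem guarantees convergence, and ||x_{k+1} - x_k|| shrinks by roughly
    q each step; the ratio of successive step sizes estimates q.
    """
    x = np.asarray(x0, dtype=float)
    prev_step = None
    q_est = float("nan")
    for k in range(max_iter):
        x_new = T(x)
        step = np.linalg.norm(x_new - x)
        if prev_step is not None and prev_step > 0:
            q_est = step / prev_step            # empirical contraction factor
        if step < tol:
            return x_new, k + 1, q_est
        prev_step, x = step, x_new
    return x, max_iter, q_est

# Linear stand-in for an operator equation x = A x + b, with spectral radius
# of A below one so the iteration contracts.
A = np.array([[0.2, 0.1],
              [0.05, 0.3]])
b = np.array([1.0, 2.0])
T = lambda x: A @ x + b

x_star, iters, q = fixed_point_iterate(T, np.zeros(2))
print("fixed point:", x_star, "iterations:", iters, "contraction ~", round(q, 3))
print("direct solve:", np.linalg.solve(np.eye(2) - A, b))
```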

  11. Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures

    NASA Astrophysics Data System (ADS)

    Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan

    2016-10-01

    We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by iterative interaction picture (multiple Schrödinger dynamics). As an application example, we use the deduced iterative based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows the dynamics designed by the iterative picture method is physically feasible and the shortcut scheme performs much better than that using the conventional adiabatic passage techniques. Also, the influences of various decoherence processes are discussed by numerical simulation and the results prove that the scheme is fast and robust against decoherence and operational imperfection.

  12. Single-shot dual-wavelength in-line and off-axis hybrid digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie

    2018-02-01

    We propose an in-line and off-axis hybrid holographic real-time imaging technique. The in-line and off-axis digital holograms are generated simultaneously by two lasers with different wavelengths and are recorded with a single shot using a color camera. The reconstruction is carried out using an iterative algorithm whose initial input is designed to include the intensity of the in-line hologram and the approximate phase distribution obtained from the off-axis hologram. In this way, the complex field in the object plane output by the iterative procedure yields higher-quality amplitude and phase images than traditional iterative phase retrieval. The performance of the technique has been demonstrated by acquiring the amplitude and phase images of a green lacewing's wing and a living moon jellyfish.
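
    A heavily simplified sketch of the seeded iterative reconstruction is shown below: a Gerchberg-Saxton-style loop that keeps the measured amplitude in the detector plane, applies a support constraint in the object plane, and starts from an approximate phase, standing in for the phase taken from the off-axis hologram. The FFT relation between planes, the support mask, and the noise level are assumptions; the paper's exact propagation model and constraints are not reproduced.

```python
import numpy as np

def seeded_phase_retrieval(meas_amp, phase_seed, obj_support, n_iter=200):
    """Gerchberg-Saxton-style iteration seeded with an approximate phase.

    meas_amp    : measured amplitude in the detector plane (here analogous to
                  the square root of the in-line hologram intensity).
    phase_seed  : approximate detector-plane phase (standing in for the phase
                  estimated from the off-axis hologram).
    obj_support : boolean mask where the object is allowed to be nonzero.
    The two planes are related by an FFT, a simplification of the wave
    propagation a real hologram reconstruction would use.
    """
    field = meas_amp * np.exp(1j * phase_seed)
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)
        obj = obj * obj_support                            # object-plane constraint
        field = np.fft.fft2(obj)
        field = meas_amp * np.exp(1j * np.angle(field))    # keep measured amplitude
    return np.fft.ifft2(field)

# Synthetic demo: a small complex object, simulate its Fourier amplitude,
# then recover it from that amplitude plus a noisy phase seed.
rng = np.random.default_rng(1)
n = 64
support = np.zeros((n, n), dtype=bool)
support[24:40, 24:40] = True
obj_true = np.zeros((n, n), dtype=complex)
obj_true[support] = rng.random(support.sum()) * np.exp(1j * rng.random(support.sum()))

F = np.fft.fft2(obj_true)
meas_amp = np.abs(F)
phase_seed = np.angle(F) + 0.3 * rng.normal(size=F.shape)  # imperfect seed

obj_rec = seeded_phase_retrieval(meas_amp, phase_seed, support)
err = np.linalg.norm(obj_rec - obj_true) / np.linalg.norm(obj_true)
print("relative reconstruction error:", round(err, 4))
```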

  13. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases, in which different relaxation models are coupled into the inversion scheme, and is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to three orders of magnitude of the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, jump-out and jump-back-in steps are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the inversion scheme.
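
    The sketch below illustrates the general idea of fitting a relaxation model to multi-frequency complex data with a bounded least-squares solver; it uses the Cole-Cole permittivity model and SciPy's bounded trust-region solver in place of the paper's bounded Levenberg algorithm, and omits the jump-out/jump-back-in steps. The parameter values and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def cole_cole(omega, eps_inf, d_eps, tau, alpha):
    """Cole-Cole complex permittivity model."""
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

def residuals(p, omega, data):
    # stack real and imaginary parts so the solver sees real-valued residuals
    diff = cole_cole(omega, *p) - data
    return np.concatenate([diff.real, diff.imag])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    omega = 2 * np.pi * np.logspace(2, 8, 40)      # 100 Hz to 100 MHz
    true_p = (5.0, 20.0, 1e-6, 0.2)
    data = cole_cole(omega, *true_p) + 0.05 * (rng.standard_normal(40)
                                               + 1j * rng.standard_normal(40))
    p0 = (2.0, 50.0, 1e-7, 0.4)                    # rough initialization within bounds
    fit = least_squares(residuals, p0, args=(omega, data),
                        bounds=([1.0, 0.0, 1e-9, 0.0], [100.0, 1000.0, 1e-3, 1.0]),
                        method="trf")
    print(np.round(fit.x, 4))                      # should approach true_p
```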

  14. Psychosocial work characteristics of personal care and service occupations: a process for developing meaningful measures for a multiethnic workforce.

    PubMed

    Hoppe, Annekatrin; Heaney, Catherine A; Fujishiro, Kaori; Gong, Fang; Baron, Sherry

    2015-01-01

    Despite their rapid increase in number, workers in personal care and service occupations are underrepresented in research on psychosocial work characteristics and occupational health. Some of the research challenges stem from the high proportion of immigrants in these occupations. Language barriers, low literacy, and cultural differences as well as their nontraditional work setting (i.e., providing service for one person in his/her home) make generic questionnaire measures inadequate for capturing salient aspects of personal care and service work. This study presents strategies for (1) identifying psychosocial work characteristics of home care workers that may affect their occupational safety and health and (2) creating survey measures that overcome barriers posed by language, low literacy, and cultural differences. We pursued these aims in four phases: (Phase 1) Six focus groups to identify the psychosocial work characteristics affecting the home care workers' occupational safety and health; (Phase 2) Selection of questionnaire items (i.e., questions or statements to assess the target construct) and first round of cognitive interviews (n = 30) to refine the items in an iterative process; (Phase 3) Item revision and second round of cognitive interviews (n = 11); (Phase 4) Quantitative pilot test to ensure the scales' reliability and validity across three language groups (English, Spanish, and Chinese; total n = 404). Analysis of the data from each phase informed the nature of subsequent phases. This iterative process ensured that survey measures not only met the reliability and validity criteria across groups, but were also meaningful to home care workers. This complex process is necessary when conducting research with nontraditional and multilingual worker populations.

  15. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    Localizing neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
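
    The exact CMOSS weighting rule is not given in the abstract, but the flavor of a neighbor-informed, iteratively re-weighted minimum-norm solver can be sketched as below; the neighbor-average weight and the toy lead-field matrix are placeholder assumptions, not the published method.

```python
import numpy as np

def neighbor_weighted_focuss(L, b, neighbors, n_iter=30, eps=1e-12):
    """Iteratively re-weighted minimum-norm solution of b = L s, where the weight
    of each source point is built from the previous solution at the point itself
    and at its neighbors (placeholder rule: value plus neighbor average).
    L: (m, n) lead field, b: (m,) measurements, neighbors[i]: indices near point i."""
    m, n = L.shape
    s = np.ones(n)
    for _ in range(n_iter):
        w = np.array([abs(s[i]) + np.abs(s[neighbors[i]]).mean() for i in range(n)])
        W = np.diag(w + eps)
        G = L @ W @ L.T
        # weighted minimum-norm update; lstsq tolerates a nearly singular G
        s = W @ L.T @ np.linalg.lstsq(G, b, rcond=None)[0]
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    L = rng.standard_normal((10, 50))
    s_true = np.zeros(50)
    s_true[[7, 8]] = [1.0, 0.8]                     # two neighboring active sources
    b = L @ s_true
    nbrs = [np.array([max(i - 1, 0), min(i + 1, 49)]) for i in range(50)]
    s_hat = neighbor_weighted_focuss(L, b, nbrs)
    print(np.argsort(-np.abs(s_hat))[:4])           # largest entries should include 7 and 8
```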

  16. Optimal shield mass distribution for space radiation protection

    NASA Technical Reports Server (NTRS)

    Billings, M. P.

    1972-01-01

    Computational methods have been developed and successfully used for determining the optimum distribution of space radiation shielding on geometrically complex space vehicles. These methods have been incorporated in the computer program SWORD, which evaluates dose in complex geometry and iteratively calculates the optimum (minimum-mass) shield distribution satisfying multiple acute and protracted dose constraints associated with each of several body organs.

  17. Wind turbine generator application places unique demands on tower design and materials

    NASA Technical Reports Server (NTRS)

    Kita, J. P.

    1978-01-01

    The most relevant contractual tower design requirements and goal for the Mod-1 tower are related to steel truss tower construction, cost-effective state-of-the-art design, a design life of 30 years, and maximum wind conditions of 120 mph at 30 feet elevation. The Mod-1 tower design approach was an iterative process. Static design loads were calculated and member sizes and overall geometry chosen with the use of finite element computer techniques. Initial tower dynamic characteristics were then combined with the dynamic properties of the other wind turbine components, and a series of complex dynamic computer programs were run to establish a dynamic load set and then a second tower design.

  18. Research on the generation of the background with sea and sky in infrared scene

    NASA Astrophysics Data System (ADS)

    Dong, Yan-zhi; Han, Yan-li; Lou, Shu-li

    2008-03-01

    It is important for scene generation to preserve the texture of infrared images when simulating anti-ship infrared imaging guidance. We studied the fractal method and applied it to infrared scene generation, adopting horizontal-vertical (HV) partitioning to encode the original image. Based on the properties of infrared images with a sea-sky background, we used a Local Iterated Function System (LIFS) to reduce the computational complexity and increase the processing rate. Results are presented which show that the fractal method preserves the texture of infrared images well and can be widely used for infrared scene generation in the future.

  19. Operations mission planner beyond the baseline

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric; Cooper, Lynne

    1991-01-01

    The scheduling of Space Station Freedom must satisfy four major requirements. It must ensure efficient housekeeping operations, maximize the collection of science, respond to changes in tasking and available resources, and accommodate these changes in a manner that minimizes disruption of the ongoing operations of the station. While meeting these requirements, the scheduler must cope with the complexity, scope, and flexibility of SSF operations, which requires it to deal with an astronomical number of possible schedules. The Operations Mission Planner (OMP) is centered around minimally disruptive replanning and the use of heuristics to limit search during scheduling. OMP has already demonstrated several artificial-intelligence-based scheduling techniques, such as Interleaved Iterative Refinement and Bottleneck Identification using Process Chronologies.

  20. Development of a Mixed Methods Investigation of Process and Outcomes of Community-Based Participatory Research.

    PubMed

    Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R

    2018-01-01

    This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided.

  1. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to protect the streams effectively. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for the transmission of H.264/AVC streams.

  2. Applying matching pursuit decomposition time-frequency processing to UGS footstep classification

    NASA Astrophysics Data System (ADS)

    Larsen, Brett W.; Chung, Hugh; Dominguez, Alfonso; Sciacca, Jacob; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Allee, David R.

    2013-06-01

    The challenge of rapid footstep detection and classification in remote locations has long been an important area of study for defense technology and national security. Also, as the military seeks to create effective and disposable unattended ground sensors (UGS), computational complexity and power consumption have become essential considerations in the development of classification techniques. In response to these issues, a research project at the Flexible Display Center at Arizona State University (ASU) has experimented with footstep classification using the matching pursuit decomposition (MPD) time-frequency analysis method. The MPD provides a parsimonious signal representation by iteratively selecting matched signal components from a pre-determined dictionary. The resulting time-frequency representation of the decomposed signal provides distinctive features for different types of footsteps, including footsteps during walking or running activities. The MPD features were used in a Bayesian classification method to successfully distinguish between the different activities. The computational cost of the iterative MPD algorithm was reduced, without significant loss in performance, using a modified MPD with a dictionary consisting of signals matched to cadence temporal gait patterns obtained from real seismic measurements. The classification results were demonstrated with real data from footsteps under various conditions recorded using a low-cost seismic sensor.
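
    A generic matching pursuit loop of the kind referred to above is sketched below; the random dictionary stands in for the cadence-matched dictionary used in the project and is purely illustrative.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy matching pursuit: at each iteration pick the dictionary atom most
    correlated with the residual, record its coefficient, and subtract its
    contribution. dictionary: (n_samples, n_total_atoms) with unit-norm columns."""
    residual = signal.astype(float).copy()
    selected, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        c = correlations[k]
        residual -= c * dictionary[:, k]
        selected.append(k)
        coeffs.append(c)
    return selected, coeffs, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((128, 64))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    x = 2.0 * D[:, 3] - 1.5 * D[:, 40]        # sparse synthetic "footstep" signal
    atoms, coeffs, r = matching_pursuit(x, D, n_atoms=2)
    print(atoms, np.round(coeffs, 2), round(float(np.linalg.norm(r)), 3))
```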

  3. Iterative nonlinear joint transform correlation for the detection of objects in cluttered scenes

    NASA Astrophysics Data System (ADS)

    Haist, Tobias; Tiziani, Hans J.

    1999-03-01

    An iterative correlation technique with digital image processing in the feedback loop for the detection of small objects in cluttered scenes is proposed. A scanning aperture is combined with the method in order to improve the immunity against noise and clutter. Multiple reference objects or different views of one object are processed in parallel. We demonstrate the method by detecting a noisy and distorted face in a crowd with a nonlinear joint transform correlator.

  4. An Approach to Verification and Validation of a Reliable Multicasting Protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1994-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.

  5. An approach to verification and validation of a reliable multicasting protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.

  6. Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm

    NASA Astrophysics Data System (ADS)

    Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang

    2017-10-01

    A phase-only spatial light modulator is used to provide and compensate for the spatial periodic modulation (SPM) of the near-field beam in the near infrared at a 1053 nm wavelength with an improved iterative weight-based method. The transmission characteristics of the incident beam have been changed by a spatial light modulator (SLM) to shape the spatial intensity of the output beam. The propagation and reverse propagation of the light in free space are the two important processes in the iterative procedure. The underlying theory is the beam angular spectrum transmission formula (ASTF) together with the principle of the iterative weight-based method. We have made two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output beam contrast degree, and we use the MATLAB built-in angle function to acquire the corresponding phase of the light wave function. The required phase that compensates for the intensity distribution of the incident SPM beam is iterated by this algorithm, which can decrease the magnitude of the SPM of the intensity on the observation plane. The experimental results show that the phase-type SPM of the near-field beam is subject to a certain restriction. We have also analyzed some factors that make the results imperfect. The experimental results verify the possible applicability of this iterative weight-based method to compensate for the SPM of the near-field beam.

  7. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a successive relaxation scheme (SG-SR) to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which provides a real-time procedure in practical situations.
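
    The sketch below shows the classical iterative Savitzky-Golay baseline estimation that the RIA-SG-SR method accelerates: the spectrum is repeatedly smoothed and clipped to the smoothed curve so that narrow Raman peaks are removed and only the broad fluorescence background remains. The Gauss-Seidel and successive-relaxation refinements are not reproduced, and the window and iteration settings are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter

def iterative_sg_baseline(spectrum, window=101, polyorder=3, n_iter=50):
    """Classical iterative SG baseline estimation: smooth, then clip the estimate
    to the smoothed curve, so sharp peaks shrink while the broad background stays."""
    baseline = spectrum.astype(float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, polyorder)
        baseline = np.minimum(baseline, smoothed)
    return baseline

if __name__ == "__main__":
    x = np.linspace(0, 1, 1000)
    fluorescence = 10 * np.exp(-2 * x)                    # broad background
    raman = np.exp(-0.5 * ((x - 0.4) / 0.005) ** 2)       # narrow Raman peak
    spectrum = fluorescence + raman
    corrected = spectrum - iterative_sg_baseline(spectrum)
    print(round(float(corrected.max()), 2))               # peak largely preserved
```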

  8. Development of an evidence-based review with recommendations using an online iterative process.

    PubMed

    Rudmik, Luke; Smith, Timothy L

    2011-01-01

    The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics which would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  9. Performing Systematic Literature Reviews with Novices: An Iterative Approach

    ERIC Educational Resources Information Center

    Lavallée, Mathieu; Robillard, Pierre-N.; Mirsalari, Reza

    2014-01-01

    Reviewers performing systematic literature reviews require understanding of the review process and of the knowledge domain. This paper presents an iterative approach for conducting systematic literature reviews that addresses the problems faced by reviewers who are novices in one or both levels of understanding. This approach is derived from…

  10. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potential harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in-situ ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.

  11. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair

    PubMed Central

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y. K.

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potential harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in-situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed. PMID:29706894

  12. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
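
    A minimal numerical sketch of the piecewise idea is given below: the one-period waveform is split into simple segments and each segment's contribution to the Fourier integrals is accumulated separately. The idealized six-pulse phase current used as input is a textbook assumption, not the actual ITER PF converter waveform.

```python
import numpy as np

def trapezoid(y, t):
    """Simple trapezoidal rule (kept local to avoid version-specific numpy helpers)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def harmonics_piecewise(pieces, period, orders):
    """Fourier harmonic amplitudes of a waveform defined piecewise as
    (t_start, t_end, f) segments over one period, summing each piece's
    contribution to the cosine and sine integrals separately."""
    result = {}
    for n in orders:
        a = b = 0.0
        w = 2 * np.pi * n / period
        for t0, t1, f in pieces:
            t = np.linspace(t0, t1, 2001)
            y = f(t)
            a += trapezoid(y * np.cos(w * t), t) * 2 / period
            b += trapezoid(y * np.sin(w * t), t) * 2 / period
        result[n] = float(np.hypot(a, b))
    return result

if __name__ == "__main__":
    # hypothetical idealized six-pulse bridge phase current with unit DC current:
    # +1 for 120 deg, 0 for 60 deg, -1 for 120 deg, 0 for 60 deg
    T = 1.0
    pieces = [(0.0, T / 3, lambda t: np.ones_like(t)),
              (T / 3, T / 2, lambda t: np.zeros_like(t)),
              (T / 2, 5 * T / 6, lambda t: -np.ones_like(t)),
              (5 * T / 6, T, lambda t: np.zeros_like(t))]
    h = harmonics_piecewise(pieces, T, orders=[1, 5, 7, 11, 13])
    print({n: round(v, 3) for n, v in h.items()})   # characteristic 1/5, 1/7, ... pattern
```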

  13. A novel Iterative algorithm to text segmentation for web born-digital images

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen

    2015-07-01

    Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSER) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap checking method, the final well-segmented text regions are selected from these groups across all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms, and that the overall performance outperforms the best methods from the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.

  14. OVERVIEW OF NEUTRON MEASUREMENTS IN JET FUSION DEVICE.

    PubMed

    Batistoni, P; Villari, R; Obryk, B; Packer, L W; Stamatelatos, I E; Popovichev, S; Colangeli, A; Colling, B; Fonnesu, N; Loreti, S; Klix, A; Klosowski, M; Malik, K; Naish, J; Pillon, M; Vasilopoulou, T; De Felice, P; Pimpinella, M; Quintieri, L

    2017-10-05

    The design and operation of the ITER experimental fusion reactor require the development of neutron measurement techniques and numerical tools to derive the fusion power and the radiation field in the device and in the surrounding areas. Nuclear analyses provide essential input to the conceptual design, optimisation, engineering and safety case in ITER and power plant studies. The required radiation transport calculations are extremely challenging because of the large physical extent of the reactor plant, the complexity of the geometry, and the combination of deep penetration and streaming paths. This article reports the experimental activities which are carried out at JET to validate the neutronics measurement methods and numerical tools used in ITER and power plant design. A new deuterium-tritium campaign is proposed at JET in 2019: the unique 14 MeV neutron yields produced will be exploited as much as possible to validate measurement techniques, codes, procedures and data currently used in ITER design, thus reducing the related uncertainties and the associated risks in machine operation. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. A Phenomenographic Investigation of the Ways Engineering Students Experience Innovation

    NASA Astrophysics Data System (ADS)

    Fila, Nicholas David

    Innovation has become an important phenomenon in engineering and engineering education. By developing novel, feasible, viable, and valued solutions to complex technical and human problems, engineers support the economic competitiveness of organizations, make a difference in the lives of users and other stakeholders, drive societal and scientific progress, and obtain key personal benefits. Innovation is also a complex phenomenon. It occurs across a variety of contexts and domains, encompasses numerous phases and activities, and requires unique competency profiles. Despite this complexity, many studies in engineering education focus on specific aspects (e.g., engineering students' abilities to generate original concepts during idea generation), and we still know little about the variety of ways engineering students approach and understand innovation. This study addresses that gap by asking: 1. What are the qualitatively different ways engineering students experience innovation during their engineering projects? 2. What are the structural relationships between the ways engineering students experience innovation? This study utilized phenomenography, a qualitative research method, to explore the above research questions. Thirty-three engineering students were recruited to ensure thorough coverage along four factors suggested by the literature to support differences related to innovation: engineering project experience, academic major, year in school, and gender. Each participant completed a 1-2 hour, semi-structured interview that focused on experiences with and conceptions of innovation. Whole transcripts were analyzed using an eight-stage, iterative, and comparative approach meant to identify a limited number of categories of description (composite ways of experiencing innovation comprised of the experiences of several participants), and the structural relationships between these categories. Phenomenographic analysis revealed eight categories of description that were structured in a semi-hierarchical, two-dimensional outcome space. The first four categories demonstrated a progression toward greater comprehensiveness in both process and focus dimensions. In the process dimension, subsequent categories added increasingly preliminary innovation phases: idea realization, idea generation, problem scoping, and problem finding. In the focus dimension, subsequent categories added key areas engineers considered during innovation: technical, human, and enterprise. The final four categories each incorporated all previous process phases and focus areas, but prioritized different focus areas in sophisticated ways and acknowledged a macro-iterative cycle, i.e., an understanding of how the processes within a single innovation project built upon and contributed to past and future innovation projects. These results demonstrate important differences between engineering students and suggest how they may come to experience innovation in increasingly comprehensive ways. A framework based on the results can be used by educators and researchers to support more robust educational offerings and nuanced research designs that reflect these differences.

  16. Conceptual design of data acquisition and control system for two Rf driver based negative ion source for fusion R&D

    NASA Astrophysics Data System (ADS)

    Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.

    2013-02-01

    Twin Source, an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. Twin Source (TS) [1] also provides an intermediate platform between the operational ROBIN [2][5] and the eight-RF-driver-based Indian test facility, INTF [3]. The Twin Source experiment requires a central system to provide control, data acquisition and a communication interface, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been chosen for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for TS-CODAC so as to develop the necessary expertise for developing and operating a control system based on the ITER guidelines, as a similar configuration needs to be implemented for the INTF. This cost-effective approach will provide an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC, the required control loop time is in the range of 5-10 ms; therefore, a PLC (Siemens S7-400) has been chosen for the control system, as suggested in the ITER slow controller catalog. For data acquisition, the maximum sampling interval required is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected, as suggested in the ITER fast controller catalog. This paper presents the conceptual design of the TS-CODAC system based on the ITER CODAC Core System software and applicable plant system integration processes.

  17. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.

  18. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received increased attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and therefore design closure issues continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer friendly.

  19. ITER Fusion Energy

    ScienceCinema

    Holtkamp, Norbert

    2018-01-09

    ITER (in Latin “the way”) is designed to demonstrate the scientific and technological feasibility of fusion energy. Fusion is the process by which two light atomic nuclei combine to form a heavier one and thus release energy. In the fusion process two isotopes of hydrogen – deuterium and tritium – fuse together to form a helium atom and a neutron. Thus fusion could provide large-scale energy production without greenhouse effects; essentially limitless fuel would be available all over the world. The principal goals of ITER are to generate 500 megawatts of fusion power for periods of 300 to 500 seconds with a fusion power multiplication factor, Q, of at least 10: Q ≥ 10 (input power 50 MW / output power 500 MW). The ITER Organization was officially established in Cadarache, France, on 24 October 2007. The seven members engaged in the project – China, the European Union, India, Japan, Korea, Russia and the United States – represent more than half the world’s population. The costs for ITER are shared by the seven members. The cost for the construction will be approximately 5.5 billion Euros; a similar amount is foreseen for the twenty-year phase of operation and the subsequent decommissioning.

  20. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate -1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  1. On the convergence of an iterative formulation of the electromagnetic scattering from an infinite grating of thin wires

    NASA Technical Reports Server (NTRS)

    Brand, J. C.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for ensuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single variable examples including an extension to vector spaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.

  2. On the safety of ITER accelerators

    PubMed Central

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate −1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267

  3. Determination of association constants at moderately fast chemical exchange: complexation of camphor enantiomers by alpha-cyclodextrin.

    PubMed

    Bernatowicz, Piotr; Nowakowski, Michał; Dodziuk, Helena; Ejchart, Andrzej

    2006-08-01

    Association constants of weak molecular complexes can be determined by analysis of chemical shift variations resulting from changes in the guest-to-host concentration ratio. In the regime of very fast exchange, i.e., when the exchange rate is several orders of magnitude larger than the Larmor angular frequency difference of the observed resonance in the free and complexed molecule, the apparent position of the averaged resonance is a population-weighted mean of the resonances of the particular forms involved in the equilibrium. The assumption of very fast exchange is often, however, tacitly made in the literature even in cases where the process of interest is much slower than required. We show that such an unjustified simplification may, under certain circumstances, lead to significant underestimation of the association constant and, in consequence, to non-negligible errors in the determined Gibbs free energy. We present a general method, based on iterative numerical NMR line shape analysis, which allows one to compensate for chemical exchange effects and delivers both the correct association constants and the exchange rates; the latter are not provided by the chemical shift method mentioned above. Practical application of our algorithm is illustrated by the case of camphor-alpha-cyclodextrin complexes.

  4. A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.

    PubMed

    Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing

    2007-01-01

    Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising ways of noninvasive molecular-based imaging. Many reconstruction approaches to it utilize iterative methods for data inversion. However, they are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, by pushing the iteration process to be executed offline. In the preiteration process, a second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and that the reconstruction speed is remarkably increased.
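
    The abstract does not name the second-order scheme; a common choice with quadratic convergence is the Newton-Schulz (hyper-power) iteration for the generalized inverse, sketched below under that assumption: the inverse is built iteratively offline, and the online reconstruction is then a single matrix-vector product.

```python
import numpy as np

def newton_schulz_pinv(A, n_iter=30):
    """Second-order (quadratically convergent) iteration for the Moore-Penrose
    pseudoinverse: X_{k+1} = X_k (2I - A X_k), started from a scaled A^T."""
    m, n = A.shape
    # scaling by 1/(||A||_1 ||A||_inf) guarantees convergence of the iteration
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(m)
    for _ in range(n_iter):
        X = X @ (2 * I - A @ X)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))   # underdetermined forward model (offline)
    X = newton_schulz_pinv(A)            # generalized inverse precomputed once, offline
    b = rng.standard_normal(40)          # measurement vector (online)
    x = X @ b                            # single matrix-vector multiplication online
    print(np.allclose(A @ x, b, atol=1e-6))
```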

  5. Improvement of tritium accountancy technology for ITER fuel cycle safety enhancement

    NASA Astrophysics Data System (ADS)

    O'hira, S.; Hayashi, T.; Nakamura, H.; Kobayashi, K.; Tadokoro, T.; Nakamura, H.; Itoh, T.; Yamanishi, T.; Kawamura, Y.; Iwai, Y.; Arita, T.; Maruyama, T.; Kakuta, T.; Konishi, S.; Enoeda, M.; Yamada, M.; Suzuki, T.; Nishi, M.; Nagashima, T.; Ohta, M.

    2000-03-01

    In order to improve the safe handling and control of tritium for the ITER fuel cycle, effective in situ tritium accounting methods have been developed at the Tritium Process Laboratory of the Japan Atomic Energy Research Institute under one of the ITER-EDA R&D tasks. The remote, multilocation analysis of process gases by laser Raman spectroscopy, developed and tested here, could provide measurement of hydrogen isotope gases with a detection limit of 0.3 kPa and analytical periods of 120 s. An in situ tritium inventory measurement using a `self-assaying' storage bed with 25 g tritium capacity could provide measurement with the required detection limit of less than 1%, along with a design proof of a bed with 100 g tritium capacity.

  6. A Fast, Minimalist Search Tool for Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Lynnes, C. S.; Macharrie, P. G.; Elkins, M.; Joshi, T.; Fenichel, L. H.

    2005-12-01

    We present a tool that emphasizes speed and simplicity in searching remotely sensed Earth Science data. The tool, nicknamed "Mirador" (Spanish for a scenic overlook), provides only four freetext search form fields, for Keywords, Location, Data Start and Data Stop. This contrasts with many current Earth Science search tools that offer highly structured interfaces in order to ensure precise, non-zero results. The disadvantages of the structured approach lie in its complexity and resultant learning curve, as well as the time it takes to formulate and execute the search, thus discouraging iterative discovery. On the other hand, the success of the basic Google search interface shows that many users are willing to forgo high search precision if the search process is fast enough to enable rapid iteration. Therefore, we employ several methods to increase the speed of search formulation and execution. Search formulation is expedited by the minimalist search form, with only one required field. Also, a gazetteer enables the use of geographic terms as shorthand for latitude/longitude coordinates. The search execution is accelerated by initially presenting dataset results (returned from a Google Mini appliance) with an estimated number of "hits" for each dataset based on the user's space-time constraints. The more costly file-level search is executed against a PostGres database only when the user "drills down", and then covering only the fraction of the time period needed to return the next page of results. The simplicity of the search form makes the tool easy to learn and use, and the speed of the searches enables an iterative form of data discovery.

  7. Research on complex 3D tree modeling based on L-system

    NASA Astrophysics Data System (ADS)

    Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li

    2018-03-01

    The L-system, as a fractal iterative system, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using the self-developed L-system modeling software, the rule set was parsed to generate complex 3D tree models. The results showed that the geometric modeling method based on the L-system can be used to describe the morphological structure of complex trees and to generate 3D tree models.
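
    A minimal string-rewriting sketch of L-system iteration is shown below; the bracketed branching rule is a hypothetical example in the spirit of tree modeling, not the paper's forestry-derived rule set.

```python
def lsystem(axiom, rules, iterations):
    """Apply the production rules to every symbol, once per iteration."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

if __name__ == "__main__":
    # hypothetical bracketed rule: F = grow a segment, [ ] = push/pop a branch, + / - = turn
    rules = {"F": "F[+F]F[-F]F"}
    print(lsystem("F", rules, 2))
    # each iteration multiplies the number of F segments by 5: 1 -> 5 -> 25
```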

  8. Establishing Relationships and Navigating Boundaries When Caring for Children With Medical Complexity at Home.

    PubMed

    Nageswaran, Savithri; Golden, Shannon L

    Children with medical complexity receive care from many healthcare providers including home healthcare nurses. The objective of our study, based on a conceptual framework, was to describe the relationships between parents/caregivers of children with medical complexity and home healthcare nurses caring for these children. We collected qualitative data in 20 semistructured in-depth interviews (15 English, 5 Spanish) with 26 primary caregivers of children with medical complexity, and 4 focus groups of 18 home healthcare nurses inquiring about their experiences about home healthcare nursing services for children with medical complexity. During an iterative analysis process, we identified recurrent themes related to caregiver-nurse relationships. Our study showed that: (1) caregiver-nurse relationships evolved over time and were determined by multiple factors; (2) communication and trust were essential to the establishment of caregiver-nurse relationships; (3) both caregivers and nurses described difficulties of navigating physical, professional, personal, and emotional boundaries, and identified strategies to maintain these boundaries; and (4) good caregiver-nurse relationships helped in the care of children with medical complexity, reduced caregiver burden, resulted in less stress for nurses, and was a factor in nurse retention. We conclude that trusted relationships between caregivers and nurses are important to the home care of children with medical complexity. Interventions to develop and maintain good caregiver-nurse relationships are necessary.

  9. EC assisted start-up experiments reproduction in FTU and AUG for simulations of the ITER case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granucci, G.; Ricci, D.; Farina, D.

    The breakdown and plasma start-up in ITER are well-known issues studied over the last few years in many tokamaks with the aid of calculations based on simplified modeling. The thickness of the ITER metallic wall and the voltage limits of the Central Solenoid Power Supply strongly limit the maximum toroidal electric field achievable (0.3 V/m), well below the level used in the present generation of tokamaks. In order to have a safe and robust breakdown, the use of Electron Cyclotron Power to assist plasma formation and current ramp-up has been foreseen. This has focused attention on the plasma formation phase in the presence of EC waves, especially in order to predict the required power for a robust breakdown in ITER. Few detailed theoretical studies have been performed to date, due to the complexity of the problem. A simplified approach, extended from that proposed in ref. [1], has been developed, including an impurity multispecies distribution and EC wave propagation and absorption based on the GRAY code. This integrated model (BK0D) has been benchmarked on ohmic and EC-assisted experiments on FTU and AUG, finding the key aspects for a good reproduction of the data. On this basis, the simulation has been devoted to understanding the best configuration for the ITER case. The dependence on impurity content and neutral gas pressure limits has been considered. As a result of the analysis, a reasonable amount of power (1-2 MW) seems to be enough to significantly extend the breakdown and current start-up capability of ITER. The work reports the FTU data reproduction and the ITER case simulations.

  10. Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?

    NASA Astrophysics Data System (ADS)

    Swartjes, Ivo; Theune, Mariët

    We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.

  11. Evaluation of the cryogenic mechanical properties of the insulation material for ITER Feeder superconducting joint

    NASA Astrophysics Data System (ADS)

    Wu, Zhixiong; Huang, Rongjin; Huang, ChuanJun; Yang, Yanfang; Huang, Xiongyi; Li, Laifeng

    2017-12-01

    Glass-fiber reinforced plastic (GFRP) fabricated by the vacuum bag process was selected as the high-voltage electrical insulation and mechanical support for the superconducting joints and current leads of the ITER Feeder system. To evaluate the cryogenic mechanical properties of the GFRP, mechanical properties such as the short beam strength (SBS), the tensile strength and the fatigue fracture strength after 30,000 cycles were measured at 77 K in this study. The results demonstrated that the GFRP met the design requirements of ITER.

  12. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
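
    The abstract describes constraint-based iterative repair only at a high level. A minimal sketch of that style of loop is given below: the task set, the precedence and unit-capacity resource constraints, and the perturbation penalty are all hypothetical placeholders, not the Space Shuttle ground-processing domain or the paper's actual system.

```python
import random

# Hypothetical toy instance: four tasks with durations, precedence constraints,
# and a single unit-capacity resource (no overlaps allowed).
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
precedence = [("A", "B"), ("A", "C"), ("B", "D")]   # (x, y): x must finish before y starts

def violations(start):
    """Return the list of currently violated constraints."""
    bad = [("prec", x, y) for x, y in precedence
           if start[x] + durations[x] > start[y]]
    tasks = list(start)
    for i, x in enumerate(tasks):                   # resource conflicts: overlapping tasks
        for y in tasks[i + 1:]:
            if start[x] < start[y] + durations[y] and start[y] < start[x] + durations[x]:
                bad.append(("overlap", x, y))
    return bad

def repair(schedule, original, w_perturb=0.1, max_iters=200):
    """Constraint-based iterative repair: while conflicts remain, move one task
    involved in a conflict to the start time that minimises remaining violations
    plus a perturbation penalty relative to the original schedule."""
    start = dict(schedule)
    for _ in range(max_iters):
        bad = violations(start)
        if not bad:
            break
        _, x, y = random.choice(bad)
        task = random.choice([x, y])
        def cost(t):
            trial = dict(start, **{task: t})
            return len(violations(trial)) + w_perturb * sum(
                abs(trial[k] - original[k]) for k in trial)
        start[task] = min(range(20), key=cost)      # greedy local repair move
    return start

original = {"A": 0, "B": 1, "C": 2, "D": 3}         # flawed initial schedule
print(repair(original, original))
```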

  13. Studies on the behaviour of tritium in components and structure materials of tritium confinement and detritiation systems of ITER

    NASA Astrophysics Data System (ADS)

    Kobayashi, K.; Isobe, K.; Iwai, Y.; Hayashi, T.; Shu, W.; Nakamura, H.; Kawamura, Y.; Yamada, M.; Suzuki, T.; Miura, H.; Uzawa, M.; Nishikawa, M.; Yamanishi, T.

    2007-12-01

    Confinement and the removal of tritium are key subjects for the safety of ITER. The ITER buildings are confinement barriers of tritium. In a hot cell, tritium is often released as vapour and is in contact with the inner walls. The inner walls of the ITER tritium plant building will also be exposed to tritium in an accident. The tritium released in the buildings is removed by the atmosphere detritiation systems (ADS), where the tritium is oxidized by catalysts and is removed as water. SF6 gas is used in ITER and is expected to be released in an accident such as a fire. Although SF6 is a potential catalyst poison, the performance of the ADS in the presence of SF6 has not yet been confirmed. Tritiated water is produced in the regeneration process of the ADS and is subsequently processed by the ITER water detritiation system (WDS). One of the key components of the WDS is an electrolysis cell. To address these issues in global tritium confinement, a series of experimental studies have been carried out as an ITER R&D task: (1) tritium behaviour in concrete; (2) the effect of SF6 on the performance of the ADS and (3) tritium durability of the electrolysis cell of the ITER-WDS. (1) The tritiated water vapour penetrated up to 50 mm into the concrete from the surface in six months' exposure. The penetration rate of tritium in the concrete was thus appreciable; the isotope exchange capacity of the cement paste plays an important role in tritium trapping and penetration into concrete materials when concrete is exposed to tritiated water vapour. The effect of coatings on the penetration rate still needs to be evaluated quantitatively in actual tritium tests. (2) SF6 gas decreased the detritiation factor of the ADS. Since the effect of SF6 depends closely on its concentration, the amount of SF6 released into the tritium handling area in an accident should be reduced through careful arrangement of components in the buildings. (3) It was expected that the electrolysis cell of the ITER-WDS could endure 3 years' operation under the ITER design conditions. Measuring the concentration of fluorine ions could be a promising technique for monitoring the damage to the electrolysis cell.

  14. Improving Drive Files for Vehicle Road Simulations

    NASA Astrophysics Data System (ADS)

    Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil

    2001-09-01

    Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustic performance. An example is development testing of car seats. However, it is difficult to reproduce the measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm which is generally part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains, as well as the joint time-frequency domain, and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat. Two road data test sets were used in the study.
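
    As a rough illustration of the weighted iterative drive-signal correction described above, the following frequency-domain sketch updates a drive spectrum with an error term weighted by a normalised seat frequency response function. The rig FRF, seat FRF, gain and signal lengths are all invented for illustration; the real optimisation runs inside the shaker controller's feedback loop on measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
freqs = np.fft.rfftfreq(n, d=1 / 512.0)

# Hypothetical linear "rig" frequency response, and a hypothetical seat FRF used
# only to build the weighting function; real systems would use measured FRFs.
H_rig = 1.0 / (1.0 + 1j * freqs / 40.0)
H_seat = 1.0 / (1.0 + 1j * freqs / 8.0)
W = np.abs(H_seat) / np.abs(H_seat).max()       # weighting = normalised seat FRF

road = rng.standard_normal(n)                   # target (measured road) signal
R = np.fft.rfft(road)

D = np.zeros_like(R)                            # drive spectrum, start from zero
mu = 0.7                                        # relaxation gain of the iteration
for k in range(30):
    response = H_rig * D                        # simulated rig response to the drive
    error = R - response
    D = D + mu * W * error / H_rig              # weighted inverse-model correction
    rms = np.sqrt(np.mean(np.abs(error) ** 2))
    if rms < 1e-6:
        break

drive_signal = np.fft.irfft(D, n)               # time-domain drive for the actuators
print(f"after {k + 1} iterations, residual spectral RMS = {rms:.2e}")
```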

  15. Solution of the symmetric eigenproblem AX=lambda BX by delayed division

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Bains, N. J. C.

    1986-01-01

    Delayed division is an iterative method for solving the linear eigenvalue problem AX = lambda BX for a limited number of small eigenvalues and their corresponding eigenvectors. The distinctive feature of the method is the reduction of the problem to an approximate triangular form by systematically dropping quadratic terms in the eigenvalue lambda. The report describes the pivoting strategy in the reduction and the method for preserving symmetry in submatrices at each reduction step. Along with the approximate triangular reduction, the report extends some techniques used in the method of inverse subspace iteration. Examples are included for problems of varying complexity.

  16. Application of an iterative least-squares waveform inversion of strong-motion and teleseismic records to the 1978 Tabas, Iran, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Mendoza, C.

    1991-01-01

    An iterative least-squares technique is used to simultaneously invert the strong-motion records and teleseismic P waveforms for the 1978 Tabas, Iran, earthquake to deduce the rupture history. The effects of using different data sets and different parametrizations of the problem (linear versus nonlinear) are considered. A consensus of all the inversion runs indicates a complex, multiple source for the Tabas earthquake, with four main source regions over a fault length of 90 km and an average rupture velocity of 2.5 km/sec. -from Authors
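
    The abstract does not give the parametrization, but the core step in this class of inversions is a (possibly constrained) linear least-squares solve relating subfault slip to observed waveforms. The sketch below uses a synthetic Green's-function matrix, damping, and a nonnegativity constraint on slip; all sizes and values are hypothetical, not the Tabas data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Hypothetical linearised forward problem: each column of G is the waveform
# produced by unit slip on one subfault; d is the observed (noisy) waveform.
n_samples, n_subfaults = 400, 30
G = rng.standard_normal((n_samples, n_subfaults))
true_slip = np.clip(rng.standard_normal(n_subfaults), 0.0, None)   # slip >= 0
d = G @ true_slip + 0.05 * rng.standard_normal(n_samples)

# Damped, nonnegative least squares: minimise ||G m - d||^2 + lam ||m||^2, m >= 0.
lam = 0.1
G_aug = np.vstack([G, np.sqrt(lam) * np.eye(n_subfaults)])
d_aug = np.concatenate([d, np.zeros(n_subfaults)])
slip, residual_norm = nnls(G_aug, d_aug)

print("recovered slip on the first five subfaults:", np.round(slip[:5], 3))
```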

  17. A unified convergence theory of a numerical method, and applications to the replenishment policies.

    PubMed

    Mi, Xiang-jiang; Wang, Xing-hua

    2004-01-01

    In determining the replenishment policy for an inventory system, some researchers advocated that the iterative method of Newton could be applied to the derivative of the total cost function in order to get the optimal solution. But this approach requires calculation of the second derivative of the function. To avoid this complex computation, we use another iterative method presented by the second author. One of the goals of this paper is to present a unified convergence theory of this method. Then we give a numerical example to show the application of our theory.
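
    To make the abstract's point concrete: applying Newton's method to the derivative of a total-cost function needs the second derivative, whereas a derivative-free iteration does not. The sketch below uses the classic EOQ-style cost with made-up numbers and a secant iteration standing in for the authors' own method, which is not specified in the abstract.

```python
# Classic EOQ-style total cost TC(Q) = K*D/Q + h*Q/2 with made-up constants;
# the optimum is where TC'(Q) = 0.  Newton on TC' needs TC''; the secant
# iteration on TC' does not (it stands in for the paper's derivative-free method).
K, D, h = 120.0, 5000.0, 2.5          # setup cost, annual demand, unit holding cost

def tc_prime(q):
    return -K * D / q**2 + h / 2.0

def tc_second(q):                     # second derivative, required by Newton
    return 2.0 * K * D / q**3

def newton(q, iters=20):
    for _ in range(iters):
        q = q - tc_prime(q) / tc_second(q)
    return q

def secant(q0, q1, iters=40, tol=1e-12):
    for _ in range(iters):
        f0, f1 = tc_prime(q0), tc_prime(q1)
        if abs(f1 - f0) < tol:
            break
        q0, q1 = q1, q1 - f1 * (q1 - q0) / (f1 - f0)
    return q1

exact = (2.0 * K * D / h) ** 0.5      # closed-form EOQ optimum for comparison
print(f"Newton: {newton(100.0):.2f}   secant: {secant(100.0, 150.0):.2f}   exact: {exact:.2f}")
```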

  18. Conjecture Mapping to Optimize the Educational Design Research Process

    ERIC Educational Resources Information Center

    Wozniak, Helen

    2015-01-01

    While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volumes of data generated, and the context bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…

  19. Closed form unsupervised registration of multi-temporal structure from motion-multiview stereo data using non-linearly weighted image features

    NASA Astrophysics Data System (ADS)

    Seers, T. D.; Hodgetts, D.

    2013-12-01

    The detection of topographic change at the Earth's surface is of considerable scholarly interest, allowing the quantification of the rates of geomorphic processes whilst providing lucid insights into the underlying mechanisms driving landscape evolution. In this regard, the past decade has witnessed the ever increasing proliferation of studies employing multi-temporal topographic data within the geosciences, bolstered by continuing technical advancements in the acquisition and processing of prerequisite datasets. Multiview stereo (MVS) dense surface reconstruction, primed by structure-from-motion (SfM) camera pose estimation and developed within the field of computer vision, represents one such development. Providing a cost-effective, operationally efficient data capture medium, the modest requirement of a consumer grade camera for data collection coupled with the minimal user intervention required during post-processing makes SfM-MVS an attractive alternative to terrestrial laser scanners for collecting multi-temporal topographic datasets. However, as with terrestrial scanner derived data, the co-registration of spatially coincident or partially overlapping scans produced by SfM-MVS presents a major technical challenge, particularly in the case of semi non-rigid scenes produced during topographic change detection studies. Moreover, the arbitrary scaling resulting from SfM ambiguity requires that a scale matrix must be estimated during the transformation, introducing further complexity into its formulation. Here, we present a novel, fully unsupervised algorithm which utilises non-linearly weighted image features for solving the similarity transform (scale, translation, rotation) between partially overlapping scans produced by SfM-MVS image processing. With the only initialization condition being partial intersection between input image sets, our method has major advantages over conventional iterative least-squares minimization based methods (e.g. Iterative Closest Point variants), acting only on rigid areas of target scenes, being capable of reliably estimating the scaling factor, and requiring no initial estimate of the transformation (i.e. manual rough alignment). Moreover, because the solution is closed form, it is obtained considerably more quickly than with most iterative methods. It is hoped that the availability of improved co-registration routines, such as the one presented here, will facilitate the routine collection of multi-temporal topographic datasets by a wider range of geoscience practitioners.
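
    The authors' closed-form, feature-weighted formulation is not reproduced in the abstract. The standard closed-form ingredient for a similarity transform (scale, rotation, translation) between matched 3-D points is Umeyama's least-squares estimate, sketched here on synthetic correspondences; it is offered only as an illustration of the kind of solve involved, not as the paper's algorithm.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form least-squares similarity transform (Umeyama, 1991):
    returns scale s, rotation R, translation t with dst ≈ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0, 1.0, sign])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic matched point sets standing in for feature correspondences between
# two SfM-MVS reconstructions with arbitrary relative scale.
rng = np.random.default_rng(2)
src = rng.standard_normal((200, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 3.7 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_transform(src, dst)
print("estimated scale:", round(s, 3))           # should recover ~3.7
```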

  20. BeeSign: Designing to Support Mediated Group Inquiry of Complex Science by Early Elementary Students

    ERIC Educational Resources Information Center

    Danish, Joshua A.; Peppler, Kylie; Phelps, David

    2010-01-01

    All too often, designers assume that complex science and cycles of inquiry are beyond the capabilities of young children (5-8 years old). However, with carefully designed mediators, we argue that such concepts are well within their grasp. In this paper we describe two design iterations of the BeeSign simulation software that was designed to help…

  1. An O(√n L) primal-dual affine scaling algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Siming

    1994-12-31

    We present a new primal-dual affine scaling algorithm for linear programming. The search direction of the algorithm is a combination of the classical affine scaling direction of Dikin and a recent new affine scaling direction of Jansen, Roos and Terlaky. The algorithm has an iteration complexity of O(√n L), compared with the O(nL) complexity of the algorithm of Jansen, Roos and Terlaky.

  2. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, the iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. This paper derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising. Each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. It contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose X-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
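
    As a schematic of the POCS idea the derivation builds on, the 1-D sketch below alternates a data-consistency projection with a nonlinear edge-preserving filter (a median filter used as a heuristic surrogate, not a true convex projection and not the windowed-FBP construction of the paper); the signal, noise level and band width are invented.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(3)

# 1-D toy stand-in for the CT problem: a piecewise-constant signal, noisy
# measurements, and a POCS-style alternation between an edge-preserving
# nonlinear filter and a projection onto the data-consistency band.
truth = np.concatenate([np.zeros(60), np.ones(60), 0.3 * np.ones(60)])
y = truth + 0.15 * rng.standard_normal(truth.size)

eps = 0.1                  # half-width of the data-consistency band |x - y| <= eps
x = y.copy()
for _ in range(25):
    x = median_filter(x, size=9)               # edge-preserving denoising step
    x = np.clip(x, y - eps, y + eps)           # projection onto the data band

print("RMSE noisy:", round(float(np.sqrt(np.mean((y - truth) ** 2))), 3),
      " RMSE POCS:", round(float(np.sqrt(np.mean((x - truth) ** 2))), 3))
```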

  3. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    NASA Astrophysics Data System (ADS)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control that ensures the steady-state tracking error converges rapidly. The application to an injection molding process demonstrates the effectiveness and superiority of the proposed strategy.

  4. Developing a holistic policy and intervention framework for global mental health.

    PubMed

    Khenti, Akwatu; Fréel, Stéfanie; Trainor, Ruth; Mohamoud, Sirad; Diaz, Pablo; Suh, Erica; Bobbili, Sireesha J; Sapag, Jaime C

    2016-02-01

    There are significant gaps in the accessibility and quality of mental health services around the globe. A wide range of institutions are addressing the challenges, but there is limited reflection and evaluation on the various approaches, how they compare with each other, and conclusions regarding the most effective approach for particular settings. This article presents a framework for global mental health capacity building that could potentially serve as a promising or best practice in the field. The framework is the outcome of a decade of collaborative global health work at the Centre for Addiction and Mental Health (CAMH) (Ontario, Canada). The framework is grounded in scientific evidence, relevant learning and behavioural theories and the underlying principles of health equity and human rights. Grounded in CAMH's research, programme evaluation and practical experience in developing and implementing mental health capacity building interventions, this article presents the iterative learning process and impetus that formed the basis of the framework. A developmental evaluation (Patton M. 2010. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press.) approach was used to build the framework, as global mental health collaboration occurs in complex or uncertain environments and evolving learning systems. The resulting multilevel framework consists of five central components: (1) holistic health, (2) cultural and socioeconomic relevance, (3) partnerships, (4) collaborative action-based education and learning and (5) sustainability. The framework's practical application is illustrated through the presentation of three international case studies and four policy implications. Lessons learned, limitations and future opportunities are also discussed. The holistic policy and intervention framework for global mental health reflects an iterative learning process that can be applied and scaled up across different settings through appropriate modifications. © The Author 2015. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.

  5. Parametric boundary reconstruction algorithm for industrial CT metrology application.

    PubMed

    Yin, Zhye; Khare, Kedar; De Man, Bruno

    2009-01-01

    High-energy X-ray computed tomography (CT) systems have been recently used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquire the metrology information directly. On the other hand, CT systems generate the sinogram which is transformed mathematically to the pixel-based images. The dimensional information of the scanned object is extracted later by performing edge detection on reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of CT images since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process and resulting object boundaries from the edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits in the proposed approach. First, since the boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced. The iterative approach, which can be computationally intensive, will be practical with the parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters instead of the image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved. Third, since the parametric reconstruction approach shares the boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly incorporated as prior knowledge to improve the convergence of an iterative approach. In this paper, the feasibility of parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to the experimental industrial CT system data.

  6. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

    Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
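
    The frequency-dependent wavelet preconditioner itself is not described in the abstract, but the reason a good preconditioner matters can be seen with a generic preconditioned conjugate gradient solver. The sketch below compares iteration counts with and without a simple Jacobi (diagonal) preconditioner on a hypothetical ill-conditioned SPD matrix, not an atmospheric-tomography operator.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=2000):
    """Preconditioned conjugate gradient for SPD A with a diagonal preconditioner.
    Returns the approximate solution and the number of iterations performed."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Hypothetical ill-conditioned SPD test matrix with a widely varying diagonal
# (not an atmospheric-tomography operator).
rng = np.random.default_rng(4)
n = 300
B = rng.standard_normal((n, n)) / np.sqrt(n)
A = np.diag(np.logspace(0, 3, n)) + B @ B.T
b = rng.standard_normal(n)

_, it_plain = pcg(A, b, np.ones(n))             # identity "preconditioner"
_, it_jacobi = pcg(A, b, 1.0 / np.diag(A))      # Jacobi (diagonal) preconditioner
print("CG iterations without / with preconditioning:", it_plain, "/", it_jacobi)
```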

  7. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
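
    The two per-point quantities mentioned in the abstract resemble those of density-peaks clustering (Rodriguez and Laio): a local density rho and the distance delta to the nearest point of higher density, so that cluster centres stand out with large values of both. The sketch below computes them for synthetic 2-D blobs standing in for Stokes-space constellation clusters; it may differ in detail from the authors' exact parameters.

```python
import numpy as np

def density_peaks(points, dc):
    """For each point, compute local density rho (neighbours within cutoff dc)
    and delta (distance to the nearest point of higher density).  Candidate
    cluster centres are points with large rho and large delta."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = (d < dc).sum(axis=1) - 1                   # exclude the point itself
    delta = np.full(len(points), d.max())
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]
        if higher.size:
            delta[i] = d[i, higher].min()
    return rho, delta

# Three synthetic blobs standing in for constellation clusters in Stokes space.
rng = np.random.default_rng(5)
centres = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
pts = np.vstack([c + 0.3 * rng.standard_normal((100, 2)) for c in centres])

rho, delta = density_peaks(pts, dc=0.5)
peaks = np.argsort(rho * delta)[-3:]                 # candidate cluster centres
print("candidate centres:\n", np.round(pts[peaks], 2))
```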

  8. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal as well as in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach, which, under certain conditions, also allows solving large-size problems approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N³) to O(N log N), enabling a simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  9. Domain decomposition method for the Baltic Sea based on theory of adjoint equation and inverse problem.

    NASA Astrophysics Data System (ADS)

    Lezina, Natalya; Agoshkov, Valery

    2017-04-01

    The domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains. The method is widely applied in the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations in the Boussinesq and hydrostatic approximations is solved. The difficulty in obtaining a solution over the whole domain is that the subdomain solutions must be combined. For this purpose an iterative algorithm is constructed, and numerical experiments are conducted to investigate the effectiveness of the developed DDM algorithm. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but for hydrodynamic problems they are not suitable. In this case the adjoint equation method [2] and inverse problem theory are used. In addition, DDM makes it possible to create algorithms for parallel computation on multiprocessor computer systems. DDM for a model of the Baltic Sea dynamics is studied numerically. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations over the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decompositions Methods in the Mathematical Physics Problem // Numerical processes and systems, No 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in the Mathematical Physics Problem, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).

  10. An historical framework for psychiatric nosology

    PubMed Central

    Kendler, K. S.

    2009-01-01

    This essay, which seeks to provide an historical framework for our efforts to develop a scientific psychiatric nosology, begins by reviewing the classificatory approaches that arose in the early history of biological taxonomy. Initial attempts at species definition used top-down approaches advocated by experts and based on a few essential features of the organism chosen a priori. This approach was subsequently rejected on both conceptual and practical grounds and replaced by bottom-up approaches making use of a much wider array of features. Multiple parallels exist between the beginnings of biological taxonomy and psychiatric nosology. Like biological taxonomy, psychiatric nosology largely began with ‘expert’ classifications, typically influenced by a few essential features, articulated by one or more great 19th-century diagnosticians. Like biology, psychiatry is struggling toward more soundly based bottom-up approaches using diverse illness characteristics. The underemphasized historically contingent nature of our current psychiatric classification is illustrated by recounting the history of how ‘Schneiderian’ symptoms of schizophrenia entered into DSM-III. Given these historical contingencies, it is vital that our psychiatric nosologic enterprise be cumulative. This can be best achieved through a process of epistemic iteration. If we can develop a stable consensus in our theoretical orientation toward psychiatric illness, we can apply this approach, which has one crucial virtue. Regardless of the starting point, if each iteration (or revision) improves the performance of the nosology, the eventual success of the nosologic process, to optimally reflect the complex reality of psychiatric illness, is assured. PMID:19368761

  11. A VLSI implementation of DCT using pass transistor technology

    NASA Technical Reports Server (NTRS)

    Kamath, S.; Lynn, Douglas; Whitaker, Sterling

    1992-01-01

    A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in real time, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent is achieved after several iterations of simulation and resizing.
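
    The MAC cells rely on modified (radix-4) Booth recoding of the multiplier. Purely as an illustration of the arithmetic, not of the chip's circuitry, the sketch below recodes a 16-bit two's-complement multiplier into digits in {-2, -1, 0, 1, 2} and checks that the recoded partial products reproduce the ordinary product.

```python
def booth_radix4_digits(y, bits=16):
    """Modified (radix-4) Booth recoding: examine overlapping 3-bit groups of the
    multiplier and emit one digit in {-2, -1, 0, 1, 2} per pair of bits."""
    table = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
             0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    y &= (1 << bits) - 1                       # two's-complement view of y
    extended = y << 1                          # implicit y_{-1} = 0
    digits = []
    for i in range(0, bits, 2):
        group = (extended >> i) & 0b111
        digits.append(table[group])
    return digits                              # digit k has weight 4**k

def booth_multiply(x, y, bits=16):
    """Multiply using the recoded digits: sum of x * digit * 4**k partial products."""
    return sum(d * x * (4 ** k) for k, d in enumerate(booth_radix4_digits(y, bits)))

# Quick check against ordinary multiplication (y taken as a signed 16-bit value).
for x, y in [(123, 456), (37, -291), (-500, 73)]:
    assert booth_multiply(x, y) == x * y, (x, y)
print("radix-4 Booth recoding reproduces ordinary products")
```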

  12. A Chebyshev Collocation Method for Moving Boundaries, Heat Transfer, and Convection During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.

    1994-01-01

    Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semi-conductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method and a Picard-type iterative scheme.
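
    As a minimal illustration of Chebyshev pseudospectral collocation, the sketch below builds the standard Chebyshev differentiation matrix (Trefethen's construction) and solves a simple two-point boundary-value problem; it is a toy stand-in, not the coupled heat/mass/momentum solidification model of the paper.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Chebyshev-Gauss-Lobatto points x
    on [-1, 1] (Trefethen, Spectral Methods in MATLAB, program cheb)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via negative row sums
    return D, x

# Toy steady problem u'' = exp(x) on [-1, 1] with u(±1) = 0, standing in for the
# collocation treatment of the field equations in the abstract.
N = 24
D, x = cheb(N)
D2 = D @ D
f = np.exp(x)
A = D2[1:-1, 1:-1]                   # strip boundary rows/columns (Dirichlet BCs)
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, f[1:-1])

exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
print("max collocation error:", float(np.max(np.abs(u - exact))))
```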

  13. Integrating musculoskeletal sonography into rehabilitation: Therapists’ experiences with training and implementation

    PubMed Central

    Gray, Julie McLaughlin; Frank, Gelya; Roll, Shawn C.

    2018-01-01

    Musculoskeletal sonography is rapidly extending beyond radiology; however, best practices for successful integration into new practice contexts are unknown. This study explored non-physician experiences with the processes of training and integration of musculoskeletal sonography into rehabilitation. Qualitative data were captured through multiple sources and iterative thematic analysis was used to describe two occupational therapists’ experiences. The dominant emerging theme was competency, in three domains: technical, procedural and analytical. Additionally, three practice considerations were illuminated: (1) understanding imaging within the dynamics of rehabilitation, (2) navigating nuances of interprofessional care, and (3) implications for post-professional training. Findings indicate that sonography training for rehabilitation providers requires multi-level competency development and consideration of practice complexities. These data lay a foundation on which to explore and develop best practices for incorporating sonographic imaging into the clinic as a means for engaging clients as active participants in the rehabilitation process to improve health and rehabilitation outcomes. PMID:28830315

  14. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Moreover, the extra work required by iterative schemes can be structured to perform efficiently on current and future generations of scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, the GMRES (Generalized Minimum Residual) algorithm, which is based on the classical conjugate gradient method, is also investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing a reasonably small number of vectors N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can all be vectorized and parallelized efficiently.
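
    The abstract's description of GMRES (the update expressed as a combination of a small number N of orthogonal vectors) corresponds to the Arnoldi-based least-squares construction sketched below. The example solves a generic well-conditioned linear system with a fixed Krylov dimension; it is not a Navier-Stokes solver and omits restarting and preconditioning.

```python
import numpy as np

def gmres(A, b, x0, n_vectors=20):
    """Minimal (un-restarted) GMRES sketch: build an n_vectors-dimensional Krylov
    basis with Arnoldi, then minimise the residual norm over that subspace."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    n = len(b)
    Q = np.zeros((n, n_vectors + 1))
    H = np.zeros((n_vectors + 1, n_vectors))
    Q[:, 0] = r0 / beta
    k = n_vectors
    for j in range(n_vectors):
        w = A @ Q[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                   # happy breakdown: subspace is invariant
            k = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    rhs = np.zeros(k + 1)
    rhs[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], rhs, rcond=None)
    return x0 + Q[:, :k] @ y

# Generic well-conditioned test system (not a Navier-Stokes Jacobian).
rng = np.random.default_rng(6)
n = 200
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
x_true = rng.standard_normal(n)
b = A @ x_true
x = gmres(A, b, np.zeros(n), n_vectors=20)
print("relative residual:", float(np.linalg.norm(b - A @ x) / np.linalg.norm(b)))
```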

  15. Human Exploration Framework Team: Strategy and Status

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian K.; Sherwood, Brent; Olson, John

    2011-01-01

    Human Exploration Framework Team (HEFT) was formulated to create a decision framework for human space exploration that drives out the knowledge, capabilities and infrastructure NASA needs to send people to explore multiple destinations in the Solar System in an efficient, sustainable way. The specific goal is to generate an initial architecture that can evolve into a long term, enterprise-wide architecture that is the basis for a robust human space flight enterprise. This paper will discuss the initial HEFT activity which focused on starting up the cross-agency team, getting it functioning, developing a comprehensive development and analysis process and conducting multiple iterations of the process. The outcome of this process will be discussed including initial analysis of capabilities and missions for at least two decades, keeping Mars as the ultimate destination. Details are provided on strategies that span a broad technical and programmatic trade space, are analyzed against design reference missions and evaluated against a broad set of figures of merit including affordability, operational complexity, and technical and programmatic risk.

  16. Fast iterative solution of the Bethe-Salpeter eigenvalue problem using low-rank and QTT tensor approximation

    NASA Astrophysics Data System (ADS)

    Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.

    2017-04-01

    In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(Nb²) in the size of the atomic orbitals basis set, Nb, instead of the practically intractable O(Nb⁶) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT-rank of the matrix entities possesses almost the same magnitude as the number of occupied orbitals in the molecular systems, No.

  17. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of the noise variance σ²), and GCV (that does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  18. An iterative analytical technique for the design of interplanetary direct transfer trajectories including perturbations

    NASA Astrophysics Data System (ADS)

    Parvathi, S. P.; Ramanan, R. V.

    2018-06-01

    An iterative analytical trajectory design technique that includes perturbations in the departure phase of the interplanetary orbiter missions is proposed. The perturbations such as non-spherical gravity of Earth and the third body perturbations due to Sun and Moon are included in the analytical design process. In the design process, first the design is obtained using the iterative patched conic technique without including the perturbations and then modified to include the perturbations. The modification is based on, (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named as biased iterative patched conic technique, does not depend upon numerical integration and all computations are carried out using closed form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.

  19. Incorporating Prototyping and Iteration into Intervention Development: A Case Study of a Dining Hall-Based Intervention

    ERIC Educational Resources Information Center

    McClain, Arianna D.; Hekler, Eric B.; Gardner, Christopher D.

    2013-01-01

    Background: Previous research from the fields of computer science and engineering highlight the importance of an iterative design process (IDP) to create more creative and effective solutions. Objective: This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall--based…

  20. Not All Wizards Are from Oz: Iterative Design of Intelligent Learning Environments by Communication Capacity Tapering

    ERIC Educational Resources Information Center

    Mavrikis, Manolis; Gutierrez-Santos, Sergio

    2010-01-01

    This paper presents a methodology for the design of intelligent learning environments. We recognise that in the educational technology field, theory development and system-design should be integrated and rely on an iterative process that addresses: (a) the difficulty to elicit precise, concise, and operationalized knowledge from "experts" and (b)…

  1. Item Purification Does Not Always Improve DIF Detection: A Counterexample with Angoff's Delta Plot

    ERIC Educational Resources Information Center

    Magis, David; Facon, Bruno

    2013-01-01

    Item purification is an iterative process that is often advocated as improving the identification of items affected by differential item functioning (DIF). With test-score-based DIF detection methods, item purification iteratively removes the items currently flagged as DIF from the test scores to get purified sets of items, unaffected by DIF. The…

  2. How Students, Collaborating as Peer Mentors, Enabled an Audacious Group-Based Project

    ERIC Educational Resources Information Center

    Bernstein, Jeffrey L.; Abad, Andrew P.; Bower, Benjamin C.; Box, Sara E.; Huckestein, Hailey L.; Mikulic, Steven M.; Walsh, Brian F.

    2016-01-01

    We discuss how a professor worked with six students to design and implement a complex teaching strategy for a course, and used the students' assistance to create a sustainable model for future iterations of the course.

  3. Computer program determines chemical equilibria in complex systems

    NASA Technical Reports Server (NTRS)

    Gordon, S.; Zeleznik, F. J.

    1966-01-01

    Computer program numerically solves nonlinear algebraic equations for chemical equilibrium based on iteration equations independent of choice of components. This program calculates theoretical performance for frozen and equilibrium composition during expansion and Chapman-Jouguet flame properties, studies combustion, and designs hardware.

  4. Understanding Implementation of Complex Interventions in Primary Care Teams.

    PubMed

    Luig, Thea; Asselin, Jodie; Sharma, Arya M; Campbell-Scherer, Denise L

    2018-01-01

    The implementation of interventions to support practice change in primary care settings is complex. Pragmatic strategies, grounded in empiric data, are needed to navigate real-world challenges and unanticipated interactions with context that can impact implementation and outcomes. This article uses the example of the "5As Team" randomized control trial to explore implementation strategies to promote knowledge transfer, capacity building, and practice integration, and their interaction within the context of an interdisciplinary primary care team. We performed a qualitative evaluation of the implementation process of the 5As Team intervention study, a randomized control trial of a complex intervention in primary care. We conducted thematic analysis of field notes of intervention sessions, log books of the practice facilitation team members, and semistructured interviews with 29 interdisciplinary clinician participants. We used and further developed the Interactive Systems Framework for dissemination and implementation to interpret and structure findings. Three themes emerged that illuminate interactions between implementation processes, context, and outcomes: (1) facilitating team communication supported collective and individual sense-making and adoption of the innovation, (2) iterative evaluation of the implementation process and real-time feedback-driven adaptions of the intervention proved crucial for sustainable, context-appropriate intervention impact, (3) stakeholder engagement led to both knowledge exchange that contributes to local problem solving and to shaping a clinical context that is supportive to practice change. Our findings contribute pragmatic strategies that can help practitioners and researchers to navigate interactions between context, intervention, and implementation factors to increase implementation success. We further developed an implementation framework that includes sustained engagement with stakeholders, facilitation of team sense-making, and dynamic evaluation and intervention design as integral parts of complex intervention implementation. NCT01967797. 18 October 2013. © Copyright 2018 by the American Board of Family Medicine.

  5. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  6. Sampling-Based Coverage Path Planning for Complex 3D Structures

    DTIC Science & Technology

    2012-09-01

    one such task, in which a single robot must sweep its end effector over the entirety of a known workspace. For two-dimensional environments, optimal...structures. First, we introduce a new algorithm for planning feasible coverage paths. It is more computationally efficient in problems of complex geometry...iteratively shortens and smooths a feasible coverage path; robot configurations are adjusted without violating any coverage constraints. Third, we propose

  7. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to resolve. In this paper, the shortcomings existing in the WTM model are discussed, and the tearing approach as well as the inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, the artificial bee colony (ABC) algorithm, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonableness and effectiveness.
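
    The WTM-based decoupling model is not specified in the abstract, but the employed-bee / onlooker / scout structure of the artificial bee colony (ABC) algorithm it adopts can be sketched compactly. The version below minimises a generic continuous test objective; colony size, limit and cycle counts are arbitrary illustrative choices.

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=30, max_cycles=200, seed=0):
    """Compact ABC sketch: employed bees perturb food sources, onlookers resample
    proportional to fitness, scouts replace sources that stop improving."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    foods = rng.uniform(lo, hi, size=(n_food, dim))
    costs = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):
        k = rng.integers(n_food - 1)
        k = k + (k >= i)                           # partner source different from i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        c = f(cand)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_food):                    # employed bee phase
            try_neighbour(i)
        fitness = 1.0 / (1.0 + costs - costs.min())
        probs = fitness / fitness.sum()
        for i in rng.choice(n_food, size=n_food, p=probs):   # onlooker phase
            try_neighbour(i)
        for i in np.where(trials > limit)[0]:      # scout phase: reset exhausted sources
            foods[i] = rng.uniform(lo, hi, size=dim)
            costs[i] = f(foods[i])
            trials[i] = 0
    best = np.argmin(costs)
    return foods[best], costs[best]

# Generic test objective (sphere function) standing in for the decoupling cost.
x_best, c_best = abc_minimize(lambda x: float(np.sum(x ** 2)),
                              (np.full(5, -5.0), np.full(5, 5.0)))
print("best cost found:", round(c_best, 6))
```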

  8. Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM

    NASA Astrophysics Data System (ADS)

    Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng

    2015-07-01

    We introduce and investigate the feasibility of a novel iterative blind phase noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation scheme is performed through the combination of frequency-domain symbol decision-aided estimation and the ICI phase noise time-average approximation. An additional initial decision process with a suitable threshold is introduced in order to suppress decision error symbols. Our proposed ICI mitigation scheme proves effective in removing the ICI for a simulated CO-OFDM system with a 16-QAM modulation format. At slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.

  9. Sandia fracture challenge 2: Sandia California's modeling approach

    DOE PAGES

    Karlson, Kyle N.; James W. Foulk, III; Brown, Arthur A.; ...

    2016-03-09

    The second Sandia Fracture Challenge illustrates that predicting the ductile fracture of Ti-6Al-4V subjected to moderate and elevated rates of loading requires thermomechanical coupling, elasto-thermo-poro-viscoplastic constitutive models with the physics of anisotropy and regularized numerical methods for crack initiation and propagation. We detail our initial approach with an emphasis on iterative calibration and systematically increasing complexity to accommodate anisotropy in the context of an isotropic material model. Blind predictions illustrate strengths and weaknesses of our initial approach. We then revisit our findings to illustrate the importance of including anisotropy in the failure process. Furthermore, mesh-independent solutions of continuum damage models having both isotropic and anisotropic yield surfaces are obtained through nonlocality and localization elements.

  10. Development of a Mixed Methods Investigation of Process and Outcomes of Community-Based Participatory Research

    PubMed Central

    Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R.

    2017-01-01

    This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided. PMID:29230152

  11. Is there a need for a specific educational scholarship for using e-learning in medical education?

    PubMed

    Sandars, John; Goh, Poh Sun

    2016-10-01

    We propose the need for a specific educational scholarship when using e-learning in medical education. Effective e-learning has additional factors that require specific critical attention, including the design and delivery of e-learning. An important aspect is the recognition that e-learning is a complex intervention, with several interconnecting components that have to be aligned. This alignment requires an essential iterative development process with usability testing. Effectiveness of e-learning in one context may not be fully realized in another context unless there is further consideration of applicability and scalability. We recommend a participatory approach for an educational scholarship for using e-learning in medical education, such as by action research or design-based research.

  12. Usability Evaluation of a Clinical Decision Support System for Geriatric ED Pain Treatment.

    PubMed

    Genes, Nicholas; Kim, Min Soon; Thum, Frederick L; Rivera, Laura; Beato, Rosemary; Song, Carolyn; Soriano, Jared; Kannry, Joseph; Baumlin, Kevin; Hwang, Ula

    2016-01-01

    Older adults are at risk for inadequate emergency department (ED) pain care. Unrelieved acute pain is associated with poor outcomes. Clinical decision support systems (CDSS) hold promise to improve patient care, but CDSS quality varies widely, particularly when usability evaluation is not employed. To conduct an iterative usability and redesign process of a novel geriatric abdominal pain care CDSS. We hypothesized this process would result in the creation of more usable and favorable pain care interventions. Thirteen emergency physicians familiar with the Electronic Health Record (EHR) in use at the study site were recruited. Over a 10-week period, 17 1-hour usability test sessions were conducted across 3 rounds of testing. Participants were given 3 patient scenarios and provided simulated clinical care using the EHR, while interacting with the CDSS interventions. Quantitative System Usability Scores (SUS), favorability scores and qualitative narrative feedback were collected for each session. Using a multi-step review process by an interdisciplinary team, positive and negative usability issues in effectiveness, efficiency, and satisfaction were considered, prioritized and incorporated in the iterative redesign process of the CDSS. Video analysis was used to determine the appropriateness of the CDS appearances during simulated clinical care. Over the 3 rounds of usability evaluations and subsequent redesign processes, mean SUS progressively improved from 74.8 to 81.2 to 88.9; mean favorability scores improved from 3.23 to 4.29 (1 worst, 5 best). Video analysis revealed that, in the course of the iterative redesign processes, rates of physicians' acknowledgment of CDS interventions increased, however most rates of desired actions by physicians (such as more frequent pain score updates) decreased. The iterative usability redesign process was instrumental in improving the usability of the CDSS; if implemented in practice, it could improve geriatric pain care. The usability evaluation process led to improved acknowledgement and favorability. Incorporating usability testing when designing CDSS interventions for studies may be effective to enhance clinician use.

  13. Using evolutionary algorithms for fitting high-dimensional models to neuronal data.

    PubMed

    Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley

    2012-04-01

    In the study of neurosciences, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior in this particular application to the gradient following methods. This is likely to be the case in many other complex systems, such as those often found in neuroscience.

  14. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER Organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such a code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.

  15. Linear-scaling implementation of molecular response theory in self-consistent field electronic-structure theory.

    PubMed

    Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł

    2007-04-21

    A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.

  16. Blind motion image deblurring using nonconvex higher-order total variation model

    NASA Astrophysics Data System (ADS)

    Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo

    2016-09-01

    We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which can effectively eliminate the staircase effect in the deblurred image; meanwhile, we employ an image sparse prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and the H1 norm as the blur kernel regularization terms, accounting for the sparsity and smoothness of the motion blur kernel. Third, because the proposed model is difficult to solve numerically owing to its intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximation scheme in the outer iteration and a split Bregman algorithm in the inner iteration. We also discuss the convergence of the proposed binary iterative strategy. Last, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both quality of visual perception and quantitative measurement.
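
    As a much-simplified illustration of the outer reweighting idea, the sketch below runs iteratively reweighted least squares for a 1D TV-like denoising problem; the direct solve stands in for the split Bregman inner iteration, and it is not the paper's full nonconvex higher-order TV blind-deblurring model.

        # Simplified sketch: iteratively reweighted least squares (IRLS) for a 1D TV-like prior.
        # This illustrates the outer "reweighting" loop only; it is not the paper's full
        # nonconvex higher-order TV blind-deblurring model.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        signal = np.concatenate([np.zeros(70), np.ones(60), 0.4 * np.ones(70)])  # piecewise constant
        noisy = signal + 0.1 * rng.standard_normal(n)

        D = np.diff(np.eye(n), axis=0)        # first-order difference operator ((n-1) x n)
        lam, eps = 0.8, 1e-3
        x = noisy.copy()
        for it in range(30):
            w = 1.0 / (np.abs(D @ x) + eps)   # reweighting: approximates the nonsmooth |Dx| penalty
            A = np.eye(n) + lam * D.T @ (w[:, None] * D)
            x = np.linalg.solve(A, noisy)     # inner step: a weighted quadratic (least-squares) problem

        print("residual to clean signal:", np.linalg.norm(x - signal) / np.sqrt(n))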

  17. SUMOFLUX: A Generalized Method for Targeted 13C Metabolic Flux Ratio Analysis

    PubMed Central

    Kogadeeva, Maria

    2016-01-01

    Metabolic fluxes are a cornerstone of cellular physiology that emerge from a complex interplay of enzymes, carriers, and nutrients. The experimental assessment of in vivo intracellular fluxes using stable isotopic tracers is essential if we are to understand metabolic function and regulation. Flux estimation based on 13C or 2H labeling relies on complex simulation and iterative fitting, processes that necessitate a level of expertise that ordinarily precludes the non-expert user. To overcome this, we have developed SUMOFLUX, a methodology that is broadly applicable to the targeted analysis of 13C-metabolic fluxes. By combining surrogate modeling and machine learning, we trained a predictor to specialize in estimating flux ratios from measurable 13C-data. SUMOFLUX targets specific flux features individually, which makes it fast, user-friendly, applicable to experimental design, and robust in terms of experimental noise and exchange flux magnitude. Collectively, we predict that SUMOFLUX's properties realistically pave the way to high-throughput flux analyses. PMID:27626798
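
    The surrogate-modeling-plus-learning idea can be sketched as follows: labeling patterns are simulated for many sampled flux ratios and a regressor is trained to invert them. The linear toy forward model and the random-forest choice are assumptions for illustration, not the SUMOFLUX implementation.

        # Conceptual sketch of surrogate modeling + machine learning for flux-ratio estimation.
        # The linear "forward model" is a toy stand-in for a real 13C label-propagation simulator.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(2)

        def simulate_labeling(ratio, noise=0.02):
            # Toy mapping from a flux ratio in [0, 1] to 5 "measurable" 13C mass-isotopomer fractions.
            basis_a = np.array([0.6, 0.2, 0.1, 0.05, 0.05])
            basis_b = np.array([0.1, 0.15, 0.25, 0.3, 0.2])
            meas = ratio * basis_a + (1 - ratio) * basis_b
            return meas + noise * rng.standard_normal(5)

        # Surrogate training set: sample flux ratios and simulate their measurable signatures.
        ratios = rng.uniform(0, 1, 5000)
        X = np.array([simulate_labeling(r) for r in ratios])
        reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ratios)

        # "Measured" data from an unknown ratio is inverted by the trained predictor.
        true_ratio = 0.37
        estimate = reg.predict(simulate_labeling(true_ratio).reshape(1, -1))[0]
        print(f"true ratio {true_ratio:.2f}, estimated {estimate:.2f}")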

  18. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity check (LDPC) codes in the field of error control coding, in this paper we transform the SBS problem into an LDPC-like problem through a factor graph instead of using the conventional neural network approaches to solve the SBS problem. Within the factor graph framework, the soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processing units via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains the optimal solution but also enjoys a low complexity suitable for integrated-circuit implementation.
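
    The sum-product mechanics can be illustrated on a deliberately tiny factor graph with two binary variables; this shows only the message-passing principle, not the paper's satellite-broadcast-scheduling factor graph.

        # Minimal sum-product (belief propagation) example on a tiny factor graph with two
        # binary variables x1, x2 and factors f1(x1), f12(x1, x2). This only illustrates the
        # message-passing mechanics, not the paper's SBS formulation.
        import numpy as np

        f1 = np.array([0.3, 0.7])                  # unary factor over x1
        f12 = np.array([[0.9, 0.1],                # pairwise factor over (x1, x2)
                        [0.2, 0.8]])

        # Message from factor f12 to variable x2: sum over x1 of f12(x1, x2) * (message from x1),
        # where the message from x1 is its incoming unary factor f1.
        msg_x1_to_f12 = f1
        msg_f12_to_x2 = (f12 * msg_x1_to_f12[:, None]).sum(axis=0)

        marginal_x2 = msg_f12_to_x2 / msg_f12_to_x2.sum()

        # Brute-force check by enumerating the joint distribution.
        joint = f1[:, None] * f12
        brute = joint.sum(axis=0) / joint.sum()
        print(marginal_x2, brute)                  # the two marginals agree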

  19. Microgravity isolation system design: A modern control analysis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Many acceleration-sensitive, microgravity science experiments will require active vibration isolation from the manned orbiters on which they will be mounted. The isolation problem, especially in the case of a tethered payload, is a complex three-dimensional one that is best suited to modern-control design methods. These methods, although more powerful than their classical counterparts, can nonetheless go only so far in meeting the design requirements for practical systems. Once a tentative controller design is available, it must still be evaluated to determine whether or not it is fully acceptable, and to compare it with other possible design candidates. Realistically, such evaluation will be an inherent part of a necessary iterative design process. In this paper, an approach is presented for applying complex mu-analysis methods to a closed-loop vibration isolation system (experiment plus controller). An analysis framework is presented for evaluating nominal stability, nominal performance, robust stability, and robust performance of active microgravity isolation systems, with emphasis on the effective use of mu-analysis methods.

  20. Optimal spiral phase modulation in Gerchberg-Saxton algorithm for wavefront reconstruction and correction

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Běhal, J.; Bouchal, Z.

    2018-01-01

    In phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for its simplicity of implementation. This iterative process can advantageously be deployed in combination with a spatial light modulator (SLM), enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of the aberration correction using the GS algorithm can be significantly enhanced by a vortex image spot used as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
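
    A minimal sketch of the plain GS loop is given below, assuming a circular pupil and a synthetic target focal-plane amplitude; the paper's vortex (spiral-phase) target and the SLM hardware step would enter only through the choice of target pattern.

        # Minimal Gerchberg-Saxton sketch: recover a pupil phase from a known pupil amplitude
        # and a target focal-plane intensity. The target here is a plain focal spot; using a
        # vortex (spiral-phase) image spot, as in the paper, would change only `target_amp`.
        import numpy as np

        n = 128
        y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
        pupil_amp = (x**2 + y**2 < (0.4 * n) ** 2).astype(float)   # circular aperture

        # Synthetic "measured" focal-plane amplitude produced by an unknown aberration.
        true_phase = 2.0 * np.exp(-(x**2 + y**2) / (0.1 * n**2))
        target_amp = np.abs(np.fft.fftshift(np.fft.fft2(pupil_amp * np.exp(1j * true_phase))))

        phase = np.zeros((n, n))
        for it in range(100):
            field = pupil_amp * np.exp(1j * phase)                  # impose the pupil amplitude
            focal = np.fft.fftshift(np.fft.fft2(field))
            focal = target_amp * np.exp(1j * np.angle(focal))       # impose the target amplitude
            back = np.fft.ifft2(np.fft.ifftshift(focal))
            phase = np.angle(back)                                  # keep only the phase estimate

        recon = np.abs(np.fft.fftshift(np.fft.fft2(pupil_amp * np.exp(1j * phase))))
        print("final focal-amplitude mismatch:", np.linalg.norm(recon - target_amp))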

  1. Database searching and accounting of multiplexed precursor and product ion spectra from the data independent analysis of simple and complex peptide mixtures.

    PubMed

    Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J

    2009-03-01

    A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, both in simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data-independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses on both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques for automatically determining the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system" type protein digest mixtures. The method was validated independently by data-dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data-independent acquisition strategies with high sensitivity and specificity. They also illustrate a more comprehensive analysis of the samples studied, providing approximately 20% more protein identifications compared to a more conventional data-directed approach using the same identification criteria, with a concurrent increase in both sequence coverage and the number of modified peptides.
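
    The decoy-database idea for estimating false positive rates can be sketched in a few lines; the score distributions here are synthetic stand-ins for real target and decoy search-engine scores.

        # Illustrative sketch of decoy-based false-discovery-rate estimation for peptide matches.
        # Scores are synthetic; in practice they come from the search engine's target and decoy hits.
        import numpy as np

        rng = np.random.default_rng(3)
        target_scores = np.concatenate([rng.normal(3.0, 1.0, 800),    # true matches
                                        rng.normal(0.0, 1.0, 200)])   # false target matches
        decoy_scores = rng.normal(0.0, 1.0, 1000)                     # all decoy matches are false

        def fdr_at(threshold):
            n_target = (target_scores >= threshold).sum()
            n_decoy = (decoy_scores >= threshold).sum()
            return n_decoy / max(n_target, 1)          # decoy count estimates the false positives

        for t in (1.0, 1.5, 2.0, 2.5):
            print(f"score >= {t}: estimated FDR = {fdr_at(t):.3f}")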

  2. Designing magnetic systems for reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitzenroeder, P.J.

    1991-01-01

    Designing magnetic systems is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward, such as utilities connection design and implementation, to the most sophisticated, such as advanced finite element analyses, receive a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that the predominance of magnet failures tend not to be in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly, the majority of causes of loss of magnet reliability at PPPL have not been in the sophisticated areas of the design but are due to difficulties associated with coolant connections, bus connections, and external structural connections. Looking towards the future, the major next-step devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase where there are fewer, but very costly, devices with the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years, the lessons learned from them, and the magnet design and fabrication practices which have been found to contribute to magnet reliability.

  3. Systematic iteration between model and methodology: A proposed approach to evaluating unintended consequences.

    PubMed

    Morell, Jonathan A

    2018-06-01

    This article argues that evaluators could better deal with unintended consequences if they improved their methods of systematically and methodically combining empirical data collection and model building over the life cycle of an evaluation. This process would be helpful because it can increase the timespan from when the need for a change in methodology is first suspected to the time when the new element of the methodology is operational. The article begins with an explanation of why logic models are so important in evaluation, and why the utility of models is limited if they are not continually revised based on empirical evaluation data. It sets the argument within the larger context of the value and limitations of models in the scientific enterprise. Following will be a discussion of various issues that are relevant to model development and revision. What is the relevance of complex system behavior for understanding predictable and unpredictable unintended consequences, and the methods needed to deal with them? How might understanding of unintended consequences be improved with an appreciation of generic patterns of change that are independent of any particular program or change effort? What are the social and organizational dynamics that make it rational and adaptive to design programs around single-outcome solutions to multi-dimensional problems? How does cognitive bias affect our ability to identify likely program outcomes? Why is it hard to discern change as a result of programs being embedded in multi-component, continually fluctuating, settings? The last part of the paper outlines a process for actualizing systematic iteration between model and methodology, and concludes with a set of research questions that speak to how the model/data process can be made efficient and effective. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Enhancing evidence informed policymaking in complex health systems: lessons from multi-site collaborative approaches.

    PubMed

    Langlois, Etienne V; Becerril Montekio, Victor; Young, Taryn; Song, Kayla; Alcalde-Rabanal, Jacqueline; Tran, Nhan

    2016-03-17

    There is an increasing interest worldwide to ensure evidence-informed health policymaking as a means to improve health systems performance. There is a need to engage policymakers in collaborative approaches to generate and use knowledge in real world settings. To address this gap, we implemented two interventions based on iterative exchanges between researchers and policymakers/implementers. This article aims to reflect on the implementation and impact of these multi-site evidence-to-policy approaches implemented in low-resource settings. The first approach was implemented in Mexico and Nicaragua and focused on implementation research facilitated by communities of practice (CoP) among maternal health stakeholders. We conducted a process evaluation of the CoPs and assessed the professionals' abilities to acquire, analyse, adapt and apply research. The second approach, called the Policy BUilding Demand for evidence in Decision making through Interaction and Enhancing Skills (Policy BUDDIES), was implemented in South Africa and Cameroon. The intervention put forth a 'buddying' process to enhance demand and use of systematic reviews by sub-national policymakers. The Policy BUDDIES initiative was assessed using a mixed-methods realist evaluation design. In Mexico, the implementation research supported by CoPs triggered monitoring by local health organizations of the quality of maternal healthcare programs. Health programme personnel involved in CoPs in Mexico and Nicaragua reported improved capacities to identify and use evidence in solving implementation problems. In South Africa, Policy BUDDIES informed a policy framework for medication adherence for chronic diseases, including both HIV and non-communicable diseases. Policymakers engaged in the buddying process reported an enhanced recognition of the value of research, and greater demand for policy-relevant knowledge. The collaborative evidence-to-policy approaches underline the importance of iterations and continuity in the engagement of researchers and policymakers/programme managers, in order to account for swift evolutions in health policy planning and implementation. In developing and supporting evidence-to-policy interventions, due consideration should be given to fit-for-purpose approaches, as different needs in policymaking cycles require adapted processes and knowledge. Greater consideration should be provided to approaches embedding the use of research in real-world policymaking, better suited to the complex adaptive nature of health systems.

  5. A polynomial primal-dual Dikin-type algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, B.; Roos, R.; Terlaky, T.

    1994-12-31

    We present a new primal-dual affine scaling method for linear programming. The search direction is obtained by using Dikin's original idea: minimize the objective function (which is the duality gap in a primal-dual algorithm) over a suitable ellipsoid. The search direction has no obvious relationship with the directions proposed in the literature so far. It guarantees a significant decrease in the duality gap in each iteration, and at the same time drives the iterates to the central path. The method admits a polynomial complexity bound that is better than the one for Monteiro et al.'s original primal-dual affine scaling method.
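
    For illustration, the sketch below takes damped primal-dual affine-scaling steps on a tiny standard-form LP, using the usual Newton-type system for the affine-scaling direction; it follows the generic primal-dual affine-scaling recipe, not the paper's Dikin-ellipsoid derivation or its complexity analysis.

        # Sketch of damped primal-dual affine-scaling steps for a tiny LP in standard form
        #   min c^T x  s.t.  A x = b, x >= 0,
        # starting from a strictly feasible primal-dual pair. This is the generic
        # affine-scaling direction (Newton step toward XSe = 0), not the paper's method.
        import numpy as np

        A = np.array([[1.0, 1.0, 1.0]])
        b = np.array([3.0])
        c = np.array([1.0, 2.0, 3.0])

        x = np.array([1.0, 1.0, 1.0])          # strictly feasible primal point (A x = b, x > 0)
        y = np.array([0.0])
        s = c - A.T @ y                        # strictly feasible dual slack (s > 0)
        n, m = x.size, y.size

        for it in range(20):
            X, S = np.diag(x), np.diag(s)
            # Assemble the Newton system for the affine-scaling direction (dx, dy, ds).
            K = np.zeros((2 * n + m, 2 * n + m))
            K[:m, :n] = A                      # A dx = 0
            K[m:m + n, n:n + m] = A.T          # A^T dy + ds = 0
            K[m:m + n, n + m:] = np.eye(n)
            K[m + n:, :n] = S                  # S dx + X ds = -X S e
            K[m + n:, n + m:] = X
            rhs = np.concatenate([np.zeros(m), np.zeros(n), -x * s])
            d = np.linalg.solve(K, rhs)
            dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
            # Damped step keeping x and s strictly positive (fraction-to-the-boundary rule).
            alpha = 0.9 * min(1.0, *(-x[dx < 0] / dx[dx < 0]), *(-s[ds < 0] / ds[ds < 0]))
            x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
            print(f"iteration {it}: duality gap = {x @ s:.2e}")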

  6. Using sparsity information for iterative phase retrieval in x-ray propagation imaging.

    PubMed

    Pein, A; Loock, S; Plonka, G; Salditt, T

    2016-04-18

    For iterative phase retrieval algorithms in near field x-ray propagation imaging experiments with a single distance measurement, it is indispensable to have a strong constraint based on a priori information about the specimen; for example, information about the specimen's support. Recently, Loock and Plonka proposed to use the a priori information that the exit wave is sparsely represented in a certain directional representation system, a so-called shearlet system. In this work, we extend this approach to complex-valued signals by applying the new shearlet constraint to amplitude and phase separately. Further, we demonstrate its applicability to experimental data.

  7. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
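
    The general zoom-in idea, re-evaluating the DFT on progressively finer frequency grids around the current peak, can be sketched as follows; the grid sizes and zoom factor are illustrative assumptions and this is not the authors' exact ilFT algorithm.

        # Sketch of the general idea behind a "local" Fourier analysis: after a coarse FFT finds
        # the dominant peak, the DFT sum is re-evaluated on progressively finer frequency grids
        # around that peak. This is only a conceptual stand-in for the ilFT algorithm.
        import numpy as np

        fs, n = 1000.0, 1024
        t = np.arange(n) / fs
        f_true = 123.456789
        x = np.sin(2 * np.pi * f_true * t)

        # Coarse estimate from the plain FFT grid (resolution fs / n, roughly 1 Hz here).
        spec = np.abs(np.fft.rfft(x))
        f_est = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]
        span = fs / n

        for it in range(8):
            # Direct DFT evaluated on a fine local grid centred on the current estimate.
            freqs = np.linspace(f_est - span, f_est + span, 41)
            dft = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x)
            f_est = freqs[np.argmax(dft)]
            span /= 10.0                     # zoom in by an order of magnitude each iteration

        print(f"true {f_true:.6f} Hz, estimated {f_est:.6f} Hz")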

  8. Approximate techniques of structural reanalysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1974-01-01

    A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.
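
    The sketch below illustrates the proposed modification on a small random system: a first-order Taylor estimate of the modified response is used as the starting point, and the original stiffness factorization then drives a few cycles of iterative refinement; the matrices are synthetic stand-ins for a real structural model.

        # Sketch: a first-order Taylor estimate of the modified response used as the starting
        # point for iterative refinement of the exact modified system K_new u = f. The original
        # stiffness K0 acts as the preconditioner, in the spirit of classical reanalysis.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 50
        M = rng.standard_normal((n, n))
        K0 = M @ M.T + n * np.eye(n)          # original (SPD) stiffness matrix
        dK = rng.standard_normal((n, n))
        dK = 0.05 * (dK + dK.T)               # symmetric stiffness perturbation per unit design change
        f = rng.standard_normal(n)

        delta = 1.0                           # design-variable change
        K_new = K0 + delta * dK
        u0 = np.linalg.solve(K0, f)

        # First-order Taylor approximation: du/dd = -K0^{-1} dK u0.
        u_taylor = u0 - delta * np.linalg.solve(K0, dK @ u0)

        # Iterative refinement (preconditioned Richardson) starting from the Taylor estimate.
        u = u_taylor.copy()
        for it in range(5):
            u = u + np.linalg.solve(K0, f - K_new @ u)
            err = np.linalg.norm(K_new @ u - f) / np.linalg.norm(f)
            print(f"iteration {it}: relative residual {err:.2e}")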

  9. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key features of the IDR algorithms are the construction of the embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to some fixed subspace. Other independent approaches for the study and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures together with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR methods in Sonneveld subspaces admit an interpretation as modified algorithms in the Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahanani, Nursinta Adi, E-mail: sintaadi@batan.go.id; Natsir, Khairina, E-mail: sintaadi@batan.go.id; Hartini, Entin, E-mail: sintaadi@batan.go.id

    Data processing software packages such as VSOP and MCNPX are scientifically proven and complete. The outputs of VSOP and MCNPX are huge and complex text files; in the analysis process, users need additional tools such as Microsoft Excel to present informative results. This research develops a user interface software for the output of VSOP and MCNPX. The VSOP program output is used to support neutronic analysis, and the MCNPX program output is used to support burn-up analysis. The software was developed using iterative development methods, which allow for revision and addition of features according to user needs. Processing time with this software is 500 times faster than with conventional methods using Microsoft Excel. PYTHON is used as the programming language, because Python is available for all major operating systems: Windows, Linux/Unix, OS/2, Mac, Amiga, among others. Values that support neutronic analysis are k-eff, burn-up, and the masses of Pu-239 and Pu-241. Burn-up analysis uses the mass inventory values of the actinides (thorium, plutonium, neptunium and uranium). Values are visualized in graphical form to support analysis.

  11. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-03-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  12. Low activation steels welding with PWHT and coating for ITER test blanket modules and DEMO

    NASA Astrophysics Data System (ADS)

    Aubert, P.; Tavassoli, F.; Rieth, M.; Diegele, E.; Poitevin, Y.

    2011-02-01

    EUROFER weldability is investigated in support of the European material properties database and TBM manufacturing. Electron beam, hybrid, laser, and narrow-gap TIG processes have been carried out on the EUROFER-97 steel (thickness up to 40 mm), a reduced activation ferritic-martensitic steel developed in Europe. These welding processes produce similar welding results with high joint coefficients and are well adapted for minimizing residual distortions. The fusion zones are typically composed of martensite laths with small grain sizes. In the heat-affected zones, martensite grains contain carbide precipitates. High hardness values are measured in all these zones, which, if not tempered, would degrade toughness and creep resistance. PWHT developments have led to a one-step PWHT (750 °C/3 h), successfully applied to joints and restoring good material performance. It produces lower distortion levels than a full austenitization PWHT process, which is not really applicable to a complex welded structure such as the TBM. Different tungsten coatings have been successfully processed on EUROFER material; they have shown no real effect on the EUROFER base material microstructure.

  13. Exploiting lipopolysaccharide-induced deformation of lipid bilayers to modify membrane composition and generate two-dimensional geometric membrane array patterns

    DOE PAGES

    Adams, Peter G.; Swingle, Kirstie L.; Paxton, Walter F.; ...

    2015-05-27

    Supported lipid bilayers have proven effective as model membranes for investigating biophysical processes and in development of sensor and array technologies. The ability to modify lipid bilayers after their formation and in situ could greatly advance membrane technologies, but is difficult via current state-of-the-art technologies. Here we demonstrate a novel method that allows the controlled post-formation processing and modification of complex supported lipid bilayer arrangements, under aqueous conditions. We exploit the destabilization effect of lipopolysaccharide, an amphiphilic biomolecule, interacting with lipid bilayers to generate voids that can be backfilled to introduce desired membrane components. We further demonstrate that when used in combination with a single, traditional soft lithography process, it is possible to generate hierarchically-organized membrane domains and microscale 2-D array patterns of domains. Significantly, this technique can be used to repeatedly modify membranes allowing iterative control over membrane composition. This approach expands our toolkit for functional membrane design, with potential applications for enhanced materials templating, biosensing and investigating lipid-membrane processes.

  14. Automated segmentation of three-dimensional MR brain images

    NASA Astrophysics Data System (ADS)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, other organs, etc.), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments are performed with fifteen 3D MR brain image sets with 8-bit gray scale. Experimental results show that the proposed algorithm is fast and provides robust and satisfactory results.

  15. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-06-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  16. Comparing mass balance and adjoint methods for inverse modeling of nitrogen dioxide columns for global nitrogen oxide emissions

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew; Martin, Randall V.; Padmanabhan, Akhila; Henze, Daven K.

    2017-04-01

    Satellite observations offer information applicable to top-down constraints on emission inventories through inverse modeling. Here we compare two methods of inverse modeling for emissions of nitrogen oxides (NOx) from nitrogen dioxide (NO2) columns using the GEOS-Chem chemical transport model and its adjoint. We treat the adjoint-based 4D-Var modeling approach for estimating top-down emissions as a benchmark against which to evaluate variations on the mass balance method. We use synthetic NO2 columns generated from known NOx emissions to serve as "truth." We find that error in mass balance inversions can be reduced by up to a factor of 2 with an iterative process that uses finite difference calculations of the local sensitivity of NO2 columns to a change in emissions. In a simplified experiment to recover local emission perturbations, horizontal smearing effects due to NOx transport are better resolved by the adjoint approach than by mass balance. For more complex emission changes, or at finer resolution, the iterative finite difference mass balance and adjoint methods produce similar global top-down inventories when inverting hourly synthetic observations, both reducing the a priori error by factors of 3-4. Inversions of simulated satellite observations from low Earth and geostationary orbits also indicate that both the mass balance and adjoint inversions produce similar results, reducing a priori error by a factor of 3. As the iterative finite difference mass balance method provides similar accuracy as the adjoint method, it offers the prospect of accurately estimating top-down NOx emissions using models that do not have an adjoint.
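
    A scalar sketch of the iterative finite-difference mass-balance update is given below; the toy column operator stands in for a chemical transport model such as GEOS-Chem, and the update reduces to a Newton-type correction using a finite-difference sensitivity.

        # Conceptual sketch of an iterative finite-difference mass-balance inversion for a single
        # grid cell. The nonlinear "column operator" below is a toy stand-in for a chemical
        # transport model; the update uses a finite-difference estimate of the local sensitivity.
        def column(emission):
            # Toy nonlinear relation between NOx emission and the simulated NO2 column.
            return 4.0 * emission ** 0.8

        e_true = 2.5
        obs = column(e_true)                    # synthetic "satellite" observation
        e = 1.0                                 # a priori emission estimate

        for it in range(10):
            sim = column(e)
            # Local sensitivity d(column)/d(emission) from a finite-difference perturbation.
            de = 0.05 * e
            sens = (column(e + de) - sim) / de
            e = e + (obs - sim) / sens          # mass-balance style update using the sensitivity
            print(f"iteration {it}: emission estimate {e:.4f} (truth {e_true})")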

  17. An Iterative Inference Procedure Applying Conditional Random Fields for Simultaneous Classification of Land Cover and Land Use

    NASA Astrophysics Data System (ADS)

    Albert, L.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.

  18. Quantification of Chemical Erosion in the DIII-D Divertor

    NASA Astrophysics Data System (ADS)

    McLean, Adam

    2009-11-01

    Chemical erosion (CE) yield at the graphite divertor target in DIII-D was measured to be substantially lower in cold near-detached plasma conditions compared to well-attached ones, with major implications for ITER. Current estimates of tritium retention by co-deposition with hydrocarbons (HCs) in ITER place potentially severe restrictions on operation. However, calculations done to date have been based on excessively conservative assumptions, due to limited understanding of cold divertor plasmas (1-5 eV), which bridge energy thresholds for complex atomic and molecular processes not present in attached conditions. Hydrocarbon injection through a unique porous graphite plate, which realistically simulates secondary reactions of HCs with a graphite surface, has been used to measure CE in situ. For the first time in a divertor, measurements were made at extrinsic CH4 injection rates comparable to the expected intrinsic CE rate of C, with the resulting spectroscopic emissions separated from those of the intrinsic sources. Under cold plasma conditions the contribution of CE-produced C relative to total C sources in the divertor declined dramatically from ~50% to <15%. Photon efficiencies for products from the breakup of injected CH4 were greater than previous measurements at higher puff rates, indicating the importance of minimizing perturbation to the local plasma. At 350 K, the measured CE yield near the outer strike point was ~2.6% in attached conditions, dropping to only ~0.5% in cold plasma; the results are consistent with some theoretical predictions and laboratory studies. Under full detachment, near-total extinction of the CD band occurred, consistent with suppression of net C erosion. These findings have a potentially major impact on projected target lifetime and tritium retention in future reactors, and on the PFC choice for ITER.

  19. Parallel Ellipsoidal Perfectly Matched Layers for Acoustic Helmholtz Problems on Exterior Domains

    DOE PAGES

    Bunting, Gregory; Prakash, Arun; Walsh, Timothy; ...

    2018-01-26

    Exterior acoustic problems occur in a wide range of applications, making the finite element analysis of such problems a common practice in the engineering community. Various methods for truncating infinite exterior domains have been developed, including absorbing boundary conditions, infinite elements, and more recently, perfectly matched layers (PML). PML are gaining popularity due to their generality, ease of implementation, and effectiveness as an absorbing boundary condition. PML formulations have been developed in Cartesian, cylindrical, and spherical geometries, but not ellipsoidal. In addition, the parallel solution of PML formulations with iterative solvers for the solution of the Helmholtz equation, and how this compares with more traditional strategies such as infinite elements, has not been adequately investigated. In this study, we present a parallel, ellipsoidal PML formulation for acoustic Helmholtz problems. To facilitate the meshing process, the ellipsoidal PML layer is generated with an on-the-fly mesh extrusion. Though the complex stretching is defined along ellipsoidal contours, we modify the Jacobian to include an additional mapping back to Cartesian coordinates in the weak formulation of the finite element equations. This allows the equations to be solved in Cartesian coordinates, which is more compatible with existing finite element software, but without the necessity of dealing with corners in the PML formulation. Herein we also compare the conditioning and performance of the PML Helmholtz problem with an infinite element approach that is based on high-order basis functions. On a set of representative exterior acoustic examples, we show that high-order infinite element basis functions lead to an increasing number of Helmholtz solver iterations, whereas for PML the number of iterations remains constant for the same level of accuracy. Finally, this provides an additional advantage of PML over the infinite element approach.

  20. Parallel Ellipsoidal Perfectly Matched Layers for Acoustic Helmholtz Problems on Exterior Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunting, Gregory; Prakash, Arun; Walsh, Timothy

    Exterior acoustic problems occur in a wide range of applications, making the finite element analysis of such problems a common practice in the engineering community. Various methods for truncating infinite exterior domains have been developed, including absorbing boundary conditions, infinite elements, and more recently, perfectly matched layers (PML). PML are gaining popularity due to their generality, ease of implementation, and effectiveness as an absorbing boundary condition. PML formulations have been developed in Cartesian, cylindrical, and spherical geometries, but not ellipsoidal. In addition, the parallel solution of PML formulations with iterative solvers for the solution of the Helmholtz equation, and how this compares with more traditional strategies such as infinite elements, has not been adequately investigated. In this study, we present a parallel, ellipsoidal PML formulation for acoustic Helmholtz problems. To facilitate the meshing process, the ellipsoidal PML layer is generated with an on-the-fly mesh extrusion. Though the complex stretching is defined along ellipsoidal contours, we modify the Jacobian to include an additional mapping back to Cartesian coordinates in the weak formulation of the finite element equations. This allows the equations to be solved in Cartesian coordinates, which is more compatible with existing finite element software, but without the necessity of dealing with corners in the PML formulation. Herein we also compare the conditioning and performance of the PML Helmholtz problem with an infinite element approach that is based on high-order basis functions. On a set of representative exterior acoustic examples, we show that high-order infinite element basis functions lead to an increasing number of Helmholtz solver iterations, whereas for PML the number of iterations remains constant for the same level of accuracy. Finally, this provides an additional advantage of PML over the infinite element approach.

  1. A Mixed Methods Bounded Case Study: Data-Driven Decision Making within Professional Learning Communities for Response to Intervention

    ERIC Educational Resources Information Center

    Rodriguez, Gabriel R.

    2017-01-01

    A growing number of schools are implementing PLCs to address school improvement, where staff engage with data to identify student needs and determine instructional interventions. This is a starting point for engaging in the iterative process of learning for the teacher in order to increase student learning (Hord & Sommers, 2008). The iterative process…

  2. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for the base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
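
    The following sketch illustrates the GP-accelerated ensemble-smoother update on a toy two-parameter problem: the 'expensive' model is run only at a few base points, a GP surrogate supplies ensemble predictions, and a Kalman-type update follows; the forward model, ensemble sizes, and number of base points are illustrative assumptions, not the GPIES implementation.

        # Minimal sketch of a GP-accelerated iterative ensemble smoother update: the expensive
        # forward model is run only at a few "base points", a Gaussian-process surrogate is fit
        # to them, and a large surrogate ensemble supplies the covariances for the update.
        # The quadratic forward model is a toy stand-in for a groundwater flow simulator.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(5)

        def forward(m):                             # "expensive" model: 2 parameters -> 3 observations
            return np.array([m[0] + m[1], m[0] * m[1], m[0] ** 2])

        m_true = np.array([1.2, 0.7])
        obs_err = 0.05
        d_obs = forward(m_true) + obs_err * rng.standard_normal(3)

        m_ens = rng.normal(1.0, 0.5, size=(50, 2))  # prior ensemble of parameter realizations

        for it in range(3):
            # Run the expensive model only at a few base points and fit the surrogate.
            base = m_ens[rng.choice(len(m_ens), 8, replace=False)]
            gp = GaussianProcessRegressor().fit(base, np.array([forward(m) for m in base]))
            d_ens = gp.predict(m_ens)               # cheap surrogate predictions for the whole ensemble

            # Ensemble-smoother (Kalman-type) update from ensemble cross-covariances.
            C_md = np.cov(m_ens.T, d_ens.T)[:2, 2:]
            C_dd = np.cov(d_ens.T) + obs_err ** 2 * np.eye(3)
            K = C_md @ np.linalg.inv(C_dd)
            perturbed_obs = d_obs + obs_err * rng.standard_normal((len(m_ens), 3))
            m_ens = m_ens + (perturbed_obs - d_ens) @ K.T

            print(f"iteration {it}: ensemble mean {m_ens.mean(axis=0)} (truth {m_true})")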

  3. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human-body wall models generated the aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
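
    The reference-correlation idea can be sketched as below: each element signal is cross-correlated with a beam-summed reference and the lag of the correlation peak is taken as that element's delay estimate; the synthetic signals and delays are illustrative, and the estimates are only defined up to a common offset relative to the reference.

        # Sketch of correlation-based delay estimation: each array-element signal is
        # cross-correlated with a reference (here the sum across elements), and the lag of the
        # correlation peak gives that element's aberration delay estimate.
        import numpy as np

        rng = np.random.default_rng(6)
        n_elem, n_samp = 16, 400
        pulse = np.convolve(rng.standard_normal(n_samp), np.hanning(20), mode="same")

        true_delays = rng.integers(-5, 6, n_elem)               # per-element delays in samples
        signals = np.array([np.roll(pulse, d) for d in true_delays])
        signals += 0.1 * rng.standard_normal(signals.shape)      # receiver noise

        reference = signals.mean(axis=0)                          # beam-summed reference signal
        est = []
        for s in signals:
            corr = np.correlate(s, reference, mode="full")
            est.append(np.argmax(corr) - (n_samp - 1))            # lag of the correlation peak

        # Note: estimates are relative to the reference, so a common offset may appear.
        print("true delays:", true_delays)
        print("estimated  :", np.array(est))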

  4. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit method (MPS), and the Discrete Element Method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in the column to be different for each row. The imbalances in the execution time between parallel logical processes are treated as a nonlinear residual. Load balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.

  5. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  6. The child's perspective as a guiding principle: Young children as co-designers in the design of an interactive application meant to facilitate participation in healthcare situations.

    PubMed

    Stålberg, Anna; Sandberg, Anette; Söderbäck, Maja; Larsson, Thomas

    2016-06-01

    During the last decade, interactive technology has entered mainstream society. Its many users also include children, even the youngest ones, who use the technology in different situations for both fun and learning. When designing technology for children, it is crucial to involve children in the process in order to arrive at an age-appropriate end product. In this study we describe the specific iterative process by which an interactive application was developed. This application is intended to facilitate the participation of young children, three to five years old, in healthcare situations. We also describe the specific contributions of the children, who tested the prototypes in a preschool, a primary health care clinic and an outpatient unit at a hospital during the development process. The iterative phases enabled the children to be involved at different stages of the process and to evaluate modifications and improvements made after each prior iteration. The children contributed their own perspectives (the child's perspective) on the usability, content and graphic design of the application, substantially improving the software and resulting in an age-appropriate product. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Iterative dip-steering median filter

    NASA Astrophysics Data System (ADS)

    Huo, Shoudong; Zhu, Weihong; Shi, Taikun

    2017-09-01

    Seismic data are always contaminated with high noise components, which present processing challenges, especially for signal preservation and true amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. It is known that the standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of random noise that needs to be attenuated and the amount of signal to be preserved. It then applies a median filter along the dominant dip and retains the signals. Iterations are adopted to process the residual signals along the remaining dominant dips in a descending sequence, until all signals have been retained. The method is tested on both synthetic and field data gathers and is also compared with the commonly used f-k least squares de-noising and f-x deconvolution.
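
    The core dip-steered median operation can be sketched by flattening neighbouring traces along an assumed dip, taking a median across traces, and keeping the result; the f-k dip scan and the iteration over multiple dips are omitted in this illustration.

        # Simplified sketch of the core dip-steered median operation: neighbouring traces are
        # shifted according to a known (here assumed) dip, a median is taken across traces, and
        # the result is kept. Dip detection via the f-k domain scan is omitted.
        import numpy as np

        rng = np.random.default_rng(7)
        n_traces, n_samples, dip = 30, 200, 2           # dip: samples of moveout per trace

        wavelet = np.exp(-np.linspace(-3, 3, 25) ** 2)  # smooth event waveform
        clean = np.zeros((n_traces, n_samples))
        for i in range(n_traces):
            t0 = 50 + dip * i                           # linearly dipping event
            clean[i, t0:t0 + wavelet.size] = wavelet
        noisy = clean + 0.5 * rng.standard_normal(clean.shape)

        half = 3                                        # half-width of the trace window
        filtered = np.zeros_like(noisy)
        for i in range(n_traces):
            neighbours = []
            for j in range(max(0, i - half), min(n_traces, i + half + 1)):
                neighbours.append(np.roll(noisy[j], -dip * (j - i)))  # flatten along the dip
            filtered[i] = np.median(neighbours, axis=0)               # median across flattened traces

        print("noise rms before:", np.std(noisy - clean), "after:", np.std(filtered - clean))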

  8. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  9. Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations

    NASA Astrophysics Data System (ADS)

    Mirloo, Mahsa; Ebrahimnezhad, Hosein

    2018-03-01

    In this paper, a novel method is proposed to detect 3D object salient points robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points from object protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previous salient point, a new point is added to this set of points in each iteration. With every added salient point, the decision function is updated. Hence, a condition is created for selecting the next point such that it is not extracted from the same protrusion part, guaranteeing that a representative point is drawn from every protrusion part. This method is stable against model variations under isometric transformations, scaling, and noise of different strengths, owing to the use of a feature robust to isometric variations and to considering the relation between the salient points. In addition, the number of points used in the averaging process is decreased in this method, which leads to lower computational complexity in comparison with other salient point detection algorithms.
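
    The greedy iteration can be sketched with geodesic (graph) distances on a toy star-shaped point cloud: each new point maximizes the minimum geodesic distance to the points already selected. This illustrates only the farthest-point iteration, not the paper's full saliency criterion or decision function.

        # Sketch of iterative farthest-point selection with geodesic (graph) distances on a
        # point cloud shaped like a three-armed star; the arms are crude stand-ins for
        # protrusion parts.
        import numpy as np
        from sklearn.neighbors import kneighbors_graph
        from scipy.sparse.csgraph import dijkstra

        # Three dense arms meeting at the origin.
        t = np.linspace(0.0, 1.0, 60)
        arms = [np.stack([t * np.cos(a), t * np.sin(a)], axis=1)
                for a in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
        points = np.vstack(arms)

        graph = kneighbors_graph(points, n_neighbors=6, mode="distance")
        geo = dijkstra(graph, directed=False)            # all-pairs geodesic distances

        # Start from the point with the largest average geodesic distance (an arm tip),
        # then greedily add the point farthest (geodesically) from the current selection.
        selected = [int(np.argmax(geo.mean(axis=1)))]
        for _ in range(2):
            d_to_sel = geo[:, selected].min(axis=1)
            selected.append(int(np.argmax(d_to_sel)))

        print("selected salient points:", points[selected])  # roughly one tip per arm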

  10. Numerical Characterization of Piezoceramics Using Resonance Curves

    PubMed Central

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875

  11. Numerical Characterization of Piezoceramics Using Resonance Curves.

    PubMed

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-27

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods.

  12. Solution algorithm of dwell time in slope-based figuring model

    NASA Astrophysics Data System (ADS)

    Li, Yong; Zhou, Lin

    2017-10-01

    Surface slope profile is commonly used to evaluate X-ray reflective optics for synchrotron radiation beamlines. Moreover, the measurement result from instruments that measure X-ray reflective optics is usually the surface slope profile rather than the surface height profile. To avoid the conversion error, the slope-based figuring model is introduced for processing X-ray reflective optics instead of the surface height-based model. However, the pulse iteration method, which can quickly obtain the dwell time solution of the traditional height-based figuring model, is not applicable to the slope-based figuring model, because the slope removal function takes both positive and negative values and has a complex asymmetric structure. To overcome this problem, we established an optimal mathematical model for the dwell time solution by introducing upper and lower limits on the dwell time and a time gradient constraint. We then used a constrained least squares algorithm to solve for the dwell time in the slope-based figuring model. To validate the proposed algorithm, simulations and experiments were conducted. A flat mirror with an effective aperture of 80 mm was polished on an ion beam machine. After three iterations of polishing, the surface slope profile error of the workpiece converged from RMS 5.65 μrad to RMS 1.12 μrad.
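
    A minimal sketch of a constrained least-squares dwell-time solution, in the spirit of the approach described above (not the authors' exact formulation): the removal matrix, target slope error, bounds, and smoothness weight below are all hypothetical, and the time-gradient constraint is approximated by a soft first-difference penalty rather than a hard constraint.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy removal matrix A (signed, asymmetric slope removal kernel shifted over
# dwell positions) and a target slope error profile e; both are stand-ins.
n = 50
x = np.linspace(-1, 1, n)
kernel = np.gradient(np.exp(-x**2 / 0.02))           # signed, asymmetric kernel
A = np.array([np.roll(kernel, k) for k in range(n)]).T
e = np.sin(2 * np.pi * x)                             # slope error to remove

# Append a first-difference block to penalise steep dwell-time gradients,
# a soft version of the time-gradient constraint.
lam = 0.1
D = lam * (np.eye(n, k=1) - np.eye(n))[:-1]
A_aug = np.vstack([A, D])
b_aug = np.concatenate([e, np.zeros(n - 1)])

res = lsq_linear(A_aug, b_aug, bounds=(0.0, 10.0))    # dwell time within [0, 10] s
print(res.x.min(), res.x.max())
```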

  13. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  14. How does culture affect experiential training feedback in exported Canadian health professional curricula?

    PubMed Central

    Mousa Bacha, Rasha; Abdelaziz, Somaia

    2017-01-01

    Objectives To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. Methods This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Results Document analysis found all programs’ ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar.  Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Conclusions Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences. PMID:28315858

  15. How does culture affect experiential training feedback in exported Canadian health professional curricula?

    PubMed

    Wilbur, Kerry; Mousa Bacha, Rasha; Abdelaziz, Somaia

    2017-03-17

    To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Document analysis found all programs' ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar.  Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences.

  16. Pre-pregnancy community-based intervention for couples in Malaysia: application of intervention mapping.

    PubMed

    Norris, Shane A; Ho, Julius Cheah Chee; Rashed, Aswir Abd; Vinding, Vibeke; Skau, Jutta K H; Biesma, Regien; Aagaard-Hansen, Jens; Hanson, Mark; Matzen, Priya

    2016-11-17

    Malaysia is experiencing a nutrition transition with burgeoning obesity, particularly in women, and a growing prevalence of non-communicable disease. These health burdens have severe implications not only for adult health but also across generations. Pre-conception health promotion could address the intergenerational risk of metabolic disease. This paper describes the development of the "Jom Mama" intervention using Intervention Mapping (IM). The Jom Mama intervention aims to improve the health of young adult couples in Malaysia prior to conception. IM comprises five steps prior to the last one, which involves the evaluation of the intervention. We used the five steps to develop the Jom Mama intervention. Both the process and the evidence are documented, providing the rationale for the selection of the key objectives of the intervention: (i) increasing healthy dietary practice; (ii) increasing physical activity levels; (iii) reducing sedentary activity; and (iv) improving social support to offset stressful lifestyles. From the IM process, Jom Mama will be a health-system-centred approach that uniquely combines both community health promoters and an electronic-health platform to deliver the complex intervention. IM is an iterative process that systematically gathers "best" evidence, selects appropriate theories of behaviour change, and facilitates formative research so as to develop a complex intervention. Though the IM process is time consuming, complex, and costly, it has enriched the Jom Mama intervention with a number of notable advantages: (i) the intervention was fashioned on formative work with stakeholders and the target group; (ii) the intervention combines research evidence with theory; (iii) the intervention acknowledges multiple dynamics of influence; and (iv) the intervention is embedded within health service priorities in Malaysia for greater scale-up possibility.

  17. Model for Simulating a Spiral Software-Development Process

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
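
    To make the spiral-as-repeated-waterfall idea concrete, the toy Python sketch below accumulates effort and residual defects over a few spiral iterations from inputs of the same kind the abstract mentions (code size, productivity, defect density). It is a hedged illustration only; it is not the PATT model, and all parameter values are assumed.

```python
# Hypothetical parameters, not taken from PATT or the IEEE 12207 model.
def spiral_simulation(total_loc=10000, iterations=4, loc_per_hour=10,
                      defects_per_kloc=20, detect_rate=0.7):
    remaining_defects, total_effort = 0.0, 0.0
    loc_per_iter = total_loc / iterations
    for i in range(1, iterations + 1):
        dev_effort = loc_per_iter / loc_per_hour          # coding hours this pass
        injected = defects_per_kloc * loc_per_iter / 1000
        remaining_defects += injected
        detected = detect_rate * remaining_defects        # testing within the pass
        rework_effort = 0.5 * detected                    # assumed hours per defect
        remaining_defects -= detected
        total_effort += dev_effort + rework_effort
        print(f"iteration {i}: effort={dev_effort + rework_effort:.0f} h, "
              f"defects remaining={remaining_defects:.1f}")
    return total_effort

spiral_simulation()
```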

  18. Quantitative phase and amplitude imaging using Differential-Interference Contrast (DIC) microscopy

    NASA Astrophysics Data System (ADS)

    Preza, Chrysanthe; O'Sullivan, Joseph A.

    2009-02-01

    We present an extension of the development of an alternating minimization (AM) method for the computation of a specimen's complex transmittance function (magnitude and phase) from DIC images. The ability to extract both quantitative phase and amplitude information from two rotationally-diverse DIC images (i.e., acquired by rotating the sample) extends previous efforts in computational DIC microscopy that have focused on quantitative phase imaging only. Simulation results show that the inverse problem at hand is sensitive to noise as well as to the choice of the AM algorithm parameters. The AM framework allows constraints and penalties on the magnitude and phase estimates to be incorporated in a principled manner. Towards this end, Green and De Pierro's "log-cosh" regularization penalty is applied to the magnitude of differences of neighboring values of the complex-valued function of the specimen during the AM iterations. The penalty is shown to be convex in the complex space. A procedure to approximate the penalty within the iterations is presented. In addition, a methodology to pre-compute AM parameters that are optimal with respect to the convergence rate of the AM algorithm is also presented. Both extensions of the AM method are investigated with simulations.
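
    For readers unfamiliar with the penalty mentioned above, one plausible form of Green's log-cosh prior, applied here to magnitudes of differences of neighboring complex values, is sketched below; the exact constants and neighborhood definition used in the paper may differ.

```latex
% An assumed form of the log-cosh penalty; f_j is the complex-valued specimen
% transmittance at pixel j and N(j) its neighbours:
\[
  R(f) \;=\; \beta \sum_{j} \sum_{k \in N(j)}
      \frac{1}{\delta}\,\log\cosh\!\bigl(\delta\,\lvert f_j - f_k \rvert\bigr)
\]
% The penalty is convex in |f_j - f_k|, behaving quadratically for small
% differences and linearly for large ones, which smooths noise while
% preserving edges.
```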

  19. An intelligent advisor for the design manager

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Padula, Sharon L.

    1989-01-01

    A design problem is viewed as a complex system divisible into modules. Before the design of a complex system can begin, much time and money are spent in determining the couplings among modules and the presence of iterative loops. This is important because the design manager must know how to group the modules into subsystems and how to assign subsystems to design teams so that changes in one subsystem will have predictable effects on other subsystems. Determining these subsystems is not an easy, straightforward process, and often important couplings are overlooked. Moreover, the planning task must be repeated as new information becomes available or as the design specifications change. The purpose of this research effort is to develop a knowledge-based tool to act as an intelligent advisor for the design manager. This tool identifies the subsystems of a complex design problem, orders them into a well-structured format, and marks the couplings among the subsystems to facilitate the use of multilevel tools. The tool was tested in the decomposition of the COFS (Control of Flexible Structures) mast design, which has about 50 modules. This test indicated that this type of approach could lead to substantial savings by organizing and displaying a complex problem as a sequence of subsystems easily divisible among design teams.

  20. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems were usually solved via removing the occlusion part of both query samples and training samples to perform the recognition process. This practice ignores the global feature of facial image and may lead to unsatisfactory results due to the limitation of local features. Considering the aforementioned drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image processing and intersection-based clustering combination method is used for occlusion FR; (2) according to an accurate occlusion map, the new integrated facial images are recovered iteratively and put into a recognition process; and (3) the effectiveness on recognition accuracy of our method is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has a highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.

  1. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  2. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  3. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the amount of computation while maintaining performance, compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is considerable at the threshold value of 15, but in the subsequent iteration process, the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.

  4. Designing a complex intervention for dementia case management in primary care

    PubMed Central

    2013-01-01

    Background Community-based support will become increasingly important for people with dementia, but currently services are fragmented and the quality of care is variable. Case management is a popular approach to care co-ordination, but evidence to date on its effectiveness in dementia has been equivocal. Case management interventions need to be designed to overcome obstacles to care co-ordination and maximise benefit. A successful case management methodology was adapted from the United States (US) version for use in English primary care, with a view to a definitive trial. Medical Research Council guidance on the development of complex interventions was implemented in the adaptation process, to capture the skill sets, person characteristics and learning needs of primary care based case managers. Methods Co-design of the case manager role in a single NHS provider organisation, with external peer review by professionals and carers, in an iterative technology development process. Results The generic skills and personal attributes were described for practice nurses taking up the case manager role in their workplaces, and for social workers seconded to general practice teams, together with a method of assessing their learning needs. A manual of information material for people with dementia and their family carers was also created using the US intervention as its source. Conclusions Co-design produces rich products that have face validity and map onto the complexities of dementia and of health and care services. The feasibility of the case manager role, as described and defined by this process, needs evaluation in ‘real life’ settings. PMID:23865537

  5. Professional identity formation in medical education for humanistic, resilient physicians: pedagogic strategies for bridging theory to practice.

    PubMed

    Wald, Hedy S; Anthony, David; Hutchinson, Tom A; Liben, Stephen; Smilovitch, Mark; Donato, Anthony A

    2015-06-01

    Recent calls for an expanded perspective on medical education and training include focusing on complexities of professional identity formation (PIF). Medical educators are challenged to facilitate the active constructive, integrative developmental process of PIF within standardized and personalized and/or formal and informal curricular approaches. How can we best support the complex iterative PIF process for a humanistic, resilient health care professional? How can we effectively scaffold the necessary critical reflective learning and practice skill set for our learners to support the shaping of a professional identity? The authors present three pedagogic innovations contributing to the PIF process within undergraduate and graduate medical education (GME) at their institutions. These are (1) interactive reflective writing fostering reflective capacity, emotional awareness, and resiliency (as complexities within physician-patient interactions are explored) for personal and professional development; (2) synergistic teaching modules about mindful clinical practice and resilient responses to difficult interactions, to foster clinician resilience and enhanced well-being for effective professional functioning; and (3) strategies for effective use of a professional development e-portfolio and faculty development of reflective coaching skills in GME. These strategies as "bridges from theory to practice" embody and integrate key elements of promoting and enriching PIF, including guided reflection, the significant role of relationships (faculty and peers), mindfulness, adequate feedback, and creating collaborative learning environments. Ideally, such pedagogic innovations can make a significant contribution toward enhancing quality of care and caring with resilience for the being, relating, and doing of a humanistic health care professional.

  6. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  7. First density profile measurements using frequency modulation of the continuous wave reflectometry on JET

    NASA Astrophysics Data System (ADS)

    Meneses, L.; Cupido, L.; Sirinelli, A.; Manso, M. E.; Jet-Efds Contributors

    2008-10-01

    We present the main design options and implementation of an X-mode reflectometer developed and successfully installed at JET using an innovative approach. It aims to prove the viability of measuring density profiles with high spatial and temporal resolution using broadband reflectometry operating in long and complex transmission lines. It probes the plasma with magnetic fields between 2.4 and 3.0 T using the V band [~(0-1.4)×10^19 m^-3]. The first experimental results show the high sensitivity of the diagnostic when measuring changes in the plasma density profile occurring in ITER-relevant regimes, such as ELMy H-modes. The successful demonstration of this concept motivated the upgrade of the JET frequency modulation of the continuous wave (FMCW) reflectometry diagnostic, to probe both the edge and the core. This new system is essential to prove the viability of using the FMCW reflectometry technique to probe the plasma in next-step devices, such as ITER, since they share the same waveguide complexity.

  8. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh compared to the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
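
    The exponential window idea can be illustrated with a few lines of Python: sampling the Laplace transform along the shifted line s = sigma + i*omega is equivalent to taking the Fourier transform of the damped signal f(t)exp(-sigma*t), and multiplying by exp(sigma*t) after the inverse FFT undoes the damping. The signal and window choice below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal illustration of the exponential window method.
T, n = 10.0, 1024
t = np.linspace(0.0, T, n, endpoint=False)
sigma = np.log(1e4) / T                           # window: damping ~1e-4 at t = T
f = np.sin(2 * np.pi * t) * np.exp(-0.3 * t)      # "true" transient response

# FFT of the damped signal = samples of F(sigma + i*omega) along the shifted line.
F_shifted = np.fft.fft(f * np.exp(-sigma * t))

# Inverse FFT followed by undamping recovers the time response.
f_rec = np.real(np.fft.ifft(F_shifted)) * np.exp(sigma * t)

print(np.max(np.abs(f - f_rec)))                  # error is at machine precision
```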

  9. Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1999-01-01

    A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, and it was demonstrated that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.

  10. Neural network training by integration of adjoint systems of equations forward in time

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1992-01-01

    A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, and it was demonstrated that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.

  11. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce considerable performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases of accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of the improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they are capable of being stacked and have cumulative effects on the reduction of the time complexity.
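
    The classical MPD loop described above (correlate, pick the best atom, subtract, repeat) is compact enough to sketch directly; the dictionary, tolerance, and toy signal below are assumptions for illustration and do not reproduce the MPD++ modifications.

```python
import numpy as np

def matching_pursuit(signal, dictionary, max_iter=50, tol=1e-6):
    """Classical MPD: greedily pick the atom most correlated with the residual,
    subtract its contribution, and repeat. `dictionary` holds unit-norm atoms
    as columns."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(max_iter):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        if np.abs(correlations[k]) < tol:
            break                                  # stopping criterion met
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy usage: random unit-norm dictionary, sparse synthetic signal.
rng = np.random.default_rng(1)
D = rng.standard_normal((128, 300))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 10] - 1.5 * D[:, 200]
c, r = matching_pursuit(x, D)
print(np.linalg.norm(r))
```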

  12. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
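
    A hedged, minimal illustration of constraint-based iterative repair (not the GERRY system): start from a complete but possibly flawed schedule on a single shared resource and repeatedly move one task involved in a conflict to the start time that minimizes the remaining conflicts, in the style of min-conflicts repair. All task data and limits below are hypothetical.

```python
import random

def conflicts(starts, durations):
    """Return index pairs of tasks that overlap on a single shared resource."""
    bad = []
    for i in range(len(starts)):
        for j in range(i + 1, len(starts)):
            if starts[i] < starts[j] + durations[j] and starts[j] < starts[i] + durations[i]:
                bad.append((i, j))
    return bad

def iterative_repair(durations, horizon=40, max_iter=500, seed=0):
    random.seed(seed)
    # Start from a complete but possibly flawed schedule.
    starts = [random.randint(0, horizon) for _ in durations]
    for _ in range(max_iter):
        bad = conflicts(starts, durations)
        if not bad:
            return starts                          # conflict-free schedule found
        i, j = random.choice(bad)
        task = random.choice((i, j))
        # Repair move: pick the start time that minimises remaining conflicts.
        best = min(range(horizon + 1),
                   key=lambda s: len(conflicts(starts[:task] + [s] + starts[task + 1:],
                                               durations)))
        starts[task] = best
    return starts

print(iterative_repair([3, 5, 2, 4, 6]))
```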

  13. Scheduling and rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper describes the GERRY scheduling and rescheduling system being applied to coordinate Space Shuttle Ground Processing. The system uses constraint-based iterative repair, a technique that starts with a complete but possibly flawed schedule and iteratively improves it by using constraint knowledge within repair heuristics. In this paper we explore the tradeoff between the informedness and the computational cost of several repair heuristics. We show empirically that some knowledge can greatly improve the convergence speed of a repair-based system, but that too much knowledge, such as the knowledge embodied within the MIN-CONFLICTS lookahead heuristic, can overwhelm a system and result in degraded performance.

  14. The moisture outgassing kinetics of a silica reinforced polydimethylsiloxane

    NASA Astrophysics Data System (ADS)

    Sharma, H. N.; McLean, W.; Maxwell, R. S.; Dinh, L. N.

    2016-09-01

    A silica-filled polydimethylsiloxane (PDMS) composite M9787 was investigated for potential outgassing in a vacuum/dry environment with the temperature programmed desorption/reaction method. The outgassing kinetics of 463 K vacuum heat-treated samples, vacuum heat-treated samples which were subsequently re-exposed to moisture, and untreated samples were extracted using the isoconversional and constrained iterative regression methods in a complementary fashion. Density functional theory (DFT) calculations of water interactions with a silica surface were also performed to provide insight into the structural motifs leading to the obtained kinetic parameters. Kinetic analysis/model revealed that no outgassing occurs from the vacuum heat-treated samples in subsequent vacuum/dry environment applications at room temperature (˜300 K). The main effect of re-exposure of the vacuum heat-treated samples to a glove box condition (˜30 ppm by volume of H2O) for even a couple of days was the formation, on the silica surface fillers, of ˜60 ppm by weight of physisorbed and loosely bonded moisture, which subsequently outgasses at room temperature in a vacuum/dry environment in a time span of 10 yr. However, without any vacuum heat treatment and even after 1 h of vacuum pump down, about 300 ppm by weight of H2O would be released from the PDMS in the next few hours. Thereafter the outgassing rate slows down substantially. The presented methodology of using the isoconversional kinetic analysis results and some appropriate nature of the reaction as the constraints for more accurate iterative regression analysis/deconvolution of complex kinetic spectra, and of checking the so-obtained results with first principle calculations such as DFT can serve as a template for treating other complex physical/chemical processes as well.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, H. N.; McLean, W.; Maxwell, R. S.

    We investigated a silica-filled polydimethylsiloxane (PDMS) composite M9787 for potential outgassing in a vacuum/dry environment with the temperature programmed desorption/reaction method. The outgassing kinetics of 463 K vacuum heat-treated samples, vacuum heat-treated samples which were subsequently re-exposed to moisture, and untreated samples were extracted using the isoconversional and constrained iterative regression methods in a complementary fashion. Density functional theory (DFT) calculations of water interactions with a silica surface were also performed to provide insight into the structural motifs leading to the obtained kinetic parameters. Kinetic analysis/model revealed that no outgassing occurs from the vacuum heat-treated samples in subsequent vacuum/dry environment applications at room temperature (~300 K). Moreover, the main effect of re-exposure of the vacuum heat-treated samples to a glove box condition (~30 ppm by volume of H2O) for even a couple of days was the formation, on the silica surface fillers, of ~60 ppm by weight of physisorbed and loosely bonded moisture, which subsequently outgasses at room temperature in a vacuum/dry environment in a time span of 10 yr. However, without any vacuum heat treatment and even after 1 h of vacuum pump down, about 300 ppm by weight of H2O would be released from the PDMS in the next few hours. Thereafter the outgassing rate slows down substantially. Our presented methodology of using the isoconversional kinetic analysis results and some appropriate nature of the reaction as the constraints for more accurate iterative regression analysis/deconvolution of complex kinetic spectra, and of checking the so-obtained results with first principle calculations such as DFT can serve as a template for treating other complex physical/chemical processes as well.

  16. The moisture outgassing kinetics of a silica reinforced polydimethylsiloxane

    DOE PAGES

    Sharma, H. N.; McLean, W.; Maxwell, R. S.; ...

    2016-09-21

    We investigated a silica-filled polydimethylsiloxane (PDMS) composite M9787 for potential outgassing in a vacuum/dry environment with the temperature programmed desorption/reaction method. The outgassing kinetics of 463 K vacuum heat-treated samples, vacuum heat-treated samples which were subsequently re-exposed to moisture, and untreated samples were extracted using the isoconversional and constrained iterative regression methods in a complementary fashion. Density functional theory (DFT) calculations of water interactions with a silica surface were also performed to provide insight into the structural motifs leading to the obtained kinetic parameters. Kinetic analysis/model revealed that no outgassing occurs from the vacuum heat-treated samples in subsequent vacuum/dry environment applications at room temperature (~300 K). Moreover, the main effect of re-exposure of the vacuum heat-treated samples to a glove box condition (~30 ppm by volume of H2O) for even a couple of days was the formation, on the silica surface fillers, of ~60 ppm by weight of physisorbed and loosely bonded moisture, which subsequently outgasses at room temperature in a vacuum/dry environment in a time span of 10 yr. However, without any vacuum heat treatment and even after 1 h of vacuum pump down, about 300 ppm by weight of H2O would be released from the PDMS in the next few hours. Thereafter the outgassing rate slows down substantially. Our presented methodology of using the isoconversional kinetic analysis results and some appropriate nature of the reaction as the constraints for more accurate iterative regression analysis/deconvolution of complex kinetic spectra, and of checking the so-obtained results with first principle calculations such as DFT can serve as a template for treating other complex physical/chemical processes as well.

  17. Application of Four-Point Newton-EGSOR iteration for the numerical solution of 2D Porous Medium Equations

    NASA Astrophysics Data System (ADS)

    Chew, J. V. L.; Sulaiman, J.

    2017-09-01

    Partial differential equations that are used in describing nonlinear heat and mass transfer phenomena are difficult to solve. For cases where the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve a particular partial differential equation. In terms of numerical procedure, a particular method can be considered efficient if it can give an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized by using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large and sparse nonlinear system. By using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving the 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For the comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations needed to reach converged solutions, the computation time, and the maximum absolute errors produced by the methods.
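
    The Newton-plus-inner-iteration structure described above can be sketched on a small diagonally dominant system: an outer Newton loop linearizes the nonlinear equations, and a plain Gauss-Seidel sweep (standing in for the four-point EGSOR scheme, which is not reproduced here) solves each linear correction. The example system below is an assumption chosen so that Gauss-Seidel converges.

```python
import numpy as np

def gauss_seidel(A, b, x0, sweeps=50):
    # Inner linear solver: plain Gauss-Seidel sweeps over the Newton correction.
    x = x0.copy()
    for _ in range(sweeps):
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

def F(x):   # small nonlinear system with a diagonally dominant Jacobian
    return np.array([4 * x[0] - np.cos(x[1]) - 1.0,
                     4 * x[1] - np.sin(x[0]) - 2.0])

def J(x):
    return np.array([[4.0, np.sin(x[1])],
                     [-np.cos(x[0]), 4.0]])

x = np.zeros(2)
for k in range(20):                                   # outer Newton iterations
    delta = gauss_seidel(J(x), -F(x), np.zeros(2))    # inner Gauss-Seidel solve
    x = x + delta
    if np.linalg.norm(F(x)) < 1e-10:
        break
print(x, np.linalg.norm(F(x)))
```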

  18. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    Flow shop scheduling with time lags is a practical scheduling problem and has attracted many studies. The permutation problem (PFSP with time lags) has received considerable attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified on well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within nearly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of a traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
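
    A basic iterated greedy loop for the permutation flow shop (destruction of a few jobs followed by greedy best-position reinsertion) is sketched below as an editorial illustration; time-lag constraints and the paper's acceptance criterion are omitted, and the processing-time data are made up.

```python
import random

def makespan(perm, proc):
    """Completion time of the last job on the last machine for a permutation
    flow shop; proc[j][m] is the processing time of job j on machine m."""
    m = len(proc[0])
    finish = [0.0] * m
    for j in perm:
        for k in range(m):
            finish[k] = max(finish[k], finish[k - 1] if k else 0.0) + proc[j][k]
    return finish[-1]

def iterated_greedy(proc, d=2, iters=200, seed=0):
    random.seed(seed)
    perm = list(range(len(proc)))
    best = makespan(perm, proc)
    for _ in range(iters):
        removed = random.sample(perm, d)                 # destruction phase
        partial = [j for j in perm if j not in removed]
        for j in removed:                                # greedy reconstruction
            pos = min(range(len(partial) + 1),
                      key=lambda p: makespan(partial[:p] + [j] + partial[p:], proc))
            partial.insert(pos, j)
        cand = makespan(partial, proc)
        if cand <= best:                                 # simple acceptance rule
            perm, best = partial, cand
    return perm, best

proc = [[3, 6, 2], [5, 4, 3], [2, 7, 5], [6, 2, 4]]      # 4 jobs x 3 machines (toy)
print(iterated_greedy(proc))
```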

  19. Defense Advanced Research Projects Agency (DARPA) Network Archive (DNA)

    DTIC Science & Technology

    2008-12-01

    therefore decided for an iterative development process even within such a small project. The first iteration consisted of conducting specific...

  20. Development of the Nuclear-Electronic Orbital Approach and Applications to Ionic Liquids and Tunneling Processes

    DTIC Science & Technology

    2010-02-24

    electronic Schrodinger equation. In previous grant cycles, we implemented the NEO approach at the Hartree-Fock (NEO-HF),13 configuration interaction...electronic and nuclear molecular orbitals. The resulting electronic and nuclear Hartree-Fock-Roothaan equations are solved iteratively until self...directly into the standard Hartree-Fock-Roothaan equations, which are solved iteratively to self-consistency. The density matrix representation

  1. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D to 2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data, yielding the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.

  2. Mechanical and electrical performance characterization of partial mock-up of the ITER PF6 coil tail

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Song, Y.; Wu, H.; Zhang, M.; Xie, Y.; Hu, B.; Liu, F.; Shen, G.; Wu, W.; Lu, K.; Wei, J.; Bilbao, M.; Peñate, J.; Readman, P.; Sborchia, C.; Valente, P.; Smith, K.

    2017-12-01

    International Thermonuclear Experimental Reactor (ITER) is a full superconducting coil tokamak. The tail is an important component of Poloidal Field (PF) coil, of which the main functions are to provide the electrical isolation and transfer the longitudinal load from the last turn to the last-but-one turn. The paper focuses on an optimized mechanical structure of PF6 coil tail, which is made up of two main parts. One was welded to the last turn and the other was welded to the last-but-one turn. Both of them were connected by the mechanical coupling. The electrical isolation between the two parts was maintained by a strap made of insulating composite. In addition, as the PF6 coil is operated under the cyclic electromagnetic load during the tokamak operation, the fatigue property of the tail should be assessed and qualified at low temperature. Moreover, taking into consideration the complexity of the insulation winding process which is performed in a confined space, the wrapping process of the insulation needs to be established. Meanwhile, the high voltage (HV) tests of the tail insulation, including the direct current (DC) and alternating current (AC) tests, need to be assessed before and after the fatigue test. In this paper, a fully bonded PF6 coil tail partial mock-up (not including the weld of the tail to the last conductor turn) was designed and manufactured by simulating the actual manufacturing processes. In addition, the fatigue tests on the sample were carried out at 77 K, and the results showed the sample had good and stable fatigue properties at cryogenic temperature. The HV tests before and after the fatigue test, also including the final 30 kV breakdown DC test after the fatigue test, were carried out. The test results satisfied the requirements of ITER and were discussed in depth. Finally, the sample was destructively inspected to validate the integrity of the insulation by mechanical cross sectioning, and no voids and cracks were observed. Therefore it can be verified from the test results that the designed PF6 coil tail has good comprehensive properties, which can be applied to the formal production of the PF6 coil.

  3. RAMI Analysis for Designing and Optimizing Tokamak Cooling Water System (TCWS) for the ITER's Fusion Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrada, Juan J; Reiersen, Wayne T

    U.S.-ITER is responsible for the design, engineering, and procurement of the Tokamak Cooling Water System (TCWS). TCWS is designed to provide cooling and baking for client systems that include the first wall/blanket, vacuum vessel, divertor, and neutral beam injector. Additional operations that support these primary functions include chemical control of water provided to client systems, draining and drying for maintenance, and leak detection/localization. TCWS interfaces with 27 systems including the secondary cooling system, which rejects this heat to the environment. TCWS transfers heat generated in the Tokamak during nominal pulsed operation - 850 MW at up to 150 °C and 4.2 MPa water pressure. Impurities are diffused from in-vessel components and the vacuum vessel by water baking at 200-240 °C at up to 4.4 MPa. TCWS is complex because it serves vital functions for four primary clients whose performance is critical to ITER's success and interfaces with more than 20 additional ITER systems. Conceptual design of this one-of-a-kind cooling system has been completed; however, several issues remain that must be resolved before moving to the next stage of the design process. The 2004 baseline design indicated cooling loops that have no fault tolerance for component failures. During plasma operation, each cooling loop relies on a single pump, a single pressurizer, and one heat exchanger. Consequently, failure of any of these would render TCWS inoperable, resulting in plasma shutdown. The application of reliability, availability, maintainability, and inspectability (RAMI) tools during the different stages of TCWS design is crucial for optimization purposes and for maintaining compliance with project requirements. RAMI analysis will indicate appropriate equipment redundancy that provides graceful degradation in the event of an equipment failure. This analysis helps demonstrate that using proven, commercially available equipment is better than using custom-designed equipment with no field experience and lowers specific costs while providing higher reliability. This paper presents a brief description of the TCWS conceptual design and the application of RAMI tools to optimize the design at different stages during the project.
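
    The benefit of redundancy that a RAMI analysis quantifies can be illustrated with elementary availability arithmetic. The failure and repair figures below are invented, not ITER TCWS values; the sketch only contrasts a single-train loop with one carrying a redundant pump.

```python
# Illustrative RAMI-style availability comparison (all numbers hypothetical).
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

pump = availability(8000, 72)
pressurizer = availability(20000, 48)
heat_exchanger = availability(50000, 120)

# Series system: any single component failure stops the cooling loop
single_train = pump * pressurizer * heat_exchanger

# 1-out-of-2 redundant pumps: the loop fails only if both pumps are down
redundant_pumps = (1 - (1 - pump) ** 2) * pressurizer * heat_exchanger

print(f"single-train availability: {single_train:.4f}")
print(f"with redundant pump:       {redundant_pumps:.4f}")
```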

  4. Application of a repetitive process setting to design of monotonically convergent iterative learning control

    NASA Astrophysics Data System (ADS)

    Boski, Marcin; Paszke, Wojciech

    2015-11-01

    This paper deals with the problem of designing an iterative learning control algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to a limited-frequency-range design specification. The new design procedure is formulated in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
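
    A minimal sketch of the trial-to-trial mechanism such controllers act on is given below. It uses a fixed hand-picked learning gain on a scalar plant, not the LMI-synthesised feedback/feedforward pair of the paper; plant and gain values are illustrative and chosen so the error contracts from trial to trial.

```python
# Minimal iterative learning control (ILC) sketch: u_{k+1}(t) = u_k(t) + L * e_k(t).
import numpy as np

A, B, C = 0.3, 0.5, 1.0                 # scalar discrete-time plant x(t+1) = A x + B u
N, trials, L = 50, 20, 1.0              # trial length, number of trials, learning gain
ref = np.sin(np.linspace(0, 2 * np.pi, N))

u = np.zeros(N)
for k in range(trials):
    x, y = 0.0, np.zeros(N)
    for t in range(N):                  # run one trial of the plant
        x = A * x + B * u[t]
        y[t] = C * x
    e = ref - y
    u = u + L * e                       # ILC update applied between trials
    print(f"trial {k:2d}: ||e|| = {np.linalg.norm(e):.4f}")
```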

  5. A framework for stakeholder identification in concept mapping and health research: a novel process and its application to older adult mobility and the built environment.

    PubMed

    Schiller, Claire; Winters, Meghan; Hanson, Heather M; Ashe, Maureen C

    2013-05-02

    Stakeholders, as originally defined in theory, are groups or individuals who can affect or are affected by an issue. Stakeholders are an important source of information in health research, providing critical perspectives and new insights on the complex determinants of health. The intersection of built and social environments with older adult mobility is an area of research that is fundamentally interdisciplinary and would benefit from a better understanding of stakeholder perspectives. Although a rich body of literature surrounds stakeholder theory, a systematic process for identifying health stakeholders in practice does not exist. This paper presents a framework of stakeholders related to older adult mobility and the built environment, and further outlines a process for systematically identifying stakeholders that can be applied in other health contexts, with a particular emphasis on concept mapping research. Informed by gaps in the relevant literature, we developed a framework for identifying and categorizing health stakeholders. The framework was created through a novel iterative process of stakeholder identification and categorization. The development entailed a literature search to identify stakeholder categories, representation of identified stakeholders in a visual chart, and correspondence with expert informants to obtain practice-based insight. The three-step, iterative creation process progressed from identifying stakeholder categories, to identifying specific stakeholder groups and soliciting feedback from expert informants. The result was a stakeholder framework comprised of seven categories with detailed sub-groups. The main categories of stakeholders were: (1) the Public, (2) Policy makers and governments, (3) Research community, (4) Practitioners and professionals, (5) Health and social service providers, (6) Civil society organizations, and (7) Private business. Stakeholders related to older adult mobility and the built environment span many disciplines and realms of practice. Researchers studying this issue may use the detailed stakeholder framework process we present to identify participants for future projects. Health researchers pursuing stakeholder-based projects in other contexts are encouraged to incorporate this process of stakeholder identification and categorization to ensure systematic consideration of relevant perspectives in their work.

  6. A framework for stakeholder identification in concept mapping and health research: a novel process and its application to older adult mobility and the built environment

    PubMed Central

    2013-01-01

    Background Stakeholders, as originally defined in theory, are groups or individuals who can affect or are affected by an issue. Stakeholders are an important source of information in health research, providing critical perspectives and new insights on the complex determinants of health. The intersection of built and social environments with older adult mobility is an area of research that is fundamentally interdisciplinary and would benefit from a better understanding of stakeholder perspectives. Although a rich body of literature surrounds stakeholder theory, a systematic process for identifying health stakeholders in practice does not exist. This paper presents a framework of stakeholders related to older adult mobility and the built environment, and further outlines a process for systematically identifying stakeholders that can be applied in other health contexts, with a particular emphasis on concept mapping research. Methods Informed by gaps in the relevant literature, we developed a framework for identifying and categorizing health stakeholders. The framework was created through a novel iterative process of stakeholder identification and categorization. The development entailed a literature search to identify stakeholder categories, representation of identified stakeholders in a visual chart, and correspondence with expert informants to obtain practice-based insight. Results The three-step, iterative creation process progressed from identifying stakeholder categories, to identifying specific stakeholder groups and soliciting feedback from expert informants. The result was a stakeholder framework comprised of seven categories with detailed sub-groups. The main categories of stakeholders were: (1) the Public, (2) Policy makers and governments, (3) Research community, (4) Practitioners and professionals, (5) Health and social service providers, (6) Civil society organizations, and (7) Private business. Conclusions Stakeholders related to older adult mobility and the built environment span many disciplines and realms of practice. Researchers studying this issue may use the detailed stakeholder framework process we present to identify participants for future projects. Health researchers pursuing stakeholder-based projects in other contexts are encouraged to incorporate this process of stakeholder identification and categorization to ensure systematic consideration of relevant perspectives in their work. PMID:23639179

  7. Inside the Black Box: The Case Review Process of an Elder Abuse Forensic Center.

    PubMed

    Navarro, Adria E; Wysong, Julia; DeLiema, Marguerite; Schwartz, Elizabeth L; Nichol, Michael B; Wilber, Kathleen H

    2016-08-01

    Preliminary evidence suggests that elder abuse forensic centers improve victim welfare by increasing necessary prosecutions and conservatorships and reducing the recurrence of protective service referrals. Center team members gather information and make decisions designed to protect clients and their assets, yet the collective process of how these case reviews are conducted remains unexamined. The purpose of this study is to present a model describing the interprofessional approach of investigation and response to financial exploitation (FE), a frequent and complex type of abuse of vulnerable adults. To develop an understanding of the case review process at the Los Angeles County Elder Abuse Forensic Center (Center), a quasi-Delphi field study approach was used involving direct observations of meetings, surveying team members, and review from the Center's Advisory Council. The goal of this iterative analysis was to understand the case review process for suspected FE in Los Angeles County. A process map of key forensic center elements was developed that may be useful for replication in other settings. The process map includes: (a) multidisciplinary data collection, (b) key decisions for consideration, and (c) strategic actions utilized by an interprofessional team focused on elder justice. Elder justice relies on a complex system of providers. Elder abuse forensic centers provide a process designed to efficiently address client safety, client welfare, and protection of assets. Study findings provide a process map that may help other communities replicate an established multidisciplinary team, one experienced with justice system outcomes designed to protect FE victims. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. From synthesis to function via iterative assembly of N-methyliminodiacetic acid boronate building blocks.

    PubMed

    Li, Junqi; Grillo, Anthony S; Burke, Martin D

    2015-08-18

    The study and optimization of small molecule function is often impeded by the time-intensive and specialist-dependent process that is typically used to make such compounds. In contrast, general and automated platforms have been developed for making peptides, oligonucleotides, and increasingly oligosaccharides, where synthesis is simplified to iterative applications of the same reactions. Inspired by the way natural products are biosynthesized via the iterative assembly of a defined set of building blocks, we developed a platform for small molecule synthesis involving the iterative coupling of haloboronic acids protected as the corresponding N-methyliminodiacetic acid (MIDA) boronates. Here we summarize our efforts thus far to develop this platform into a generalized and automated approach for small molecule synthesis. We and others have employed this approach to access many polyene-based compounds, including the polyene motifs found in >75% of all polyene natural products. This platform further allowed us to derivatize amphotericin B, the powerful and resistance-evasive but also highly toxic last line of defense in treating systemic fungal infections, and thereby understand its mechanism of action. This synthesis-enabled mechanistic understanding has led us to develop less toxic derivatives currently under evaluation as improved antifungal agents. To access more Csp(3)-containing small molecules, we gained a stereocontrolled entry into chiral, non-racemic α-boryl aldehydes through the discovery of a chiral derivative of MIDA. These α-boryl aldehydes are versatile intermediates for the synthesis of many Csp(3) boronate building blocks that are otherwise difficult to access. In addition, we demonstrated the utility of these types of building blocks in accessing pharmaceutically relevant targets via an iterative Csp(3) cross-coupling cycle. We have further expanded the scope of the platform to include stereochemically complex macrocyclic and polycyclic molecules using a linear-to-cyclized strategy, in which Csp(3) boronate building blocks are iteratively assembled into linear precursors that are then cyclized into the cyclic frameworks found in many natural products and natural product-like structures. Enabled by the serendipitous discovery of a catch-and-release protocol for generally purifying MIDA boronate intermediates, the platform has been automated. The synthesis of 14 distinct classes of small molecules, including pharmaceuticals, materials components, and polycyclic natural products, has been achieved using this new synthesis machine. It is anticipated that the scope of small molecules accessible by this platform will continue to expand via further developments in building block synthesis, Csp(3) cross-coupling methodologies, and cyclization strategies. Achieving these goals will enable the more generalized synthesis of small molecules and thereby help shift the rate-limiting step in small molecule science from synthesis to function.

  9. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  10. Process control strategy for ITER central solenoid operation

    NASA Astrophysics Data System (ADS)

    Maekawa, R.; Takami, S.; Iwamoto, A.; Chang, H.-S.; Forgeas, A.; Chalifour, M.

    2016-12-01

    ITER Central Solenoid (CS) pulse operation induces significant flow disturbance in the forced-flow Supercritical Helium (SHe) cooling circuit, which could primarily impact the operation of the cold circulator (SHe centrifugal pump) in the Auxiliary Cold Box (ACB). Numerical studies using Venecia®, SUPERMAGNET and 4C have identified reverse flow at the CS module inlet due to the substantial thermal energy deposition at the inner-most winding. To assess the reliable operation of ACB-CS (dedicated ACB for CS), process analyses have been conducted with a dynamic process simulation model developed with the Cryogenic Process REal-time SimulaTor (C-PREST). To implement process control of the hydrodynamic instability, several strategies have been applied and their feasibility evaluated. The paper discusses a control strategy to protect the centrifugal-type cold circulator/compressor operation and its impact on the CS cooling.

  11. Improving performances of suboptimal greedy iterative biclustering heuristics via localization.

    PubMed

    Erten, Cesim; Sözdinler, Melih

    2010-10-15

    Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the random extraction method based on localization (REAL) performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and the results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
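
    The effect of a localization-style pre-processing step can be illustrated with a much simpler stand-in: reordering rows and columns by hierarchical clustering so that correlated entries end up in a compact block. This is not the authors' graph-theoretical localization algorithm; it only shows the kind of reordering a downstream greedy heuristic benefits from.

```python
# Illustrative row/column reordering as a stand-in for localization pre-processing.
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(1)
data = rng.normal(size=(60, 40))
data[10:20, 5:15] += 3.0                      # hide a correlated submatrix (a bicluster)

row_order = leaves_list(linkage(data, method="average"))
col_order = leaves_list(linkage(data.T, method="average"))
localized = data[np.ix_(row_order, col_order)]
# After reordering, the correlated entries sit in a small local neighborhood of
# `localized`, so a greedy heuristic (or random extraction, as in REAL) can find
# them by scanning local windows instead of the whole matrix.
```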

  12. Low-temperature tensile strength of the ITER-TF model coil insulation system after reactor irradiation

    NASA Astrophysics Data System (ADS)

    Bittner-Rohrhofer, K.; Humer, K.; Weber, H. W.

    The windings of the superconducting magnet coils for the ITER-FEAT fusion device are affected by high mechanical stresses at cryogenic temperatures and by a radiation environment, which impose certain constraints especially on the insulating materials. A glass fiber reinforced plastic (GFRP) laminate, which consists of Kapton/R-glass-fiber reinforcement tapes, vacuum-impregnated in a DGEBA epoxy system, was used for the European toroidal field model coil turn insulation of ITER. In order to assess its mechanical properties under the actual operating conditions of ITER-FEAT, cryogenic (77 K) static tensile tests and tension-tension fatigue measurements were done before and after irradiation to a fast neutron fluence of 1×10²² m⁻² (E > 0.1 MeV), i.e. the ITER-FEAT design fluence level. We find that the mechanical strength and the fracture behavior of this GFRP are strongly influenced by the winding direction of the tape and by the radiation induced delamination process. In addition, the composite swells by 3%, forming bubbles inside the laminate, and loses weight (1.4%) at the design fluence.

  13. A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.

    2015-07-01

    Radiographic images, like any experimentally acquired ones, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of a systematic character can be reduced by some kind of treatment such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces the cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the first derivative of G as the processing progresses and stopping it automatically when this derivative, within the data dispersion, reaches zero. The radiograph of a specially chosen object acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
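
    The idea can be sketched in a few lines: run Richardson-Lucy iterations, track the global histogram difference G between consecutive iterates, and take the iteration at which its first derivative is roughly zero as the stopping point. This is not the authors' Fortran code; the image, PSF and the near-zero threshold are synthetic and illustrative.

```python
# Richardson-Lucy deconvolution with a histogram-difference stopping suggestion.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[20:40, 25:35] = 1.0
psf = np.zeros((9, 9)); psf[4, 4] = 1.0
psf = gaussian_filter(psf, 1.5); psf /= psf.sum()
blurred = convolve(truth, psf) + 0.01 * rng.random(truth.shape)

def hist(img):
    return np.histogram(img, bins=64, range=(0, 1.2))[0]

x = np.full_like(blurred, blurred.mean())       # flat initial estimate
Gs, prev_hist = [], hist(x)
for it in range(60):
    ratio = blurred / np.maximum(convolve(x, psf), 1e-12)
    x *= convolve(ratio, psf[::-1, ::-1])       # Richardson-Lucy multiplicative update
    h = hist(x)
    Gs.append(np.abs(h - prev_hist).sum())      # global histogram difference G
    prev_hist = h

dG = np.abs(np.diff(Gs))                        # first derivative of G over iterations
stop_at = int(np.argmax(dG < 0.01 * Gs[0])) + 2 # first near-zero derivative (illustrative threshold)
print("suggested stopping iteration:", stop_at)
```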

  14. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.

  15. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to address. In this paper, the shortcomings of the work transformation matrix (WTM) model are discussed, and a tearing approach as well as an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem solving. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonability and effectiveness. PMID:25431584
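
    The basic WTM picture the paper builds on can be shown numerically: the work remaining after each design iteration is propagated by the task-coupling matrix, and the total rework has a closed form when the coupling is weak enough. The coupling values below are invented for illustration.

```python
# Minimal work-transformation-matrix (WTM) iteration sketch (illustrative couplings).
import numpy as np

W = np.array([[0.0, 0.4, 0.2],     # fraction of rework task i receives from task j
              [0.3, 0.0, 0.1],
              [0.2, 0.3, 0.0]])
u0 = np.ones(3)                    # initial work vector (one unit of work per task)

# Design iteration: u_{k+1} = W u_k, total work = sum_k u_k
u, total = u0.copy(), np.zeros(3)
for _ in range(100):
    total += u
    u = W @ u

# Closed form, valid when the spectral radius of W is below 1: (I - W)^{-1} u0
closed = np.linalg.solve(np.eye(3) - W, u0)
print(np.round(total, 6), np.round(closed, 6))   # the two agree, confirming convergence
```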

  16. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High throughput, low latency and reliable communication have always been hot topics for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages: first, a differential characteristics function is derived from the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is used to find potentially wrongly received bits; finally, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. In addition, the BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  17. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
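
    The generic iterative-filtering idea can be sketched with a plain moving average standing in for the LSEK low-pass kernel: the local mean is repeatedly subtracted so that the fast oscillation is progressively isolated from the slow trend. The signal, mask length and number of inner iterations are arbitrary, and no claim is made that this reproduces the paper's scheme.

```python
# Generic iterative-filtering sketch with a moving-average stand-in for the low-pass kernel.
import numpy as np
from scipy.ndimage import uniform_filter1d

t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)

component = signal.copy()
for _ in range(5):                                     # inner "sifting" iterations
    moving_mean = uniform_filter1d(component, size=51, mode="reflect")
    component = component - moving_mean                # x <- (I - L) x

fast_part = component                                  # roughly the 40 Hz oscillation
slow_part = signal - component                         # roughly the 3 Hz trend
```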

  18. Iterative Otsu's method for OCT improved delineation in the aorta wall

    NASA Astrophysics Data System (ADS)

    Alonso, Daniel; Real, Eusebio; Val-Bernal, José F.; Revuelta, José M.; Pontón, Alejandro; Calvo Díez, Marta; Mayorga, Marta; López-Higuera, José M.; Conde, Olga M.

    2015-07-01

    Degradation of the human ascending thoracic aorta has been visualized with Optical Coherence Tomography (OCT). OCT images of the vessel wall exhibit structural degradation in the media layer of the artery, this disorder being the final trigger of the pathology. The degeneration in the vessel wall appears as low-reflectivity areas due to the different optical properties of acidic polysaccharides and mucopolysaccharides, in contrast with the typical ordered structure of smooth muscle cells, elastin and collagen fibers. An OCT indicator of wall degradation can be generated from the spatial quantification of the extent of degraded areas, in a similar way to conventional histopathology. This proposed OCT marker could in the future offer a real-time clinical perception of the vessel status to help cardiovascular surgeons in vessel repair interventions. However, the delineation of degraded areas on the B-scan image from OCT is sometimes difficult due to the presence of speckle noise, variable signal-to-noise ratio (SNR) conditions in the measurement process, etc. Degraded areas can be delimited by basic thresholding techniques taking advantage of the evidence of disorder in B-scan images, but this delineation is not optimal in the aorta samples and requires complex additional processing stages. This work proposes an optimized delineation of degraded areas within the aorta wall, robust to noisy environments, based on the iterative application of Otsu's thresholding method. The results improve the delineation of wall anomalies compared with a single application of the algorithm. These achievements could also be transferred to other clinical scenarios: carotid arteries, aorto-iliac or ilio-femoral sections, intracranial vessels, etc.
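
    A two-pass version of the iterative idea can be sketched on a synthetic B-scan-like image: a first Otsu threshold separates the bright wall from the darker pixels, and a second Otsu pass applied only inside that dark class separates degraded tissue from the darkest background. Intensities, sizes and the number of passes are arbitrary, and the real OCT pipeline in the paper involves further processing.

```python
# Illustrative iterative (two-pass) Otsu thresholding on a synthetic image.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)
image = rng.normal(0.7, 0.08, (200, 200))                   # ordered wall tissue: bright
image[80:140, 60:160] = rng.normal(0.4, 0.08, (60, 100))    # degraded media: mid reflectivity
image[180:, :] = rng.normal(0.1, 0.05, (20, 200))           # lumen/background: dark

t1 = threshold_otsu(image)             # first pass: bright wall vs darker pixels
dark = image < t1
t2 = threshold_otsu(image[dark])       # second pass: re-apply Otsu inside the dark class
degraded = dark & (image >= t2)        # mid-reflectivity pixels = candidate degraded areas

print(f"degraded-area fraction: {degraded.mean():.3f}")     # roughly 0.15 for this synthetic image
```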

  19. 3D shape reconstruction of specular surfaces by using phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan

    2016-10-01

    The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry always have complex and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce in this paper an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry. The Modal estimation is first used to derive coarse height information of the measured surface as the initial iteration values. Then the real shape is recovered utilizing a modified Zonal wave-front reconstruction algorithm. By combining the advantages of Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and dramatically rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows a consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulation and experimental measurements demonstrate the validity and efficiency of the proposed method. According to the experimental results, the computation time decreases by approximately 74.92% compared with the Zonal estimation, and the surface error is about 6.68 μm for a reconstruction of 391×529 points of an experimentally measured spherical mirror. In general, this method converges quickly with high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.
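
    The Zonal part of such a scheme amounts to integrating a measured gradient field into heights by iterating on the discrete Poisson equation; a minimal successive over-relaxation (SOR) sketch is shown below. The grid size, relaxation factor and tolerance are illustrative, not the paper's values, and a coarse guess plays the role of the Modal initialization.

```python
# Minimal Zonal-style height-from-gradient integration with SOR (illustrative values).
import numpy as np

n, omega = 32, 1.8
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
z_true = np.sin(np.pi * x) * np.sin(np.pi * y)        # surface we pretend to measure
gy, gx = np.gradient(z_true, h)                        # "measured" gradient field

# Right-hand side of the Poisson equation: divergence of the measured gradients
div = np.gradient(gx, h, axis=1) + np.gradient(gy, h, axis=0)

z = np.zeros_like(z_true)                              # coarse (Modal-like) initial guess
for sweep in range(300):
    max_update = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new = (z[i+1, j] + z[i-1, j] + z[i, j+1] + z[i, j-1] - h * h * div[i, j]) / 4.0
            delta = omega * (new - z[i, j])
            z[i, j] += delta
            max_update = max(max_update, abs(delta))
    if max_update < 1e-6:                              # converged
        break
```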

  20. Developing a Decision Support System for Tobacco Use Counseling Using Primary Care Physicians

    PubMed Central

    Marcy, Theodore W.; Kaplan, Bonnie; Connolly, Scott W.; Michel, George; Shiffman, Richard N.; Flynn, Brian S.

    2009-01-01

    Background Clinical decision support systems (CDSS) have the potential to improve adherence to guidelines, but only if they are designed to work in the complex environment of ambulatory clinics as otherwise physicians may not use them. Objective To gain input from primary care physicians in designing a CDSS for smoking cessation to ensure that the design is appropriate to a clinical environment before attempts to test this CDSS in a clinical trial. This approach is of general interest to those designing similar systems. Design and Approach We employed an iterative ethnographic process that used multiple evaluation methods to understand physician preferences and workflow integration. Using results from our prior survey of physicians and clinic managers, we developed a prototype CDSS, validated content and design with an expert panel, and then subjected it to usability testing by physicians, followed by iterative design changes based on their feedback. We then performed clinical testing with individual patients, and conducted field tests of the CDSS in two primary care clinics during which four physicians used it for routine patient visits. Results The CDSS prototype was substantially modified through these cycles of usability and clinical testing, including removing a potentially fatal design flaw. During field tests in primary care clinics, physicians incorporated the final CDSS prototype into their workflow, and used it to assist in smoking cessation interventions up to eight times daily. Conclusions A multi-method evaluation process utilizing primary care physicians proved useful for developing a CDSS that was acceptable to physicians and patients, and feasible to use in their clinical environment. PMID:18713526

  1. Managing Multiplicity: Conceptualizing Physician Cognition in Multipatient Environments.

    PubMed

    Chan, Teresa M; Mercuri, Mathew; Van Dewark, Kenneth; Sherbino, Jonathan; Schwartz, Alan; Norman, Geoff; Lineberry, Matthew

    2018-05-01

    Emergency physicians (EPs) regularly manage multiple patients simultaneously, often making time-sensitive decisions around priorities for multiple patients. Few studies have explored physician cognition in multipatient scenarios. The authors sought to develop a conceptual framework to describe how EPs think in busy, multipatient environments. From July 2014 to May 2015, a qualitative study was conducted at McMaster University, using a think-aloud protocol to examine how 10 attending EPs and 10 junior residents made decisions in multipatient environments. Participants engaged in the think-aloud exercise for five different simulated multipatient scenarios. Transcripts from recorded interviews were analyzed inductively, with an iterative process involving two independent coders, and compared between attendings and residents. The attending EPs and junior residents used similar processes to prioritize patients in these multipatient scenarios. The think-aloud processes demonstrated a similar process used by almost all participants. The cognitive task of patient prioritization consisted of three components: a brief overview of the entire cohort of patients to determine a general strategy; an individual chart review, whereby the participant created a functional patient story from information available in a file (i.e., vitals, brief clinical history); and creation of a relative priority list. Compared with residents, the attendings were better able to construct deeper and more complex patient stories. The authors propose a conceptual framework for how EPs prioritize care for multiple patients in complex environments. This study may be useful to teachers who train physicians to function more efficiently in busy clinical environments.

  2. Swarmic autopoiesis and computational creativity

    NASA Astrophysics Data System (ADS)

    al-Rifaie, Mohammad Majid; Leymarie, Frédéric Fol; Latham, William; Bishop, Mark

    2017-10-01

    In this paper two swarm intelligence algorithms are used, the first leading the "attention" of the swarm and the second responsible for the tracing mechanism. The attention mechanism is coordinated by agents of Stochastic Diffusion Search where they selectively attend to areas of a digital canvas (with line drawings) which contain (sharper) corners. Once the swarm's attention is drawn to the line of interest with a sharp corner, the corresponding line segment is fed into the tracing algorithm, Dispersive Flies Optimisation, which "consumes" the input in order to generate a "swarmic sketch" of the input line. The sketching process is the result of the "flies" leaving traces of their movements on the digital canvas which are then revisited repeatedly in an attempt to re-sketch the traces they left. This cyclic process is then introduced in the context of autopoiesis, where the philosophical aspects of the autopoietic artist are discussed. The autopoietic artist is described in two modalities: gluttonous and contented. In the Gluttonous Autopoietic Artist mode, by iteratively focussing on areas-of-rich-complexity, as the decoding process of the input sketch unfolds, it leads to a less complex structure which ultimately results in an empty canvas; therein reifying the artwork's "death". In the Contented Autopoietic Artist mode, by refocussing the autopoietic artist's reflections on "meaning" onto different constitutive elements, and modifying her reconstitution, different behaviours of autopoietic creativity can be induced and therefore, the autopoietic processes become less likely to fade away and more open-ended in their creative endeavour.

  3. A human-oriented framework for developing assistive service robots.

    PubMed

    McGinn, Conor; Cullinan, Michael F; Culleton, Mark; Kelly, Kevin

    2018-04-01

    Multipurpose robots that can perform a range of useful tasks have the potential to increase the quality of life for many people living with disabilities. Owing to factors such as high system complexity, as-yet unresolved research questions and current technology limitations, there is a need for effective strategies to coordinate the development process. Integrating established methodologies based on human-centred design and universal design, a framework was formulated to coordinate the robot design process over successive iterations of prototype development. An account is given of how the framework was practically applied to the problem of developing a personal service robot. Application of the framework led to the formation of several design goals which addressed a wide range of identified user needs. The resultant prototype solution, which consisted of several component elements, succeeded in demonstrating the performance stipulated by all of the proposed metrics. Application of the framework resulted in the development of a complex prototype that addressed many aspects of the functional and usability requirements of a personal service robot. Following the process led to several important insights which directly benefit the development of subsequent prototypes. Implications for Rehabilitation This research shows how universal design might be used to formulate usability requirements for assistive service robots. A framework is presented that guides the process of designing service robots in a human-centred way. Through practical application of the framework, a prototype robot system that addressed a range of identified user needs was developed.

  4. Self-consistent hybrid functionals for solids: a fully-automated implementation

    NASA Astrophysics Data System (ADS)

    Erba, A.

    2017-08-01

    A fully-automated algorithm for the determination of the system-specific optimal fraction of exact exchange in self-consistent hybrid functionals of density-functional theory is illustrated, as implemented in the public Crystal program. The exchange fraction of this new class of functionals is self-consistently updated in proportion to the inverse of the dielectric response of the system within an iterative procedure (Skone et al 2014 Phys. Rev. B 89 195112). Each iteration of the present scheme, in turn, implies convergence of a self-consistent-field (SCF) and a coupled-perturbed-Hartree-Fock/Kohn-Sham (CPHF/KS) procedure. The present implementation, besides improving the user-friendliness of self-consistent hybrids, exploits the unperturbed and electric-field perturbed density matrices from previous iterations as guesses for subsequent SCF and CPHF/KS iterations, which is documented to reduce the overall computational cost of the whole process by a factor of 2.
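
    The outer loop is a simple fixed-point iteration on the exchange fraction. The sketch below uses an invented model for the dielectric response purely to show the mechanics; in Crystal, each evaluation of the response implies a full SCF plus CPHF/KS calculation.

```python
# Toy fixed-point iteration for the self-consistent exact-exchange fraction
# (eps_model is hypothetical, not a real dielectric response).
def eps_model(alpha):
    return 12.0 - 6.0 * alpha          # invented dielectric constant as a function of alpha

alpha = 0.25                           # start from a PBE0-like exchange fraction
for it in range(50):
    new_alpha = 1.0 / eps_model(alpha) # update: alpha <- 1 / eps(alpha)
    if abs(new_alpha - alpha) < 1e-6:
        break
    alpha = new_alpha

print(f"self-consistent exchange fraction: {alpha:.4f} after {it} iterations")
```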

  5. Shock and vibration response of multistage structure

    NASA Technical Reports Server (NTRS)

    Lee, S. Y.; Liyeos, J. G.; Tang, S. S.

    1968-01-01

    A study of the shock and vibration response of a multistage structure employed analytical lumped-mass, continuous-beam, multimode, and matrix-iteration methods. The study examined the load paths, transmissibility, and attenuation properties along the longitudinal axis of a long, slender structure of increasing complexity.

  6. Choosing order of operations to accelerate strip structure analysis in parameter range

    NASA Astrophysics Data System (ADS)

    Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.

    2018-05-01

    The paper considers the use of iterative methods for solving the sequence of linear algebraic systems obtained in the quasistatic analysis of strip structures with the method of moments. Based on the analysis of four strip structures, the authors show that additional acceleration (by up to a factor of 2.21) of the iterative process can be obtained when solving the linear systems repeatedly, by choosing a proper order of operations and a suitable preconditioner. The obtained results can be used to accelerate the computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is quite simple and universal, and could be used not only for strip structure analysis but also for a wide range of computational problems.
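
    One common way to exploit the similarity of successive systems is to factorize a preconditioner once and reuse it across the whole parameter sweep. The sketch below illustrates that idea with random stand-in matrices rather than method-of-moments matrices, and an ILU preconditioner built from the first system.

```python
# Reusing one ILU preconditioner across a sequence of slightly different systems.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

n = 500
rng = np.random.default_rng(3)
base = 4.0 * sp.eye(n) + sp.random(n, n, density=0.01, random_state=3)

M = None
for step in range(5):
    A = (base + 0.01 * step * sp.eye(n)).tocsc()   # parameter-dependent system matrix
    b = rng.normal(size=n)
    if M is None:                                   # factorize the preconditioner only once
        ilu = spilu(A)
        M = LinearOperator((n, n), ilu.solve)
    x, info = gmres(A, b, M=M)
    print(f"step {step}: {'converged' if info == 0 else f'info={info}'}")
```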

  7. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    PubMed

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

    Soft X-ray microscopy has been developed for high resolution imaging of hydrated biological specimens due to the availability of the water window region. In particular, projection-type microscopy has advantages in its wide viewing area, easy zooming function and easy extensibility to computed tomography (CT). The blur of the projection image due to the Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iterative procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially for images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly on the images, we introduced two different parameters to evaluate noise effects for each situation in which the iteration procedure was not successful, and proposed an upper limit of the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that applying the new simulation and noise evaluation method is useful for image processing where the background noise cannot be ignored relative to the specimen images.

  8. To repair or not to repair: with FAVOR there is no question

    NASA Astrophysics Data System (ADS)

    Garetto, Anthony; Schulz, Kristian; Tabbone, Gilles; Himmelhaus, Michael; Scheruebl, Thomas

    2016-10-01

    In the mask shop the challenges associated with today's advanced technology nodes, both technical and economic, are becoming increasingly difficult. The constant drive to continue shrinking features means more masks per device, smaller manufacturing tolerances and more complexity along the manufacturing line with respect to the number of manufacturing steps required. Furthermore, the extremely competitive nature of the industry makes it critical for mask shops to optimize asset utilization and processes in order to maximize their competitive advantage and, in the end, profitability. Full maximization of profitability in such a complex and technologically sophisticated environment simply cannot be achieved without the use of smart automation. Smart automation allows productivity to be maximized through better asset utilization and process optimization. Reliability is improved through the minimization of manual interactions leading to fewer human error contributions and a more efficient manufacturing line. In addition to these improvements in productivity and reliability, extra value can be added through the collection and cross-verification of data from multiple sources which provides more information about our products and processes. When it comes to handling mask defects, for instance, the process consists largely of time consuming manual interactions that are error prone and often require quick decisions from operators and engineers who are under pressure. The handling of defects itself is a multiple step process consisting of several iterations of inspection, disposition, repair, review and cleaning steps. Smaller manufacturing tolerances and features with higher complexity contribute to a higher number of defects which must be handled as well as a higher level of complexity. In this paper the recent efforts undertaken by ZEISS to provide solutions which address these challenges, particularly those associated with defectivity, will be presented. From automation of aerial image analysis to the use of data driven decision making to predict and propose the optimized back end of line process flow, productivity and reliability improvements are targeted by smart automation. Additionally the generation of the ideal aerial image from the design and several repair enhancement features offer additional capabilities to improve the efficiency and yield associated with defect handling.

  9. Mobile sociology. 2000.

    PubMed

    Urry, John

    2010-01-01

    This article seeks to develop a manifesto for a sociology concerned with the diverse mobilities of peoples, objects, images, information, and wastes; and of the complex interdependencies between, and social consequences of, such diverse mobilities. A number of key concepts relevant for such a sociology are elaborated: 'gamekeeping', networks, fluids, scapes, flows, complexity and iteration. The article concludes by suggesting that a 'global civil society' might constitute the social base of a sociology of mobilities as we move into the twenty-first century.

  10. Complex symmetric matrices with strongly stable iterates

    NASA Technical Reports Server (NTRS)

    Tadmor, E.

    1985-01-01

    Complex-valued symmetric matrices are studied. A simple expression for the spectral norm of such matrices is obtained by utilizing a unitarily congruent invariant form. A sharp criterion is provided for identifying those symmetric matrices whose spectral norm does not exceed one: such strongly stable matrices are usually sought in connection with convergent difference approximations to partial differential equations. As an example, the derived criterion is applied to conclude the strong stability of a Lax-Wendroff scheme.
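
    The stability notion involved is easy to check numerically: the spectral norm (largest singular value) of the complex symmetric amplification matrix must not exceed one. The matrix below is an arbitrary complex symmetric example, not one taken from the paper.

```python
# Numerical check of the strong-stability condition ||A||_2 <= 1 for a complex symmetric matrix.
import numpy as np

A = np.array([[0.6 + 0.2j, 0.1 - 0.3j],
              [0.1 - 0.3j, 0.5 + 0.1j]])     # complex symmetric: A == A.T (not Hermitian)
assert np.allclose(A, A.T)

spectral_norm = np.linalg.norm(A, 2)          # largest singular value
verdict = "strongly stable" if spectral_norm <= 1 else "not strongly stable"
print(f"||A||_2 = {spectral_norm:.4f} -> {verdict}")
```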

  11. Developing "My Asthma Diary": a process exemplar of a patient-driven arts-based knowledge translation tool.

    PubMed

    Archibald, Mandy M; Hartling, Lisa; Ali, Samina; Caine, Vera; Scott, Shannon D

    2018-06-05

    Although it is well established that family-centered education is critical to managing childhood asthma, the information needs of parents of children with asthma are not being met through current educational approaches. Patient-driven educational materials that leverage the power of the storytelling and the arts show promise in communicating health information and assisting in illness self-management. However, such arts-based knowledge translation approaches are in their infancy, and little is known about how to develop such tools for parents. This paper reports on the development of "My Asthma Diary" - an innovative knowledge translation tool based on rigorous research evidence and tailored to parents' asthma-related information needs. We used a multi-stage process to develop four eBook prototypes of "My Asthma Diary." We conducted formative research on parents' information needs and identified high quality research evidence on childhood asthma, and used these data to inform the development of the asthma eBooks. We established interdisciplinary consulting teams with health researchers, practitioners, and artists to help iteratively create the knowledge translation tools. We describe the iterative, transdisciplinary process of developing asthma eBooks which incorporates: (I) parents' preferences and information needs on childhood asthma, (II) quality evidence on childhood asthma and its management, and (III) the engaging and informative powers of storytelling and visual art as methods to communicate complex health information to parents. We identified four dominant methodological and procedural challenges encountered during this process: (I) working within an inter-disciplinary team, (II) quantity and ordering of information, (III) creating a composite narrative, and (IV) balancing actual and ideal management scenarios. We describe a replicable and rigorous multi-staged approach to developing a patient-driven, creative knowledge translation tool, which can be adapted for use with different populations and contexts. We identified specific procedural and methodological challenges that others conducting comparable work should consider, particularly as creative, patient-driven knowledge translation strategies continue to emerge across health disciplines.

  12. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces

    NASA Astrophysics Data System (ADS)

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-01

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
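
    The truncation idea itself is simple to illustrate: the conjugate gradient recursion for a polarization-like linear system is stopped after a fixed, predetermined number of iterations, so every time step costs the same. The matrix below is a random symmetric positive-definite stand-in, not a real dipole interaction matrix, and the analytical-force machinery of the paper is not reproduced.

```python
# Truncated conjugate gradient (TCG) sketch: exactly `order` CG iterations, fixed cost.
import numpy as np

rng = np.random.default_rng(4)
n, order = 300, 3                              # "TCG-3": three CG iterations
B = rng.normal(size=(n, n))
T = B @ B.T / n + np.eye(n)                    # SPD stand-in for the polarization matrix
E = rng.normal(size=n)                         # "field" right-hand side

mu = np.zeros(n)                               # induced dipoles, zero initial guess
r = E - T @ mu
p = r.copy()
for _ in range(order):                         # fixed truncation order
    Tp = T @ p
    alpha = (r @ r) / (p @ Tp)
    mu += alpha * p
    r_new = r - alpha * Tp
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

print("residual norm after truncation:", np.linalg.norm(r))
```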

  13. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces.

    PubMed

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-28

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.

  14. Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Mani, Ramani

    2007-01-01

    A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.

  15. Stakeholder assessment of comparative effectiveness research needs for Medicaid populations.

    PubMed

    Fischer, Michael A; Allen-Coleman, Cora; Farrell, Stephen F; Schneeweiss, Sebastian

    2015-09-01

    Patients, providers and policy-makers rely heavily on comparative effectiveness research (CER) when making complex, real-world medical decisions. In particular, Medicaid providers and policy-makers face unique challenges in decision-making because their program cares for traditionally underserved populations, especially children, pregnant women and people with mental illness. Because these patient populations have generally been underrepresented in research discussions, CER questions for these groups may be understudied. To address this problem, the Agency for Healthcare Research and Quality commissioned our team to work with Medicaid Medical Directors and other stakeholders to identify relevant CER questions. Through an iterative process of topic identification and refinement, we developed relevant, feasible and actionable questions based on issues affecting Medicaid programs nationwide. We describe challenges and limitations and provide recommendations for future stakeholder engagement.

  16. Local orientational mobility in regular hyperbranched polymers.

    PubMed

    Dolgushev, Maxim; Markelov, Denis A; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.

  17. Video encryption using chaotic masks in joint transform correlator

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2015-03-01

    A real-time optical video encryption technique using a chaotic map has been reported. In the proposed technique, each frame of video is encrypted using two different chaotic random phase masks in the joint transform correlator architecture. The different chaotic random phase masks can be obtained either by using different iteration levels or by using different seed values of the chaotic map. The use of different chaotic random phase masks makes the decryption process very complex for an unauthorized person. Optical, as well as digital, methods can be used for video encryption but the decryption is possible only digitally. To further enhance the security of the system, the key parameters of the chaotic map are encoded using RSA (Rivest-Shamir-Adleman) public key encryption. Numerical simulations are carried out to validate the proposed technique.
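
    One simple way to realize a "chaotic random phase mask" of the kind described is sketched below under the assumption of a logistic map; the paper's specific map, the joint transform correlator architecture, and the RSA key handling are not reproduced, and the seed value and iteration count simply play the role of keys:

      # Generate a chaotic random phase mask from logistic-map iterates
      # x_{n+1} = r * x_n * (1 - x_n); the seed and the number of discarded
      # iterations act as secret key parameters.
      import numpy as np

      def chaotic_phase_mask(shape, seed=0.37, skip=1000, r=3.99):
          x = seed
          for _ in range(skip):                  # discard transients ("iteration level")
              x = r * x * (1.0 - x)
          vals = np.empty(int(np.prod(shape)))
          for k in range(vals.size):
              x = r * x * (1.0 - x)
              vals[k] = x
          return np.exp(2j * np.pi * vals.reshape(shape))

      # Two different masks from different seeds / iteration levels:
      mask1 = chaotic_phase_mask((256, 256), seed=0.37, skip=1000)
      mask2 = chaotic_phase_mask((256, 256), seed=0.52, skip=2500)

      frame = np.random.rand(256, 256)           # stand-in for one video frame
      input_plane = frame * mask1                # frame bonded with the first mask
      # In the reported scheme, the second mask enters the joint transform correlator
      # plane; that optical step and the RSA-protected key exchange are omitted here.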

  18. Stakeholder assessment of comparative effectiveness research needs for Medicaid populations

    PubMed Central

    Fischer, Michael A; Allen-Coleman, Cora; Farrell, Stephen F; Schneeweiss, Sebastian

    2015-01-01

    Patients, providers and policy-makers rely heavily on comparative effectiveness research (CER) when making complex, real-world medical decisions. In particular, Medicaid providers and policy-makers face unique challenges in decision-making because their program cares for traditionally underserved populations, especially children, pregnant women and people with mental illness. Because these patient populations have generally been underrepresented in research discussions, CER questions for these groups may be understudied. To address this problem, the Agency for Healthcare Research and Quality commissioned our team to work with Medicaid Medical Directors and other stakeholders to identify relevant CER questions. Through an iterative process of topic identification and refinement, we developed relevant, feasible and actionable questions based on issues affecting Medicaid programs nationwide. We describe challenges and limitations and provide recommendations for future stakeholder engagement. PMID:26388438

  19. Community-Engagement Strategies of the Developmental Disabilities Practice-Based Research Network (DD-PBRN)

    PubMed Central

    Tyler, Carl; Werner, James J.

    2016-01-01

    There is often a rich but untold history of events that occurred and relationships that formed prior to the launching of a practice-based research network (PBRN). This is particularly the case in PBRNs that are community-based and composed of partnerships outside of the health care system. In this article, we summarize an organizational "prenatal history" prior to the birth of a PBRN devoted to persons with developmental disabilities. Using a case study approach, this article describes the historical events that preceded and fostered the evolution of this PBRN and contrasts how the processes leading to the creation of this multi-stakeholder, community-based PBRN differ from those of typical academic-clinical practice PBRNs. We propose potential advantages and complexities inherent to this newest iteration of PBRNs. PMID:25381081

  20. A multilevel finite element method for Fredholm integral eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Xie, Hehu; Zhou, Tao

    2015-12-01

    In this work, we propose a multigrid finite element (MFE) method for solving Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in uncertainty quantification applications. In our MFE framework, solving the eigenvalue problem is converted into a series of integral iterations together with eigenvalue solves on the coarsest mesh. Then, any existing efficient integration scheme can be used for the associated integration process. Error estimates are provided, and the computational complexity is analyzed. Notably, the total computational work of our method is comparable to a single integration step on the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed method.
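
    A rough two-grid illustration of this idea, assuming a Nyström (quadrature) discretization in place of the paper's finite element spaces and an exponential covariance kernel, is sketched below; the actual multilevel correction scheme and its error analysis are not reproduced:

      # Solve the Fredholm eigenvalue problem  int C(x,y) u(y) dy = lambda u(x)
      # on a coarse grid only, then refine the leading Karhunen-Loeve pair on a
      # fine grid with a few integral (power-type) iterations.
      import numpy as np

      def nystrom_matrix(n, ell=0.3):
          """Midpoint-rule Nystrom matrix K_ij = C(x_i, x_j) * w_j on [0, 1],
          with exponential covariance C(x, y) = exp(-|x - y| / ell)."""
          x = (np.arange(n) + 0.5) / n
          return x, np.exp(-np.abs(x[:, None] - x[None, :]) / ell) / n

      # 1) eigenvalue solve on the coarsest mesh
      xc, Kc = nystrom_matrix(16)
      lam_c, vec_c = np.linalg.eigh(Kc)
      u_c = vec_c[:, -1]                          # leading eigenvector (eigh sorts ascending)

      # 2) transfer to the fine mesh and apply integral iterations
      xf, Kf = nystrom_matrix(512)
      u_f = np.interp(xf, xc, u_c)
      for _ in range(3):                          # a few integration steps suffice here
          u_f = Kf @ u_f
          u_f /= np.linalg.norm(u_f)
      lam_f = u_f @ (Kf @ u_f)                    # Rayleigh quotient on the fine mesh
      print(f"coarse lambda ~ {lam_c[-1]:.4f}, refined lambda ~ {lam_f:.4f}")

    Because the coarse eigensolve supplies a good starting vector, only a handful of fine-grid integrations (each a single quadrature sweep, i.e. one matrix-vector product) are needed, which mirrors the stated observation that the total work is comparable to a single integration step on the finest mesh.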
