Sample records for iterative refinement process

  1. Improved cryoEM-Guided Iterative Molecular Dynamics–Rosetta Protein Structure Refinement Protocol for High Precision Protein Structure Prediction

    PubMed Central

    2016-01-01

    Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538
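
    A high-level sketch of the alternating protocol summarised in this abstract, assuming the density-guided MD run, the Rosetta run and the density-based scoring are supplied as callables; none of these names are actual Rosetta or MD tool interfaces.

    ```python
    def iterative_md_rosetta(model, density_map, run_md, run_rosetta, score, n_rounds=3):
        """run_md/run_rosetta return lists of candidate models; score is lower-is-better."""
        for _ in range(n_rounds):
            md_models = run_md(model, density_map)            # MD: refines secondary structure
            model = min(md_models, key=lambda m: score(m, density_map))
            rosetta_models = run_rosetta(model, density_map)  # Rosetta: side chains and loops
            model = min(rosetta_models, key=lambda m: score(m, density_map))
        return model                                          # best-scoring model after n rounds
    ```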

  2. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

    We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point at which time the partial order information is safe and the whole state space is explored.
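
    The fixed-point loop described above can be sketched as follows; the analyzer and checker objects and their methods are hypothetical stand-ins, not the interfaces of the actual tools used in the paper.

    ```python
    def iterative_verification(program, static_analyzer, model_checker):
        """Alternate static analysis and model checking until the
        partial-order information reaches a fixed point."""
        aliasing = {}                      # no aliasing facts known initially
        partial_order = static_analyzer.compute_partial_order(program, aliasing)

        while True:
            # Explore the (reduced) state space; gather aliasing facts on the way.
            result, new_aliasing = model_checker.explore(program, partial_order)

            # Refine the static analysis with the aliasing facts just observed.
            refined = static_analyzer.compute_partial_order(program, new_aliasing)

            if refined == partial_order:   # fixed point: the reduction is now safe
                return result
            partial_order, aliasing = refined, new_aliasing
    ```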

  3. Learner Centred Design for a Hybrid Interaction Application

    ERIC Educational Resources Information Center

    Wood, Simon; Romero, Pablo

    2010-01-01

    Learner centred design methods highlight the importance of involving the stakeholders of the learning process (learners, teachers, educational researchers) at all stages of the design of educational applications and of refining the design through an iterative prototyping process. These methods have been used successfully when designing systems…

  4. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multiple sources and the parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained by using the inexact conjugate gradient iteration method. Synthetic test inversions are presented.
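
    A minimal sketch of one regularized Gauss-Newton model update with an inexact conjugate-gradient inner solve, in the spirit of the inversion scheme described above. The forward(), jacobian() and regularization operator R arguments are placeholders for the finite-element machinery, and the update formula is the generic damped least-squares form rather than the authors' exact objective.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def gauss_newton_step(m, d_obs, forward, jacobian, R, beta):
        """Return the model perturbation dm for one Gauss-Newton iteration.
        m, d_obs are 1-D numpy arrays; jacobian(m) returns an (n_data x n_model) array."""
        J = jacobian(m)                        # parametric sensitivities
        r = d_obs - forward(m)                 # data residual

        # Normal equations: (J^T J + beta R^T R) dm = J^T r - beta R^T R m
        def matvec(x):
            return J.T @ (J @ x) + beta * (R.T @ (R @ x))

        A = LinearOperator((m.size, m.size), matvec=matvec)
        rhs = J.T @ r - beta * (R.T @ (R @ m))
        dm, _ = cg(A, rhs, maxiter=50)         # inexact conjugate-gradient solve
        return dm
    ```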

  5. MODFLOW–LGR—Documentation of ghost node local grid refinement (LGR2) for multiple areas and the boundary flow and head (BFH2) package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2013-01-01

    This report documents the addition of ghost node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (a child model) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions and require an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR2, evaluates accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
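
    The iterative parent/child coupling idea can be sketched schematically as below; the parent and child objects and their methods are hypothetical stand-ins, not the MODFLOW-LGR2 implementation.

    ```python
    def couple_grids(parent, child, tol=1e-6, max_iter=100):
        """Iterate until interface heads and fluxes of both grids are balanced."""
        boundary_flux = child.initial_interface_flux()           # hypothetical API
        for _ in range(max_iter):
            parent_heads = parent.solve(interface_flux=boundary_flux)
            boundary_heads = parent.interpolate_to_child_interface(parent_heads)

            child_heads = child.solve(boundary_heads=boundary_heads)
            new_flux = child.interface_flux(child_heads)

            if max(abs(a - b) for a, b in zip(new_flux, boundary_flux)) < tol:
                return parent_heads, child_heads                  # converged coupling
            boundary_flux = new_flux
        raise RuntimeError("grid coupling did not converge within max_iter")
    ```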

  6. A Technique for Transient Thermal Testing of Thick Structures

    NASA Technical Reports Server (NTRS)

    Horn, Thomas J.; Richards, W. Lance; Gong, Leslie

    1997-01-01

    A new open-loop heat flux control technique has been developed to conduct transient thermal testing of thick, thermally conductive aerospace structures. This technique uses calibration of the radiant heater system power level as a function of heat flux, predicted aerodynamic heat flux, and the properties of an instrumented test article. An iterative process was used to generate open-loop heater power profiles prior to each transient thermal test. Differences between the measured and predicted surface temperatures were used to refine the heater power level command profiles through the iteration process. This iteration process has reduced the effects of environmental and test system design factors, which are normally compensated for by closed-loop temperature control, to acceptable levels. The final revised heater power profiles resulted in measured temperature time histories which deviated less than 25 °F from the predicted surface temperatures.
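
    A schematic sketch of the open-loop profile correction described above: after each test (or prediction run), the commanded power profile is nudged in proportion to the surface-temperature error and the test is repeated. The gain, the run_test() callable and the multiplicative update rule are illustrative assumptions, not the calibration procedure actually used.

    ```python
    import numpy as np

    def refine_power_profile(power, t_predicted, run_test, gain=0.1, n_rounds=5):
        """power and t_predicted are arrays sampled on the same time grid."""
        for _ in range(n_rounds):
            t_measured = run_test(power)          # run (or simulate) the transient test
            error = t_predicted - t_measured      # positive error -> heater too weak
            power = power * (1.0 + gain * error / np.maximum(t_predicted, 1.0))
        return power
    ```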

  7. The REFINEMENT Glossary of Terms: An International Terminology for Mental Health Systems Assessment.

    PubMed

    Montagni, Ilaria; Salvador-Carulla, Luis; Mcdaid, David; Straßmayr, Christa; Endel, Florian; Näätänen, Petri; Kalseth, Jorid; Kalseth, Birgitte; Matosevic, Tihana; Donisi, Valeria; Chevreul, Karine; Prigent, Amélie; Sfectu, Raluca; Pauna, Carmen; Gutiérrez-Colosia, Mencia R; Amaddeo, Francesco; Katschnig, Heinz

    2018-03-01

    Comparing mental health systems across countries is difficult because of the lack of an agreed upon terminology covering services and related financing issues. Within the European Union project REFINEMENT, international mental health care experts applied an innovative mixed "top-down" and "bottom-up" approach following a multistep design thinking strategy to compile a glossary on mental health systems, using local services as pilots. The final REFINEMENT glossary consisted of 432 terms related to service provision, service utilisation, quality of care and financing. The aim of this study was to describe the iterative process and methodology of developing this glossary.

  8. Integrating Climate Change Resilience Features into the Incremental Refinement of an Existing Marine Park

    PubMed Central

    Beckley, Lynnath E.; Kobryn, Halina T.; Lombard, Amanda T.; Radford, Ben; Heyward, Andrew

    2016-01-01

    Marine protected area (MPA) designs are likely to require iterative refinement as new knowledge is gained. In particular, there is an increasing need to consider the effects of climate change, especially the ability of ecosystems to resist and/or recover from climate-related disturbances, within the MPA planning process. However, there has been limited research addressing the incorporation of climate change resilience into MPA design. This study used Marxan conservation planning software with fine-scale shallow water (<20 m) bathymetry and habitat maps, models of major benthic communities for deeper water, and comprehensive human use information from Ningaloo Marine Park in Western Australia to identify climate change resilience features to integrate into the incremental refinement of the marine park. The study assessed the representation of benthic habitats within the current marine park zones, identified priority areas of high resilience for inclusion within no-take zones and examined if any iterative refinements to the current no-take zones are necessary. Of the 65 habitat classes, 16 did not meet representation targets within the current no-take zones, most of which were in deeper offshore waters. These deeper areas also demonstrated the highest resilience values and, as such, Marxan outputs suggested minor increases to the current no-take zones in the deeper offshore areas. This work demonstrates that inclusion of fine-scale climate change resilience features within the design process for MPAs is feasible, and can be applied to future marine spatial planning practices globally. PMID:27529820

  9. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
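
    For reference, the final reconstruction step common to transmission-map-based defogging follows the standard haze imaging model I = J·t + A·(1 − t); the sketch below simply inverts that model given an estimated transmission map and atmospheric light. The iterative refinement of the transmission map itself, which is the paper's contribution, is not reproduced here.

    ```python
    import numpy as np

    def defog(I, t, A, t_min=0.1):
        """I: HxWx3 foggy image in [0, 1]; t: HxW transmission map; A: length-3 airlight."""
        t = np.clip(t, t_min, 1.0)[..., None]     # floor t to avoid amplifying noise
        J = (I - A) / t + A                        # invert the haze imaging model
        return np.clip(J, 0.0, 1.0)
    ```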

  10. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  11. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  12. MODFLOW-2005, the U.S. Geological Survey modular ground-water model - documentation of shared node local grid refinement (LGR) and the boundary flow and head (BFH) package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2006-01-01

    This report documents the addition of shared node Local Grid Refinement (LGR) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference ground-water flow model. LGR provides the capability to simulate ground-water flow using one block-shaped higher-resolution local grid (a child model) within a coarser-grid parent model. LGR accomplishes this by iteratively coupling two separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundary. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. Traditional one-way coupled telescopic mesh refinement (TMR) methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled shared-node method of LGR provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions and require an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR, evaluates LGR accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR.

  13. Using Google Earth to Teach Plate Tectonics and Science Explanations

    ERIC Educational Resources Information Center

    Blank, Lisa M.; Plautz, Mike; Almquist, Heather; Crews, Jeff; Estrada, Jen

    2012-01-01

    "A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas" emphasizes that the practice of science is inherently a model-building activity focused on constructing explanations using evidence and reasoning (NRC 2012). Because building and refining is an iterative process, middle school students may view this practice…

  14. On the implementation of an accurate and efficient solver for convection-diffusion equations

    NASA Astrophysics Data System (ADS)

    Wu, Chin-Tien

    In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show that the refined mesh obtained from the iterative solution satisfying the second criterion is similar to the refined mesh obtained from the finite element solution.
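
    The idea of tying the linear solver's stopping tolerance to an a posteriori error estimate on each mesh can be sketched as follows; estimate_error() stands in for a residual-type estimator and the safety factor is an assumption, not a reconstruction of the dissertation's criteria.

    ```python
    from scipy.sparse.linalg import gmres

    def solve_on_mesh(A, b, u0, estimate_error, safety=0.1):
        """Solve A u = b only as accurately as the discretization error warrants."""
        eta = estimate_error(u0)                 # a posteriori error estimate on this mesh
        # Stop the Krylov iteration once the algebraic residual is a small
        # fraction of the estimated discretization error (no over-solving).
        u, info = gmres(A, b, x0=u0, atol=safety * eta)
        return u
    ```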

  15. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scan is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.

  16. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard.

    PubMed

    Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D

    2008-01-01

    The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.

  17. Technique for identifying, tracing, or tracking objects in image data

    DOEpatents

    Anderson, Robert J [Albuquerque, NM; Rothganger, Fredrick [Albuquerque, NM

    2012-08-28

    A technique for computer vision uses a polygon contour to trace an object. The technique includes rendering a polygon contour superimposed over a first frame of image data. The polygon contour is iteratively refined to more accurately trace the object within the first frame after each iteration. The refinement includes computing image energies along lengths of contour lines of the polygon contour and adjusting positions of the contour lines based at least in part on the image energies.
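
    A generic, simplified sketch of the contour-refinement idea: each iteration evaluates an image energy at the polygon vertices and nudges them toward stronger edge response. The energy map and the vertex-wise update are illustrative stand-ins for the line-based edge energies and position adjustments described in the patent.

    ```python
    import numpy as np

    def refine_contour(vertices, energy_image, step=1.0, n_iter=50):
        """vertices: (N, 2) float array of (x, y) coordinates;
        energy_image: precomputed edge/energy map (e.g., gradient magnitude of the frame)."""
        gy, gx = np.gradient(energy_image.astype(float))   # gradients of the energy map
        for _ in range(n_iter):
            ix = vertices[:, 0].astype(int).clip(0, energy_image.shape[1] - 1)
            iy = vertices[:, 1].astype(int).clip(0, energy_image.shape[0] - 1)
            # Move each vertex up the energy gradient, i.e. toward stronger edges.
            vertices = vertices + step * np.stack([gx[iy, ix], gy[iy, ix]], axis=1)
        return vertices
    ```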

  18. An IEP for Me: Program Improvement for Rural Teachers of Students with Moderate to Severe Disability and Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Pennington, Robert C.

    2017-01-01

    Developing high-quality programming for students with moderate to severe disability (MSD) and/or autism spectrum disorder (ASD) can be challenging for teachers across the range of experience and training including those in rural contexts. This article outlines a process for the iterative refinement of teaching programs comprised of an evaluation…

  19. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard

    PubMed Central

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.

    2008-01-01

    The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution. PMID:18094468

  20. Convergence characteristics of nonlinear vortex-lattice methods for configuration aerodynamics

    NASA Technical Reports Server (NTRS)

    Seginer, A.; Rusak, Z.; Wasserstrom, E.

    1983-01-01

    Nonlinear panel methods have no proof for the existence and uniqueness of their solutions. The convergence characteristics of an iterative, nonlinear vortex-lattice method are, therefore, carefully investigated. The effects of several parameters, including (1) the surface-paneling method, (2) an integration method of the trajectories of the wake vortices, (3) vortex-grid refinement, and (4) the initial conditions for the first iteration on the computed aerodynamic coefficients and on the flow-field details are presented. The convergence of the iterative-solution procedure is usually rapid. The solution converges with grid refinement to a constant value, but the final value is not unique and varies with the wing surface-paneling and wake-discretization methods within some range in the vicinity of the experimental result.

  1. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
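
    A compact sketch of a single ensemble-smoother update in which the forward responses come from a Gaussian-process surrogate rather than the full model, in the spirit of GPIES; the scikit-learn surrogate and the standard Kalman-type update are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def gp_ies_update(M, d_obs, base_points, base_responses, obs_err_var):
        """M: (n_ens, n_par) parameter ensemble; d_obs: (n_obs,) observations;
        base_points/base_responses: full-model runs used to train the surrogate;
        obs_err_var: (n_obs,) observation-error variances."""
        gp = GaussianProcessRegressor().fit(base_points, base_responses)
        D = gp.predict(M)                              # surrogate responses, (n_ens, n_obs)

        M_a = M - M.mean(axis=0); D_a = D - D.mean(axis=0)
        C_md = M_a.T @ D_a / (len(M) - 1)              # parameter/response cross-covariance
        C_dd = D_a.T @ D_a / (len(M) - 1)              # response covariance
        K = C_md @ np.linalg.inv(C_dd + np.diag(obs_err_var))

        noise = np.random.normal(0.0, np.sqrt(obs_err_var), size=D.shape)
        return M + (d_obs + noise - D) @ K.T           # updated parameter ensemble
    ```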

  2. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

    This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.

  3. Evolution Of USDOE Performance Assessments Over 20 Years

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seitz, Roger R.; Suttora, Linda C.

    2013-02-26

    Performance assessments (PAs) have been used for many years for the analysis of post-closure hazards associated with a radioactive waste disposal facility and to provide a reasonable expectation of the ability of the site and facility design to meet objectives for the protection of members of the public and the environment. The use of PA to support decision-making for LLW disposal facilities has been mandated in United States Department of Energy (USDOE) directives governing radioactive waste management since 1988 (currently DOE Order 435.1, Radioactive Waste Management). Prior to that time, PAs were also used in a less formal role. Over the past 20+ years, the USDOE approach to conduct, review and apply PAs has evolved into an efficient, rigorous and mature process that includes specific requirements for continuous improvement and independent reviews. The PA process has evolved through refinement of a graded and iterative approach designed to help focus efforts on those aspects of the problem expected to have the greatest influence on the decision being made. Many of the evolutionary changes to the PA process are linked to the refinement of the PA maintenance concept that has proven to be an important element of USDOE PA requirements in the context of supporting decision-making for safe disposal of LLW. The PA maintenance concept represents the evolution of the graded and iterative philosophy and has helped to drive the evolution of PAs from a deterministic compliance calculation into a systematic approach that helps to focus on critical aspects of the disposal system in a manner designed to provide a more informed basis for decision-making throughout the life of a disposal facility (e.g., monitoring, research and testing, waste acceptance criteria, design improvements, data collection, model refinements). A significant evolution in PA modeling has been associated with improved use of uncertainty and sensitivity analysis techniques to support efficient implementation of the graded and iterative approach. Rather than attempt to exactly predict the migration of radionuclides in a disposal unit, the best PAs have evolved into tools that provide a range of results to guide decision-makers in planning the most efficient, cost effective, and safe disposal of radionuclides.

  4. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  5. i3Drefine software for protein 3D structure refinement and its assessment in CASP10.

    PubMed

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the qualities of protein structures during structure modeling processes to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet, protein structure refinement still remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with powerful all-atom composite physics- and knowledge-based force fields and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine is the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.

  6. Intelligent process mapping through systematic improvement of heuristics

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.

    1992-01-01

    The present system for automatic learning/evaluation of novel heuristic methods applicable to the mapping of communication-process sets on a computer network has its basis in the testing of a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new postgame analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.

  7. Contextual Refinement of Regulatory Targets Reveals Effects on Breast Cancer Prognosis of the Regulome

    PubMed Central

    Andrews, Erik; Wang, Yue; Xia, Tian; Cheng, Wenqing; Cheng, Chao

    2017-01-01

    Gene expression regulators, such as transcription factors (TFs) and microRNAs (miRNAs), have varying regulatory targets based on the tissue and physiological state (context) within which they are expressed. While the emergence of regulator-characterizing experiments has inferred the target genes of many regulators across many contexts, methods for transferring regulator target genes across contexts are lacking. Further, regulator target gene lists frequently are not curated or have permissive inclusion criteria, impairing their use. Here, we present a method called iterative Contextual Transcriptional Activity Inference of Regulators (icTAIR) to resolve these issues. icTAIR takes a regulator’s previously-identified target gene list and combines it with gene expression data from a context, quantifying that regulator’s activity for that context. It then calculates the correlation between each listed target gene’s expression and the quantitative score of regulatory activity, removes the uncorrelated genes from the list, and iterates the process until it derives a stable list of refined target genes. To validate and demonstrate icTAIR’s power, we use it to refine the MSigDB c3 database of TF, miRNA and unclassified motif target gene lists for breast cancer. We then use its output for survival analysis with clinicopathological multivariable adjustment in 7 independent breast cancer datasets covering 3,430 patients. We uncover many novel prognostic regulators that were obscured prior to refinement, in particular NFY, and offer a detailed look at the composition and relationships among the breast cancer prognostic regulome. We anticipate icTAIR will be of general use in contextually refining regulator target genes for discoveries across many contexts. The icTAIR algorithm can be downloaded from https://github.com/icTAIR. PMID:28103241

  8. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
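
    The Jacobian-free idea reduces to approximating the Jacobian-vector product by a finite difference of the residual, as in the sketch below; the perturbation-size heuristic is a common choice, and the preconditioning that the abstract stresses as essential is omitted here.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_step(residual, u, eps=1e-7):
        """One Newton step: solve J(u) du = -F(u) without ever forming J."""
        F0 = residual(u)

        def Jv(v):
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            h = eps * max(np.linalg.norm(u), 1.0) / nv       # perturbation size heuristic
            return (residual(u + h * v) - F0) / h            # finite-difference Frechet derivative

        J = LinearOperator((u.size, u.size), matvec=Jv)
        du, info = gmres(J, -F0)                             # Krylov solve of the Newton system
        return u + du
    ```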

  9. An iterative analytical technique for the design of interplanetary direct transfer trajectories including perturbations

    NASA Astrophysics Data System (ADS)

    Parvathi, S. P.; Ramanan, R. V.

    2018-06-01

    An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. Perturbations such as the non-spherical gravity of Earth and the third-body perturbations due to the Sun and Moon are included in the analytical design process. In the design process, the design is first obtained using the iterative patched conic technique without including the perturbations and then modified to include the perturbations. The modification is based on (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at the periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend upon numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.

  10. Enabling Incremental Iterative Development at Scale: Quality Attribute Refinement and Allocation in Practice

    DTIC Science & Technology

    2015-06-01

    ... abstract constraints along six dimensions for expansion: user, actions, data, business rules, interfaces, and quality attributes [Gottesdiener 2010] ... relevant open source systems. For example, the CONNECT and HADOOP Distributed File System (HDFS) projects have many user stories that deal with ... Iteration Zero involves architecture planning before writing any code. An overly long Iteration Zero is equivalent to the dysfunctional "Big Up-Front ..."

  11. Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization

    PubMed Central

    2012-01-01

    Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
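
    The selection loop can be sketched generically: each round "synthesizes" five compounds predicted to be most active and five scored most structurally novel, then refits the model. The model and novelty callables are placeholders, not the binding-pocket machinery of the paper.

    ```python
    import numpy as np

    def lead_optimization(X_pool, y_pool, model, novelty, seed_idx, n_rounds=5):
        """X_pool/y_pool: candidate descriptors and (eventually measured) activities;
        novelty(x, X_made) returns a structural-novelty score; seed_idx: starting compounds."""
        chosen = list(seed_idx)
        for _ in range(n_rounds):
            model.fit(X_pool[chosen], y_pool[chosen])            # refine the activity model
            remaining = [i for i in range(len(X_pool)) if i not in chosen]
            pred = model.predict(X_pool[remaining])
            nov = np.array([novelty(X_pool[i], X_pool[chosen]) for i in remaining])
            by_activity = [remaining[i] for i in np.argsort(pred)[-5:]]   # confident actives
            by_novelty = [remaining[i] for i in np.argsort(nov)[-5:]]     # novel scaffolds
            chosen.extend(sorted(set(by_activity + by_novelty)))
        return chosen
    ```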

  12. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    PubMed Central

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as "smooth" and "non-smooth", describing these with approximated Heaviside functions (AHFs), and iterating with l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
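
    The residual feedback loop mentioned above (compare the downsampled estimate with the low-resolution input and feed the upsampled residual back) can be sketched as follows; integer scale factors and bilinear resampling are assumptions, and the AHF decomposition itself is not reproduced.

    ```python
    from scipy.ndimage import zoom

    def refine_sr(lr, hr, scale, n_iter=10, gain=1.0):
        """lr: low-resolution input; hr: initial high-resolution estimate;
        scale: integer upscaling factor (shapes assumed to divide evenly)."""
        for _ in range(n_iter):
            lr_sim = zoom(hr, 1.0 / scale, order=1)       # downsample the current estimate
            residual = lr - lr_sim                        # mismatch with the LR input
            hr = hr + gain * zoom(residual, scale, order=1)   # feed the residual back
        return hr
    ```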

  13. Evolution of US DOE Performance Assessments Over 20 Years - 13597

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suttora, Linda C.; Seitz, Roger R.

    2013-07-01

    Performance assessments (PAs) have been used for many years for the analysis of post-closure hazards associated with a radioactive waste disposal facility and to provide a reasonable expectation of the ability of the site and facility design to meet objectives for the protection of members of the public and the environment. The use of PA to support decision-making for LLW disposal facilities has been mandated in United States Department of Energy (US DOE) directives governing radioactive waste management since 1988 (currently DOE Order 435.1, Radioactive Waste Management). Prior to that time, PAs were also used in a less formal role. Over the past 20+ years, the US DOE approach to conduct, review and apply PAs has evolved into an efficient, rigorous and mature process that includes specific requirements for continuous improvement and independent reviews. The PA process has evolved through refinement of a graded and iterative approach designed to help focus efforts on those aspects of the problem expected to have the greatest influence on the decision being made. Many of the evolutionary changes to the PA process are linked to the refinement of the PA maintenance concept that has proven to be an important element of US DOE PA requirements in the context of supporting decision-making for safe disposal of LLW. The PA maintenance concept is central to the evolution of the graded and iterative philosophy and has helped to drive the evolution of PAs from a deterministic compliance calculation into a systematic approach that helps to focus on critical aspects of the disposal system in a manner designed to provide a more informed basis for decision-making throughout the life of a disposal facility (e.g., monitoring, research and testing, waste acceptance criteria, design improvements, data collection, model refinements). A significant evolution in PA modeling has been associated with improved use of uncertainty and sensitivity analysis techniques to support efficient implementation of the graded and iterative approach. Rather than attempt to exactly predict the migration of radionuclides in a disposal unit, the best PAs have evolved into tools that provide a range of results to guide decision-makers in planning the most efficient, cost effective, and safe disposal of radionuclides. (authors)

  14. i3Drefine Software for Protein 3D Structure Refinement and Its Assessment in CASP10

    PubMed Central

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the qualities of protein structures during structure modeling processes to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet, protein structure refinement still remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with powerful all-atom composite physics- and knowledge-based force fields and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as ‘MULTICOM-CONSTRUCT’) was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine is the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/. PMID:23894517

  15. Summary of Results from the Risk Management Program for the Mars Microrover Flight Experiment

    NASA Technical Reports Server (NTRS)

    Shishko, Robert; Matijevic, Jacob R.

    2000-01-01

    On 4 July 1997, the Mars Pathfinder landed on the surface of Mars carrying the first planetary rover, known as Sojourner. Formally known as the Microrover Flight Experiment (MFEX), Sojourner was a low-cost, high-risk technology demonstration in which new risk management techniques were tried. This paper summarizes the activities and results of the effort to conduct a low-cost, yet meaningful risk management program for the MFEX. The specific activities focused on cost, performance, schedule, and operations risks. Just as the systems engineering process was iterative and produced successive refinements of requirements, designs, etc., so was the risk management process. Qualitative risk assessments were performed first to gain some insights for refining the microrover design and operations concept. These then evolved into more quantitative analyses. Risk management lessons from the manager's perspective are presented for other low-cost, high-risk space missions.

  16. The Multidimensional Assessment of Interoceptive Awareness (MAIA)

    PubMed Central

    Mehling, Wolf E.; Price, Cynthia; Daubenmier, Jennifer J.; Acree, Mike; Bartmess, Elizabeth; Stewart, Anita

    2012-01-01

    This paper describes the development of a multidimensional self-report measure of interoceptive body awareness. The systematic mixed-methods process involved reviewing the current literature, specifying a multidimensional conceptual framework, evaluating prior instruments, developing items, and analyzing focus group responses to scale items by instructors and patients of body awareness-enhancing therapies. Following refinement by cognitive testing, items were field-tested in students and instructors of mind-body approaches. Final item selection was achieved by submitting the field test data to an iterative process using multiple validation methods, including exploratory cluster and confirmatory factor analyses, comparison between known groups, and correlations with established measures of related constructs. The resulting 32-item multidimensional instrument assesses eight concepts. The psychometric properties of these final scales suggest that the Multidimensional Assessment of Interoceptive Awareness (MAIA) may serve as a starting point for research and further collaborative refinement. PMID:23133619

  17. The Contribution of Tidal Fluvial Habitats in the Columbia River Estuary to the Recovery of Diverse Salmon ESUs

    DTIC Science & Technology

    2013-05-01

    Chinook salmon (presumably subyearling) was the most prevalent life-history type detected at the Russian Island and Woody Island sites. The number of ... Extend and refine the computational grid: We extended the Virtual Columbia River to include regions upstream of Beaver Army, which previously served as ... the Columbia River above Beaver Army and particularly above the confluence of the Willamette River. That process of calibration is highly iterative.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarke, Kester Diederik

    The intent of this report is to document a procedure used at LANL for HIP bonding aluminum cladding to U-10Mo fuel foils using a formed HIP can for the Domestic Reactor Conversion program in the NNSA Office of Material Management and Minimization, and to provide some details that may not have been published elsewhere. The HIP process is based on the procedures that have been used to develop the formed HIP can process, including the baseline process developed at Idaho National Laboratory (INL). The HIP bonding cladding process development is summarized in the listed references. Further iterations with Babcock & Wilcox (B&W) to refine the process to meet production and facility requirements are expected.

  19. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background: Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results: RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion: RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on a residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133

  20. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
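
    As one concrete flavour of the paradigm, an additive Schwarz preconditioner applies independent local solves on overlapping subdomains and sums the results; the dense local factorizations below are a toy choice, and the coarse-grid correction used in practice is omitted.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def additive_schwarz(A, subdomains):
        """A: dense or sparse matrix; subdomains: list of (overlapping) index arrays."""
        A = A.toarray() if hasattr(A, "toarray") else np.asarray(A)
        local_inv = [np.linalg.inv(A[np.ix_(s, s)]) for s in subdomains]   # local solvers

        def apply(r):
            z = np.zeros_like(r)
            for s, Ai in zip(subdomains, local_inv):
                z[s] += Ai @ r[s]              # local solve on each subdomain, then sum
            return z

        return LinearOperator(A.shape, matvec=apply)

    # usage sketch: M = additive_schwarz(A, subdomains); x, info = gmres(A, b, M=M)
    ```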

  1. Development of CPR security using impact analysis.

    PubMed Central

    Salazar-Kish, J.; Tate, D.; Hall, P. D.; Homa, K.

    2000-01-01

    The HIPAA regulations will require that institutions ensure the prevention of unauthorized access to electronically stored or transmitted patient records. This paper discusses a process for analyzing the impact of security mechanisms on users of computerized patient records through "behind the scenes" electronic access audits. In this way, those impacts can be assessed and refined to an acceptable standard prior to implementation. Through an iterative process of design and evaluation, we develop security algorithms that will protect electronic health information from improper access, alteration or loss, while minimally affecting the flow of work of the user population as a whole. PMID:11079984

  2. NREL Spectrum of Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-02-25

    There are many voices calling for a future of abundant clean energy. The choices are difficult and the challenges daunting. How will we get there? The National Renewable Energy Laboratory integrates the entire spectrum of innovation including fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. The innovation process at NREL is interdependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.

  3. NREL Spectrum of Innovation

    ScienceCinema

    None

    2018-05-11

    There are many voices calling for a future of abundant clean energy. The choices are difficult and the challenges daunting. How will we get there? The National Renewable Energy Laboratory integrates the entire spectrum of innovation including fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. The innovation process at NREL is interdependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.

  4. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    NASA Astrophysics Data System (ADS)

    Goldfarb, S.; Marcelloni, C.; Eli Phoboo, A.; Shaw, K.

    2015-12-01

    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal [1] content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the HTML design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through clearly presented themes, consistent audience targeting and messaging, and the enforcement of a well-defined visual identity.

  5. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
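
    The role of iterative learning control here is to refine the control input from batch to batch using the tracking error of the previous run. A minimal P-type ILC sketch is shown below; the first-order plant, setpoint, and learning gain are invented for illustration and stand in for the paper's PLS-based film-thickness models.

```python
import numpy as np

# Toy first-order plant y[t+1] = a*y[t] + b*u[t], standing in for the film-thickness
# response of the coating line; all values are arbitrary.
a, b = 0.8, 0.5
T = 50                      # samples per coating batch
r = np.ones(T)              # film-thickness setpoint (arbitrary units)
L = 0.6                     # learning gain; converges because |1 - L*b| < 1

def run_batch(u):
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(T)
for k in range(30):                          # batch-to-batch ILC iterations
    e = r - run_batch(u)
    u[:-1] += L * e[1:]                      # P-type update; u[t] acts on y[t+1]
print("RMS tracking error after learning:",
      float(np.sqrt(np.mean((r - run_batch(u))[1:] ** 2))))
```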

  6. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.

  7. The 3D Recognition, Generation, Fusion, Update and Refinement (RG4) Concept

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Cheeseman, Peter; Smelyanskyi, Vadim N.; Kuehnel, Frank; Morris, Robin D.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes an active (real time) recognition strategy whereby information is inferred iteratively across several viewpoints in descent imagery. We will show how we use inverse theory within the context of parametric model generation, namely height and spectral reflection functions, to generate model assertions. Using this strategy in an active context implies that, from every viewpoint, the proposed system must refine its hypotheses taking into account the image and the effect of uncertainties as well. The proposed system employs probabilistic solutions to the problem of iteratively merging information (images) from several viewpoints. This involves feeding the posterior distribution from all previous images as a prior for the next view. Novel approaches will be developed to accelerate the inversion search using new statistical implementations and by reducing the model complexity through foveated vision. Foveated vision refers to imagery where the resolution varies across the image. In this paper, we allow the model to be foveated where the highest resolution region is called the foveation region. Typically, the images will have dynamic control of the location of the foveation region. For descent imagery in the Entry, Descent, and Landing (EDL) process, it is possible to have more than one foveation region. This research initiative is directed towards descent imagery in connection with NASA's EDL applications. Three-Dimensional Model Recognition, Generation, Fusion, Update, and Refinement (RGFUR or RG4) for height and the spectral reflection characteristics are in focus for various reasons, one of which is the prospect that their interpretation will provide for real time active vision for automated EDL.
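
    The "posterior of the previous views becomes the prior for the next view" step has a simple closed form when everything is approximated as Gaussian. The snippet below is a toy scalar version of that recursion for a single hypothetical surface-height parameter; it only shows the update structure, not the paper's full inversion.

```python
import numpy as np

rng = np.random.default_rng(0)
true_height = 2.0            # hypothetical terrain height at one image location
mu, var = 0.0, 10.0          # broad initial prior
obs_var = 0.5                # per-view measurement noise (assumed known)

for k in range(8):           # successive descent-imagery viewpoints
    z = true_height + rng.normal(0.0, np.sqrt(obs_var))
    # Conjugate Gaussian update: the posterior after view k is the prior for view k+1.
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    mu = post_var * (mu / var + z / obs_var)
    var = post_var
    print(f"after view {k}: height estimate {mu:.3f} +/- {np.sqrt(var):.3f}")
```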

  8. SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance.

    PubMed

    Sacha, Dominik; Kraus, Matthias; Bernard, Jurgen; Behrisch, Michael; Schreck, Tobias; Asano, Yuki; Keim, Daniel A

    2018-01-01

    Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date, however, arriving at useful clusterings often requires several rounds of user interactions to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and to reflect previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing to compare earlier cluster refinements and to explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as the interactive process itself.

  9. P80 SRM low torque flex-seal development - thermal and chemical modeling of molding process

    NASA Astrophysics Data System (ADS)

    Descamps, C.; Gautronneau, E.; Rousseau, G.; Daurat, M.

    2009-09-01

    The development of the flex-seal component of the P80 nozzle gave the opportunity to set up new design and manufacturing process methods. Due to the short development lead time required by VEGA program, the usual manufacturing iterative tests work flow, which is usually time consuming, had to be enhanced in order to use a more predictive approach. A newly refined rubber vulcanization description was built up and identified on laboratory samples. This chemical model was implemented in a thermal analysis code. The complete model successfully supports the manufacturing processes. These activities were conducted with the support of ESA/CNES Research & Technologies and DGA (General Delegation for Armament).

  10. Network news: prime time for systems biology of the plant circadian clock.

    PubMed

    McClung, C Robertson; Gutiérrez, Rodrigo A

    2010-12-01

    Whole-transcriptome analyses have established that the plant circadian clock regulates virtually every plant biological process and most prominently hormonal and stress response pathways. Systems biology efforts have successfully modeled the plant central clock machinery and an iterative process of model refinement and experimental validation has contributed significantly to the current view of the central clock machinery. The challenge now is to connect this central clock to the output pathways for understanding how the plant circadian clock contributes to plant growth and fitness in a changing environment. Undoubtedly, systems approaches will be needed to integrate and model the vastly increased volume of experimental data in order to extract meaningful biological information. Thus, we have entered an era of systems modeling, experimental testing, and refinement. This approach, coupled with advances from the genetic and biochemical analyses of clock function, is accelerating our progress towards a comprehensive understanding of the plant circadian clock network. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Interactive iterative relative fuzzy connectedness lung segmentation on thoracic 4D dynamic MR images

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Wu, Caiyun; Zhao, Yue; McDonough, Joseph M.; Capraro, Anthony; Torigian, Drew A.; Campbell, Robert M.

    2017-03-01

    Lung delineation via dynamic 4D thoracic magnetic resonance imaging (MRI) is necessary for quantitative image analysis for studying pediatric respiratory diseases such as thoracic insufficiency syndrome (TIS). This task is very challenging because of the often-extreme malformations of the thorax in TIS, lack of signal from bone and connective tissues resulting in inadequate image quality, abnormal thoracic dynamics, and the inability of the patients to cooperate with the protocol needed to get good quality images. We propose an interactive fuzzy connectedness approach as a potential practical solution to this difficult problem. Manual segmentation is too labor intensive, especially due to the 4D nature of the data, and can lead to low repeatability of the segmentation results. Registration-based approaches are somewhat inefficient and may produce inaccurate results due to accumulated registration errors and inadequate boundary information. The proposed approach works in a manner resembling the Iterative Livewire tool but uses iterative relative fuzzy connectedness (IRFC) as the delineation engine. Seeds needed by IRFC are set manually and are propagated from slice to slice, decreasing the needed human labor, and then a fuzzy connectedness map is automatically calculated almost instantaneously. If the segmentation is acceptable, the user selects "next" slice. Otherwise, the seeds are refined and the process continues. Although human interaction is needed, an advantage of the method is the high level of efficient user control over the process, with little need to refine the results afterward. Dynamic MRI sequences from 5 pediatric TIS patients involving 39 3D spatial volumes are used to evaluate the proposed approach. The method is compared to two other IRFC strategies with a higher level of automation. The proposed method yields an overall true positive and false positive volume fraction of 0.91 and 0.03, respectively, and a Hausdorff boundary distance of 2 mm.
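
    The slice-to-slice workflow can be sketched independently of the IRFC engine itself. In the toy example below, a thresholded connected-component step is a crude stand-in for the fuzzy-connectedness delineation and the synthetic volume replaces the 4D MR data; the point is only the seed-propagation loop, into which a user's accept/refine decision would be inserted.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)

# Synthetic stack: a bright tube drifting across 20 slices plus noise, standing in
# for a lung region (the real data are dynamic thoracic MRI volumes).
zz, yy, xx = np.indices((20, 64, 64))
vol = ((yy - 32) ** 2 + (xx - 32 - zz) ** 2 < 15 ** 2) * 1.0
vol += rng.normal(0, 0.2, vol.shape)

seeds = np.zeros((64, 64), bool)
seeds[32, 32] = True                        # "manual" seed on the first slice
masks = []
for z in range(vol.shape[0]):
    # Crude stand-in for the IRFC delineation engine: keep the thresholded
    # connected components that contain at least one propagated seed.
    fg = vol[z] > 0.5
    labels, _ = ndimage.label(fg)
    keep = np.unique(labels[seeds & fg])
    mask = np.isin(labels, keep[keep > 0])
    masks.append(mask)
    # Propagate seeds to the next slice (a user could refine them at this point).
    seeds = ndimage.binary_erosion(mask, iterations=3)
print("segmented voxels:", int(np.sum(masks)))
```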

  12. Targeted exploration and analysis of large cross-platform human transcriptomic compendia

    PubMed Central

    Zhu, Qian; Wong, Aaron K; Krishnan, Arjun; Aure, Miriam R; Tadych, Alicja; Zhang, Ran; Corney, David C; Greene, Casey S; Bongo, Lars A; Kristensen, Vessela N; Charikar, Moses; Li, Kai; Troyanskaya, Olga G.

    2016-01-01

    We present SEEK (http://seek.princeton.edu), a query-based search engine across very large transcriptomic data collections, including thousands of human data sets from almost 50 microarray and next-generation sequencing platforms. SEEK uses a novel query-level cross-validation-based algorithm to automatically prioritize data sets relevant to the query and a robust search approach to identify query-coregulated genes, pathways, and processes. SEEK provides cross-platform handling, multi-gene query search, iterative metadata-based search refinement, and extensive visualization-based analysis options. PMID:25581801

  13. Changing the Way We Build Games: A Design-Based Research Study Examining the Implementation of Homemade PowerPoint Games in the Classroom

    ERIC Educational Resources Information Center

    Siko, Jason Paul

    2012-01-01

    This design-based research study examined the effects of a game design project on student test performance, with refinements made to the implementation after each of the three iterations of the study. The changes to the implementation over the three iterations were based on the literature for the three justifications for the use of homemade…

  14. Designing an over-the-counter consumer decision-making tool for older adults.

    PubMed

    Martin-Hammond, Aqueasha M; Abegaz, Tamirat; Gilbert, Juan E

    2015-10-01

    Older adults are at increased risk of adverse drug events due to medication. Older adults tend to take more medication and are at higher risk of chronic illness. Over-the-counter (OTC) medication does not require healthcare provider oversight and understanding OTC information is heavily dependent on a consumer's ability to understand and use the medication appropriately. Coupling health technology with effective communication is one approach to address the challenge of communicating health and improving health related tasks. However, the success of many health technologies also depends on how well the technology is designed and how well it addresses users needs. This is especially true for the older adult population. This paper describes (1) a formative study performed to understand how to design novel health technology to assist older adults with OTC medication information, and (2) how a user-centered design process helped to refine the initial assumptions of user needs and help to conceptualize the technology. An iterative design process was used. The process included two brainstorming and review sessions with human-computer interaction researchers and design sessions with older adults in the form of semi-structured interviews. Methods and principles of user-centered research and design were used to inform the research design. Two researchers with expertise in human-computer interaction performed expert reviews of early system prototypes. After initial prototypes were developed, seven older adults were engaged in semi-structured interviews to understand usability concerns and features and functionality older adults may find useful for selecting appropriate OTC medication. Eight usability concerns were discovered and addressed in the two rounds of expert review, and nine additional usability concerns were discovered in design sessions with older adults. Five themes emerged from the interview transcripts as recommendations for design. These recommendations represent opportunities for technology such as the one described in this paper to support older adults in the OTC decision-making process. This paper illustrates the use of an iterative user-centered process in the formative stages of design and its usefulness for understanding aspects of the technology design that are useful to older adults when making decisions about OTC medication. The technology support mechanisms included in the initial model were revised based on the results from the iterative design sessions and helped to refine and conceptualize the system being designed. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
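
    The same iterator pattern is easy to mirror with Python generators. The sketch below is a loose Python analogue (not the Perl API) of the three modules, with the string and recurrence parsing heavily simplified; only a fixed-step date range and a comma-separated list are supported here.

```python
from datetime import datetime, timedelta
from itertools import product

def datetime_iter(start, end, step):
    """Rough analogue of Iterator::DateTime: yield datetimes from start to end at a
    fixed step (the Perl module also accepts iCal/RFC 2445 recurrence rules)."""
    t = start
    while t <= end:
        yield t
        t += step

def string_iter(spec):
    """Rough analogue of Iterator::String: here the spec is just a comma-separated
    list; the real module parses richer string descriptions."""
    for token in spec.split(","):
        yield token.strip()

def hash_iter(mapping):
    """Rough analogue of Iterator::Hash: yield every combination (nested iteration)
    of the per-key value iterators."""
    keys = list(mapping)
    for combo in product(*(list(mapping[k]) for k in keys)):
        yield dict(zip(keys, combo))

for row in hash_iter({"day": datetime_iter(datetime(2009, 1, 1),
                                           datetime(2009, 1, 3),
                                           timedelta(days=1)),
                      "mode": string_iter("fast, accurate")}):
    print(row)
```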

  16. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  17. Experiences with a generator tool for building clinical application modules.

    PubMed

    Kuhn, K A; Lenz, R; Elstner, T; Siegele, H; Moll, R

    2003-01-01

    To elaborate main system characteristics and relevant deployment experiences for the health information system (HIS) Orbis/OpenMed, which is in widespread use in Germany, Austria, and Switzerland. In a deployment phase of 3 years in a 1,200-bed university hospital, where the system underwent significant improvements, the system's functionality and its software design have been analyzed in detail. We focus on an integrated CASE tool for generating embedded clinical applications and for incremental system evolution. We present a participatory and iterative software engineering process developed for efficient utilization of such a tool. The system's functionality is comparable to other commercial products' functionality; its components are embedded in a vendor-specific application framework, and standard interfaces are being used for connecting subsystems. The integrated generator tool is a remarkable feature; it became a key factor of our project. Tool-generated applications are workflow-enabled and embedded into the overall database schema. Rapid prototyping and iterative refinement are supported, so application modules can be adapted to the users' work practice. We consider tools supporting an iterative and participatory software engineering process highly relevant for health information system architects. The potential of a system to continuously evolve and to be effectively adapted to changing needs may be more important than sophisticated but hard-coded HIS functionality. More work will focus on HIS software design and on software engineering. Methods and tools are needed for quick and robust adaptation of systems to health care processes and changing requirements.

  18. Iterative refinement of implicit boundary models for improved geological feature reproduction

    NASA Astrophysics Data System (ADS)

    Martin, Ryan; Boisvert, Jeff B.

    2017-12-01

    Geological domains contain non-stationary features that cannot be described by a single direction of continuity. Non-stationary estimation frameworks generate more realistic curvilinear interpretations of subsurface geometries. A radial basis function (RBF) based implicit modeling framework using domain decomposition is developed that permits introduction of locally varying orientations and magnitudes of anisotropy for boundary models to better account for the local variability of complex geological deposits. The interpolation framework is paired with a method to automatically infer the locally predominant orientations, which results in a rapid and robust iterative non-stationary boundary modeling technique that can refine locally anisotropic geological shapes automatically from the sample data. The method also permits quantification of the volumetric uncertainty associated with the boundary modeling. The methodology is demonstrated on a porphyry dataset and shows improved local geological features.
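
    In an implicit boundary model the domain indicator is interpolated with RBFs and the boundary is taken as the zero level set. The sketch below is a deliberately simplified, self-contained version with a single global anisotropy (the paper's contribution is inferring locally varying orientations and magnitudes automatically); the synthetic ellipse data and Gaussian kernel are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "domain indicator" data: -1 inside an ellipse, +1 outside.
pts = rng.uniform(-2, 2, size=(200, 2))
vals = np.where((pts[:, 0] / 1.5) ** 2 + (pts[:, 1] / 0.7) ** 2 < 1.0, -1.0, 1.0)

# Anisotropic distance metric (one global orientation here; the paper infers
# locally varying orientations and magnitudes).
M = np.diag([1.0 / 1.5, 1.0 / 0.7])

def gauss_rbf(a, b, eps=1.0):
    d = np.linalg.norm((a[:, None, :] - b[None, :, :]) @ M.T, axis=-1)
    return np.exp(-(eps * d) ** 2)

w = np.linalg.solve(gauss_rbf(pts, pts) + 1e-8 * np.eye(len(pts)), vals)

def implicit(x):
    return gauss_rbf(x, pts) @ w            # boundary = zero level set of this field

grid = np.stack(np.meshgrid(np.linspace(-2, 2, 50),
                            np.linspace(-2, 2, 50)), axis=-1).reshape(-1, 2)
print("estimated inside-fraction of the modeling box:", float((implicit(grid) < 0).mean()))
```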

  19. Implementing partnership-driven clinical federated electronic health record data sharing networks.

    PubMed

    Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein

    2016-09-01

    Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustaining meaningful partnerships, and delivering clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and support knowledge sharing between the two network development teams, which cross collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. User interface issues in supporting human-computer integrated scheduling

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-01-01

    Explored here are the user interface problems encountered with the Operations Missions Planner (OMP) project at the Jet Propulsion Laboratory (JPL). OMP uses a unique iterative approach to planning that places additional requirements on the user interface, particularly to support system development and maintenance. These requirements are necessary to support the concepts of heuristically controlled search, in-progress assessment, and iterative refinement of the schedule. The techniques used to address the OMP interface needs are given.

  1. A user-centred design process of new cold-protective clothing for offshore petroleum workers operating in the Barents Sea

    PubMed Central

    NAESGAARD, Ole Petter; STORHOLMEN, Tore Christian Bjørsvik; WIGGEN, Øystein Nordrum; REITAN, Jarl

    2017-01-01

    Petroleum operations in the Barents Sea require personal protective clothing (PPC) to ensure the safety and performance of the workers. This paper describes the accomplishment of a user-centred design process of new PPC for offshore workers operating in this area. The user-centred design process was accomplished by mixed-methods. Insights into user needs and context of use were established by group interviews and on-the-job observations during a field-trip. The design was developed based on these insights, and refined by user feedback and participatory design. The new PPC was evaluated via field-tests and cold climate chamber tests. The insight into user needs and context of use provided useful input to the design process and contributed to tailored solutions. Providing users with clothing prototypes facilitated participatory design and iterations of design refinement. The group interviews following the final field test showed consensus of enhanced user satisfaction compared to PPC in current use. The final cold chamber test indicated that the new PPC provides sufficient thermal protection during the 60 min of simulated work in a wind-chill temperature of −25°C. Conclusion: Accomplishing a user-centred design process contributed to new PPC with enhanced user satisfaction and included relevant functional solutions. PMID:29046494

  2. A user-centred design process of new cold-protective clothing for offshore petroleum workers operating in the Barents Sea.

    PubMed

    Naesgaard, Ole Petter; Storholmen, Tore Christian Bjørsvik; Wiggen, Øystein Nordrum; Reitan, Jarl

    2017-12-07

    Petroleum operations in the Barents Sea require personal protective clothing (PPC) to ensure the safety and performance of the workers. This paper describes the accomplishment of a user-centred design process of new PPC for offshore workers operating in this area. The user-centred design process was accomplished by mixed-methods. Insights into user needs and context of use were established by group interviews and on-the-job observations during a field-trip. The design was developed based on these insights, and refined by user feedback and participatory design. The new PPC was evaluated via field-tests and cold climate chamber tests. The insight into user needs and context of use provided useful input to the design process and contributed to tailored solutions. Providing users with clothing prototypes facilitated participatory design and iterations of design refinement. The group interviews following the final field test showed consensus of enhanced user satisfaction compared to PPC in current use. The final cold chamber test indicated that the new PPC provides sufficient thermal protection during the 60 min of simulated work in a wind-chill temperature of -25°C. Accomplishing a user-centred design process contributed to new PPC with enhanced user satisfaction and included relevant functional solutions.

  3. Hybrid propulsion technology program: Phase 1. Volume 3: Thiokol Corporation Space Operations

    NASA Technical Reports Server (NTRS)

    Schuler, A. L.; Wiley, D. R.

    1989-01-01

    Three candidate hybrid propulsion (HP) concepts were identified, optimized, evaluated, and refined through an iterative process that continually forced improvement to the systems with respect to safety, reliability, cost, and performance criteria. A full scale booster meeting Advanced Solid Rocket Motor (ASRM) thrust-time constraints and a booster application for 1/4 ASRM thrust were evaluated. Trade studies and analyses were performed for each of the motor elements related to SRM technology. Based on trade study results, the optimum HP concept for both full and quarter sized systems was defined. The three candidate hybrid concepts evaluated are illustrated.

  4. An Iterative Interplanetary Scintillation (IPS) Analysis Using Time-dependent 3-D MHD Models as Kernels

    NASA Astrophysics Data System (ADS)

    Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.

    2017-12-01

    The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.

  5. Combining density functional theory (DFT) and pair distribution function (PDF) analysis to solve the structure of metastable materials: the case of metakaolin.

    PubMed

    White, Claire E; Provis, John L; Proffen, Thomas; Riley, Daniel P; van Deventer, Jannie S J

    2010-04-07

    Understanding the atomic structure of complex metastable (including glassy) materials is of great importance in research and industry, however, such materials resist solution by most standard techniques. Here, a novel technique combining thermodynamics and local structure is presented to solve the structure of the metastable aluminosilicate material metakaolin (calcined kaolinite) without the use of chemical constraints. The structure is elucidated by iterating between least-squares real-space refinement using neutron pair distribution function data, and geometry optimisation using density functional modelling. The resulting structural representation is both energetically feasible and in excellent agreement with experimental data. This accurate structural representation of metakaolin provides new insight into the local environment of the aluminium atoms, with evidence of the existence of tri-coordinated aluminium. By the availability of this detailed chemically feasible atomic description, without the need to artificially impose constraints during the refinement process, there exists the opportunity to tailor chemical and mechanical processes involving metakaolin and other complex metastable materials at the atomic level to obtain optimal performance at the macro-scale.

  6. Cognitive Processing Therapy for Spanish-speaking Latinos: A Formative Study of a Model-Driven Cultural Adaptation of the Manual to Enhance Implementation in a Usual Care Setting.

    PubMed

    Valentine, Sarah E; Borba, Christina P C; Dixon, Louise; Vaewsorn, Adin S; Guajardo, Julia Gallegos; Resick, Patricia A; Wiltsey Stirman, Shannon; Marques, Luana

    2017-03-01

    As part of a larger implementation trial for cognitive processing therapy (CPT) for posttraumatic stress disorder (PTSD) in a community health center, we used formative evaluation to assess relations between iterative cultural adaption (for Spanish-speaking clients) and implementation outcomes (appropriateness and acceptability) for CPT. Qualitative data for the current study were gathered through multiple sources (providers: N = 6; clients: N = 22), including CPT therapy sessions, provider fieldnotes, weekly consultation team meetings, and researcher fieldnotes. Findings from conventional and directed content analysis of the data informed refinements to the CPT manual. Data-driven refinements included adaptations related to cultural context (i.e., language, regional variation in wording), urban context (e.g., crime/violence), and literacy level. Qualitative findings suggest improved appropriateness and acceptability of CPT for Spanish-speaking clients. Our study reinforces the need for dual application of cultural adaptation and implementation science to address the PTSD treatment needs of Spanish-speaking clients. © 2016 Wiley Periodicals, Inc.

  7. Cognitive Processing Therapy for Spanish-speaking Latinos: A formative study of a model-driven cultural adaptation of the manual to enhance implementation in a usual care setting

    PubMed Central

    Valentine, Sarah E.; Borba, Christina P. C.; Dixon, Louise; Vaewsorn, Adin S.; Guajardo, Julia Gallegos; Resick, Patricia A.; Wiltsey-Stirman, Shannon; Marques, Luana

    2016-01-01

    Objective As part of a larger implementation trial for Cognitive Processing Therapy (CPT) for posttraumatic stress disorder (PTSD) in a community health center, we used formative evaluation to assess relations between iterative cultural adaption (for Spanish-speaking clients) and implementation outcomes (appropriateness & acceptability) for CPT. Method Qualitative data for the current study were gathered through multiple sources (providers: N=6; clients: N=22), including CPT therapy sessions, provider field notes, weekly consultation team meetings, and researcher field notes. Findings from conventional and directed content analysis of the data informed refinements to the CPT manual. Results Data-driven refinements included adaptations related to cultural context (i.e., language, regional variation in wording), urban context (e.g., crime/violence), and literacy level. Qualitative findings suggest improved appropriateness and acceptability of CPT for Spanish-speaking clients. Conclusion Our study reinforces the need for dual application of cultural adaptation and implementation science to address the PTSD treatment needs of Spanish-speaking clients. PMID:27378013

  8. Translation position determination in ptychographic coherent diffraction imaging.

    PubMed

    Zhang, Fucai; Peterson, Isaac; Vila-Comamala, Joan; Diaz, Ana; Berenguer, Felisa; Bean, Richard; Chen, Bo; Menzel, Andreas; Robinson, Ian K; Rodenburg, John M

    2013-06-03

    Accurate knowledge of translation positions is essential in ptychography to achieve a good image quality and the diffraction limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method improves both the quality of the retrieved object image and relaxes the position accuracy requirement while acquiring the diffraction patterns.

  9. A refined methodology for modeling volume quantification performance in CT

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the nonlinearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  10. Hydraulic refinement of an intraarterial microaxial blood pump.

    PubMed

    Siess, T; Reul, H; Rau, G

    1995-05-01

    Intravascularly operating microaxial pumps have been introduced clinically, proving to be useful tools for cardiac assist. However, a number of complications have been reported in the literature associated with the extra-corporeal motor and the flexible drive shaft cable. In this paper, a new pump concept is presented which has been mechanically and hydraulically refined during the development process. The drive shaft cable has been replaced by a proximally integrated micro electric motor and an extra-corporeal power supply. The conduit between pump and power supply consists of only an electrical power cable within the catheter resulting in a device which is indifferent to kinking and small curvature radii. Anticipated insertion difficulties, as a result of a large outer pump diameter, led to a two-step approach with an initial 6.4 mm pump version and a secondary 5.4 mm version. Both pumps meet the hydraulic requirement of at least 2.5 l/min at a differential pressure of 80-100 mmHg. The hydraulic refinements necessary to achieve the anticipated goal are based on ongoing hydrodynamic studies of the flow inside the pumps. Flow visualization on a 10:1 scale model as well as on 1:1 scale pumps has yielded significant improvements in the overall hydraulic performance of the pumps. One example of this iterative development process by means of geometrical changes on the basis of flow visualization is illustrated for the 6.4 mm pump.

  11. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
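
    The essence of the back projection process is that residuals are formed in 3D object space rather than on the image plane. The toy refinement below (my own sketch, not the authors' implementation) casts rays through the observed pixels, intersects them with the planar target, and lets a non-linear least-squares step recover the focal length and in-plane translation; rotation is omitted and the standoff distance is treated as known so that this fronto-parallel toy problem stays well-posed.

```python
import numpy as np
from scipy.optimize import least_squares

# Planar target corners in the target frame (z = 0), arbitrary units.
gx, gy = np.meshgrid(np.arange(6), np.arange(5))
X_obj = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)]).astype(float)

f_true = 800.0
t = np.array([-2.5, -2.0, 10.0])   # camera translation; rotation omitted (R = I)
tz = t[2]                          # standoff distance, assumed known

def project(f, tvec):
    Xc = X_obj + tvec
    return f * Xc[:, :2] / Xc[:, 2:3]          # pinhole projection to pixels

uv = project(f_true, t)                        # synthetic, noise-free observations

def backproject(f, tx, ty):
    # Cast a ray through each pixel and intersect it with the target plane z = 0.
    o = np.array([-tx, -ty, -tz])              # camera centre in the target frame
    d = np.column_stack([uv / f, np.ones(len(uv))])
    s = -o[2] / d[:, 2]
    return o + s[:, None] * d

def residuals(p):
    f, tx, ty = p
    return (backproject(f, tx, ty) - X_obj).ravel()   # 3D back-projection error

sol = least_squares(residuals, x0=[600.0, 0.0, 0.0])  # deliberately poor initial guess
print("refined focal length and in-plane translation:", sol.x)
```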

  12. Systematic and Iterative Development of a Smartphone App to Promote Sun-Protection Among Holidaymakers: Design of a Prototype and Results of Usability and Acceptability Testing

    PubMed Central

    Sniehotta, Falko F; Birch-Machin, Mark A; Olivier, Patrick; Araújo-Soares, Vera

    2017-01-01

    Background Sunburn and intermittent exposure to ultraviolet rays are risk factors for melanoma. Sunburn is a common experience during holidays, making tourism settings of particular interest for skin cancer prevention. Holidaymakers are a volatile population found at different locations, which may make them difficult to reach. Given the widespread use of smartphones, evidence suggests that this might be a novel, convenient, scalable, and feasible way of reaching the target population. Objective The main objective of this study was to describe and appraise the process of systematically developing a smartphone intervention (mISkin app) to promote sun-protection during holidays. Methods The iterative development process of the mISkin app was conducted over four sequential stages: (1) identify evidence on the most effective behavior change techniques (BCTs) used (active ingredients) as well as theoretical predictors and theories, (2) evidence-based intervention design, (3) co-design with users of the mISkin app prototype, and (4) refinement of the app. Each stage provided key findings that were subsequently used to inform the design of the mISkin app. Results The sequential approach to development integrates different strands of evidence to inform the design of an evidence-based intervention. A systematic review on previously tested interventions to promote sun-protection provided cues and constraints for the design of this intervention. The development and design of the mISkin app also incorporated other sources of information, such as other literature reviews and experts’ consultations. The developed prototype of the mISkin app was evaluated by engaging potential holidaymakers in the refinement and further development of the mISkin app through usability (ease-of-use) and acceptability testing of the intervention prototype. All 17 participants were satisfied with the mISkin prototype and expressed willingness to use it. Feedback on the app was integrated into the optimization process of the mISkin app. Conclusions The mISkin app was designed to promote sun-protection among holidaymakers and was based on current evidence, experts’ knowledge and experience, and user involvement. Based on user feedback, the app has been refined and a fully functional version is ready for formal testing in a feasibility pilot study. PMID:28606892

  13. Fostering learners' interaction with content: A learner-centered mobile device interface

    NASA Astrophysics Data System (ADS)

    Abdous, M.

    2015-12-01

    With the ever-increasing omnipresence of mobile devices in student life, leveraging smart devices to foster students' interaction with course content is critical. Following a learner-centered design iterative approach, we designed a mobile interface that may enable learners to access and interact with online course content efficiently and intuitively. Our design process leveraged recent technologies, such as bootstrap, Google's Material Design, HTML5, and JavaScript to design an intuitive, efficient, and portable mobile interface with a variety of built-in features, including context sensitive bookmarking, searching, progress tracking, captioning, and transcript display. The mobile interface also offers students the ability to ask context-related questions and to complete self-checks as they watch audio/video presentations. Our design process involved ongoing iterative feedback from learners, allowing us to refine and tweak the interface to provide learners with a unified experience across platforms and devices. The innovative combination of technologies built around well-structured and well-designed content seems to provide an effective learning experience to mobile learners. Early feedback indicates a high level of satisfaction with the interface's efficiency, intuitiveness, and robustness from both students and faculty.

  14. System requirements for a computerised patient record information system at a busy primary health care clinic.

    PubMed

    Blignaut, P J; McDonald, T; Tolmie, C J

    2001-05-01

    A prototyping approach was used to determine the essential system requirements of a computerised patient record information system for a typical township primary health care clinic. A pilot clinic was identified and the existing manual system and business processes in this clinic were studied intensively before the first prototype was implemented. Interviews with users, incidental observations and analysis of actual data entered were used as primary techniques to refine the prototype system iteratively until a system with an acceptable data set and adequate functionalities was in place. Several non-functional and user-related requirements were also discovered during the prototyping period.

  15. Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives

    PubMed Central

    Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although only to a limited extent. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be extracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. Then the corresponding optimization problem and an efficient genetic algorithm with a refined crossover operator (GA-RX) are proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios. PMID:25140328
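
    A compact way to see the architecting loop is a plain genetic algorithm over a toy assignment problem. The sketch below uses an ordinary one-point crossover rather than the paper's refined crossover operator (GA-RX), and the cost table, population size, and mutation rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy architecture problem: assign one of 3 UAV types to each of 8 tasks so that a
# (made-up) cost table is minimised; this stands in for the paper's system model.
cost = rng.uniform(1, 10, size=(8, 3))

def fitness(chromo):                         # lower total cost = fitter
    return -cost[np.arange(8), chromo].sum()

pop = rng.integers(0, 3, size=(30, 8))
for gen in range(100):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)[::-1][:15]]          # truncation selection
    kids = []
    while len(kids) < len(pop):
        a, b = parents[rng.integers(0, 15, 2)]
        cut = rng.integers(1, 8)                       # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        m = rng.random(8) < 0.05                       # mutation
        child[m] = rng.integers(0, 3, m.sum())
        kids.append(child)
    pop = np.array(kids)
best = max(pop, key=fitness)
print("best assignment:", best, " cost:", float(cost[np.arange(8), best].sum()))
```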

  16. In the wake of suicide: Developing guidelines for suicide postvention in fire service

    PubMed Central

    Gulliver, Suzy Bird; Pennington, Michelle L.; Leto, Frank; Cammarata, Claire; Ostiguy, William; Zavodny, Cynthia; Flynn, Elisa J.; Kimbrel, Nathan A.

    2016-01-01

    ABSTRACT This project aimed to develop a standard operating procedure (SOP) for suicide postvention in Fire Service. First, an existing SOP was refined through expert review. Next, focus groups were conducted with fire departments lacking a peer suicide postvention SOP; feedback obtained guided revisions. The current article describes the iterative process used to evaluate and revise a Suicide Postvention SOP into a Postvention guideline that is available for implementation and evaluation. Postventions assist survivors in grief and bereavement and attempt to prevent additional negative outcomes. The implementation of suicide postvention guidelines will increase behavioral wellness within Fire Service. PMID:26332212

  17. CGHnormaliter: an iterative strategy to enhance normalization of array CGH data with imbalanced aberrations

    PubMed Central

    van Houte, Bart PP; Binsl, Thomas W; Hettling, Hannes; Pirovano, Walter; Heringa, Jaap

    2009-01-01

    Background Array comparative genomic hybridization (aCGH) is a popular technique for detection of genomic copy number imbalances. These play a critical role in the onset of various types of cancer. In the analysis of aCGH data, normalization is deemed a critical pre-processing step. In general, aCGH normalization approaches are similar to those used for gene expression data, albeit both data-types differ inherently. A particular problem with aCGH data is that imbalanced copy numbers lead to improper normalization using conventional methods. Results In this study we present a novel method, called CGHnormaliter, which addresses this issue by means of an iterative normalization procedure. First, provisory balanced copy numbers are identified and subsequently used for normalization. These two steps are then iterated to refine the normalization. We tested our method on three well-studied tumor-related aCGH datasets with experimentally confirmed copy numbers. Results were compared to a conventional normalization approach and two more recent state-of-the-art aCGH normalization strategies. Our findings show that, compared to these three methods, CGHnormaliter yields a higher specificity and precision in terms of identifying the 'true' copy numbers. Conclusion We demonstrate that the normalization of aCGH data can be significantly enhanced using an iterative procedure that effectively eliminates the effect of imbalanced copy numbers. This also leads to a more reliable assessment of aberrations. An R-package containing the implementation of CGHnormaliter is available at . PMID:19709427
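
    The key idea, identifying provisionally balanced clones and normalizing against them repeatedly, can be shown on synthetic log2 ratios. The sketch below is a rough stand-in for CGHnormaliter (which uses a proper calling step and LOWESS normalization); the histogram-mode heuristic and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic log2 ratios: 70% balanced clones (true value 0), 30% gained (~0.58),
# plus a global acquisition offset of 0.25 that normalization should remove.
true = np.concatenate([np.zeros(700), np.full(300, 0.58)])
log2r = true + 0.25 + rng.normal(0, 0.1, true.size)

total_shift = 0.0
for it in range(10):
    # Crude stand-in for the calling step: take the densest histogram bin as the
    # centre of the balanced clones, then normalize against those clones only.
    hist, edges = np.histogram(log2r, bins=100)
    mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    balanced = np.abs(log2r - mode) < 0.15
    shift = np.median(log2r[balanced])
    log2r -= shift
    total_shift += shift
    if abs(shift) < 1e-4:                    # iterate until the correction vanishes
        break
print("estimated offset:", round(float(total_shift), 3), "(true offset 0.25)")
```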

  18. An interactive medical image segmentation framework using iterative refinement.

    PubMed

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images for identifying diseases in clinical evaluation. Hence it has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory segmentation results for medical images as they contain irregularities. They need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction, which can be done using the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
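
    A rough two-stage sketch of this idea, using OpenCV and a synthetic image in place of medical data: a morphological marker generated automatically in stage one initializes GrabCut in stage two. It only illustrates the pipeline, not the authors' MIST tool, and the interactive GUI refinement step is omitted.

```python
import numpy as np
import cv2

# Synthetic test image: a bright, slightly noisy blob on a dark background.
img = np.zeros((256, 256, 3), np.uint8)
cv2.circle(img, (130, 120), 60, (180, 180, 180), -1)
img = cv2.add(img, np.random.default_rng(0).integers(0, 40, img.shape, dtype=np.uint8))

# Stage 1: automatic marker via Otsu threshold + morphological opening.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, marker = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
marker = cv2.morphologyEx(marker, cv2.MORPH_OPEN, np.ones((7, 7), np.uint8))

# Stage 2: GrabCut initialised with the marker as a mask.
mask = np.full(gray.shape, cv2.GC_PR_BGD, np.uint8)
mask[marker > 0] = cv2.GC_PR_FGD
mask[cv2.erode(marker, np.ones((15, 15), np.uint8)) > 0] = cv2.GC_FGD
mask[:5, :] = cv2.GC_BGD            # image border is definitely background here
mask[-5:, :] = cv2.GC_BGD
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
segmented = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
print("segmented pixels:", int(segmented.sum()))
```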

  19. The Development of a Communication Tool to Facilitate the Cancer Trial Recruitment Process and Increase Research Literacy among Underrepresented Populations.

    PubMed

    Torres, Samantha; de la Riva, Erika E; Tom, Laura S; Clayman, Marla L; Taylor, Chirisse; Dong, Xinqi; Simon, Melissa A

    2015-12-01

    Despite increasing need to boost the recruitment of underrepresented populations into cancer trials and biobanking research, few tools exist for facilitating dialogue between researchers and potential research participants during the recruitment process. In this paper, we describe the initial processes of a user-centered design cycle to develop a standardized research communication tool prototype for enhancing research literacy among individuals from underrepresented populations considering enrollment in cancer research and biobanking studies. We present qualitative feedback and recommendations on the prototype's design and content from potential end users: five clinical trial recruiters and ten potential research participants recruited from an academic medical center. Participants were given the prototype (a set of laminated cards) and were asked to provide feedback about the tool's content, design elements, and word choices during semi-structured, in-person interviews. Results suggest that the prototype was well received by recruiters and patients alike. They favored the simplicity, lay language, and layout of the cards. They also noted areas for improvement, leading to card refinements that included the following: addressing additional topic areas, clarifying research processes, increasing the number of diverse images, and using alternative word choices. Our process for refining user interfaces and iterating content in early phases of design may inform future efforts to develop tools for use in clinical research or biobanking studies to increase research literacy.

  20. Overview of the negative ion based neutral beam injectors for ITER.

    PubMed

    Schunke, B; Boilson, D; Chareyre, J; Choi, C-H; Decamps, H; El-Ouazzani, A; Geli, F; Graceffa, J; Hemsworth, R; Kushwah, M; Roux, K; Shah, D; Singh, M; Svensson, L; Urbani, M

    2016-02-01

    The ITER baseline foresees 2 Heating Neutral Beams (HNBs) based on 1 MeV 40 A D(-) negative ion accelerators, each capable of delivering 16.7 MW of deuterium atoms to the DT plasma, with an optional 3rd HNB injector foreseen as a possible upgrade. In addition, a dedicated diagnostic neutral beam will be injecting ≈22 A of H(0) at 100 keV as the probe beam for charge exchange recombination spectroscopy. The integration of the injectors into the ITER plant is nearly finished, necessitating only refinements. A large number of components have passed the final design stage, manufacturing has started, and the essential test beds, for the prototype route chosen, will soon be ready to start.

  1. Simulation of Fusion Plasmas

    ScienceCinema

    Holland, Chris [UC San Diego, San Diego, California, United States

    2017-12-09

    The upcoming ITER experiment (www.iter.org) represents the next major milestone in realizing the promise of using nuclear fusion as a commercial energy source, by moving into the “burning plasma” regime where the dominant heat source is the internal fusion reactions. As part of its support for the ITER mission, the US fusion community is actively developing validated predictive models of the behavior of magnetically confined plasmas. In this talk, I will describe how the plasma community is using the latest high performance computing facilities to develop and refine our models of the nonlinear, multiscale plasma dynamics, and how recent advances in experimental diagnostics are allowing us to directly test and validate these models at an unprecedented level.

  2. Beryllium fabrication/cost assessment for ITER (International Thermonuclear Experimental Reactor)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeston, J.M.; Longhurst, G.R.; Parsonage, T.

    1990-06-01

    A fabrication and cost estimate of three possible beryllium shapes for the International Thermonuclear Experimental Reactor (ITER) blanket is presented. The fabrication methods of hot pressing (HP), cold isostatic pressing plus sintering (CIP+S), cold isostatic pressing plus sintering plus hot isostatic pressing (CIP+S+HIP), and sphere production by atomization or rotary electrode will be discussed. Conventional hot pressing of beryllium blocks with subsequent machining to finished shapes can be more expensive than production of a net shape by cold isostatic pressing and sintering. The three beryllium shapes considered here and proposed for ITER are: (1) cubic blocks (3 to 17 cm on an edge), (2) tubular cylinders (33 to 50 mm i.d. by 62 mm o.d. by 8 m long), and (3) spheres (1-5 mm dia.). A rough cost estimate of the basic shape is presented, which would need to be refined if the required surface finish and tolerances are better than the sintering process produces. The final cost of the beryllium in the blanket will depend largely on the machining and recycling of beryllium required to produce the finished product. The powder preparation will be discussed before shape fabrication. 10 refs., 6 figs.

  3. Applying the scientific method to small catchment studies: A review of the Panola Mountain experience

    USGS Publications Warehouse

    Hooper, R.P.

    2001-01-01

    A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons. Ltd.

  4. Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery.

    PubMed

    Penza, Veronica; Ortiz, Jesús; Mattos, Leonardo S; Forgione, Antonello; De Momi, Elena

    2016-02-01

    Single-incision laparoscopic surgery decreases postoperative infections, but introduces limitations in the surgeon's maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting the 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process involves the intra-operative acquisition of tissue surfaces, which can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allows more accurate deformation identification and facilitates the registration process. Two methods for soft tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm. Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images. The methods were validated using two video datasets from the Hamlyn Centre, achieving an accuracy of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated that the disparity refinement procedure (1) increases the number of reconstructed points by up to 43% and (2) does not significantly affect the accuracy of the 3D reconstructions. Both methods give results that compare favorably with state-of-the-art methods. The computational time constrains their real-time applicability, but it can be greatly improved by using a GPU implementation.
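
    A minimal sketch of the super-pixel disparity refinement idea described above, assuming NumPy and scikit-image are available; the function name and parameter choices are illustrative, not the authors' implementation:

    ```python
    import numpy as np
    from skimage.segmentation import slic

    def refine_disparity_with_superpixels(left_rgb, disparity, n_segments=400):
        """Fill holes in a disparity map using SLIC super-pixels.

        Pixels with disparity <= 0 are treated as holes and replaced by the median
        of the valid disparities inside the same super-pixel.
        """
        labels = slic(left_rgb, n_segments=n_segments, compactness=10, start_label=0)
        refined = disparity.astype(float).copy()
        for label in np.unique(labels):
            region = labels == label
            valid = region & (disparity > 0)
            if valid.any():
                refined[region & (disparity <= 0)] = np.median(disparity[valid])
        return refined
    ```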

  5. Development and psychometric evaluation of the Primary Health Care Engagement (PHCE) Scale: a pilot survey of rural and remote nurses.

    PubMed

    Kosteniuk, Julie G; Wilson, Erin C; Penz, Kelly L; MacLeod, Martha L P; Stewart, Norma J; Kulig, Judith C; Karunanayake, Chandima P; Kilpatrick, Kelley

    2016-01-01

    To report the development and psychometric evaluation of a scale to measure rural and remote (rural/remote) nurses' perceptions of the engagement of their workplaces in key dimensions of primary health care (PHC). Amidst ongoing PHC reforms, a comprehensive instrument is needed to evaluate the degree to which rural/remote health care settings are involved in the key dimensions that characterize PHC delivery, particularly from the perspective of professionals delivering care. This study followed a three-phase process of instrument development and psychometric evaluation. A literature review and expert consultation informed instrument development in the first phase, followed by an iterative process of content evaluation in the second phase. In the final phase, a pilot survey was undertaken and item discrimination analysis employed to evaluate the internal consistency reliability of each subscale in the preliminary 60-item Primary Health Care Engagement (PHCE) Scale. The 60-item scale was subsequently refined to a 40-item instrument. The pilot survey sample included 89 nurses in current practice who had experience in rural/remote practice settings. Participants completed either a web-based or paper survey from September to December, 2013. Following item discrimination analysis, the 60-item instrument was refined to a 40-item PHCE Scale consisting of 10 subscales, each including three to five items. Alpha estimates of the 10 refined subscales ranged from 0.61 to 0.83, with seven of the subscales demonstrating acceptable reliability (α ⩾ 0.70). The refined 40-item instrument exhibited good internal consistency reliability (α=0.91). The 40-item PHCE Scale may be considered for use in future studies regardless of locale, to measure the extent to which health care professionals perceive their workplaces to be engaged in key dimensions of PHC.
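
    For context, the internal-consistency values quoted above are Cronbach's alpha statistics; a minimal sketch of the computation for a single subscale (toy data, not the study's analysis code):

    ```python
    import numpy as np

    def cronbach_alpha(item_scores):
        """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
        items = np.asarray(item_scores, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Example: 5 respondents rating a 3-item subscale on a 5-point scale.
    subscale = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
    print(round(cronbach_alpha(subscale), 2))   # ~0.92 for this toy data
    ```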

  6. What physicians reason about during admission case review.

    PubMed

    Juma, Salina; Goldszmidt, Mark

    2017-08-01

    Research suggests that physicians perform multiple reasoning tasks beyond diagnosis during patient review. However, these remain largely theoretical. The purpose of this study was to explore reasoning tasks in clinical practice during patient admission review. The authors used a constant comparative approach, an iterative and inductive process of coding and recoding, to analyze transcripts from 38 audio-recorded case reviews between junior trainees and their senior residents or attendings. Using a previous list of reasoning tasks, analysis focused on what tasks were performed, when they occurred, and how they related to the other tasks. All 24 tasks were observed in at least one review, with a mean of 17.9 (Min = 15, Max = 22) distinct tasks per review. Two new tasks (assess illness severity and patient decision-making capacity) were identified, so 26 tasks were examined in total. Three overarching tasks were identified (assess priorities, determine and refine the most likely diagnosis, and establish and refine management plans) that occurred throughout all stages of the case review, starting from patient identification and continuing through to assessment and plan. A fourth possible overarching task, reflection, was also identified but was observed in only four instances across three cases. The other 22 tasks appeared to be context dependent, serving to support, expand, and refine one or more overarching tasks. Tasks were non-sequential, and the same supporting task could serve more than one overarching task. The authors conclude that these findings provide insight into the 'what' and 'when' of physician reasoning during case review that can be used to support professional development, clinical training and patient care. In particular, they draw attention to the iterative way in which each task is addressed during a case review and how this finding may challenge conventional ways of teaching and assessing clinical communication and reasoning. They also suggest that further research is needed to explore how physicians decide why a supporting task is required in a particular context.

  7. The Healthcare Complaints Analysis Tool: development and reliability testing of a method for service monitoring and organisational learning

    PubMed Central

    Gillespie, Alex; Reader, Tom W

    2016-01-01

    Background Letters of complaint written by patients and their advocates reporting poor healthcare experiences represent an under-used data source. The lack of a method for extracting reliable data from these heterogeneous letters hinders their use for monitoring and learning. To address this gap, we report on the development and reliability testing of the Healthcare Complaints Analysis Tool (HCAT). Methods HCAT was developed from a taxonomy of healthcare complaints reported in a previously published systematic review. It introduces the novel idea that complaints should be analysed in terms of severity. Recruiting three groups of educated lay participants (n=58, n=58, n=55), we refined the taxonomy through three iterations of discriminant content validity testing. We then supplemented this refined taxonomy with explicit coding procedures for seven problem categories (each with four levels of severity), stage of care and harm. These combined elements were further refined through iterative coding of a UK national sample of healthcare complaints (n= 25, n=80, n=137, n=839). To assess reliability and accuracy for the resultant tool, 14 educated lay participants coded a referent sample of 125 healthcare complaints. Results The seven HCAT problem categories (quality, safety, environment, institutional processes, listening, communication, and respect and patient rights) were found to be conceptually distinct. On average, raters identified 1.94 problems (SD=0.26) per complaint letter. Coders exhibited substantial reliability in identifying problems at four levels of severity; moderate and substantial reliability in identifying stages of care (except for ‘discharge/transfer’ that was only fairly reliable) and substantial reliability in identifying overall harm. Conclusions HCAT is not only the first reliable tool for coding complaints, it is the first tool to measure the severity of complaints. It facilitates service monitoring and organisational learning and it enables future research examining whether healthcare complaints are a leading indicator of poor service outcomes. HCAT is freely available to download and use. PMID:26740496
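
    The 'moderate' and 'substantial' reliability descriptors above follow conventional benchmarks for chance-corrected agreement statistics such as Cohen's kappa; a small sketch of a pairwise check between two coders (hypothetical ratings, assuming scikit-learn; not the authors' analysis):

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical severity ratings (0-3) from two coders for ten complaint letters.
    coder_a = [0, 2, 1, 3, 2, 0, 1, 1, 2, 3]
    coder_b = [0, 2, 1, 2, 2, 0, 1, 2, 2, 3]

    kappa = cohen_kappa_score(coder_a, coder_b)
    print(f"Cohen's kappa: {kappa:.2f}")   # ~0.73 here; 0.61-0.80 is conventionally read as substantial
    ```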

  8. Simplifying healthful choices: a qualitative study of a physical activity based nutrition label format

    PubMed Central

    2013-01-01

    Background This study used focus groups to pilot and evaluate a new nutrition label format and refine the label design. Physical activity equivalent labels present calorie information in terms of the amount of physical activity that would be required to expend the calories in a specified food item. Methods Three focus groups with a total of twenty participants discussed food choices and nutrition labeling. They provided information on comprehension, usability and acceptability of the label. A systematic coding process was used to apply descriptive codes to the data and to identify emerging themes and attitudes. Results Participants in all three groups were able to comprehend the label format. Discussion about label format focused on issues including gender of the depicted figure, physical fitness of the figure, preference for walking or running labels, and preference for information in miles or minutes. Feedback from earlier focus groups was used to refine the labels in an iterative process. Conclusions In contrast to calorie labels, participants shown physical activity labels asked and answered, “How does this label apply to me?” This shift toward personalized understanding may indicate that physical activity labels offer an advantage over currently available nutrition labels. PMID:23742678

  9. The Interactive Child Distress Screener: Development and Preliminary Feasibility Testing

    PubMed Central

    2018-01-01

    Background Early identification of child emotional and behavioral concerns is essential for the prevention of mental health problems; however, few suitable child-reported screening measures are available. Digital tools offer an exciting opportunity for obtaining clinical information from the child’s perspective. Objective The aim of this study was to describe the initial development and pilot testing of the Interactive Child Distress Screener (ICDS). The ICDS is a Web-based screening instrument for the early identification of emotional and behavioral problems in children aged between 5 and 12 years. Methods This paper utilized a mixed-methods approach to (1) develop and refine item content using an expert review process (study 1) and (2) develop and refine prototype animations and an app interface using codesign with child users (study 2). Study 1 involved an iterative process that comprised the following four steps: (1) the initial development of target constructs, (2) preliminary content validation (face validity, item importance, and suitability for animation) from an expert panel of researchers and psychologists (N=9), (3) item refinement, and (4) a follow-up validation with the same expert panel. Study 2 also comprised four steps, which are as follows: (1) the development of prototype animations, (2) the development of the app interface and a response format, (3) child interviews to determine feasibility and obtain feedback, and (4) refinement of animations and interface. Cognitive interviews were conducted with 18 children aged between 4 and 12 years who tested 3 prototype animated items. Children were asked to describe the target behavior, how well the animations captured the intended behavior, and provide suggestions for improvement. Their ability to understand the wording of instructions was also assessed, as well as the general acceptability of character and sound design. Results In study 1, a revised list of 15 constructs was generated from the first and second round of expert feedback. These were rated highly in terms of importance (mean 6.32, SD 0.42) and perceived compatibility of items (mean 6.41, SD 0.45) on a 7-point scale. In study 2, overall feedback regarding the character design and sounds was positive. Children’s ability to understand intended behaviors varied according to target items, and feedback highlighted key objectives for improvements such as adding contextual cues or improving character detail. These design changes were incorporated through an iterative process, with examples presented. Conclusions The ICDS has potential to obtain clinical information from the child’s perspective that may otherwise be overlooked. If effective, the ICDS will provide a quick, engaging, and easy-to-use screener that can be utilized in routine care settings. This project highlights the importance of involving an expert review and user codesign in the development of digital assessment tools for children. PMID:29674310

  10. The integration of palaeogeography and tectonics in refining plate tectonic models: an example from SE Asia

    NASA Astrophysics Data System (ADS)

    Masterton, S. M.; Markwick, P.; Bailiff, R.; Campanile, D.; Edgecombe, E.; Eue, D.; Galsworthy, A.; Wilson, K.

    2012-04-01

    Our understanding of lithospheric evolution and global plate motions throughout the Earth's history is based largely upon detailed knowledge of plate boundary structures, inferences about tectonic regimes, ocean isochrons and palaeomagnetic data. Most currently available plate models are either regionally restricted or do not consider palaeogeographies in their construction. Here, we present an integrated methodology in which derived hypotheses have been further refined using global and regional palaeogeographic, palaeotopological and palaeobathymetric maps. Iteration between our self-consistent and structurally constrained global plate model and palaeogeographic interpretations which are built on these reconstructions, allows for greater testing and refinement of results. Our initial structural and tectonic interpretations are based largely on analysis of our extensive global database of gravity and magnetic potential field data, and are further constrained by seismic, SRTM and Landsat data. This has been used as the basis for detailed interpretations that have allowed us to compile a new global map and database of structures, crustal types, plate boundaries and basin definitions. Our structural database is used in the identification of major tectonic terranes and their relative motions, from which we have developed our global plate model. It is subject to an ongoing process of regional evaluation and revisions in an effort to incorporate and reflect new tectonic and geologic interpretations. A major element of this programme is the extension of our existing plate model (GETECH Global Plate Model V1) back to the Neoproterozic. Our plate model forms the critical framework upon which palaeogeographic and palaeotopographic reconstructions have been made for every time stage in the Cretaceous and Cenozoic. Generating palaeogeographies involves integration of a variety of data, such as regional geology, palaeoclimate analyses, lithology, sea-level estimates, thermo-mechanical events and regional tectonics. These data are interpreted to constrain depositional systems and tectonophysiographic terranes. Palaeotopography and palaeobathymetry are derived from these tectonophysiographic terranes and depositional systems, and are further constrained using geological relationships, thermochronometric data, palaeoaltimetry indicators and modern analogues. Throughout this process, our plate model is iteratively tested against our palaeogeographies and their environmental consequences. Both the plate model and the palaeogeographies are refined until we have obtained a consistent and scientifically robust result. In this presentation we show an example from Southeast Asia, where the plate model complexity and wide variation in hypotheses has huge implications for the palaeogeographic interpretation, which can then be tested using geological observations from well and seismic data. For example, the Khorat Plateau Basin, Northeastern Thailand, comprises a succession of fluvial clastics during the Cretaceous, which include the evaporites of the Maha Sarakham Formation. These have been variously interpreted as indicative of saline lake or marine incursion depositional environments. We show how the feasibility of these different hypotheses is dependent on the regional palaeogeography (whether a marine link is possible), which in turn depends on the underlying plate model. We show two models with widely different environmental consequences. 
A more robust model that takes into account all these consequences, as well as data, can be defined by iterating through the consequences of the plate model and geological observations.

  11. Recovery of chemical Estimates by Field Inhomogeneity Neighborhood Error Detection (REFINED): Fat/Water Separation at 7T

    PubMed Central

    Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.

    2012-01-01

    Purpose To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
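
    A simplified sketch of the field-map error-detection idea described above: intensity-based k-means clustering used to flag an implausible B0 region for reinitialization. It assumes NumPy and scikit-learn and is not the FLAWLESS/REFINED code itself:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def flag_b0_error_regions(b0_map, n_clusters=3, outlier_fraction=0.1):
        """Cluster B0 field-map intensities and flag the cluster farthest from the
        global median as a candidate error region to be reinitialized."""
        values = b0_map.reshape(-1, 1)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(values)
        labels = labels.reshape(b0_map.shape)
        global_median = np.median(b0_map)
        # Distance of each cluster's median field value from the global median.
        distances = [abs(np.median(b0_map[labels == c]) - global_median)
                     for c in range(n_clusters)]
        suspect = int(np.argmax(distances))
        mask = labels == suspect
        # Only treat the cluster as an error region if it covers a small fraction of the image.
        return mask if mask.mean() < outlier_fraction else np.zeros_like(mask, dtype=bool)
    ```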

  12. Recovery of chemical estimates by field inhomogeneity neighborhood error detection (REFINED): fat/water separation at 7 tesla.

    PubMed

    Narayan, Sreenath; Kalhan, Satish C; Wilson, David L

    2013-05-01

    To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.

  13. Overview of the negative ion based neutral beam injectors for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunke, B., E-mail: email@none.edu; Boilson, D.; Chareyre, J.

    2016-02-15

    The ITER baseline foresees 2 Heating Neutral Beams (HNBs) based on 1 MeV, 40 A D(-) negative ion accelerators, each capable of delivering 16.7 MW of deuterium atoms to the DT plasma, with an optional 3rd HNB injector foreseen as a possible upgrade. In addition, a dedicated diagnostic neutral beam will inject ≈22 A of H(0) at 100 keV as the probe beam for charge exchange recombination spectroscopy. The integration of the injectors into the ITER plant is nearly finished, necessitating only refinements. A large number of components have passed the final design stage, manufacturing has started, and the essential test beds for the chosen prototype route will soon be ready to start.

  14. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.

    PubMed

    Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris

    2017-09-18

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
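
    A stripped-down 2D illustration of the joint reconstruction-reprojection alignment loop described above, using scikit-image's Radon transform and phase correlation; the published method operates jointly in 3D, so this is only a conceptual sketch with illustrative names:

    ```python
    import numpy as np
    from skimage.transform import radon, iradon
    from skimage.registration import phase_cross_correlation

    def align_sinogram(sinogram, angles, n_iters=10):
        """Iteratively estimate per-projection detector shifts by comparing measured
        projections with reprojections of the current reconstruction.

        sinogram : (n_detector, n_angles) array of measured projections
        angles   : projection angles in degrees
        """
        shifts = np.zeros(len(angles))
        corrected = sinogram.copy()
        for _ in range(n_iters):
            recon = iradon(corrected, theta=angles, filter_name="ramp")
            reproj = radon(recon, theta=angles)
            for i in range(len(angles)):
                # 1D shift of each measured projection relative to its reprojection.
                shift, _, _ = phase_cross_correlation(
                    reproj[:, i][:, None], sinogram[:, i][:, None], upsample_factor=10)
                shifts[i] = shift[0]
                # Subpixel estimates are rounded here for simplicity.
                corrected[:, i] = np.roll(sinogram[:, i], int(round(shifts[i])))
        return corrected, shifts
    ```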

  15. Developing a patient-centered outcome measure for complementary and alternative medicine therapies II: Refining content validity through cognitive interviews

    PubMed Central

    2011-01-01

    Background Available measures of patient-reported outcomes for complementary and alternative medicine (CAM) inadequately capture the range of patient-reported treatment effects. The Self-Assessment of Change questionnaire was developed to measure multi-dimensional shifts in well-being for CAM users. With content derived from patient narratives, items were subsequently focused through interviews on a new cohort of participants. Here we present the development of the final version in which the content and format is refined through cognitive interviews. Methods We conducted cognitive interviews across five iterations of questionnaire refinement with a culturally diverse sample of 28 CAM users. In each iteration, participant critiques were used to revise the questionnaire, which was then re-tested in subsequent rounds of cognitive interviews. Following all five iterations, transcripts of cognitive interviews were systematically coded and analyzed to examine participants' understanding of the format and content of the final questionnaire. Based on this data, we established summary descriptions and selected exemplar quotations for each word pair on the final questionnaire. Results The final version of the Self-Assessment of Change questionnaire (SAC) includes 16 word pairs, nine of which remained unchanged from the original draft. Participants consistently said that these stable word pairs represented opposite ends of the same domain of experience and the meanings of these terms were stable across the participant pool. Five pairs underwent revision and two word pairs were added. Four word pairs were eliminated for redundancy or because participants did not agree on the meaning of the terms. Cognitive interviews indicate that participants understood the format of the questionnaire and considered each word pair to represent opposite poles of a shared domain of experience. Conclusions We have placed lay language and direct experience at the center of questionnaire revision and refinement. In so doing, we provide an innovative model for the development of truly patient-centered outcome measures. Although this instrument was designed and tested in a CAM-specific population, it may be useful in assessing multi-dimensional shifts in well-being across a broader patient population. PMID:22206409

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England

    The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
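
    For reference, the R-factor statistics quoted above compare observed and calculated structure-factor amplitudes; a minimal sketch of the definition (toy values, not PHENIX code):

    ```python
    import numpy as np

    def r_factor(f_obs, f_calc):
        """Crystallographic R-factor: sum|F_obs - F_calc| / sum F_obs."""
        f_obs = np.asarray(f_obs, dtype=float)
        f_calc = np.asarray(f_calc, dtype=float)
        return np.abs(f_obs - f_calc).sum() / f_obs.sum()

    # The free R-factor is the same statistic evaluated only on a test set of
    # reflections excluded from refinement.
    f_obs = np.array([120.0, 85.0, 60.0, 42.0])
    f_calc = np.array([110.0, 90.0, 55.0, 45.0])
    print(round(r_factor(f_obs, f_calc), 3))   # ~0.075 for these toy amplitudes
    ```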

  17. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    PubMed Central

    Wolverton, Christopher; Hattrick-Simpers, Jason; Mehta, Apurva

    2018-01-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict. PMID:29662953
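
    A schematic of the train / predict / measure / retrain loop described above; the model choice, featurization, and the experiment callback are placeholders, not the authors' pipeline:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def iterative_discovery(features, labels, candidate_features, run_experiments,
                            n_rounds=3, batch=32):
        """Alternate between training a glass-forming classifier and measuring the
        compositions it is most confident about, feeding the results back as new labels."""
        X, y = np.asarray(features, float), np.asarray(labels, int)   # binary glass / no-glass labels
        candidates = np.asarray(candidate_features, float)
        for _ in range(n_rounds):
            model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
            scores = model.predict_proba(candidates)[:, 1]     # P(glass-forming)
            picks = np.argsort(scores)[-batch:]                # most promising compositions
            new_y = run_experiments(candidates[picks])         # HiTp measurement (user-supplied callback)
            X = np.vstack([X, candidates[picks]])              # feed measured outcomes back into training
            y = np.concatenate([y, new_y])
            candidates = np.delete(candidates, picks, axis=0)
        return model
    ```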

  18. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Fang; Ward, Logan; Williams, Travis

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  19. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE PAGES

    Ren, Fang; Ward, Logan; Williams, Travis; ...

    2018-04-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  20. Kalman Filter for Calibrating a Telescope Focal Plane

    NASA Technical Reports Server (NTRS)

    Kang, Bryan; Bayard, David

    2006-01-01

    The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/ science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.
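
    For illustration, a single measurement update of a generic linear Kalman filter, the kind of joint estimation step the IPF filter applies to stacked engineering and science parameters (textbook form, not the flight algorithm):

    ```python
    import numpy as np

    def kalman_update(x, P, z, H, R):
        """One measurement update of a linear Kalman filter.

        x : state estimate (e.g., stacked pointing alignments, gyro drifts, plate scales)
        P : state covariance;  z : measurement;  H : measurement model;  R : noise covariance
        """
        y = z - H @ x                                   # innovation
        S = H @ P @ H.T + R                             # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        x_new = x + K @ y                               # updated state estimate
        P_new = (np.eye(len(x)) - K @ H) @ P            # updated covariance
        return x_new, P_new
    ```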

  1. Operations mission planner beyond the baseline

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric; Cooper, Lynne

    1991-01-01

    The scheduling of Space Station Freedom must satisfy four major requirements. It must ensure efficient housekeeping operations, maximize the collection of science, respond to changes in tasking and available resources, and accommodate those changes in a manner that minimizes disruption of the station's ongoing operations. While meeting these requirements, the scheduler must cope with the complexity, scope, and flexibility of SSF operations, which requires it to deal with an astronomical number of possible schedules. The Operations Mission Planner (OMP) is centered around minimally disruptive replanning and the use of heuristics to limit search in scheduling. OMP has already demonstrated several artificial-intelligence-based scheduling techniques, such as Interleaved Iterative Refinement and Bottleneck Identification using Process Chronologies.

  2. Deployment of e-health services - a business model engineering strategy.

    PubMed

    Kijl, Björn; Nieuwenhuis, Lambert J M; Huis in 't Veld, Rianne M H A; Hermens, Hermie J; Vollenbroek-Hutten, Miriam M R

    2010-01-01

    We designed a business model for deploying a myofeedback-based teletreatment service. An iterative and combined qualitative and quantitative action design approach was used for developing the business model and the related value network. Insights from surveys, desk research, expert interviews, workshops and quantitative modelling were combined to produce the first business model and then to refine it in three design cycles. The business model engineering strategy provided important insights which led to an improved, more viable and feasible business model and related value network design. Based on this experience, we conclude that the process of early stage business model engineering reduces risk and produces substantial savings in costs and resources related to service deployment.

  3. First Steps in Computational Systems Biology: A Practical Session in Metabolic Modeling and Simulation

    ERIC Educational Resources Information Center

    Reyes-Palomares, Armando; Sanchez-Jimenez, Francisca; Medina, Miguel Angel

    2009-01-01

    A comprehensive understanding of biological functions requires new systemic perspectives, such as those provided by systems biology. Systems biology approaches are hypothesis-driven and involve iterative rounds of model building, prediction, experimentation, model refinement, and development. Developments in computer science are allowing for ever…

  4. The Individual Basic Facts Assessment Tool

    ERIC Educational Resources Information Center

    Tait-McCutcheon, Sandi; Drake, Michael

    2015-01-01

    There is an identified and growing need for a levelled diagnostic basic facts assessment tool that provides teachers with formative information about students' mastery of a broad range of basic fact sets. The Individual Basic Facts Assessment tool has been iteratively and cumulatively developed, trialled, and refined with input from teachers and…

  5. A Design-Based Approach to Fostering Understanding of Global Climate Change

    ERIC Educational Resources Information Center

    Svihla, Vanessa; Linn, Marcia C.

    2012-01-01

    To prepare students to make informed decisions and gain coherent understanding about global climate change, we tested and refined a middle school inquiry unit that featured interactive visualizations. Based on evidence from student pre-test responses, we increased emphasis on energy transfer and transformation. The first iteration improved…

  6. Development of a Targeted Smoking Relapse-Prevention Intervention for Cancer Patients.

    PubMed

    Meltzer, Lauren R; Meade, Cathy D; Diaz, Diana B; Carrington, Monica S; Brandon, Thomas H; Jacobsen, Paul B; McCaffrey, Judith C; Haura, Eric B; Simmons, Vani N

    2018-04-01

    We describe the series of iterative steps used to develop a smoking relapse-prevention intervention customized to the needs of cancer patients. Informed by relevant literature and a series of preliminary studies, an educational tool (DVD) was developed to target the unique smoking relapse risk factors among cancer patients. Learner verification interviews were conducted with 10 cancer patients who recently quit smoking to elicit feedback and inform the development of the DVD. The DVD was then refined using iterative processes and feedback from the learner verification interviews. Major changes focused on visual appeal, and the inclusion of additional testimonials and graphics to increase comprehension of key points and further emphasize the message that the patient is in control of their ability to maintain their smoking abstinence. Together, these steps resulted in the creation of a DVD titled Surviving Smokefree®, which represents the first smoking relapse-prevention intervention for cancer patients. If found effective, the Surviving Smokefree® DVD is an easily disseminable and low-cost portable intervention which can assist cancer patients in maintaining smoking abstinence.

  7. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together: we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images, and it works well not only for visual inspection but also for numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparison of protein expression values.
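
    A simplified sketch of the coupled segmentation and shading-correction loop described above, using Otsu thresholding and heavy Gaussian smoothing as stand-ins for the authors' segmentation and estimation steps (illustrative only):

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu, gaussian

    def correct_shading(image, n_iters=5, sigma=50):
        """Iteratively segment cells vs. background and divide out a smooth
        multiplicative shading field estimated from the intra-class residuals."""
        corrected = image.astype(float)
        for _ in range(n_iters):
            cells = corrected > threshold_otsu(corrected)          # two-class segmentation
            class_mean = np.where(cells, corrected[cells].mean(), corrected[~cells].mean())
            residual = corrected / (class_mean + 1e-12)             # deviation from piecewise-constant model
            shading = gaussian(residual, sigma=sigma, preserve_range=True)   # smooth multiplicative estimate
            corrected = corrected / (shading + 1e-12)                # remove the multiplicative component
        return corrected, cells
    ```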

  8. A Semi-Supervised Approach for Refining Transcriptional Signatures of Drug Response and Repositioning Predictions

    PubMed Central

    Iorio, Francesco; Shrestha, Roshan L.; Levin, Nicolas; Boilot, Viviane; Garnett, Mathew J.; Saez-Rodriguez, Julio; Draviam, Viji M.

    2015-01-01

    We present a novel strategy to identify drug-repositioning opportunities. The starting point of our method is the generation of a signature summarising the consensual transcriptional response of multiple human cell lines to a compound of interest (namely the seed compound). This signature can be derived from data in existing databases, such as the connectivity-map, and it is used at first instance to query a network interlinking all the connectivity-map compounds, based on the similarity of their transcriptional responses. This provides a drug neighbourhood, composed of compounds predicted to share some effects with the seed one. The original signature is then refined by systematically reducing its overlap with the transcriptional responses induced by drugs in this neighbourhood that are known to share a secondary effect with the seed compound. Finally, the drug network is queried again with the resulting refined signatures and the whole process is carried on for a number of iterations. Drugs in the final refined neighbourhood are then predicted to exert the principal mode of action of the seed compound. We illustrate our approach using paclitaxel (a microtubule stabilising agent) as seed compound. Our method predicts that glipizide and splitomicin perturb microtubule function in human cells: a result that could not be obtained through standard signature matching methods. In agreement, we find that glipizide and splitomicin reduce interphase microtubule growth rates and transiently increase the percentage of mitotic cells–consistent with our prediction. Finally, we validated the refined signatures of paclitaxel response by mining a large drug screening dataset, showing that human cancer cell lines whose basal transcriptional profile is anti-correlated to them are significantly more sensitive to paclitaxel and docetaxel. PMID:26452147
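
    A schematic sketch of the signature-refinement loop described above, with transcriptional signatures represented as NumPy vectors; the similarity measure, neighbourhood size, and threshold are illustrative choices rather than those of the paper:

    ```python
    import numpy as np

    def refine_signature(seed_signature, drug_signatures, secondary_effect_drugs,
                         n_iters=3, k=10, quantile=0.9):
        """Iteratively (1) find the k drugs whose signatures correlate best with the
        current signature and (2) zero out genes dominated by those neighbours known
        to share only a secondary effect with the seed compound."""
        signature = np.asarray(seed_signature, float).copy()
        library = np.asarray(drug_signatures, float)        # (n_drugs, n_genes)
        for _ in range(n_iters):
            corr = np.array([np.corrcoef(signature, s)[0, 1] for s in library])
            neighbours = np.argsort(corr)[-k:]              # transcriptional neighbourhood
            confounders = [d for d in neighbours if d in secondary_effect_drugs]
            if not confounders:
                break
            secondary = np.mean(np.abs(library[confounders]), axis=0)
            shared = secondary >= np.quantile(secondary, quantile)
            signature[shared] = 0.0                          # reduce overlap with the secondary effect
        return signature
    ```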

  9. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Transfer in a GO2/GH2 Single Element Injector Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration, and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were obtained with three variations of the k-omega turbulence model.
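
    The convergence evidence described above (residual decay, mass conservation, and probe-point histories) can be summarized by a simple monitor; a generic sketch with illustrative tolerances, not tied to Loci-CHEM output formats:

    ```python
    import numpy as np

    def check_convergence(residual_history, mass_in, mass_out, probe_history,
                          residual_drop=1e-4, mass_tol=1e-3, probe_tol=1e-4):
        """Simple steady-state convergence checks: residual reduction relative to the
        first iteration, global mass imbalance, and flatness of a probe-point history."""
        residuals = np.asarray(residual_history, float)
        residual_ok = residuals[-1] / residuals[0] < residual_drop
        mass_ok = abs(mass_in - mass_out) / abs(mass_in) < mass_tol
        probe = np.asarray(probe_history, float)
        # Peak-to-peak variation of the monitored quantity over the last iterations.
        probe_ok = np.ptp(probe[-100:]) / (abs(probe[-1]) + 1e-30) < probe_tol
        return residual_ok and mass_ok and probe_ok
    ```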

  10. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Fluxes in a G02/GH2 Single Element Injector Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration, and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were also obtained with three variations of the k-omega turbulence model.

  11. Composite panel development at JPL

    NASA Technical Reports Server (NTRS)

    Mcelroy, Paul; Helms, Rich

    1988-01-01

    Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high precision reflector panels for LDR. The materials properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. The iterative approach of computer design and model refinement with performance testing and materials optimization has shown good results for LDR panels.

  12. Using an Iterative Fourier Series Approach in Determining Orbital Elements of Detached Visual Binary Stars

    NASA Astrophysics Data System (ADS)

    Tupa, Peter R.; Quirin, S.; DeLeo, G. G.; McCluskey, G. E., Jr.

    2007-12-01

    We present a modified Fourier transform approach to determine the orbital parameters of detached visual binary stars. Originally inspired by Monet (ApJ 234, 275, 1979), this new method utilizes an iterative routine of refining higher order Fourier terms in a manner consistent with Keplerian motion. In most cases, this approach is not sensitive to the starting orbital parameters in the iterative loop. In many cases we have determined orbital elements even with small fragments of orbits and noisy data, although some systems show computational instabilities. The algorithm was constructed using the MAPLE mathematical software code and tested on artificially created orbits and many real binary systems, including Gliese 22 AC, Tau 51, and BU 738. This work was supported at Lehigh University by NSF-REU grant PHY-9820301.
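
    A minimal illustration of fitting a truncated Fourier series to an apparent-orbit coordinate by linear least squares, the building block that the iterative scheme above refines order by order; the actual method additionally enforces Keplerian consistency between terms:

    ```python
    import numpy as np

    def fit_fourier_series(t, x, period, order=3):
        """Least-squares fit of x(t) ~ a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)]."""
        w = 2.0 * np.pi / period
        columns = [np.ones_like(t)]
        for k in range(1, order + 1):
            columns += [np.cos(k * w * t), np.sin(k * w * t)]
        design = np.column_stack(columns)
        coeffs, *_ = np.linalg.lstsq(design, x, rcond=None)
        return coeffs   # [a0, a1, b1, a2, b2, ...]

    # Example: recover coefficients from a noisy synthetic separation curve.
    t = np.linspace(0.0, 30.0, 200)
    x = 2.0 + 1.5 * np.cos(2 * np.pi * t / 30.0) + 0.4 * np.sin(4 * np.pi * t / 30.0)
    x += np.random.default_rng(0).normal(0, 0.02, t.size)
    print(np.round(fit_fourier_series(t, x, period=30.0, order=2), 2))
    ```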

  13. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection

    DOE PAGES

    Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...

    2017-09-18

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.

  14. Systematic and Iterative Development of a Smartphone App to Promote Sun-Protection Among Holidaymakers: Design of a Prototype and Results of Usability and Acceptability Testing.

    PubMed

    Rodrigues, Angela M; Sniehotta, Falko F; Birch-Machin, Mark A; Olivier, Patrick; Araújo-Soares, Vera

    2017-06-12

    Sunburn and intermittent exposure to ultraviolet rays are risk factors for melanoma. Sunburn is a common experience during holidays, making tourism settings of particular interest for skin cancer prevention. Holidaymakers are a volatile populations found at different locations, which may make them difficult to reach. Given the widespread use of smartphones, evidence suggests that this might be a novel, convenient, scalable, and feasible way of reaching the target population. The main objective of this study was to describe and appraise the process of systematically developing a smartphone intervention (mISkin app) to promote sun-protection during holidays. The iterative development process of the mISkin app was conducted over four sequential stages: (1) identify evidence on the most effective behavior change techniques (BCTs) used (active ingredients) as well as theoretical predictors and theories, (2) evidence-based intervention design, (3) co-design with users of the mISkin app prototype, and (4) refinement of the app. Each stage provided key findings that were subsequently used to inform the design of the mISkin app. The sequential approach to development integrates different strands of evidence to inform the design of an evidence-based intervention. A systematic review on previously tested interventions to promote sun-protection provided cues and constraints for the design of this intervention. The development and design of the mISkin app also incorporated other sources of information, such as other literature reviews and experts' consultations. The developed prototype of the mISkin app was evaluated by engaging potential holidaymakers in the refinement and further development of the mISkin app through usability (ease-of-use) and acceptability testing of the intervention prototype. All 17 participants were satisfied with the mISkin prototype and expressed willingness to use it. Feedback on the app was integrated in the optimization process of the mISkin app. The mISkin app was designed to promote sun-protection among holidaymakers and was based on current evidence, experts' knowledge and experience, and user involvement. Based on user feedback, the app has been refined and a fully functional version is ready for formal testing in a feasibility pilot study. ©Angela M Rodrigues, Falko F Sniehotta, Mark A Birch-Machin, Patrick Olivier, Vera Araújo-Soares. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 12.06.2017.

  15. Process Improvement for Interinstitutional Research Contracting

    PubMed Central

    Logan, Jennifer; Bjorklund, Todd; Whitfield, Jesse; Reed, Peggy; Lesher, Laurie; Sikalis, Amy; Brown, Brent; Drollinger, Sandy; Larrabee, Kristine; Thompson, Kristie; Clark, Erin; Workman, Michael; Boi, Luca

    2015-01-01

    Introduction Sponsored research increasingly requires multiinstitutional collaboration. However, research contracting procedures have become more complicated and time consuming. The perinatal research units of two colocated healthcare systems sought to improve their research contracting processes. Methods The Lean Process, a management practice that iteratively involves team members in root cause analyses and process improvement, was applied to the research contracting process, initially using Process Mapping and then developing Problem Solving Reports. Results Root cause analyses revealed that the longest delays were the individual contract legal negotiations. In addition, the “business entity” was the research support personnel of both healthcare systems whose “customers” were investigators attempting to conduct interinstitutional research. Development of mutually acceptable research contract templates and language, chain of custody templates, and process development and refinement formats decreased the Notice of Grant Award to Purchase Order time from a mean of 103.5 days in the year prior to Lean Process implementation to 45.8 days in the year after implementation (p = 0.004). Conclusions The Lean Process can be applied to interinstitutional research contracting with significant improvement in contract implementation. PMID:26083433

  16. Process Improvement for Interinstitutional Research Contracting.

    PubMed

    Varner, Michael; Logan, Jennifer; Bjorklund, Todd; Whitfield, Jesse; Reed, Peggy; Lesher, Laurie; Sikalis, Amy; Brown, Brent; Drollinger, Sandy; Larrabee, Kristine; Thompson, Kristie; Clark, Erin; Workman, Michael; Boi, Luca

    2015-08-01

    Sponsored research increasingly requires multiinstitutional collaboration. However, research contracting procedures have become more complicated and time consuming. The perinatal research units of two colocated healthcare systems sought to improve their research contracting processes. The Lean Process, a management practice that iteratively involves team members in root cause analyses and process improvement, was applied to the research contracting process, initially using Process Mapping and then developing Problem Solving Reports. Root cause analyses revealed that the longest delays were the individual contract legal negotiations. In addition, the "business entity" was the research support personnel of both healthcare systems whose "customers" were investigators attempting to conduct interinstitutional research. Development of mutually acceptable research contract templates and language, chain of custody templates, and process development and refinement formats decreased the Notice of Grant Award to Purchase Order time from a mean of 103.5 days in the year prior to Lean Process implementation to 45.8 days in the year after implementation (p = 0.004). The Lean Process can be applied to interinstitutional research contracting with significant improvement in contract implementation. © 2015 Wiley Periodicals, Inc.

  17. A Faceted Taxonomy for Rating Student Bibliographies in an Online Information Literacy Game

    ERIC Educational Resources Information Center

    Leeder, Chris; Markey, Karen; Yakel, Elizabeth

    2012-01-01

    This study measured the quality of student bibliographies through creation of a faceted taxonomy flexible and fine-grained enough to encompass the variety of online sources cited by today's students. The taxonomy was developed via interviews with faculty, iterative refinement of categories and scoring, and testing on example student…

  18. Graphic Design Education: A Revised Assessment Approach to Encourage Deep Learning

    ERIC Educational Resources Information Center

    Ellmers, Grant; Foley, Marius; Bennett, Sue

    2008-01-01

    In this paper we outline the review and iterative refinement of assessment procedures in a final year graphic design subject at the University of Wollongong. Our aim is to represent the main issues in assessing graphic design work, and informed by the literature, particularly "notions of creativity" (Cowdroy & de Graaff, 2005), to…

  19. Challenges and Opportunities for Teacher Professional Development in Interactive Use of Technology in African Schools

    ERIC Educational Resources Information Center

    Hennessy, Sara; Haßler, Bjoern; Hofmann, Riikka

    2015-01-01

    This article examines the supporting and constraining factors influencing professional learning about interactive teaching and mobile digital technology use in low-resourced basic schools in sub-Saharan Africa. It draws on a case study of iterative development and refinement of a school-based, peer-facilitated professional learning programme…

  20. Adaptive monitoring design for ecosystem management

    Treesearch

    Paul L. Ringold; Jim Alegria; Raymond L. Czaplewski; Barry S. Mulder; Tim Tolle; Kelly Burnett

    1996-01-01

    Adaptive management of ecosystems (e.g., Holling 1978, Walters 1986, Everett et al. 1994, Grumbine 1994, Yaffee 1994, Gunderson et al. 1995, Frentz et al. 1995, Montgomery et al. 1995) structures a system in which monitoring iteratively improves the knowledge base and helps refine management plans. This adaptive approach acknowledges that action is necessary or...

  1. Students' Socio-Scientific Reasoning in an Astrobiological Context during Work with a Digital Learning Environment

    ERIC Educational Resources Information Center

    Hansson, Lena; Redfors, Andreas; Rosberg, Maria

    2011-01-01

    In a European project--CoReflect--researchers in seven countries are developing, implementing and evaluating teaching sequences using a web-based platform (STOCHASMOS). The interactive web-based inquiry materials support collaborative and reflective work. The learning environments will be iteratively tested and refined, during different phases of…

  2. Fast-kick-off monotonically convergent algorithm for searching optimal control fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel

    2011-09-15

    This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in the state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap of the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refining control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also is able to attain good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.
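
    In schematic terms, the weighting described above corresponds to a field update of the following form; this is a generic sketch in standard quantum optimal control notation, where the step size \eta, the coupling operator \mu, and the exponent a are illustrative placeholders rather than quantities taken from the cited Rapid Communication:

      \varepsilon^{(k+1)}(t) \;=\; \varepsilon^{(k)}(t)
        \;+\; \frac{\eta}{\bigl|\langle \chi^{(k)}(t) \,|\, \psi^{(k+1)}(t) \rangle\bigr|^{a}}
        \,\operatorname{Im}\bigl\langle \chi^{(k)}(t) \bigm| \mu \bigm| \psi^{(k+1)}(t) \bigr\rangle,
      \qquad 0 < a < 1,

    where \chi^{(k)} is propagated backward from the target state under the previous field and \psi^{(k+1)} is propagated forward from the initial state under the concurrently updated field. Because the overlap enters through a fractional inverse power, a poorly chosen initial field (small overlap) yields a large corrective kick at the start of the search, consistent with the fast-kick-off behaviour described in the abstract.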

  3. Using stakeholder engagement to develop a patient-centered pediatric asthma intervention.

    PubMed

    Shelef, Deborah Q; Rand, Cynthia; Streisand, Randi; Horn, Ivor B; Yadav, Kabir; Stewart, Lisa; Fousheé, Naja; Waters, Damian; Teach, Stephen J

    2016-12-01

    Stakeholder engagement has the potential to develop research interventions that are responsive to patient and provider preferences. This approach contrasts with traditional models of clinical research in which researchers determine the study's design. This article describes the effect of stakeholder engagement on the design of a randomized trial of an intervention designed to improve child asthma outcomes by reducing parental stress. The study team developed and implemented a stakeholder engagement process that provided iterative feedback regarding the study design, patient-centered outcomes, and intervention. Stakeholder engagement incorporated the perspectives of parents of children with asthma; local providers of community-based medical, legal, and social services; and national experts in asthma research methodology and implementation. Through a year-long process of multidimensional stakeholder engagement, the research team successfully refined and implemented a patient-centered study protocol. Key stakeholder contributions included selection of patient-centered outcome measures, refinement of intervention content and format, and language framing the study in a culturally appropriate manner. Stakeholder engagement was a useful framework for developing an intervention that was acceptable and relevant to our target population. This approach might have unique benefits in underserved populations, leading to sustainable improvement in health outcomes and reduced disparities. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  4. Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. ?? 2002 Elsevier Science Ltd. All rights reserved.
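
    To make the iteration-based feedback concrete, the sketch below couples a coarse parent grid and a locally refined child grid for a 1-D steady-flow analogue: parent heads supply the child's boundary conditions, the child's interface flux is fed back to the parent as a specified-flow condition at the shared nodes, and the exchange is under-relaxed until it converges. The geometry, conductivity, and relaxation factor are invented for illustration; this is a sketch of the coupling idea, not the method documented in the paper.

      import numpy as np

      # 1-D sketch of shared-node coupling: parent heads -> child Dirichlet BCs,
      # child interface flux -> parent specified-flow condition, with under-relaxation.
      K = 1.0                       # hydraulic conductivity
      h_left, h_right = 10.0, 5.0   # Dirichlet heads at x = 0 and x = 1

      xp = np.linspace(0.0, 1.0, 11); dxp = xp[1] - xp[0]          # parent grid
      iL, iR = 3, 7                                                # shared nodes (x = 0.3, 0.7)
      xc = np.linspace(xp[iL], xp[iR], 17); dxc = xc[1] - xc[0]    # child grid (4x refined)

      def solve_child(hL, hR):
          """Fine-grid solve with Dirichlet heads taken from the parent's shared nodes."""
          n = len(xc)
          A = np.zeros((n, n)); b = np.zeros(n)
          A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = hL, hR
          for i in range(1, n - 1):
              A[i, i - 1], A[i, i], A[i, i + 1] = K, -2.0 * K, K
          return np.linalg.solve(A, b)

      def solve_parent(q):
          """Coarse whole-domain solve; the shared-node rows impose the child flux q."""
          n = len(xp)
          A = np.zeros((n, n)); b = np.zeros(n)
          A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = h_left, h_right
          for i in range(1, n - 1):
              A[i, i - 1], A[i, i], A[i, i + 1] = K, -2.0 * K, K
          # Flux-balance rows replace the standard equations at the shared nodes.
          # Parent nodes inside the child footprint keep standard equations; their
          # values are superseded by the child solution and are not used below.
          A[iL, :] = 0.0; A[iL, iL - 1], A[iL, iL] = K, -K; b[iL] = q * dxp
          A[iR, :] = 0.0; A[iR, iR], A[iR, iR + 1] = K, -K; b[iR] = q * dxp
          return np.linalg.solve(A, b)

      q, theta = 0.0, 0.5           # fed-back interface flux and under-relaxation factor
      for it in range(50):
          hp = solve_parent(q)
          hc = solve_child(hp[iL], hp[iR])
          q_child = -K * (hc[1] - hc[0]) / dxc      # Darcy flux entering the child region
          q_new = (1.0 - theta) * q + theta * q_child
          if abs(q_new - q) < 1e-10:
              q = q_new
              break
          q = q_new

      print("coupling iterations:", it + 1, " interface flux:", round(q, 6))   # exact: 5.0
      print("shared-node heads:", round(hp[iL], 4), round(hp[iR], 4))          # exact: 8.5, 6.5

    In this toy setup the flux is uniform, so one fed-back value serves both interfaces; the under-relaxation is essential here, since the undamped head-for-flux exchange oscillates and diverges.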

  5. Value of Collaboration With Standardized Patients and Patient Facilitators in Enhancing Reflection During the Process of Building a Simulation.

    PubMed

    Stanley, Claire; Lindsay, Sally; Parker, Kathryn; Kawamura, Anne; Samad Zubairi, Mohammad

    2018-05-09

    We previously reported that experienced clinicians find that the process of collectively building and participating in simulations provides (1) a unique reflective opportunity; (2) a venue to identify different perspectives through discussion and action in a group; and (3) a safe environment for learning. No studies have assessed the value of collaborating with standardized patients (SPs) and patient facilitators (PFs) in the process. In this work, we describe this collaboration in building a simulation and the key elements that facilitate reflection. Three simulation scenarios surrounding communication were built by teams of clinicians, a PF, and SPs. Six build sessions were audio recorded, transcribed, and thematically analyzed through an iterative process to (1) describe the steps of building a simulation scenario and (2) identify the key elements involved in the collaboration. The five main steps to build a simulation scenario were (1) storytelling and reflection; (2) defining objectives and brainstorming ideas; (3) building a stem and creating a template; (4) refining the scenario with feedback from SPs; and (5) mock run-throughs with follow-up discussion. During these steps, the PF shared personal insights, challenging participants to reflect more deeply to better understand and consider the patient's perspective. The SPs provided a unique outside perspective to the group. In addition, the interaction between the SPs and the PF helped refine character roles. A collaborative approach incorporating feedback from PFs and SPs to create a simulation scenario is a valuable method to enhance reflective practice for clinicians.

  6. Development of an integrated e-health tool for people with, or at high risk of, cardiovascular disease: The Consumer Navigation of Electronic Cardiovascular Tools (CONNECT) web application.

    PubMed

    Neubeck, Lis; Coorey, Genevieve; Peiris, David; Mulley, John; Heeley, Emma; Hersch, Fred; Redfern, Julie

    2016-12-01

    Cardiovascular disease is the leading killer globally and secondary prevention substantially reduces risk. Uptake of, and adherence to, face-to-face preventive programs is often low. Alternative models of care are exploiting the prominence of technology in daily life to facilitate lifestyle behavior change. To inform the development of a web-based application integrated with the primary care electronic health record, we undertook a collaborative user-centered design process to develop a consumer-focused e-health tool for cardiovascular disease risk reduction. A four-phase iterative process involved ten multidisciplinary clinicians and academics (primary care physician, nurses and allied health professionals), two design consultants, one graphic designer, three software developers and fourteen proposed end-users. This 18-month process involved, (1) defining the target audience and needs, (2) pilot testing and refinement, (3) software development including validation and testing the algorithm, (4) user acceptance testing and beta testing. From this process, researchers were able to better understand end-user needs and preferences, thereby improving and enriching the increasingly detailed system designs and prototypes for a mobile responsive web application. We reviewed 14 relevant applications/websites and sixteen observational and interventional studies to derive a set of core components and ideal features for the system. These included the need for interactivity, visual appeal, credible health information, virtual rewards, and emotional and physical support. The features identified as essential were: (i) both mobile and web-enabled 'apps', (ii) an emphasis on medication management, (iii) a strong psychosocial support component. Subsequent workshops (n=6; 2×1.5h) informed the development of functionality and lo-fidelity sketches of application interfaces. These ideas were next tested in consumer focus groups (n=9; 3×1.5h). Specifications for the application were refined from this feedback and a graphic designer iteratively developed the interface. Concurrently, the electronic health record was linked to the consumer portal. A written description of the final algorithms for all decisions and outputs was provided to software programmers. These algorithmic outputs to the app were first validated against those obtained from an independently programmed version in STATA 11. User acceptance testing (n=5, 2×1.0h) and beta testing revealed technical bugs and interface concerns across commonly-used web browsers and smartphones. These were resolved and re-tested until functionality was optimized. End-users of a cardiovascular disease prevention program have complex needs. A user-centered design approach aided the integration of these needs into the concept, specifications, development and refinement of a responsive web application for risk factor reduction and disease prevention. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Iterative Tensor Voting for Perceptual Grouping of Ill-Defined Curvilinear Structures: Application to Adherens Junctions

    PubMed Central

    Loss, Leandro A.; Bebis, George; Parvin, Bahram

    2012-01-01

    In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and the iterative voting frameworks. Its efficacy lies in iterative refinements of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such mode shifting is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is applied to delineation of adherens junctions imaged through fluorescence microscopy. This class of membrane-bound macromolecules maintains tissue structural integrity and cell-cell interactions. Visually, it exhibits fibrous patterns that may be diffused, punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates. PMID:21421432
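
    The exploratory-to-exploitative mode shift can be illustrated with a toy version of the idea: tokens cast simplified second-order votes to their neighbours, curve saliency is taken from the accumulated tensor's eigenvalue spread, and each pass reuses the saliencies as vote weights while shrinking the voting aperture. The vote model, decay width, and thresholds below are simplifications invented for this sketch, not the authors' full tensor voting machinery.

      import numpy as np

      # Toy iterative voting with a shrinking aperture on a noisy 2-D point set.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 2.0 * np.pi, 80)
      curve = np.c_[t, np.sin(t)] + 0.02 * rng.standard_normal((80, 2))     # curvilinear structure
      clutter = rng.uniform([0.0, -1.5], [2.0 * np.pi, 1.5], size=(80, 2))  # outliers
      pts = np.vstack([curve, clutter])
      saliency = np.ones(len(pts))            # start with uninformative weights

      def vote(points, weights, sigma):
          """One voting pass: accumulate a 2x2 tensor at every token, return saliencies."""
          n = len(points)
          sal = np.zeros(n)
          for i in range(n):
              d = points - points[i]
              r2 = np.einsum("ij,ij->i", d, d)
              w = weights * np.exp(-r2 / (2.0 * sigma**2))   # Gaussian decay = voting aperture
              w[i] = 0.0                                     # no self-vote
              u = d / np.sqrt(np.maximum(r2, 1e-12))[:, None]
              # simplified vote: tensor (I - u u^T) from each neighbour, weight-scaled
              T = np.eye(2) * w.sum() - np.einsum("i,ij,ik->jk", w, u, u)
              evals = np.linalg.eigvalsh(T)
              sal[i] = evals[-1] - evals[0]                  # stick (curve) saliency
          return sal / (sal.max() + 1e-12)

      sigma = 1.0
      for _ in range(5):                      # exploratory -> exploitative: shrink the aperture
          saliency = vote(pts, saliency, sigma)
          sigma *= 0.7

      keep = saliency > 0.3                   # arbitrary saliency cut for the demo
      print("kept", int(keep.sum()), "of", len(pts), "tokens;",
            "fraction of true-curve tokens retained:", round(float(keep[:80].mean()), 2))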

  8. MODFLOW-LGR-Modifications to the streamflow-routing package (SFR2) to route streamflow through locally refined grids

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    This report documents modifications to the Streamflow-Routing Package (SFR2) to route streamflow through grids constructed using the multiple-refined-areas capability of shared node Local Grid Refinement (LGR) of MODFLOW-2005. MODFLOW-2005 is the U.S. Geological Survey modular, three-dimensional, finite-difference groundwater-flow model. LGR provides the capability to simulate groundwater flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. Compatibility with SFR2 allows for streamflow routing across grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems.

  9. Designing and evaluating an interprofessional shared decision-making and goal-setting decision aid for patients with diabetes in clinical care--systematic decision aid development and study protocol.

    PubMed

    Yu, Catherine H; Stacey, Dawn; Sale, Joanna; Hall, Susan; Kaplan, David M; Ivers, Noah; Rezmovitz, Jeremy; Leung, Fok-Han; Shah, Baiju R; Straus, Sharon E

    2014-01-22

    Care of patients with diabetes often occurs in the context of other chronic illness. Competing disease priorities and competing patient-physician priorities present challenges in the provision of care for the complex patient. Guideline implementation interventions to date do not acknowledge these intricacies of clinical practice. As a result, patients and providers are left overwhelmed and paralyzed by the sheer volume of recommendations and tasks. An individualized approach to the patient with diabetes and multiple comorbid conditions using shared decision-making (SDM) and goal setting has been advocated as a patient-centred approach that may facilitate prioritization of treatment options. Furthermore, incorporating interprofessional integration into practice may overcome barriers to implementation. However, these strategies have not been taken up extensively in clinical practice. To systematically develop and test an interprofessional SDM and goal-setting toolkit for patients with diabetes and other chronic diseases, following the Knowledge to Action framework. 1. Feasibility study: Individual interviews with primary care physicians, nurses, dietitians, pharmacists, and patients with diabetes will be conducted, exploring their experiences with shared decision-making and priority-setting, including facilitators and barriers, the relevance of a decision aid and toolkit for priority-setting, and how best to integrate it into practice. 2. Toolkit development: Based on this data, an evidence-based multi-component SDM toolkit will be developed. The toolkit will be reviewed by content experts (primary care, endocrinology, geriatricians, nurses, dietitians, pharmacists, patients) for accuracy and comprehensiveness. 3. Heuristic evaluation: A human factors engineer will review the toolkit and identify, list and categorize usability issues by severity. 4. Usability testing: This will be done using cognitive task analysis. 5. Iterative refinement: Throughout the development process, the toolkit will be refined through several iterative cycles of feedback and redesign. Interprofessional shared decision-making regarding priority-setting with the use of a decision aid toolkit may help prioritize care of individuals with multiple comorbid conditions. Adhering to principles of user-centered design, we will develop and refine a toolkit to assess the feasibility of this approach.

  10. Designing and evaluating an interprofessional shared decision-making and goal-setting decision aid for patients with diabetes in clinical care - systematic decision aid development and study protocol

    PubMed Central

    2014-01-01

    Background Care of patients with diabetes often occurs in the context of other chronic illness. Competing disease priorities and competing patient-physician priorities present challenges in the provision of care for the complex patient. Guideline implementation interventions to date do not acknowledge these intricacies of clinical practice. As a result, patients and providers are left overwhelmed and paralyzed by the sheer volume of recommendations and tasks. An individualized approach to the patient with diabetes and multiple comorbid conditions using shared decision-making (SDM) and goal setting has been advocated as a patient-centred approach that may facilitate prioritization of treatment options. Furthermore, incorporating interprofessional integration into practice may overcome barriers to implementation. However, these strategies have not been taken up extensively in clinical practice. Objectives To systematically develop and test an interprofessional SDM and goal-setting toolkit for patients with diabetes and other chronic diseases, following the Knowledge to Action framework. Methods 1. Feasibility study: Individual interviews with primary care physicians, nurses, dietitians, pharmacists, and patients with diabetes will be conducted, exploring their experiences with shared decision-making and priority-setting, including facilitators and barriers, the relevance of a decision aid and toolkit for priority-setting, and how best to integrate it into practice. 2. Toolkit development: Based on this data, an evidence-based multi-component SDM toolkit will be developed. The toolkit will be reviewed by content experts (primary care, endocrinology, geriatricians, nurses, dietitians, pharmacists, patients) for accuracy and comprehensiveness. 3. Heuristic evaluation: A human factors engineer will review the toolkit and identify, list and categorize usability issues by severity. 4. Usability testing: This will be done using cognitive task analysis. 5. Iterative refinement: Throughout the development process, the toolkit will be refined through several iterative cycles of feedback and redesign. Discussion Interprofessional shared decision-making regarding priority-setting with the use of a decision aid toolkit may help prioritize care of individuals with multiple comorbid conditions. Adhering to principles of user-centered design, we will develop and refine a toolkit to assess the feasibility of this approach. PMID:24450385

  11. Combinatorial refinement of thin-film microstructure, properties and process conditions: iterative nanoscale search for self-assembled TiAlN nanolamellae.

    PubMed

    Zalesak, J; Todt, J; Pitonak, R; Köpf, A; Weißenbacher, R; Sartory, B; Burghammer, M; Daniel, R; Keckes, J

    2016-12-01

    Because of the tremendous variability of crystallite sizes and shapes in nano-materials, it is challenging to assess the corresponding size-property relationships and to identify microstructures with particular physical properties or even optimized functions. This task is especially difficult for nanomaterials formed by self-organization, where the spontaneous evolution of microstructure and properties is coupled. In this work, two compositionally graded TiAlN films were (i) grown using chemical vapour deposition by applying a varying ratio of reacting gases and (ii) subsequently analysed using cross-sectional synchrotron X-ray nanodiffraction, electron microscopy and nanoindentation in order to evaluate the microstructure and hardness depth gradients. The results indicate the formation of self-organized hexagonal-cubic and cubic-cubic nanolamellae with varying compositions and thicknesses in the range of ∼3-15 nm across the film thicknesses, depending on the actual composition of the reactive gas mixtures. On the basis of the occurrence of the nanolamellae and their correlation with the local film hardness, progressively narrower ranges of the composition and hardness were refined in three steps. The third film was produced using an AlCl3/TiCl4 precursor ratio of ∼1.9, resulting in the formation of an optimized lamellar microstructure with ∼1.3 nm thick cubic Ti(Al)N and ∼12 nm thick cubic Al(Ti)N nanolamellae which exhibits a maximal hardness of ∼36 GPa and an indentation modulus of ∼522 GPa. The presented approach of an iterative nanoscale search based on the application of cross-sectional synchrotron X-ray nanodiffraction and cross-sectional nanoindentation allows one to refine the relationship between (i) varying deposition conditions, (ii) gradients of microstructure and (iii) gradients of mechanical properties in nanostructured materials prepared as thin films. This is done in a combinatorial way in order to screen a wide range of deposition conditions, while identifying those that result in the formation of a particular microstructure with optimized functional attributes.

  12. Real-space processing of helical filaments in SPARX

    PubMed Central

    Behrmann, Elmar; Tao, Guozhi; Stokes, David L.; Egelman, Edward H.; Raunser, Stefan; Penczek, Pawel A.

    2012-01-01

    We present a major revision of the iterative helical real-space refinement (IHRSR) procedure and its implementation in the SPARX single particle image processing environment. We built on over a decade of experience with IHRSR helical structure determination and we took advantage of the flexible SPARX infrastructure to arrive at an implementation that offers ease of use, flexibility in designing helical structure determination strategy, and high computational efficiency. We introduced the 3D projection matching code which now is able to work with non-cubic volumes, the geometry better suited for long helical filaments, we enhanced procedures for establishing helical symmetry parameters, and we parallelized the code using distributed memory paradigm. Additional feature includes a graphical user interface that facilitates entering and editing of parameters controlling the structure determination strategy of the program. In addition, we present a novel approach to detect and evaluate structural heterogeneity due to conformer mixtures that takes advantage of helical structure redundancy. PMID:22248449

  13. Sources of Safety Data and Statistical Strategies for Design and Analysis: Transforming Data Into Evidence.

    PubMed

    Ma, Haijun; Russek-Cohen, Estelle; Izem, Rima; Marchenko, Olga V; Jiang, Qi

    2018-03-01

    Safety evaluation is a key aspect of medical product development. It is a continual and iterative process requiring thorough thinking, and dedicated time and resources. In this article, we discuss how safety data are transformed into evidence to establish and refine the safety profile of a medical product, and how the focus of safety evaluation, data sources, and statistical methods change throughout a medical product's life cycle. Some challenges and statistical strategies for medical product safety evaluation are discussed. Examples of safety issues identified in different periods, that is, premarketing and postmarketing, are discussed to illustrate how different sources are used in the safety signal identification and the iterative process of safety assessment. The examples highlighted range from commonly used pediatric vaccine given to healthy children to medical products primarily used to treat a medical condition in adults. These case studies illustrate that different products may require different approaches, and once a signal is discovered, it could impact future safety assessments. Many challenges still remain in this area despite advances in methodologies, infrastructure, public awareness, international harmonization, and regulatory enforcement. Innovations in safety assessment methodologies are pressing in order to make the medical product development process more efficient and effective, and the assessment of medical product marketing approval more streamlined and structured. Health care payers, providers, and patients may have different perspectives when weighing in on clinical, financial and personal needs when therapies are being evaluated.

  14. Parallel iterative solution for h and p approximations of the shallow water equations

    USGS Publications Warehouse

    Barragy, E.J.; Walters, R.A.

    1998-01-01

    A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation, and a complex momentum equation for the horizontal velocity. Both equations are nonlinear and the resulting system is solved using the Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field scale problem where up to 512 processors are used. ?? 1998 Elsevier Science Ltd. All rights reserved.
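
    The division of labour described above (outer Picard linearization, inner preconditioned Krylov solves with an incomplete-LU preconditioner) can be sketched on a toy 1-D nonlinear diffusion problem as follows; the test problem, tolerances, and SciPy routines are illustrative stand-ins, not the authors' shallow-water solver.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Toy problem: -(k(u) u')' = 1 on (0, 1), u(0) = u(1) = 0, with k(u) = 1 + u^2.
      n = 200
      x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
      f = np.ones(n)
      u = np.zeros(n)                          # initial Picard iterate (satisfies the BCs)

      def assemble(u_old):
          """Picard-linearized operator: the coefficient k is frozen at the previous iterate."""
          k = 1.0 + u_old**2
          kmid = 0.5 * (k[:-1] + k[1:])        # face-centred coefficients
          main = np.zeros(n); lower = np.zeros(n - 1); upper = np.zeros(n - 1)
          main[1:-1] = (kmid[:-1] + kmid[1:]) / h**2
          lower[:-1] = -kmid[:-1] / h**2       # coupling of node i to node i-1
          upper[1:] = -kmid[1:] / h**2         # coupling of node i to node i+1
          main[0] = main[-1] = 1.0             # Dirichlet rows
          b = f.copy(); b[0] = b[-1] = 0.0
          A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
          return A, b

      for picard in range(50):                 # outer nonlinear (Picard) loop
          A, b = assemble(u)
          ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10)   # ILUT-style preconditioner
          M = spla.LinearOperator(A.shape, ilu.solve)
          u_new, info = spla.bicgstab(A, b, x0=u, M=M)         # inner preconditioned Krylov solve
          if info != 0:
              raise RuntimeError("inner Krylov solve did not converge")
          converged = np.allclose(u_new, u, rtol=1e-8, atol=1e-12)
          u = u_new
          if converged:
              break

      print("Picard iterations:", picard + 1, " max solution value:", round(float(u.max()), 6))

    With a near-exact incomplete factorization the inner solver needs only a few iterations per outer sweep; the sketch keeps that inner/outer structure while omitting the domain decomposition and parallel pieces.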

  15. Correction of spin diffusion during iterative automated NOE assignment

    NASA Astrophysics Data System (ADS)

    Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

    Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
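
    A minimal three-spin example shows the effect being corrected: indirect transfer through an intermediate proton inflates the long-range NOE volume, so the apparent distance comes out too short, and comparing the full-relaxation-matrix volume with the isolated-spin-pair value yields a correction factor for the distance restraint. The geometry, rate constants, and mixing time below are arbitrary toy values, not ARIA's calibration.

      import numpy as np
      from scipy.linalg import expm

      # Three protons on a line: spins 0 and 2 are far apart, spin 1 relays magnetization.
      xyz = np.array([[0.0, 0.0, 0.0],
                      [2.5, 0.0, 0.0],
                      [5.0, 0.0, 0.0]])
      n = len(xyz)
      d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)

      rho = 2.0      # auto-relaxation (leakage) rate, s^-1, assumed uniform
      c = 60.0       # toy scale factor: cross-relaxation sigma_ij = c / r_ij^6
      tau_m = 0.1    # NOESY mixing time, s

      # Full relaxation rate matrix (coupled differential equations dM/dt = -R M).
      R = np.zeros((n, n))
      for i in range(n):
          for j in range(n):
              if i != j:
                  R[i, j] = -c / d[i, j] ** 6
          R[i, i] = rho + c * sum(1.0 / d[i, j] ** 6 for j in range(n) if j != i)
      A_full = expm(-R * tau_m)       # NOE volume matrix including spin diffusion

      # Isolated spin-pair reference for the 0-2 cross peak: the same calculation with
      # the relaying spin removed, i.e. only the direct pathway.
      sig = c / d[0, 2] ** 6
      R_pair = np.array([[rho + sig, -sig],
                         [-sig, rho + sig]])
      A_pair = expm(-R_pair * tau_m)

      V_full, V_ispa = A_full[0, 2], A_pair[0, 1]
      d_apparent = d[0, 2] * (V_ispa / V_full) ** (1.0 / 6.0)   # what a naive calibration would give
      correction = d[0, 2] / d_apparent                         # factor restoring the true distance
      print(f"true d02 = {d[0, 2]:.2f} A, apparent d02 = {d_apparent:.2f} A, "
            f"distance correction factor = {correction:.3f}")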

  16. Development of a video-based education and process change intervention to improve advance cardiopulmonary resuscitation decision-making.

    PubMed

    Waldron, Nicholas; Johnson, Claire E; Saul, Peter; Waldron, Heidi; Chong, Jeffrey C; Hill, Anne-Marie; Hayes, Barbara

    2016-10-06

    Advance cardiopulmonary resuscitation (CPR) decision-making and escalation of care discussions are variable in routine clinical practice. We aimed to explore physician barriers to advance CPR decision-making in an inpatient hospital setting and develop a pragmatic intervention to support clinicians to undertake and document routine advance care planning discussions. Two focus groups, which involved eight consultants and ten junior doctors, were conducted following a review of the current literature. A subsequent iterative consensus process developed two intervention elements: (i) an updated 'Goals of Patient Care' (GOPC) form and process; (ii) an education video and resources for teaching advance CPR decision-making and communication. A multidisciplinary group of health professionals and policy-makers with experience in systems development, education and research provided critical feedback. Three key themes emerged from the focus groups and the literature, which identified a structure for the intervention: (i) knowing what to say; (ii) knowing how to say it; (iii) wanting to say it. The themes informed the development of a video to provide education about advance CPR decision-making framework, improving communication and contextualising relevant clinical issues. Critical feedback assisted in refining the video and further guided development and evolution of a medical GOPC approach to discussing and recording medical treatment and advance care plans. Through an iterative process of consultation and review, video-based education and an expanded GOPC form and approach were developed to address physician and systemic barriers to advance CPR decision-making and documentation. Implementation and evaluation across hospital settings is required to examine utility and determine effect on quality of care.

  17. The Process and Impact of Stakeholder Engagement in Developing a Pediatric Intensive Care Unit Communication and Decision-Making Intervention.

    PubMed

    Michelson, Kelly N; Frader, Joel; Sorce, Lauren; Clayman, Marla L; Persell, Stephen D; Fragen, Patricia; Ciolino, Jody D; Campbell, Laura C; Arenson, Melanie; Aniciete, Danica Y; Brown, Melanie L; Ali, Farah N; White, Douglas

    2016-12-01

    Stakeholder-developed interventions are needed to support pediatric intensive care unit (PICU) communication and decision-making. Few publications delineate methods and outcomes of stakeholder engagement in research. We describe the process and impact of stakeholder engagement on developing a PICU communication and decision-making support intervention. We also describe the resultant intervention. Stakeholders included parents of PICU patients, healthcare team members (HTMs), and research experts. Through a year-long iterative process, we involved 96 stakeholders in 25 meetings and 26 focus groups or interviews. Stakeholders adapted an adult navigator model by identifying core intervention elements and then determining how to operationalize those core elements in pediatrics. The stakeholder input led to PICU-specific refinements, such as supporting transitions after PICU discharge and including ancillary tools. The resultant intervention includes navigator involvement with parents and HTMs and navigator-guided use of ancillary tools. Subsequent research will test the feasibility and efficacy of our intervention.

  18. A new ART code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
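
    For orientation, the sketch below runs a generic relaxed ART (Kaczmarz-type) sweep on a small synthetic projection system; the phantom, the ray geometry, and the relaxation factor are invented, and the code is not the reported implementation, whose least-squares iterative refinement details are specific to the paper.

      import numpy as np

      # Generic relaxed ART sweep for A x = b, where each row of A is one ray integral.
      rng = np.random.default_rng(1)
      nx = 16                                   # reconstruct a 16 x 16 field
      x_true = np.zeros((nx, nx))
      x_true[5:11, 5:11] = 1.0                  # simple square phantom
      x_true = x_true.ravel()

      rows = []
      for i in range(nx):                       # horizontal and vertical "rays"
          r = np.zeros((nx, nx)); r[i, :] = 1.0; rows.append(r.ravel())
          c = np.zeros((nx, nx)); c[:, i] = 1.0; rows.append(c.ravel())
      for _ in range(300):                      # plus sparse random-direction rays
          rows.append((rng.random(nx * nx) < 0.05).astype(float))
      A = np.array(rows)
      b = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])   # noisy projection data

      x = np.zeros(nx * nx)
      lam = 0.5                                 # relaxation factor
      for sweep in range(30):                   # repeated sweeps refine the estimate
          for a_i, b_i in zip(A, b):
              denom = a_i @ a_i
              if denom > 0.0:
                  x += lam * (b_i - a_i @ x) / denom * a_i
          resid = np.linalg.norm(A @ x - b) / np.linalg.norm(b)

      print(f"relative residual after {sweep + 1} sweeps: {resid:.3e}")
      print(f"relative reconstruction error: "
            f"{np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3e}")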

  19. 3Drefine: an interactive web server for efficient protein structure refinement

    PubMed Central

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371

  20. PROGRESS TOWARDS NEXT GENERATION, WAVEFORM BASED THREE-DIMENSIONAL MODELS AND METRICS TO IMPROVE NUCLEAR EXPLOSION MONITORING IN THE MIDDLE EAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, B; Peter, D; Covellone, B

    2009-07-02

    Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the initial starting point for the adjoint tomography. Transitioning from a 1D to 3D wave speed model shows dramatic improvements when comparisons are done at shorter periods (25 s). Synthetics from the 1D model were created through mode summations while those from the 3D simulations were created using the spectral element method. To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations during the adjoint inversion process, the sources will be reexamined and relocated to further reduce mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while showing vast improvement compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.

  1. MODFLOW-2005, The U.S. Geological Survey Modular Ground-Water Model - Documentation of the Multiple-Refined-Areas Capability of Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2007-01-01

    This report documents the addition of the multiple-refined-areas capability to shared node Local Grid Refinement (LGR) and Boundary Flow and Head (BFH) Package of MODFLOW-2005, the U.S. Geological Survey modular, three-dimensional, finite-difference ground-water flow model. LGR now provides the capability to simulate ground-water flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. The ability to have multiple, nonoverlapping areas of refinement is important in situations where there is more than one area of concern within a regional model. In this circumstance, LGR can be used to simulate these distinct areas with higher resolution grids. LGR can be used in two-and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. The BFH Package can be used to simulate these situations by using either the parent or child models independently.

  2. The Application of Behavior Change Theory to Family-Based Services: Improving Parent Empowerment in Children's Mental Health

    ERIC Educational Resources Information Center

    Olin, S. Serene; Hoagwood, Kimberly E.; Rodriguez, James; Ramos, Belinda; Burton, Geraldine; Penn, Marlene; Crowe, Maura; Radigan, Marleen; Jensen, Peter S.

    2010-01-01

    We describe the development of a parent empowerment program (PEP) using a community-based participatory research approach. In collaboration with a group of dedicated family advocates working with the Mental Health Association of New York City and state policy makers, academic researchers took an iterative approach to crafting and refining PEP to…

  3. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
    Program summary:
    Program title: ITER-REF
    Catalogue identifier: AECO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 7211
    No. of bytes in distributed program, including test data, etc.: 41 862
    Distribution format: tar.gz
    Programming language: FORTRAN 77
    Computer: desktop, server
    Operating system: Unix/Linux
    RAM: 512 Mbytes
    Classification: 4.8
    External routines: BLAS (optional)
    Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
    Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution, x, carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied, which produces a correction to the computed solution at each iteration, which then yields the method that is commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
    Running time: seconds/minutes
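
    The solution method summarized above (single-precision LU factorization followed by double-precision residual correction) can be demonstrated in a few lines. The sketch below is a generic NumPy/SciPy illustration of that idea rather than the ITER-REF Fortran package itself, and the well-conditioned test matrix is an assumption made for the demo.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      rng = np.random.default_rng(0)
      n = 500
      A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
      b = rng.standard_normal(n)

      # Expensive step in cheap 32-bit arithmetic: LU factorization with partial pivoting.
      lu32 = lu_factor(A.astype(np.float32))
      x = lu_solve(lu32, b.astype(np.float32)).astype(np.float64)   # initial solution

      for k in range(10):
          r = b - A @ x                                  # residual in 64-bit arithmetic
          if np.linalg.norm(r) <= 1e-14 * np.linalg.norm(b):
              break                                      # working-precision accuracy reached
          d = lu_solve(lu32, r.astype(np.float32))       # correction from the 32-bit factors
          x += d.astype(np.float64)

      x_ref = np.linalg.solve(A, b)                      # all-double reference solve
      print("correction steps applied:", k)
      print("relative error vs. double-precision solve:",
            float(np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref)))

    For matrices that are not too ill-conditioned, two or three corrections typically recover full 64-bit accuracy, which is what makes the cheaper 32-bit factorization worthwhile.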

  4. Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)

    DOE PAGES

    Atchley, A. L.; Painter, S. L.; Harp, D. R.; ...

    2015-04-14

    Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. However, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth System Models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth System Models challenge validation and parameterization of hydrothermal models. A recently developed surface/subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurements to calibrate and identify fine scale controls of ALT in ice wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations functions to evaluate and parameterize different model processes necessary to simulate freeze/thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g. troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models have the potential to provide reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.

  5. Stakeholder assessment of comparative effectiveness research needs for Medicaid populations.

    PubMed

    Fischer, Michael A; Allen-Coleman, Cora; Farrell, Stephen F; Schneeweiss, Sebastian

    2015-09-01

    Patients, providers and policy-makers rely heavily on comparative effectiveness research (CER) when making complex, real-world medical decisions. In particular, Medicaid providers and policy-makers face unique challenges in decision-making because their program cares for traditionally underserved populations, especially children, pregnant women and people with mental illness. Because these patient populations have generally been underrepresented in research discussions, CER questions for these groups may be understudied. To address this problem, the Agency for Healthcare Research and Quality commissioned our team to work with Medicaid Medical Directors and other stakeholders to identify relevant CER questions. Through an iterative process of topic identification and refinement, we developed relevant, feasible and actionable questions based on issues affecting Medicaid programs nationwide. We describe challenges and limitations and provide recommendations for future stakeholder engagement.

  6. Fostering synergy between cell biology and systems biology.

    PubMed

    Eddy, James A; Funk, Cory C; Price, Nathan D

    2015-08-01

    In the shared pursuit of elucidating detailed mechanisms of cell function, systems biology presents a natural complement to ongoing efforts in cell biology. Systems biology aims to characterize biological systems through integrated and quantitative modeling of cellular information. The process of model building and analysis provides value through synthesizing and cataloging information about cells and molecules, predicting mechanisms and identifying generalizable themes, generating hypotheses and guiding experimental design, and highlighting knowledge gaps and refining understanding. In turn, incorporating domain expertise and experimental data is crucial for building towards whole cell models. An iterative cycle of interaction between cell and systems biologists advances the goals of both fields and establishes a framework for mechanistic understanding of the genome-to-phenome relationship. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  7. INFOS: spectrum fitting software for NMR analysis.

    PubMed

    Smith, Albert A

    2017-02-01

    Software for fitting of NMR spectra in MATLAB is presented. Spectra are fitted in the frequency domain, using Fourier transformed lineshapes, which are derived using the experimental acquisition and processing parameters. This yields more accurate fits compared to common fitting methods that use Lorentzian or Gaussian functions. Furthermore, a very time-efficient algorithm for calculating and fitting spectra has been developed. The software also performs initial peak picking, followed by subsequent fitting and refinement of the peak list, by iteratively adding and removing peaks to improve the overall fit. Estimation of error on fitting parameters is performed using a Monte-Carlo approach. Many fitting options allow the software to be flexible enough for a wide array of applications, while still being straightforward to set up with minimal user input.
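
    The add-and-refit loop described above can be sketched as follows on a synthetic 1-D spectrum. Plain Lorentzian lineshapes, the seeding rule, and the pruning thresholds are illustrative assumptions; INFOS itself fits Fourier-transformed lineshapes derived from the acquisition and processing parameters, which this sketch does not reproduce.

      import numpy as np
      from scipy.optimize import curve_fit

      x = np.linspace(0.0, 100.0, 2000)
      rng = np.random.default_rng(3)
      noise = 0.02

      def lorentz(x, a, x0, w):
          """Single Lorentzian line (illustrative stand-in for a proper FT lineshape)."""
          return a * w**2 / ((x - x0) ** 2 + w**2)

      def model(x, *p):
          """Sum of lines; p is a flat sequence of (amplitude, position, width) triples."""
          p = np.asarray(p).reshape(-1, 3)
          return np.sum([lorentz(x, *row) for row in p], axis=0)

      # Synthetic spectrum: three peaks plus noise.
      y = (lorentz(x, 1.0, 30.0, 1.5) + lorentz(x, 0.6, 36.0, 1.2)
           + lorentz(x, 0.8, 70.0, 2.0)) + noise * rng.standard_normal(x.size)

      params = []                                       # flat parameter list, grown peak by peak
      for _ in range(10):
          resid = y - (model(x, *params) if len(params) else np.zeros_like(y))
          i = int(np.argmax(resid))
          if resid[i] < 5.0 * noise:                    # nothing significant left to fit
              break
          params = list(params) + [resid[i], x[i], 1.0]   # seed a peak at the residual maximum
          params, _ = curve_fit(model, x, y, p0=params)   # refit the whole peak list
          fitted = np.asarray(params).reshape(-1, 3)
          fitted = fitted[np.abs(fitted[:, 0]) > 2.0 * noise]   # prune peaks lost in the noise
          params = fitted.ravel()

      peaks = np.asarray(params).reshape(-1, 3)
      peaks[:, 2] = np.abs(peaks[:, 2])                 # report widths as positive (shape uses w**2)
      print(f"fitted {len(peaks)} peaks (amplitude, position, width):")
      print(np.round(peaks[np.argsort(peaks[:, 1])], 2))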

  8. Virtual reality and gaming systems to improve walking and mobility for people with musculoskeletal and neuromuscular conditions.

    PubMed

    Deutsch, Judith E

    2009-01-01

    Improving walking for individuals with musculoskeletal and neuromuscular conditions is an important aspect of rehabilitation. The capabilities of clinicians who address these rehabilitation issues could be augmented with innovations such as virtual reality gaming based technologies. The chapter provides an overview of virtual reality gaming based technologies currently being developed and tested to improve motor and cognitive elements required for ambulation and mobility in different patient populations. Included as well is a detailed description of a single VR system, consisting of the rationale for development and iterative refinement of the system based on clinical science. These concepts include: neural plasticity, part-task training, whole task training, task specific training, principles of exercise and motor learning, sensorimotor integration, and visual spatial processing.

  9. Stakeholder assessment of comparative effectiveness research needs for Medicaid populations

    PubMed Central

    Fischer, Michael A; Allen-Coleman, Cora; Farrell, Stephen F; Schneeweiss, Sebastian

    2015-01-01

    Patients, providers and policy-makers rely heavily on comparative effectiveness research (CER) when making complex, real-world medical decisions. In particular, Medicaid providers and policy-makers face unique challenges in decision-making because their program cares for traditionally underserved populations, especially children, pregnant women and people with mental illness. Because these patient populations have generally been underrepresented in research discussions, CER questions for these groups may be understudied. To address this problem, the Agency for Healthcare Research and Quality commissioned our team to work with Medicaid Medical Directors and other stakeholders to identify relevant CER questions. Through an iterative process of topic identification and refinement, we developed relevant, feasible and actionable questions based on issues affecting Medicaid programs nationwide. We describe challenges and limitations and provide recommendations for future stakeholder engagement. PMID:26388438

  10. A computational approach for hypersonic nonequilibrium radiation utilizing space partition algorithm and Gauss quadrature

    NASA Astrophysics Data System (ADS)

    Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.

    2014-06-01

    An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm of the nearest neighbor search process, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
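
    Two of the ingredients named above, nearest-neighbour lookup of spectral properties along a tracing ray and local refinement with Gauss-Lobatto nodes, can be sketched as follows; the absorption-coefficient field, the ray, and the quadrature order are invented for illustration and the code is unrelated to the authors' solver.

      import numpy as np
      from numpy.polynomial import legendre as L
      from scipy.spatial import cKDTree

      def gauss_lobatto(n):
          """Nodes and weights of the n-point Gauss-Lobatto rule on [-1, 1]."""
          interior = L.Legendre.basis(n - 1).deriv().roots()
          xg = np.concatenate(([-1.0], np.sort(interior.real), [1.0]))
          Pn1 = L.legval(xg, np.eye(n)[n - 1])          # P_{n-1} evaluated at the nodes
          wg = 2.0 / (n * (n - 1) * Pn1**2)
          return xg, wg

      # Made-up "flow field": scattered cell centres carrying an absorption coefficient.
      rng = np.random.default_rng(0)
      cells = rng.uniform(-1.0, 1.0, size=(5000, 3))
      kappa = np.exp(-4.0 * np.einsum("ij,ij->i", cells, cells))   # peaked near the origin
      tree = cKDTree(cells)                                        # space-partition structure

      # Optical depth along a ray from a to b: integrate kappa at Gauss-Lobatto nodes,
      # looking up the local property by nearest-neighbour search at each node.
      a, b = np.array([-1.0, -0.8, 0.0]), np.array([1.0, 0.9, 0.1])
      length = np.linalg.norm(b - a)

      xg, wg = gauss_lobatto(12)                     # local resolution refinement: 12 nodes
      pts = a + 0.5 * (xg[:, None] + 1.0) * (b - a)  # map [-1, 1] onto the ray segment
      _, idx = tree.query(pts)                       # nearest-neighbour property lookup
      tau = 0.5 * length * np.sum(wg * kappa[idx])   # Gauss-Lobatto quadrature
      print(f"optical depth along the ray: {tau:.4f}, transmittance: {np.exp(-tau):.4f}")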

  11. 2D photonic crystal complete band gap search using a cyclic cellular automaton refination

    NASA Astrophysics Data System (ADS)

    González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.

    2014-11-01

    We present a refination method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided with a heuristic evolutionary method called differential evolution (DE) used to perform an ordered search of full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is proposed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in such array. A block-iterative frequency-domain method was used to compute the FPBGs on a PC, when present. DE has proved to be useful in combinatorial problems and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information of suboptimal configurations and we made a statistical study of how it is affected by disorder in the borders of the structure compared with a previous work that uses a genetic algorithm.

  12. A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.

    PubMed

    Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E

    2016-06-21

    We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared and the significance of the differences given uncertainty in the parameters, which parameters have the most influence on this difference, and how we can identify the resolvable scenarios (where one alternative in the comparison has a clearly lower environmental impact). This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements.
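
    A minimal sketch of an aggregated probabilistic, scenario-aware comparison is shown below: for each scenario, Monte Carlo draws from two hypothetical impact models are compared, and the scenario is flagged as resolvable when one alternative wins with high probability. The impact models and the 90% decision threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a probabilistic, scenario-aware comparison of two alternatives.
# The impact models and the 90% resolvability threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
scenarios = [
    {"name": "short service life", "lifetime": 10.0},
    {"name": "long service life",  "lifetime": 30.0},
]

def impact_A(lifetime, n):
    # hypothetical impact model with uncertain parameters (e.g. material intensity)
    return rng.normal(100.0, 15.0, n) + 2.0 * lifetime

def impact_B(lifetime, n):
    return rng.normal(120.0, 10.0, n) + 1.0 * lifetime

n_draws = 10_000
for sc in scenarios:
    diff = impact_A(sc["lifetime"], n_draws) - impact_B(sc["lifetime"], n_draws)
    p_A_better = np.mean(diff < 0.0)
    resolved = p_A_better > 0.9 or p_A_better < 0.1
    print(f'{sc["name"]}: P(A better) = {p_A_better:.2f}, resolvable = {resolved}')
```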

  13. Development of a coping intervention to improve traumatic stress and HIV care engagement among South African women with sexual trauma histories.

    PubMed

    Sikkema, Kathleen J; Choi, Karmel W; Robertson, Corne; Knettel, Brandon A; Ciya, Nonceba; Knippler, Elizabeth T; Watt, Melissa H; Joska, John A

    2018-06-01

    This paper describes the development and preliminary trial run of ImpACT (Improving AIDS Care after Trauma), a brief coping intervention to address traumatic stress and HIV care engagement among South African women with sexual trauma histories. We engaged in an iterative process to culturally adapt a cognitive-behavioral intervention for delivery within a South African primary care clinic. This process involved three phases: (a) preliminary intervention development, drawing on content from a prior evidence-based intervention; (b) contextual adaptation of the curriculum through formative data collection using a multi-method qualitative approach; and (c) pre-testing of trauma screening procedures and a subsequent trial run of the intervention. Feedback from key informant interviews and patient in-depth interviews guided the refinement of session content and adaptation of key intervention elements, including culturally relevant visuals, metaphors, and interactive exercises. The trial run curriculum consisted of four individual sessions and two group sessions. Strong session attendance during the trial run supported the feasibility of ImpACT. Participants responded positively to the logistics of the intervention delivery and the majority of session content. Trial run feedback helped to further refine intervention content and delivery towards a pilot randomized clinical trial to assess the feasibility and potential efficacy of this intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model, and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source parameters for location by several hundred percent (normalized by the distance from source to the closest sampler), and to improve mass estimates by several orders of magnitude. Furthermore, it also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and adjust the wind to provide a better match between the hazard prediction and the observations.
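
    The iterative-refinement idea, stripped of the puff model, adjoint, and wind adjustment, can be sketched as fitting a toy surrogate dispersion model to sensor observations starting from a rough first-guess source. Everything in the snippet (the surrogate kernel, sensor layout, and solver choice) is an illustrative assumption, not part of VIRSA.

```python
# Toy stand-in for the refinement step: fit source location and strength of a
# deliberately smooth surrogate dispersion kernel to sensor observations,
# starting from a rough "first guess". VIRSA itself uses a back-trajectory first
# guess, a Gaussian puff model, a surrogate forward model and its formal adjoint.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
sensors = rng.uniform(0.0, 100.0, size=(20, 2))          # hypothetical sampler positions

def surrogate(params):
    x0, y0, q = params                                   # source location and strength
    d2 = (sensors[:, 0] - x0) ** 2 + (sensors[:, 1] - y0) ** 2
    return q * np.exp(-d2 / 2000.0)                      # smooth toy concentration field

truth = np.array([60.0, 40.0, 5.0])
obs = surrogate(truth) + rng.normal(0.0, 0.01, len(sensors))

first_guess = np.array([30.0, 70.0, 1.0])
fit = least_squares(lambda p: surrogate(p) - obs, first_guess)   # iterative refinement
print("refined source estimate:", np.round(fit.x, 2))            # approaches [60, 40, 5]
```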

  15. xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures.

    PubMed

    McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus

    2014-09-01

    X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of D-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.

  16. Cognitive search model and a new query paradigm

    NASA Astrophysics Data System (ADS)

    Xu, Zhonghui

    2001-06-01

    This paper proposes a cognitive model in which people begin to search pictures by using semantic content and find the right picture by judging whether its visual content is a proper visualization of the semantics desired. It is essential that human search is not just a process of matching computation on visual features but rather a process of visualization of the semantic content known. For people to search electronic images in the way they manually do in the model, we suggest that querying be a semantic-driven process like design. A query-by-design paradigm is proposed in the sense that what you design is what you find. Unlike query-by-example, query-by-design allows users to specify the semantic content through an iterative and incremental interaction process so that a retrieval can start with association and identification of the given semantic content and is refined as further visual cues become available. An experimental image retrieval system, Kuafu, has been under development using the query-by-design paradigm, and an iconic language has been adopted.

  17. Development and benchmarking of TASSER(iter) for the iterative improvement of protein structure predictions.

    PubMed

    Lee, Seung Yup; Skolnick, Jeffrey

    2007-07-01

    To improve the accuracy of TASSER models especially in the limit where threading provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm which uses the templates and contact restraints from TASSER generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins that are ≤ 200 in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å of the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have a RMSD to the native < 6.5 Å, TASSER(iter) shows obvious improvement over TASSER models: For the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets where the success rate improves from 32.0 to 34.8%, with the smallest improvement in the Easy targets from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions. 2007 Wiley-Liss, Inc.

  18. Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)

    DOE PAGES

    Atchley, Adam L.; Painter, Scott L.; Harp, Dylan R.; ...

    2015-09-01

    Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. However, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth system models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth system models challenge validation and parameterization of hydrothermal models. A recently developed surface–subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurements to achieve the goals of constructing a process-rich model based on plausible parameters and to identify fine-scale controls of ALT in ice-wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations functions to evaluate and parameterize different model processes necessary to simulate freeze–thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g., troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models have the potential to provide reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.
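
    The measurement-driven refinement cycle described above (adjust a parameter, run the model, compare against borehole temperatures, repeat) can be sketched with a stand-in thermal model and a one-dimensional calibration. The decay-profile "model", sensor depths, and conductivity bounds below are illustrative assumptions, not ATS or the Barrow data.

```python
# Schematic of the measurement-driven calibration cycle (not ATS itself):
# adjust a soil parameter, run a stand-in thermal model, compare against
# borehole temperature observations, repeat until the misfit stops improving.
import numpy as np
from scipy.optimize import minimize_scalar

depths = np.linspace(0.1, 1.5, 15)                      # m, hypothetical sensor depths
t_obs = -2.0 + 4.0 * np.exp(-depths / 0.6)              # degC, synthetic "measured" profile

def thermal_model(conductivity):
    # Placeholder for a surface-subsurface simulation: a decay profile whose
    # attenuation depth scales with the thermal-conductivity-like parameter.
    return -2.0 + 4.0 * np.exp(-depths / conductivity)

def misfit(conductivity):
    return np.sum((thermal_model(conductivity) - t_obs) ** 2)

result = minimize_scalar(misfit, bounds=(0.1, 3.0), method="bounded")
print("calibrated parameter:", round(result.x, 3))      # should recover about 0.6
```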

  19. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computations of turbine engine hot section components. The general framework of variational formulation and solution algorithms are discussed which were derived from the mixed three field Hu-Washizu principle. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. Algorithmic description of the mixed iterative method includes variations for the quasi static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as the subelement refinement is developed in the framework of the mixed iterative solution, of which the detail is presented. The numerically integrated isoparametric elements implemented in the framework is discussed. Methods to filter certain parts of strain and project the element discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for linear and nonlinear equations included in MHOST program.

  20. Iterative tensor voting for perceptual grouping of ill-defined curvilinear structures.

    PubMed

    Loss, Leandro A; Bebis, George; Parvin, Bahram

    2011-08-01

    In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and the iterative voting frameworks. Its efficacy lies on iterative refinements of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such a mode shifting is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is validated on delineating adherens junctions that are imaged through fluorescence microscopy. However, the method is also applicable for screening other organisms based on characteristics of their cell wall structures. Adherens junctions maintain tissue structural integrity and cell-cell interactions. Visually, they exhibit fibrous patterns that may be diffused, heterogeneous in fluorescence intensity, or punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates.

  1. Iterative methods for dose reduction and image enhancement in tomography

    DOEpatents

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method for creating a three dimensional cross sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction since the invention can produce an image of a desired quality with fewer projections than are required by conventional methods.
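
    The general idea of iterating between object space and Fourier space can be conveyed with a simple alternating-constraint loop: measured Fourier coefficients are re-imposed each cycle and non-negativity is enforced on the object. The sketch below is only illustrative of that style of iteration; it is not the patented reconstruction method, and the test object and sampling mask are arbitrary.

```python
# Minimal sketch of iterative refinement alternating between object space and
# Fourier space: known Fourier coefficients are re-imposed each cycle and
# non-negativity is enforced on the object (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
truth = np.zeros((64, 64))
truth[20:44, 24:40] = 1.0                               # simple synthetic test object

f_true = np.fft.fft2(truth)
mask = rng.random((64, 64)) < 0.25                      # "measured" subset of Fourier space
mask[0, 0] = True                                       # keep the DC term for a sensible demo
measured = f_true * mask

obj = np.zeros((64, 64))                                # start from an empty object
for _ in range(200):
    f = np.fft.fft2(obj)
    f[mask] = measured[mask]                            # Fourier-space constraint
    obj = np.real(np.fft.ifft2(f))
    obj[obj < 0.0] = 0.0                                # object-space constraint

print("relative reconstruction error:", np.linalg.norm(obj - truth) / np.linalg.norm(truth))
```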

  2. Iterative Goal Refinement for Robotics

    DTIC Science & Technology

    2014-06-01

    Researchers have used a variety of ways to represent such constraints (e.g., as a constraint satisfaction problem (Scala, to appear), in PDDL (Vaquro...lifecycle to recent models of replanning (Talamadupala et al., 2013) and continual planning (Scala, to appear). We described goal reasoning in...F., & Barreiro, J. (2013). Towards deliberative control in marine robotics. In Marine Robot Autonomy (pp. 91–175). Springer. Scala, E. (to appear

  3. Operational Evaluation of Self-Paced Instruction in U.S. Army Training.

    DTIC Science & Technology

    1979-01-01

    one iteration of each course, and the on-going refinement and adjustment of managerial techniques. Research Approach A quasi-experimental approach was...research design employed experimental and control groups, posttest only with non-random groups. The design dealt with the six major areas identified as...course on Interpersonal Communications were conducted in the conventional, group-paced manner. Experimental course materials. Wherever possible, existing

  4. Automated Scheduling Via Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.; Cooper, Lynne P.

    1991-01-01

    Artificial-intelligence software that automates scheduling developed in Operations Mission Planner (OMP) research project. Software used in both generation of new schedules and modification of existing schedules in view of changes in tasks and/or available resources. Approach based on iterative refinement. Although project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, also applicable to such terrestrial problems as scheduling production in factory.

  5. Comparisons of Observed Process Quality in German and American Infant/Toddler Programs

    ERIC Educational Resources Information Center

    Tietze, Wolfgang; Cryer, Debby

    2004-01-01

    Observed process quality in infant/toddler classrooms was compared in Germany (n = 75) and the USA (n = 219). Process quality was assessed with the Infant/Toddler Environment Rating Scale(ITERS) and parent attitudes about ITERS content with the ITERS Parent Questionnaire (ITERSPQ). The ITERS had comparable reliabilities in the two countries and…

  6. Hirshfeld atom refinement.

    PubMed

    Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan

    2014-09-01

    Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å(2) as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.

  7. Hirshfeld atom refinement

    PubMed Central

    Capelli, Silvia C.; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan

    2014-01-01

    Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly–l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree–Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints – even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu’s), all other structural parameters agree within less than 2 csu’s. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å2 as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements – an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å. PMID:25295177

  8. 3Drefine: an interactive web server for efficient protein structure refinement.

    PubMed

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Psychosocial work characteristics of personal care and service occupations: a process for developing meaningful measures for a multiethnic workforce.

    PubMed

    Hoppe, Annekatrin; Heaney, Catherine A; Fujishiro, Kaori; Gong, Fang; Baron, Sherry

    2015-01-01

    Despite their rapid increase in number, workers in personal care and service occupations are underrepresented in research on psychosocial work characteristics and occupational health. Some of the research challenges stem from the high proportion of immigrants in these occupations. Language barriers, low literacy, and cultural differences as well as their nontraditional work setting (i.e., providing service for one person in his/her home) make generic questionnaire measures inadequate for capturing salient aspects of personal care and service work. This study presents strategies for (1) identifying psychosocial work characteristics of home care workers that may affect their occupational safety and health and (2) creating survey measures that overcome barriers posed by language, low literacy, and cultural differences. We pursued these aims in four phases: (Phase 1) Six focus groups to identify the psychosocial work characteristics affecting the home care workers' occupational safety and health; (Phase 2) Selection of questionnaire items (i.e., questions or statements to assess the target construct) and first round of cognitive interviews (n = 30) to refine the items in an iterative process; (Phase 3) Item revision and second round of cognitive interviews (n = 11); (Phase 4) Quantitative pilot test to ensure the scales' reliability and validity across three language groups (English, Spanish, and Chinese; total n = 404). Analysis of the data from each phase informed the nature of subsequent phases. This iterative process ensured that survey measures not only met the reliability and validity criteria across groups, but were also meaningful to home care workers. This complex process is necessary when conducting research with nontraditional and multilingual worker populations.

  10. Measurement of a model of implementation for health care: toward a testable theory

    PubMed Central

    2012-01-01

    Background Greenhalgh et al. used a considerable evidence-base to develop a comprehensive model of implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a first step in operationalizing Greenhalgh et al.’s model by providing background, rationale, working definitions, and measurement of key constructs. Methods A systematic review of the literature was conducted for key words representing 53 separate sub-constructs from six of the model’s broad constructs. Using an iterative process, we reviewed existing measures and utilized or adapted items. Where no one measure was deemed appropriate, we developed other items to measure the constructs through consensus. Results The review and iterative process of team consensus identified three types of data that can been used to operationalize the constructs in the model: survey items, interview questions, and administrative data. Specific examples of each of these are reported. Conclusion Despite limitations, the mixed-methods approach to measurement using the survey, interview measure, and administrative data can facilitate research on implementation by providing investigators with a measurement tool that captures most of the constructs identified by the Greenhalgh model. These measures are currently being used to collect data concerning the implementation of two evidence-based psychotherapies disseminated nationally within Department of Veterans Affairs. Testing of psychometric properties and subsequent refinement should enhance the utility of the measures. PMID:22759451

  11. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, what an unsupervised segmentation algorithm can segment is only regions, but not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was not designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing computational time of segmentation are three major requirements which are discussed in detail.

  12. On some variational acceleration techniques and related methods for local refinement

    NASA Astrophysics Data System (ADS)

    Teigland, Rune

    1998-10-01

    This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966) and later generalized to multilevels (known as the additive correction multigrid method (B.R Huthchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986))) is similar to the FAC method of McCormick and Thomas (S.F McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement and suggestions for improving the performance of the method are given.
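
    The flavour of an additive correction scheme can be conveyed with a small two-level example: fine-grid smoothing is supplemented by a coarse correction obtained by agglomerating cells into blocks through a piecewise-constant prolongation. The 1D Poisson matrix, block size, and Jacobi smoother below are arbitrary choices for illustration, not the methods analyzed in the paper.

```python
# Minimal two-level additive-correction sketch for a 1D Poisson system:
# Jacobi smoothing plus a coarse correction built by agglomerating fine cells
# into blocks (piecewise-constant prolongation).
import numpy as np

n, block = 64, 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Poisson matrix
b = np.ones(n)

# Piecewise-constant prolongation P: each fine cell belongs to one coarse block.
P = np.zeros((n, n // block))
for i in range(n):
    P[i, i // block] = 1.0
A_c = P.T @ A @ P                                          # coarse (agglomerated) operator

u = np.zeros(n)
D = np.diag(A)
for it in range(100):
    u = u + (b - A @ u) / D                                # Jacobi smoothing step
    r = b - A @ u
    delta = np.linalg.solve(A_c, P.T @ r)                  # coarse correction equation
    u = u + P @ delta                                      # additive correction
    if np.linalg.norm(b - A @ u) < 1e-8 * np.linalg.norm(b):
        break
print("iterations:", it + 1, "residual:", np.linalg.norm(b - A @ u))
```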

  13. Improving marine disease surveillance through sea temperature monitoring, outlooks and projections

    PubMed Central

    Maynard, Jeffrey; van Hooidonk, Ruben; Harvell, C. Drew; Eakin, C. Mark; Liu, Gang; Willis, Bette L.; Williams, Gareth J.; Dobson, Andrew; Heron, Scott F.; Glenn, Robert; Reardon, Kathleen; Shields, Jeffrey D.

    2016-01-01

    To forecast marine disease outbreaks as oceans warm requires new environmental surveillance tools. We describe an iterative process for developing these tools that combines research, development and deployment for suitable systems. The first step is to identify candidate host–pathogen systems. The 24 candidate systems we identified include sponges, corals, oysters, crustaceans, sea stars, fishes and sea grasses (among others). To illustrate the other steps, we present a case study of epizootic shell disease (ESD) in the American lobster. Increasing prevalence of ESD is a contributing factor to lobster fishery collapse in southern New England (SNE), raising concerns that disease prevalence will increase in the northern Gulf of Maine under climate change. The lowest maximum bottom temperature associated with ESD prevalence in SNE is 12°C. Our seasonal outlook for 2015 and long-term projections show bottom temperatures greater than or equal to 12°C may occur in this and coming years in the coastal bays of Maine. The tools presented will allow managers to target efforts to monitor the effects of ESD on fishery sustainability and will be iteratively refined. The approach and case example highlight that temperature-based surveillance tools can inform research, monitoring and management of emerging and continuing marine disease threats. PMID:26880840

  14. Improving marine disease surveillance through sea temperature monitoring, outlooks and projections.

    PubMed

    Maynard, Jeffrey; van Hooidonk, Ruben; Harvell, C Drew; Eakin, C Mark; Liu, Gang; Willis, Bette L; Williams, Gareth J; Groner, Maya L; Dobson, Andrew; Heron, Scott F; Glenn, Robert; Reardon, Kathleen; Shields, Jeffrey D

    2016-03-05

    To forecast marine disease outbreaks as oceans warm requires new environmental surveillance tools. We describe an iterative process for developing these tools that combines research, development and deployment for suitable systems. The first step is to identify candidate host-pathogen systems. The 24 candidate systems we identified include sponges, corals, oysters, crustaceans, sea stars, fishes and sea grasses (among others). To illustrate the other steps, we present a case study of epizootic shell disease (ESD) in the American lobster. Increasing prevalence of ESD is a contributing factor to lobster fishery collapse in southern New England (SNE), raising concerns that disease prevalence will increase in the northern Gulf of Maine under climate change. The lowest maximum bottom temperature associated with ESD prevalence in SNE is 12 °C. Our seasonal outlook for 2015 and long-term projections show bottom temperatures greater than or equal to 12 °C may occur in this and coming years in the coastal bays of Maine. The tools presented will allow managers to target efforts to monitor the effects of ESD on fishery sustainability and will be iteratively refined. The approach and case example highlight that temperature-based surveillance tools can inform research, monitoring and management of emerging and continuing marine disease threats. © 2016 The Authors.

  15. Strategies to enhance participant recruitment and retention in research involving a community-based population.

    PubMed

    McCullagh, Marjorie C; Sanon, Marie-Anne; Cohen, Michael A

    2014-11-01

    Challenges associated with recruiting and retaining community-based populations in research studies have been recognized yet remain of major concern for researchers. There is a need for exchange of recruitment and retention techniques that inform recruitment and retention strategies. Here, the authors discuss a variety of methods that were successful in exceeding target recruitment and retention goals in a randomized clinical trial of hearing protector use among farm operators. Recruitment and retention strategies were 1) based on a philosophy of mutually beneficial engagement in the research process, 2) culturally appropriate, 3) tailored to the unique needs of partnering agencies, and 4) developed and refined in a cyclical and iterative process. Sponsoring organizations are interested in cost-effective recruitment and retention strategies, particularly relating to culturally and ethnically diverse groups. These approaches may result in enhanced subject recruitment and retention, concomitant containment of study costs, and timely accomplishment of study aims. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. CT Angiography after 20 Years

    PubMed Central

    Rubin, Geoffrey D.; Leipsic, Jonathon; Schoepf, U. Joseph; Fleischmann, Dominik; Napel, Sandy

    2015-01-01

    Through a marriage of spiral computed tomography (CT) and graphical volumetric image processing, CT angiography was born 20 years ago. Fueled by a series of technical innovations in CT and image processing, over the next 5–15 years, CT angiography toppled conventional angiography, the undisputed diagnostic reference standard for vascular disease for the prior 70 years, as the preferred modality for the diagnosis and characterization of most cardiovascular abnormalities. This review recounts the evolution of CT angiography from its development and early challenges to a maturing modality that has provided unique insights into cardiovascular disease characterization and management. Selected clinical challenges, which include acute aortic syndromes, peripheral vascular disease, aortic stent-graft and transcatheter aortic valve assessment, and coronary artery disease, are presented as contrasting examples of how CT angiography is changing our approach to cardiovascular disease diagnosis and management. Finally, the recently introduced capabilities for multispectral imaging, tissue perfusion imaging, and radiation dose reduction through iterative reconstruction are explored with consideration toward the continued refinement and advancement of CT angiography. PMID:24848958

  17. Refining an intervention for students with emotional disturbance using qualitative parent and teacher data

    PubMed Central

    Nese, Rhonda N.T.; Palinkas, Lawrence A.; Ruppert, Traci

    2017-01-01

    Intensive supports are needed for students with emotional disturbance during high-risk transitions. Such interventions are most likely to be successful if they address stakeholder perspectives during the development process. This paper discusses qualitative findings from an iterative intervention development project designed to incorporate parent and teacher feedback early in the development process with applications relevant to the adoption of new programs. Using maximum variation purposive sampling, we solicited feedback from five foster/kinship parents, four biological parents and seven teachers to evaluate the feasibility and utility of the Students With Involved Families and Teachers (SWIFT) intervention in home and school settings. SWIFT provides youth and parent skills coaching in the home and school informed by weekly student behavioral progress monitoring. Participants completed semi-structured interviews that were transcribed and coded via an independent co-coding strategy. The findings provide support for school-based interventions involving family participation and lessons to ensure intervention success. PMID:28966422

  18. Collective memory in primate conflict implied by temporal scaling collapse.

    PubMed

    Lee, Edward D; Daniels, Bryan C; Krakauer, David C; Flack, Jessica C

    2017-09-01

    In biological systems, prolonged conflict is costly, whereas contained conflict permits strategic innovation and refinement. Causes of variation in conflict size and duration are not well understood. We use a well-studied primate society model system to study how conflicts grow. We find that conflict duration is a 'first to fight' growth process that scales superlinearly with the number of possible pairwise interactions. This is in contrast with a 'first to fail' process that characterizes peaceful durations. Rescaling conflict distributions reveals a universal curve, showing that the typical time scale of correlated interactions exceeds nearly all individual fights. This temporal correlation implies collective memory across pairwise interactions beyond those assumed in standard models of contagion growth or iterated evolutionary games. By accounting for memory, we make quantitative predictions for interventions that mitigate or enhance the spread of conflict. Managing conflict involves balancing the efficient use of limited resources with an intervention strategy that allows for conflict while keeping it contained and controlled. © 2017 The Author(s).

  19. Tractable Experiment Design via Mathematical Surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
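
    One generic way to realize iterative refinement of an importance distribution for a small failure probability is a cross-entropy-style loop, sketched below on a one-dimensional toy problem. The response function, thresholds, and Gaussian importance family are illustrative assumptions and are not the specific design criteria described in the presentation.

```python
# Illustrative sequential refinement of an importance distribution (cross-entropy
# style) for estimating a small failure probability P[g(X) > threshold].
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
g = lambda x: x                      # toy response; failure when g(x) > 4 for X ~ N(0, 1)
threshold = 4.0

mu, sigma = 0.0, 1.0                 # importance density, refined iteratively
n = 2000
for level in range(10):
    x = rng.normal(mu, sigma, n)
    y = g(x)
    # Raise an intermediate threshold gradually (90th percentile) until the target is reached.
    t = min(threshold, np.quantile(y, 0.9))
    elite = x[y >= t]
    mu, sigma = elite.mean(), elite.std() + 1e-12
    if t >= threshold:
        break

# Final importance-sampling estimate with the refined density.
x = rng.normal(mu, sigma, n)
w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, mu, sigma)
p_fail = np.mean(w * (g(x) > threshold))
print("estimated failure probability:", p_fail, "(exact:", norm.sf(threshold), ")")
```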

  20. The Chlamydomonas genome project: a decade on

    PubMed Central

    Blaby, Ian K.; Blaby-Haas, Crysten; Tourasse, Nicolas; Hom, Erik F. Y.; Lopez, David; Aksoy, Munevver; Grossman, Arthur; Umen, James; Dutcher, Susan; Porter, Mary; King, Stephen; Witman, George; Stanke, Mario; Harris, Elizabeth H.; Goodstein, David; Grimwood, Jane; Schmutz, Jeremy; Vallon, Olivier; Merchant, Sabeeha S.; Prochnik, Simon

    2014-01-01

    The green alga Chlamydomonas reinhardtii is a popular unicellular organism for studying photosynthesis, cilia biogenesis and micronutrient homeostasis. Ten years since its genome project was initiated, an iterative process of improvements to the genome and gene predictions has propelled this organism to the forefront of the “omics” era. Housed at Phytozome, the Joint Genome Institute’s (JGI) plant genomics portal, the most up-to-date genomic data include a genome arranged on chromosomes and high-quality gene models with alternative splice forms supported by an abundance of RNA-Seq data. Here, we present the past, present and future of Chlamydomonas genomics. Specifically, we detail progress on genome assembly and gene model refinement, discuss resources for gene annotations, functional predictions and locus ID mapping between versions and, importantly, outline a standardized framework for naming genes. PMID:24950814

  1. An Algorithm for Converting Static Earth Sensor Measurements into Earth Observation Vectors

    NASA Technical Reports Server (NTRS)

    Harman, R.; Hashmall, Joseph A.; Sedlak, Joseph

    2004-01-01

    An algorithm has been developed that converts penetration angles reported by Static Earth Sensors (SESs) into Earth observation vectors. This algorithm allows compensation for variation in the horizon height, including that caused by Earth oblateness. It also allows pitch and roll to be computed using any number (greater than 1) of simultaneous sensor penetration angles, simplifying processing during periods of Sun and Moon interference. The algorithm computes body frame unit vectors through each SES cluster. It also computes GCI vectors from the spacecraft to the position on the Earth's limb where each cluster detects the Earth's limb. These body frame vectors are used as sensor observation vectors and the GCI vectors are used as reference vectors in an attitude solution. The attitude, with the unobservable yaw discarded, is iteratively refined to provide the Earth observation vector solution.

  2. A Methodology for Improving Active Learning Engineering Courses with a Large Number of Students and Teachers through Feedback Gathering and Iterative Refinement

    ERIC Educational Resources Information Center

    Estévez-Ayres, Iria; Alario-Hoyos, Carlos; Pérez-Sanagustín, Mar; Pardo, Abelardo; Crespo-García, Raquel M.; Leony, Derick; Parada G., Hugo A.; Delgado-Kloos, Carlos

    2015-01-01

    In the last decade, engineering education has evolved in many ways to meet society demands. Universities offer more flexible curricula and put a lot of effort on the acquisition of professional engineering skills by the students. In many universities, the courses in the first years of different engineering degrees share program and objectives,…

  3. Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, B.

    2013-01-01

    A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases speed and accuracy of the iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes as well as during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires very little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed as well as a case study highlighting the tool's effectiveness.
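
    A stripped-down PSO loop of the kind described above is sketched here, tuning two parameters of a stand-in model so that its output matches a reference curve. The swarm settings, the exponential "simulation", and the synthetic reference data are all assumptions for illustration; the actual Trick-based Morpheus models and flight telemetry are not reproduced.

```python
# Minimal particle swarm optimization sketch for tuning simulation parameters to
# match reference data. The "simulation" here is a stand-in function.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 10.0, 200)
reference = 3.0 * np.exp(-0.4 * t)                     # synthetic "hardware" response

def simulate(params):
    gain, damping = params
    return gain * np.exp(-damping * t)

def cost(params):
    return np.mean((simulate(params) - reference) ** 2)

n_particles, n_iters = 30, 150
w, c1, c2 = 0.7, 1.5, 1.5                              # inertia and acceleration weights
pos = rng.uniform([0.0, 0.0], [10.0, 2.0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("tuned parameters:", np.round(gbest, 3))          # should approach [3.0, 0.4]
```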

  4. Scattering property based contextual PolSAR speckle filter

    NASA Astrophysics Data System (ADS)

    Mullissa, Adugna G.; Tolpekin, Valentyn; Stein, Alfred

    2017-12-01

    Reliability of the scattering model based polarimetric SAR (PolSAR) speckle filter depends upon the accurate decomposition and classification of the scattering mechanisms. This paper presents an improved scattering property based contextual speckle filter based upon an iterative classification of the scattering mechanisms. It applies a Cloude-Pottier eigenvalue-eigenvector decomposition and a fuzzy H/α classification to determine the scattering mechanisms on a pre-estimate of the coherency matrix. The H/α classification identifies pixels with homogeneous scattering properties. A coarse pixel selection rule groups pixels that are either single bounce, double bounce or volume scatterers. A fine pixel selection rule is applied to pixels within each canonical scattering mechanism. We filter the PolSAR data and depending on the type of image scene (urban or rural) use either the coarse or fine pixel selection rule. Iterative refinement of the Wishart H/α classification reduces the speckle in the PolSAR data. Effectiveness of this new filter is demonstrated by using both simulated and real PolSAR data. It is compared with the refined Lee filter, the scattering model based filter and the non-local means filter. The study concludes that the proposed filter compares favorably with other polarimetric speckle filters in preserving polarimetric information, point scatterers and subtle features in PolSAR data.

  5. Recursive Factorization of the Inverse Overlap Matrix in Linear-Scaling Quantum Molecular Dynamics Simulations.

    PubMed

    Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N

    2016-07-12

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
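
    The core of the iterative refinement of Z as the inverse square root of S can be written down as a few lines of dense Newton-Schulz iteration, shown below on a small random overlap-like matrix. The production algorithm instead works on numerically thresholded sparse matrices in the ELLPACK-R format and seeds Z from previous MD time steps; the scaling choice and iteration count here are illustrative assumptions.

```python
# Dense Newton-Schulz sketch of the iterative refinement of Z ~ S^(-1/2).
# Only the core iteration is shown, on a small symmetric positive definite matrix.
import numpy as np

rng = np.random.default_rng(7)
n = 50
B = rng.normal(size=(n, n))
S = np.eye(n) + 0.01 * (B + B.T)             # SPD "overlap" matrix, close to identity

s = np.linalg.norm(S)                        # scale so the iteration converges
Y, Z = S / s, np.eye(n)                      # Y -> (S/s)^(1/2), Z -> (S/s)^(-1/2)
I = np.eye(n)
for _ in range(30):
    T = 0.5 * (3.0 * I - Z @ Y)
    Y, Z = Y @ T, T @ Z
Z = Z / np.sqrt(s)                           # undo the scaling: Z ~ S^(-1/2)

print("||Z S Z - I|| =", np.linalg.norm(Z @ S @ Z - I))
```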

  6. Structure-based coarse-graining for inhomogeneous liquid polymer systems.

    PubMed

    Fukuda, Motoo; Zhang, Hedong; Ishiguro, Takahiro; Fukuzawa, Kenji; Itoh, Shintaro

    2013-08-07

    The iterative Boltzmann inversion (IBI) method is used to derive interaction potentials for coarse-grained (CG) systems by matching structural properties of a reference atomistic system. However, because it depends on such thermodynamic conditions as density and pressure of the reference system, the derived CG nonbonded potential is probably not applicable to inhomogeneous systems containing different density regimes. In this paper, we propose a structure-based coarse-graining scheme to devise CG nonbonded potentials that are applicable to different density bulk systems and inhomogeneous systems with interfaces. Similar to the IBI, the radial distribution function (RDF) of a reference atomistic bulk system is used for iteratively refining the CG nonbonded potential. In contrast to the IBI, however, our scheme employs an appropriately estimated initial guess and a small amount of refinement to suppress transfer of the many-body interaction effects included in the reference RDF into the CG nonbonded potential. To demonstrate the application of our approach to inhomogeneous systems, we perform coarse-graining for a liquid perfluoropolyether (PFPE) film coated on a carbon surface. The constructed CG PFPE model favorably reproduces structural and density distribution functions, not only for bulk systems, but also at the liquid-vacuum and liquid-solid interfaces, demonstrating that our CG scheme offers an easy and practical way to accurately determine nonbonded potentials for inhomogeneous systems.
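
    The IBI correction itself is a one-line update of the tabulated pair potential; a schematic loop is sketched below. The coarse-grained simulation and RDF histogramming are deliberately left as a placeholder, and the radial grid, temperature, and damping factor are illustrative assumptions rather than values from the paper.

```python
# Schematic iterative Boltzmann inversion loop. `run_cg_simulation_rdf` is a
# placeholder for an actual coarse-grained MD run plus RDF calculation; only the
# standard IBI potential update is shown.
import numpy as np

kT = 2.494                                          # kJ/mol at 300 K
r = np.linspace(0.3, 1.5, 120)                      # nm, radial grid
g_target = 1.0 + 0.5 * np.exp(-((r - 0.5) / 0.08) ** 2)   # synthetic reference (atomistic) RDF

def run_cg_simulation_rdf(V):
    # Placeholder: in practice, run CG MD with the tabulated potential V(r)
    # and histogram the pair distances to obtain g(r).
    raise NotImplementedError

V = -kT * np.log(np.clip(g_target, 1e-6, None))     # common initial guess: potential of mean force

def ibi_update(V, g_current, g_target, alpha=1.0):
    """One IBI correction: V_new(r) = V(r) + alpha * kT * ln(g_current / g_target)."""
    ratio = np.clip(g_current, 1e-6, None) / np.clip(g_target, 1e-6, None)
    return V + alpha * kT * np.log(ratio)

# Typical driver (pseudo-usage, shown as comments because the simulator is a stub):
# for iteration in range(20):
#     g_current = run_cg_simulation_rdf(V)
#     if np.max(np.abs(g_current - g_target)) < 0.02:
#         break
#     V = ibi_update(V, g_current, g_target, alpha=0.2)
```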

  7. 3-D minimum-structure inversion of magnetotelluric data using the finite-element method and tetrahedral grids

    NASA Astrophysics Data System (ADS)

    Jahandari, H.; Farquharson, C. G.

    2017-11-01

    Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration which greatly minimizes the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions sufficiently recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
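
    A generic, matrix-free version of the Gauss-Newton step described above can be sketched with a linear operator built from Jacobian-vector products and solved with conjugate gradients. The toy nonlinear forward model, damping-style regularization, and iteration counts below are assumptions for illustration, not the edge-based finite-element MT formulation.

```python
# Generic sketch of a regularized Gauss-Newton iteration solved with conjugate
# gradients using only Jacobian-vector products (no explicit Jacobian or Hessian).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(8)
n_model, n_data = 40, 60
G = rng.normal(size=(n_data, n_model))

def forward(m):
    return G @ np.tanh(m)                       # toy nonlinear forward operator

def jacobian_vec(m, v):
    return G @ ((1.0 - np.tanh(m) ** 2) * v)    # J v

def jacobian_T_vec(m, w):
    return (1.0 - np.tanh(m) ** 2) * (G.T @ w)  # J^T w

m_true = rng.normal(size=n_model)
d_obs = forward(m_true) + 0.01 * rng.normal(size=n_data)

m = np.zeros(n_model)                           # starting model
beta = 1e-2                                     # damping / regularization weight
for _ in range(10):
    r = d_obs - forward(m)
    # Normal equations (J^T J + beta I) dm = J^T r, applied matrix-free.
    op = LinearOperator((n_model, n_model),
                        matvec=lambda v: jacobian_T_vec(m, jacobian_vec(m, v)) + beta * v)
    dm, _ = cg(op, jacobian_T_vec(m, r), maxiter=200)
    m = m + dm
print("final data misfit:", np.linalg.norm(d_obs - forward(m)))
```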

  8. Developing an expert panel process to refine health outcome definitions in observational data.

    PubMed

    Fox, Brent I; Hollingsworth, Joshua C; Gray, Michael D; Hollingsworth, Michael L; Gao, Juan; Hansen, Richard A

    2013-10-01

    Drug safety surveillance using observational data requires valid adverse event, or health outcome of interest (HOI) measurement. The objectives of this study were to develop a method to review HOI definitions in claims databases using (1) web-based digital tools to present de-identified patient data, (2) a systematic expert panel review process, and (3) a data collection process enabling analysis of concepts-of-interest that influence panelists' determination of HOI. De-identified patient data were presented via an interactive web-based dashboard to enable case review and determine if specific HOIs were present or absent. Criteria for determining HOIs and their severity were provided to each panelist. Using a modified Delphi method, six panelist pairs independently reviewed approximately 200 cases across each of three HOIs (acute liver injury, acute kidney injury, and acute myocardial infarction) such that panelist pairs independently reviewed the same cases. Panelists completed an assessment within the dashboard for each case that included their assessment of the presence or absence of the HOI, HOI severity (if present), and data contributing to their decision. Discrepancies within panelist pairs were resolved during a consensus process. Dashboard development was iterative, focusing on data presentation and recording panelists' assessments. Panelists reported quickly learning how to use the dashboard. The assessment module was used consistently. The dashboard was reliable, enabling an efficient review process for panelists. Modifications were made to the dashboard and review process when necessary to facilitate case review. Our methods should be applied to other health outcomes of interest to further refine the dashboard and case review process. The expert review process was effective and was supported by the web-based dashboard. Our methods for case review and classification can be applied to future methods for case identification in observational data sources. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. A mathematical approach to molecular organization and proteolytic disintegration of bacterial inclusion bodies.

    PubMed

    Cubarsi, R; Carrió, M M; Villaverde, A

    2005-09-01

    The in vivo proteolytic digestion of bacterial inclusion bodies (IBs) and the kinetic analysis of the resulting protein fragments is an interesting approach to investigate the molecular organization of these unconventional protein aggregates. In this work, we describe a set of mathematical instruments useful for such analysis and interpretation of observed data. These methods combine numerical estimation of digestion rate and approximation of its high-order derivatives, modelling of fragmentation events from a mixture of Poisson processes associated with differentiated protein species, differential equations techniques in order to estimate the mixture parameters, an iterative predictor-corrector algorithm for describing the flow diagram along the cascade process, as well as least squares procedures with minimum variance estimates. The models are formulated and compared with data, and successively refined to better match experimental observations. By applying such procedures as well as newer improved algorithms of formerly developed equations, it has been possible to model, for two kinds of bacterially produced aggregation prone recombinant proteins, their cascade digestion process that has revealed intriguing features of the IB-forming polypeptides.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gürsoy, Doğa; Hong, Young P.; He, Kuan

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
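
    The abstract does not give the algorithm itself, but the joint-estimation idea can be illustrated by alternating between reconstructing from the currently aligned projections and re-estimating a per-projection shift against reprojections of that reconstruction. The sketch below is a simplified 2D parallel-beam analogue, not the authors' implementation; the scikit-image/scipy calls, the cross-correlation shift estimator and all parameter values are assumptions.

    ```python
    # Minimal sketch: joint alignment + reconstruction by alternating between
    # (1) filtered back-projection of the currently shifted sinogram and
    # (2) per-angle shift re-estimation against reprojections of the result.
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.transform import radon, iradon

    def estimate_shift_1d(measured, reprojected):
        """Lag of the cross-correlation peak (sign convention is illustrative)."""
        corr = np.correlate(measured - measured.mean(),
                            reprojected - reprojected.mean(), mode="full")
        return corr.argmax() - (len(measured) - 1)

    def joint_align_reconstruct(sinogram, angles, n_iter=10):
        # sinogram: (n_detector, n_angles) array of possibly misaligned projections
        shifts = np.zeros(len(angles))
        recon = None
        for _ in range(n_iter):
            # 1) reconstruct with the current shift estimates applied
            aligned = np.column_stack(
                [nd_shift(sinogram[:, i], -shifts[i], order=1)
                 for i in range(len(angles))])
            recon = iradon(aligned, theta=angles, filter_name="ramp")
            # 2) re-estimate shifts against reprojections of the reconstruction
            reproj = radon(recon, theta=angles)
            for i in range(len(angles)):
                m = min(sinogram.shape[0], reproj.shape[0])  # guard size mismatch
                shifts[i] += estimate_shift_1d(sinogram[:m, i], reproj[:m, i])
        return recon, shifts
    ```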

  11. "They Have to Adapt to Learn": Surgeons' Perspectives on the Role of Procedural Variation in Surgical Education.

    PubMed

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2016-01-01

    Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons' perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents' efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, are important social processes in surgical education. Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. However, the existence of procedural variations and their active use in surgeons' teaching raises questions about the lack of attention to this form of complexity in current workplace-based assessment strategies. Failure to recognize the role of such variations may threaten the implementation of competency-based medical education in surgery. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  12. “They Have to Adapt to Learn”: Surgeons’ Perspectives on the Role of Procedural Variation in Surgical Education

    PubMed Central

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2017-01-01

    OBJECTIVE Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons’ perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. DESIGN AND SETTING This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. PARTICIPANTS Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. RESULTS Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents’ efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, are important social processes in surgical education. CONCLUSIONS Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. However, the existence of procedural variations and their active use in surgeons’ teaching raises questions about the lack of attention to this form of complexity in current workplace-based assessment strategies. Failure to recognize the role of such variations may threaten the implementation of competency-based medical education in surgery. PMID:26705062

  13. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  14. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  15. Consolidated principles for screening based on a systematic review and consensus process.

    PubMed

    Dobrow, Mark J; Hagens, Victoria; Chafe, Roger; Sullivan, Terrence; Rabeneck, Linda

    2018-04-09

    In 1968, Wilson and Jungner published 10 principles of screening that often represent the de facto starting point for screening decisions today; 50 years on, are these principles still the right ones? Our objectives were to review published work that presents principles for population-based screening decisions since Wilson and Jungner's seminal publication, and to conduct a Delphi consensus process to assess the review results. We conducted a systematic review and modified Delphi consensus process. We searched multiple databases for articles published in English in 1968 or later that were intended to guide population-based screening decisions, described development and modification of principles, and presented principles as a set or list. Identified sets were compared for basic characteristics (e.g., number, categorization), a citation analysis was conducted, and principles were iteratively synthesized and consolidated into categories to assess evolution. Participants in the consensus process assessed the level of agreement with the importance and interpretability of the consolidated screening principles. We identified 41 sets and 367 unique principles. Each unique principle was coded to 12 consolidated decision principles that were further categorized as disease/condition, test/intervention or program/system principles. Program or system issues were the focus of 3 of Wilson and Jungner's 10 principles, but comprised almost half of all unique principles identified in the review. The 12 consolidated principles were assessed through 2 rounds of the consensus process, leading to specific refinements to improve their relevance and interpretability. No gaps or missing principles were identified. Wilson and Jungner's principles are remarkably enduring, but increasingly reflect a truncated version of contemporary thinking on screening that does not fully capture subsequent focus on program or system principles. Ultimately, this review and consensus process provides a comprehensive and iterative modernization of guidance to inform population-based screening decisions. © 2018 Joule Inc. or its licensors.

  16. Consolidated principles for screening based on a systematic review and consensus process

    PubMed Central

    Hagens, Victoria; Chafe, Roger; Sullivan, Terrence; Rabeneck, Linda

    2018-01-01

    BACKGROUND: In 1968, Wilson and Jungner published 10 principles of screening that often represent the de facto starting point for screening decisions today; 50 years on, are these principles still the right ones? Our objectives were to review published work that presents principles for population-based screening decisions since Wilson and Jungner’s seminal publication, and to conduct a Delphi consensus process to assess the review results. METHODS: We conducted a systematic review and modified Delphi consensus process. We searched multiple databases for articles published in English in 1968 or later that were intended to guide population-based screening decisions, described development and modification of principles, and presented principles as a set or list. Identified sets were compared for basic characteristics (e.g., number, categorization), a citation analysis was conducted, and principles were iteratively synthesized and consolidated into categories to assess evolution. Participants in the consensus process assessed the level of agreement with the importance and interpretability of the consolidated screening principles. RESULTS: We identified 41 sets and 367 unique principles. Each unique principle was coded to 12 consolidated decision principles that were further categorized as disease/condition, test/intervention or program/system principles. Program or system issues were the focus of 3 of Wilson and Jungner’s 10 principles, but comprised almost half of all unique principles identified in the review. The 12 consolidated principles were assessed through 2 rounds of the consensus process, leading to specific refinements to improve their relevance and interpretability. No gaps or missing principles were identified. INTERPRETATION: Wilson and Jungner’s principles are remarkably enduring, but increasingly reflect a truncated version of contemporary thinking on screening that does not fully capture subsequent focus on program or system principles. Ultimately, this review and consensus process provides a comprehensive and iterative modernization of guidance to inform population-based screening decisions. PMID:29632037

  17. Adjusting stream-sediment geochemical maps in the Austrian Bohemian Massif by analysis of variance

    USGS Publications Warehouse

    Davis, J.C.; Hausberger, G.; Schermann, O.; Bohling, G.

    1995-01-01

    The Austrian portion of the Bohemian Massif is a Precambrian terrane composed mostly of highly metamorphosed rocks intruded by a series of granitoids that are petrographically similar. Rocks are exposed poorly and the subtle variations in rock type are difficult to map in the field. A detailed geochemical survey of stream sediments in this region has been conducted and included as part of the Geochemischer Atlas der Republik Österreich, and the variations in stream sediment composition may help refine the geological interpretation. In an earlier study, multivariate analysis of variance (MANOVA) was applied to the stream-sediment data in order to minimize unwanted sampling variation and emphasize relationships between stream sediments and rock types in sample catchment areas. The estimated coefficients were used successfully to correct for the sampling effects throughout most of the region, but also introduced an overcorrection in some areas that seems to result from consistent but subtle differences in composition of specific rock types. By expanding the model to include an additional factor reflecting the presence of a major tectonic unit, the Rohrbach block, the overcorrection is removed. This iterative process simultaneously refines both the geochemical map by removing extraneous variation and the geological map by suggesting a more detailed classification of rock types. © 1995 International Association for Mathematical Geology.

  18. Rasch Measurement Analysis of the Mayo-Portland Adaptability Inventory (MPAI-4) in a Community-Based Rehabilitation Sample

    PubMed Central

    Malec, James F.; Altman, Irwin M.; Swick, Shannon

    2011-01-01

    The precise measurement of patient outcomes depends upon clearly articulated constructs and refined clinical assessment instruments that work equally well for all subgroups within a population. This is a challenging task in those with acquired brain injury (ABI) because of the marked heterogeneity of the disorder and subsequent outcomes. Although essential, the iterative process of instrument refinement is often neglected. This present study was undertaken to examine validity, reliability, dimensionality and item estimate invariance of the Mayo-Portland Adaptability Inventory – 4 (MPAI-4), an outcome measure for persons with ABI. The sampled population included 603 persons with traumatic ABI participating in a home- and community-based rehabilitation program. Results indicated that the MPAI-4 is a valid, reliable measure of outcome following traumatic ABI, which measures a broad but unitary core construct of outcome after ABI. Further, the MPAI-4 is composed of items that are unbiased toward selected subgroups except where differences could be expected [e.g., more chronic traumatic brain injury (TBI) patients are better able to negotiate demands of transportation than more acute TBI patients]. We address the trade-offs between strict unidimensionality and clinical applicability in measuring outcome, and illustrate the advantages and disadvantages of applying single-parameter measurement models to broad constructs. PMID:21332409

  19. Rasch measurement analysis of the Mayo-Portland Adaptability Inventory (MPAI-4) in a community-based rehabilitation sample.

    PubMed

    Kean, Jacob; Malec, James F; Altman, Irwin M; Swick, Shannon

    2011-05-01

    The precise measurement of patient outcomes depends upon clearly articulated constructs and refined clinical assessment instruments that work equally well for all subgroups within a population. This is a challenging task in those with acquired brain injury (ABI) because of the marked heterogeneity of the disorder and subsequent outcomes. Although essential, the iterative process of instrument refinement is often neglected. This present study was undertaken to examine validity, reliability, dimensionality and item estimate invariance of the Mayo-Portland Adaptability Inventory - 4 (MPAI-4), an outcome measure for persons with ABI. The sampled population included 603 persons with traumatic ABI participating in a home- and community-based rehabilitation program. Results indicated that the MPAI-4 is a valid, reliable measure of outcome following traumatic ABI, which measures a broad but unitary core construct of outcome after ABI. Further, the MPAI-4 is composed of items that are unbiased toward selected subgroups except where differences could be expected [e.g., more chronic traumatic brain injury (TBI) patients are better able to negotiate demands of transportation than more acute TBI patients]. We address the trade-offs between strict unidimensionality and clinical applicability in measuring outcome, and illustrate the advantages and disadvantages of applying single-parameter measurement models to broad constructs.

  20. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
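
    The core of the iterative scheme described above can be sketched in a few lines: missing entries are seeded with row means and then repeatedly re-estimated from the k most similar genes, with similarities computed on the current, already-imputed matrix so that earlier estimates are reused. This is a minimal numpy sketch of that idea, not the published IKNNimpute code; the weighting scheme and parameter values are assumptions.

    ```python
    # Minimal sketch of iterative KNN imputation (not the authors' IKNNimpute code).
    import numpy as np

    def iterative_knn_impute(X, k=10, n_iter=5):
        X = np.asarray(X, dtype=float)
        missing = np.isnan(X)
        filled = X.copy()
        row_means = np.nanmean(X, axis=1)            # assumes every gene has some observed values
        filled[missing] = row_means[np.where(missing)[0]]
        for _ in range(n_iter):
            # gene-gene distances on the current (already imputed) matrix
            d = np.linalg.norm(filled[:, None, :] - filled[None, :, :], axis=2)
            np.fill_diagonal(d, np.inf)
            for i in np.unique(np.where(missing)[0]):
                neighbors = np.argsort(d[i])[:k]
                w = 1.0 / (d[i, neighbors] + 1e-12)  # inverse-distance weights
                cols = np.where(missing[i])[0]
                filled[i, cols] = (w @ filled[neighbors][:, cols]) / w.sum()
        return filled
    ```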

  1. What Are Student Inservice Teachers Talking about in Their Online Communities of Practice? Investigating Student Inservice Teachers' Experiences in a Double-Layered CoP

    ERIC Educational Resources Information Center

    Lee, Kyungmee; Brett, Clare

    2013-01-01

    This qualitative case study is the first phase of a large-scale design-based research project to implement a theoretically derived double-layered CoP model within real-world teacher development practices. The main goal of this first iteration is to evaluate the courses and test and refine the CoP model for future implementations. This paper…

  2. System Integration Issues in Digital Photogrammetric Mapping

    DTIC Science & Technology

    1992-01-01

    elevation models, and/or rectified imagery/orthophotos. Imagery exported from the DSPW can be either in a tiled image format or standard raster format...data. In the near future, correlation using "window shaping" operations along with an iterative orthophoto refinement methodology (Norvelle, 1992) is...components of TIES. The IDS passes tiled image data and ASCII header data to the DSPW. The tiled image file contains only image data. The ASCII header

  3. Soft Clustering Criterion Functions for Partitional Document Clustering

    DTIC Science & Technology

    2004-05-26

    in the cluster that it already belongs to. The refinement phase ends as soon as we perform an iteration in which no documents moved between...it with the one obtained by the hard criterion functions. We present a comprehensive experimental evaluation involving twelve different datasets

  4. Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.

  5. 3D level set methods for evolving fronts on tetrahedral meshes with adaptive mesh refinement

    DOE PAGES

    Morgan, Nathaniel Ray; Waltz, Jacob I.

    2017-03-02

    The level set method is commonly used to model dynamically evolving fronts and interfaces. In this work, we present new methods for evolving fronts with a specified velocity field or in the surface normal direction on 3D unstructured tetrahedral meshes with adaptive mesh refinement (AMR). The level set field is located at the nodes of the tetrahedral cells and is evolved using new upwind discretizations of Hamilton–Jacobi equations combined with a Runge–Kutta method for temporal integration. The level set field is periodically reinitialized to a signed distance function using an iterative approach with a new upwind gradient. We discuss themore » details of these level set and reinitialization methods. Results from a range of numerical test problems are presented.« less
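
    The reinitialization step mentioned above is commonly posed as pseudo-time iteration of d(phi)/d(tau) = sign(phi0) * (1 - |grad phi|). The sketch below shows that iterative structure on a simple 2D structured grid with first-order Godunov upwinding; it is only an analogue of the paper's method, which works on unstructured tetrahedral meshes with AMR, and the step sizes and boundary treatment are illustrative.

    ```python
    # Minimal sketch of level-set reinitialization to a signed distance function
    # by pseudo-time iteration (2D structured-grid analogue, periodic boundaries).
    import numpy as np

    def reinitialize(phi0, h=1.0, n_iter=50, dtau=0.3):
        phi = phi0.astype(float).copy()
        s = phi0 / np.sqrt(phi0**2 + h**2)               # smoothed sign of the initial field
        for _ in range(n_iter):
            dxm = (phi - np.roll(phi,  1, axis=0)) / h   # backward differences
            dxp = (np.roll(phi, -1, axis=0) - phi) / h   # forward differences
            dym = (phi - np.roll(phi,  1, axis=1)) / h
            dyp = (np.roll(phi, -1, axis=1) - phi) / h
            # Godunov upwind gradient magnitude, chosen by the sign of phi0
            gp = np.sqrt(np.maximum(np.maximum(dxm, 0)**2, np.minimum(dxp, 0)**2) +
                         np.maximum(np.maximum(dym, 0)**2, np.minimum(dyp, 0)**2))
            gm = np.sqrt(np.maximum(np.minimum(dxm, 0)**2, np.maximum(dxp, 0)**2) +
                         np.maximum(np.minimum(dym, 0)**2, np.maximum(dyp, 0)**2))
            grad = np.where(phi0 > 0, gp, gm)
            phi = phi - dtau * s * (grad - 1.0)          # pseudo-time update
        return phi
    ```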

  6. Metaheuristics-Assisted Combinatorial Screening of Eu2+-Doped Ca-Sr-Ba-Li-Mg-Al-Si-Ge-N Compositional Space in Search of a Narrow-Band Green Emitting Phosphor and Density Functional Theory Calculations.

    PubMed

    Lee, Jin-Woong; Singh, Satendra Pal; Kim, Minseuk; Hong, Sung Un; Park, Woon Bae; Sohn, Kee-Sun

    2017-08-21

    A metaheuristics-based design would be of great help in relieving the enormous experimental burdens faced during the combinatorial screening of a huge, multidimensional search space, while providing the same effect as total enumeration. In order to tackle the high-throughput powder processing complications and to secure practical phosphors, metaheuristics, an elitism-reinforced nondominated sorting genetic algorithm (NSGA-II), was employed in this study. The NSGA-II iteration targeted two objective functions. The first was to search for a higher emission efficacy. The second was to search for narrow-band green color emissions. The NSGA-II iteration finally converged on BaLi2Al2Si2N6:Eu2+ phosphors in the Eu2+-doped Ca-Sr-Ba-Li-Mg-Al-Si-Ge-N compositional search space. The BaLi2Al2Si2N6:Eu2+ phosphor, which was synthesized with no human intervention via the assistance of NSGA-II, was a clear single phase and gave an acceptable luminescence. The BaLi2Al2Si2N6:Eu2+ phosphor as well as all other phosphors that appeared during the NSGA-II iterations were examined in detail by employing powder X-ray diffraction-based Rietveld refinement, X-ray absorption near edge structure, density functional theory calculation, and time-resolved photoluminescence. The thermodynamic stability and the band structure plausibility were confirmed, and more importantly a novel approach to the energy transfer analysis was also introduced for BaLi2Al2Si2N6:Eu2+ phosphors.
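
    For orientation, the elitist non-dominated-sorting loop behind an NSGA-II-style search can be sketched as below. In the study the two objectives were evaluated experimentally on synthesized phosphors, so the `evaluate` function here is a placeholder, and the composition encoding, population size, variation operators and the omission of crowding-distance tie-breaking are all simplifying assumptions.

    ```python
    # Minimal NSGA-II-style loop (illustrative only; not the authors' implementation).
    import numpy as np

    rng = np.random.default_rng(0)

    def evaluate(x):
        # placeholder objectives to be minimized; replace with experimental measurements
        return np.array([np.sum((x - 0.3)**2), np.sum((x - 0.7)**2)])

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    def non_dominated_sort(F):
        fronts, assigned = [], np.zeros(len(F), dtype=bool)
        while not assigned.all():
            front = [i for i in range(len(F)) if not assigned[i] and
                     not any(dominates(F[j], F[i]) for j in range(len(F))
                             if j != i and not assigned[j])]
            fronts.append(front)
            assigned[front] = True
        return fronts

    def nsga2_step(pop, pop_size=20):
        # variation: blend crossover + Gaussian mutation on composition fractions
        children = []
        for _ in range(pop_size):
            a, b = pop[rng.integers(len(pop))], pop[rng.integers(len(pop))]
            child = np.clip(0.5 * (a + b) + rng.normal(0, 0.05, a.shape), 0, 1)
            children.append(child / child.sum())       # keep fractions normalized
        combined = pop + children
        F = [evaluate(x) for x in combined]
        survivors = []
        for front in non_dominated_sort(F):            # elitist selection, front by front
            for i in front:
                if len(survivors) < pop_size:
                    survivors.append(combined[i])
        return survivors

    pop = [rng.dirichlet(np.ones(9)) for _ in range(20)]  # 9 hypothetical cation fractions
    for generation in range(5):
        pop = nsga2_step(pop)
    ```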

  7. An Iterative Inference Procedure Applying Conditional Random Fields for Simultaneous Classification of Land Cover and Land Use

    NASA Astrophysics Data System (ADS)

    Albert, L.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.
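
    A drastically simplified sketch of the alternating ("iterative inference") idea is shown below: two layers repeatedly update their label predictions, each using the other layer's current predictions as contextual features. Real CRF inference over super-pixels and database objects is replaced by plain per-site linear scoring, and every array and weight name is hypothetical.

    ```python
    # Simplified alternating-update analogue of the two-layer iterative inference.
    import numpy as np

    def iterative_inference(unary_cover, unary_use, cover_to_use, coupling, n_iter=5):
        # unary_cover: (n_cover_sites, n_cover_classes) scores from image features
        # unary_use:   (n_use_sites,   n_use_classes)   scores from object features
        # cover_to_use: (n_use_sites, n_cover_sites) membership of cover sites in use objects
        # coupling:     (n_cover_classes, n_use_classes) compatibility weights (hypothetical)
        cover = unary_cover.argmax(axis=1)
        use = unary_use.argmax(axis=1)
        for _ in range(n_iter):
            # land-use update: add context from the current land-cover composition
            cover_hist = cover_to_use @ np.eye(unary_cover.shape[1])[cover]
            use = (unary_use + cover_hist @ coupling).argmax(axis=1)
            # land-cover update: add context from the land use of the containing object
            use_onehot = np.eye(unary_use.shape[1])[use]
            cover = (unary_cover + (cover_to_use.T @ use_onehot) @ coupling.T).argmax(axis=1)
        return cover, use
    ```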

  8. A Framework for Automating Cost Estimates in Assembly Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calton, T.L.; Peters, R.R.

    1998-12-09

    When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead-time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success; lower life-cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.

  9. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  10. The EU-project United4Health: User-Centred Design and Evaluation of a Collaborative Information System for a Norwegian Telehealth Service.

    PubMed

    Smaradottir, Berglind; Gerdes, Martin; Martinez, Santiago; Fensli, Rune

    2015-01-01

    This study presents the user-centred design and evaluation process of a Collaborative Information System (CIS), developed for a new telehealth service for remote monitoring of chronic obstructive pulmonary disease patients after hospital discharge. The CIS was designed based on the information gathered in a workshop, where target end-users described the context of use, a telehealth workflow and their preferred ways of interaction with the solution. Evaluation of the iterative refinements was made through user tests, semi-structured interviews and a questionnaire. A field trial reported results on the ease of use and user satisfaction during the interaction with the fully developed system. The implemented CIS was successfully deployed within the secured Norwegian Health Network. The research was a result of cooperation between international partners within the EU FP7 project United4Health.

  11. Self-Stigma in Substance Abuse: Development of a New Measure

    PubMed Central

    Luoma, Jason B.; Nobles, Richard H.; Drake, Chad E.; Hayes, Steven C.; O’Hair, Alyssa; Fletcher, Lindsay; Kohlenberg, Barbara S.

    2012-01-01

    Little attention has been paid to the examination and measurement of self-stigma in substance misuse. This paper aims to fill this gap by reporting on the development of a new scale to measure self-stigma experienced by people who are misusing substances, the Substance Abuse Self-Stigma Scale. Content validity and item refinement occurred through an iterative process involving a literature search, focus groups, and expert judges. Psychometric properties were examined in a cross-sectional study of individuals (n = 352) receiving treatment for substance misuse. Factor analyses resulted in a 40-item measure with self devaluation, fear of enacted stigma, stigma avoidance, and values disengagement subscales. The measure showed a strong factor structure and good reliability and validity overall, though the values disengagement subscale showed a mixed pattern. Results are discussed in terms of their implications for studies of stigma impact and intervention. PMID:23772099

  12. Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model

    PubMed Central

    Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070

  13. Joint segmentation and deformable registration of brain scans guided by a tumor growth model.

    PubMed

    Gooya, Ali; Pohl, Kilian M; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth.
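
    Reduced to its tissue-label core, the EM loop described above alternates between computing label posteriors from a (registered, tumor-seeded) atlas prior combined with Gaussian intensity likelihoods, and re-estimating the Gaussian parameters. The sketch below shows only that skeleton; the registration and tumor-growth updates of the actual method are indicated by comments, and the array names and shapes are assumptions.

    ```python
    # Minimal EM skeleton for atlas-prior tissue segmentation (illustrative only).
    import numpy as np

    def em_segment(intensity, atlas_prior, n_iter=20):
        # intensity: (n_voxels,) image; atlas_prior: (n_voxels, n_labels) registered atlas
        n_labels = atlas_prior.shape[1]
        mu = np.linspace(intensity.min(), intensity.max(), n_labels)
        var = np.full(n_labels, intensity.var())
        for _ in range(n_iter):
            # E-step: posterior probability of each label at each voxel
            lik = np.exp(-0.5 * (intensity[:, None] - mu)**2 / var) / np.sqrt(2*np.pi*var)
            post = atlas_prior * lik
            post /= post.sum(axis=1, keepdims=True) + 1e-12
            # M-step: update the intensity model (the full method also updates the
            # registration parameters and the tumor growth model at this point)
            w = post.sum(axis=0) + 1e-12
            mu = (post * intensity[:, None]).sum(axis=0) / w
            var = (post * (intensity[:, None] - mu)**2).sum(axis=0) / w + 1e-6
        return post

    # usage: labels = em_segment(img.ravel(), prior.reshape(-1, prior.shape[-1])).argmax(1)
    ```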

  14. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    NASA Technical Reports Server (NTRS)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement for the present scheme is competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.
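
    As an illustration of the kind of functional meant by "directly incorporates two divergence equations", a least-squares formulation of the first-order time-harmonic Maxwell system can be written as below; the exact scaling, boundary terms and time-domain treatment used in the paper may differ.

    ```latex
    % Generic least-squares functional over the first-order Maxwell system,
    % including both divergence equations (time-harmonic form shown only as an
    % illustration of the approach, not the paper's exact formulation).
    \begin{aligned}
    J(\mathbf{E},\mathbf{H}) ={} & \|\nabla\times\mathbf{E} + i\omega\mu\,\mathbf{H}\|_{0}^{2}
      + \|\nabla\times\mathbf{H} - i\omega\epsilon\,\mathbf{E} - \mathbf{J}\|_{0}^{2} \\
      & + \|\nabla\cdot(\epsilon\mathbf{E}) - \rho\|_{0}^{2}
      + \|\nabla\cdot(\mu\mathbf{H})\|_{0}^{2}
    \end{aligned}
    ```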

  15. The Responsive Environmental Assessment for Classroom Teaching (REACT): the dimensionality of student perceptions of the instructional environment.

    PubMed

    Nelson, Peter M; Demers, Joseph A; Christ, Theodore J

    2014-06-01

    This study details the initial development of the Responsive Environmental Assessment for Classroom Teachers (REACT). REACT was developed as a questionnaire to evaluate student perceptions of the classroom teaching environment. Researchers engaged in an iterative process to develop, field test, and analyze student responses on 100 rating-scale items. Participants included 1,465 middle school students across 48 classrooms in the Midwest. Item analysis, including exploratory and confirmatory factor analysis, was used to refine a 27-item scale with a second-order factor structure. Results support the interpretation of a single general dimension of the Classroom Teaching Environment with 6 subscale dimensions: Positive Reinforcement, Instructional Presentation, Goal Setting, Differentiated Instruction, Formative Feedback, and Instructional Enjoyment. Applications of REACT in research and practice are discussed along with implications for future research and the development of classroom environment measures. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. Computerization of Mental Health Integration Complexity Scores at Intermountain Healthcare

    PubMed Central

    Oniki, Thomas A.; Rodrigues, Drayton; Rahman, Noman; Patur, Saritha; Briot, Pascal; Taylor, David P.; Wilcox, Adam B.; Reiss-Brennan, Brenda; Cannon, Wayne H.

    2014-01-01

    Intermountain Healthcare’s Mental Health Integration (MHI) Care Process Model (CPM) contains formal scoring criteria for assessing a patient’s mental health complexity as “mild,” “medium,” or “high” based on patient data. The complexity score attempts to assist Primary Care Physicians in assessing the mental health needs of their patients and what resources will need to be brought to bear. We describe an effort to computerize the scoring. Informatics and MHI personnel collaboratively and iteratively refined the criteria to make them adequately explicit and reflective of MHI objectives. When tested on retrospective data of 540 patients, the clinician agreed with the computer’s conclusion in 52.8% of the cases (285/540). We considered the analysis sufficiently successful to begin piloting the computerized score in prospective clinical care. So far in the pilot, clinicians have agreed with the computer in 70.6% of the cases (24/34). PMID:25954401

  17. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  18. The Chlamydomonas genome project: a decade on.

    PubMed

    Blaby, Ian K; Blaby-Haas, Crysten E; Tourasse, Nicolas; Hom, Erik F Y; Lopez, David; Aksoy, Munevver; Grossman, Arthur; Umen, James; Dutcher, Susan; Porter, Mary; King, Stephen; Witman, George B; Stanke, Mario; Harris, Elizabeth H; Goodstein, David; Grimwood, Jane; Schmutz, Jeremy; Vallon, Olivier; Merchant, Sabeeha S; Prochnik, Simon

    2014-10-01

    The green alga Chlamydomonas reinhardtii is a popular unicellular organism for studying photosynthesis, cilia biogenesis, and micronutrient homeostasis. Ten years since its genome project was initiated, an iterative process of improvements to the genome and gene predictions has propelled this organism to the forefront of the omics era. Housed at Phytozome, the plant genomics portal of the Joint Genome Institute (JGI), the most up-to-date genomic data include a genome arranged on chromosomes and high-quality gene models with alternative splice forms supported by an abundance of whole transcriptome sequencing (RNA-Seq) data. We present here the past, present, and future of Chlamydomonas genomics. Specifically, we detail progress on genome assembly and gene model refinement, discuss resources for gene annotations, functional predictions, and locus ID mapping between versions and, importantly, outline a standardized framework for naming genes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Understanding Biological Regulation Through Synthetic Biology.

    PubMed

    Bashor, Caleb J; Collins, James J

    2018-05-20

    Engineering synthetic gene regulatory circuits proceeds through iterative cycles of design, building, and testing. Initial circuit designs must rely on often-incomplete models of regulation established by fields of reductive inquiry: biochemistry and molecular and systems biology. As differences in designed and experimentally observed circuit behavior are inevitably encountered, investigated, and resolved, each turn of the engineering cycle can force a resynthesis in understanding of natural network function. Here, we outline research that uses the process of gene circuit engineering to advance biological discovery. Synthetic gene circuit engineering research has not only refined our understanding of cellular regulation but furnished biologists with a toolkit that can be directed at natural systems to exact precision manipulation of network structure. As we discuss, using circuit engineering to predictively reorganize, rewire, and reconstruct cellular regulation serves as the ultimate means of testing and understanding how cellular phenotype emerges from systems-level network function.

  20. MS lesion segmentation using a multi-channel patch-based approach with spatial consistency

    NASA Astrophysics Data System (ADS)

    Mechrez, Roey; Goldberger, Jacob; Greenspan, Hayit

    2015-03-01

    This paper presents an automatic method for segmentation of Multiple Sclerosis (MS) in Magnetic Resonance Images (MRI) of the brain. The approach is based on similarities between multi-channel patches (T1, T2 and FLAIR). An MS lesion patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally a novel iterative patch-based label refinement process based on the initial segmentation map is performed to ensure spatial consistency of the detected lesions. A leave-one-out evaluation is done for each testing image in the MS lesion segmentation challenge of MICCAI 2008. Results are shown to compete with the state-of-the-art methods on the MICCAI 2008 challenge.
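
    The retrieval-and-fusion step described above can be sketched compactly: each multi-channel test patch retrieves its k most similar training patches and combines their labels into an initial lesion probability. The sketch below omits the paper's spatial-consistency refinement, and the database layout, the use of patch-centre labels, the distance weighting and parameter values are assumptions.

    ```python
    # Minimal sketch of patch-based kNN label fusion (not the authors' pipeline).
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_label_fusion(test_patches, db_patches, db_labels, k=5):
        # test_patches: (n_test, d) flattened multi-channel (T1/T2/FLAIR) patches
        # db_patches:   (n_db, d)   flattened training patches
        # db_labels:    (n_db,)     lesion label of each training patch centre (0/1)
        nn = NearestNeighbors(n_neighbors=k).fit(db_patches)
        dist, idx = nn.kneighbors(test_patches)
        w = 1.0 / (dist + 1e-12)                     # inverse-distance weights
        prob = (w * db_labels[idx]).sum(axis=1) / w.sum(axis=1)
        return prob                                   # initial lesion probability per patch
    ```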

  1. Low-rank Atlas Image Analyses in the Presence of Pathologies

    PubMed Central

    Liu, Xiaoxiao; Niethammer, Marc; Kwitt, Roland; Singh, Nikhil; McCormick, Matt; Aylward, Stephen

    2015-01-01

    We present a common framework, for registering images to an atlas and for forming an unbiased atlas, that tolerates the presence of pathologies such as tumors and traumatic brain injury lesions. This common framework is particularly useful when a sufficient number of protocol-matched scans from healthy subjects cannot be easily acquired for atlas formation and when the pathologies in a patient cause large appearance changes. Our framework combines a low-rank-plus-sparse image decomposition technique with an iterative, diffeomorphic, group-wise image registration method. At each iteration of image registration, the decomposition technique estimates a “healthy” version of each image as its low-rank component and estimates the pathologies in each image as its sparse component. The healthy version of each image is used for the next iteration of image registration. The low-rank and sparse estimates are refined as the image registrations iteratively improve. When that framework is applied to image-to-atlas registration, the low-rank image is registered to a pre-defined atlas, to establish correspondence that is independent of the pathologies in the sparse component of each image. Ultimately, image-to-atlas registrations can be used to define spatial priors for tissue segmentation and to map information across subjects. When that framework is applied to unbiased atlas formation, at each iteration, the average of the low-rank images from the patients is used as the atlas image for the next iteration, until convergence. Since each iteration’s atlas is comprised of low-rank components, it provides a population-consistent, pathology-free appearance. Evaluations of the proposed methodology are presented using synthetic data as well as simulated and clinical tumor MRI images from the brain tumor segmentation (BRATS) challenge from MICCAI 2012. PMID:26111390
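
    The decomposition step inside the loop can be illustrated with a simple alternating soft-thresholding scheme on the matrix whose columns are the (registered) images: singular-value thresholding yields the low-rank ("healthy") part and entrywise shrinkage yields the sparse (pathology) part. This RPCA-style sketch is an assumption about one way to compute such a split, not the authors' implementation, and the threshold choices are heuristic.

    ```python
    # Minimal low-rank-plus-sparse decomposition by alternating proximal steps.
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def low_rank_plus_sparse(D, lam=None, n_iter=100):
        # D: (n_voxels, n_images) matrix whose columns are the registered images
        lam = lam or 1.0 / np.sqrt(max(D.shape))
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        tau = 0.1 * np.linalg.norm(D, 2)             # shrinkage threshold (heuristic)
        for _ in range(n_iter):
            U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
            L = (U * soft(sig, tau)) @ Vt            # singular-value thresholding
            S = soft(D - L, lam * tau)               # entrywise shrinkage
        return L, S                                   # "healthy" part, pathology part
    ```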

  2. ITER Central Solenoid Module Fabrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, John

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA’s responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.

  3. Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber

    DOE PAGES

    Acciarri, R.; Adams, C.; Asaadi, J.; ...

    2017-03-09

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.

  4. Genetic Constructor: An Online DNA Design Platform.

    PubMed

    Bates, Maxwell; Lachoff, Joe; Meech, Duncan; Zulkower, Valentin; Moisy, Anaïs; Luo, Yisha; Tekotte, Hille; Franziska Scheitz, Cornelia Johanna; Khilari, Rupal; Mazzoldi, Florencio; Chandran, Deepak; Groban, Eli

    2017-12-15

    Genetic Constructor is a cloud Computer Aided Design (CAD) application developed to support synthetic biologists from design intent through DNA fabrication and experiment iteration. The platform allows users to design, manage, and navigate complex DNA constructs and libraries, using a new visual language that focuses on functional parts abstracted from sequence. Features like combinatorial libraries and automated primer design allow the user to separate design from construction by focusing on functional intent, and design constraints aid iterative refinement of designs. A plugin architecture enables contributions from scientists and coders to leverage existing powerful software and connect to DNA foundries. The software is easily accessible and platform agnostic, free for academics, and available in an open-source community edition. Genetic Constructor seeks to democratize DNA design, manufacture, and access to tools and services from the synthetic biology community.

  5. Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; Asaadi, J.

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.

  6. Optimization of Refining Craft for Vegetable Insulating Oil

    NASA Astrophysics Data System (ADS)

    Zhou, Zhu-Jun; Hu, Ting; Cheng, Lin; Tian, Kai; Wang, Xuan; Yang, Jun; Kong, Hai-Yang; Fang, Fu-Xin; Qian, Hang; Fu, Guang-Pan

    2016-05-01

    Vegetable insulating oils, because of their environmental friendliness, are considered ideal substitutes for mineral oil in the insulation and cooling of transformers. The main steps of the traditional refining process are alkali refining, bleaching and distillation. This traditional process gives satisfactory results when refining small quantities of insulating oil, but it cannot be applied to a large-capacity reaction kettle. In this paper, rapeseed oil is used as the crude oil and the refining process is optimized for a large-capacity reaction kettle. The optimized process adds an acid degumming step. A sodium silicate component is added to the alkali compound in the alkali refining step, and the ratio of the components is optimized. Activated clay and activated carbon are added in a 10:1 ratio in the de-colorization step, which effectively reduces the acid value and dielectric loss of the oil. Using vacuum gas pumping instead of distillation further reduces the acid value. Compared with mineral insulating oil on selected performance parameters, the refined vegetable oil still shows a relatively high dielectric loss, and further optimization measures will be needed in the future.

  7. Development of the IBD Disk: A Visual Self-administered Tool for Assessing Disability in Inflammatory Bowel Diseases.

    PubMed

    Ghosh, Subrata; Louis, Edouard; Beaugerie, Laurent; Bossuyt, Peter; Bouguen, Guillaume; Bourreille, Arnaud; Ferrante, Marc; Franchimont, Denis; Frost, Karen; Hebuterne, Xavier; Marshall, John K; OʼShea, Ciara; Rosenfeld, Greg; Williams, Chadwick; Peyrin-Biroulet, Laurent

    2017-03-01

    The Inflammatory bowel disease (IBD) Disability Index is a validated tool that evaluates functional status; however, it is used mainly in the clinical trial setting. We describe the use of an iterative Delphi consensus process to develop the IBD Disk, a shortened, self-administered adaptation of the validated IBD Disability Index, to give immediate visual representation of patient-reported IBD-related disability. In the preparatory phase, the IBD CONNECT group (30 health care professionals) ranked IBD Disability Index items in the perceived order of importance. The Steering Committee then selected 10 items from the IBD Disability Index to take forward for inclusion in the IBD Disk. In the consensus phase, the items were refined and agreed by the IBD Disk Working Group (14 gastroenterologists) using an online iterative Delphi consensus process. Members could also suggest new element(s) or recommend changes to included elements. The final items for the IBD Disk were agreed in February 2016. After 4 rounds of voting, the following 10 items were agreed for inclusion in the IBD Disk: abdominal pain, body image, education and work, emotions, energy, interpersonal interactions, joint pain, regulating defecation, sexual functions, and sleep. All elements, except sexual functions, were included in the validated IBD Disability Index. The IBD Disk has the potential to be a valuable tool for use at a clinical visit. It can facilitate assessment of inflammatory bowel disease-related disability relevant to both patients and physicians, discussion on specific disability-related issues, and tracking changes in disease burden over time.

  8. Learning outcomes for communication skills across the health professions: a systematic literature review and qualitative synthesis

    PubMed Central

    Denniston, Charlotte; Molloy, Elizabeth; Woodward-Kron, Robyn; Keating, Jennifer L

    2017-01-01

    Objective The aim of this study was to identify and analyse communication skills learning outcomes via a systematic review and present results in a synthesised list. Summarised results inform educators and researchers in communication skills teaching and learning across health professions. Design Systematic review and qualitative synthesis. Methods A systematic search of five databases (MEDLINE, PsycINFO, ERIC, CINAHL plus and Scopus), from first records until August 2016, identified published learning outcomes for communication skills in health professions education. Extracted data were analysed through an iterative process of qualitative synthesis. This process was guided by principles of person centredness and an a priori decision guide. Results 168 papers met the eligibility criteria; 1669 individual learning outcomes were extracted and refined using qualitative synthesis. A final refined set of 205 learning outcomes were constructed and are presented in 4 domains that include: (1) knowledge (eg, describe the importance of communication in healthcare), (2) content skills (eg, explore a healthcare seeker's motivation for seeking healthcare), (3) process skills (eg, respond promptly to a communication partner's questions) and (4) perceptual skills (eg, reflect on own ways of expressing emotion). Conclusions This study provides a list of 205 communication skills learning outcomes that provide a foundation for further research and educational design in communication education across the health professions. Areas for future investigation include greater patient involvement in communication skills education design and further identification of learning outcomes that target knowledge and perceptual skills. This work may also prompt educators to be cognisant of the quality and scope of the learning outcomes they design and their application as goals for learning. PMID:28389493

  9. How can systems engineering inform the methods of programme evaluation in health professions education?

    PubMed

    Rojas, David; Grierson, Lawrence; Mylopoulos, Maria; Trbovich, Patricia; Bagli, Darius; Brydges, Ryan

    2018-04-01

    We evaluate programmes in health professions education (HPE) to determine their effectiveness and value. Programme evaluation has evolved from use of reductionist frameworks to those addressing the complex interactions between programme factors. Researchers in HPE have recently suggested a 'holistic programme evaluation' aiming to better describe and understand the implications of 'emergent processes and outcomes'. We propose a programme evaluation framework informed by principles and tools from systems engineering. Systems engineers conceptualise complexity and emergent elements in unique ways that may complement and extend contemporary programme evaluations in HPE. We demonstrate how the abstract decomposition space (ADS), an engineering knowledge elicitation tool, provides the foundation for a systems engineering informed programme evaluation designed to capture both planned and emergent programme elements. We translate the ADS tool to use education-oriented language, and describe how evaluators can use it to create a programme-specific ADS through iterative refinement. We provide a conceptualisation of emergent elements and an equation that evaluators can use to identify the emergent elements in their programme. Using our framework, evaluators can analyse programmes not as isolated units with planned processes and planned outcomes, but as unfolding, complex interactive systems that will exhibit emergent processes and emergent outcomes. Subsequent analysis of these emergent elements will inform the evaluator as they seek to optimise and improve the programme. Our proposed systems engineering informed programme evaluation framework provides principles and tools for analysing the implications of planned and emergent elements, as well as their potential interactions. We acknowledge that our framework is preliminary and will require application and constant refinement. We suggest that our framework will also advance our understanding of the construct of 'emergence' in HPE research. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  10. Learning outcomes for communication skills across the health professions: a systematic literature review and qualitative synthesis.

    PubMed

    Denniston, Charlotte; Molloy, Elizabeth; Nestel, Debra; Woodward-Kron, Robyn; Keating, Jennifer L

    2017-04-07

    The aim of this study was to identify and analyse communication skills learning outcomes via a systematic review and present results in a synthesised list. Summarised results inform educators and researchers in communication skills teaching and learning across health professions. Systematic review and qualitative synthesis. A systematic search of five databases (MEDLINE, PsycINFO, ERIC, CINAHL plus and Scopus), from first records until August 2016, identified published learning outcomes for communication skills in health professions education. Extracted data were analysed through an iterative process of qualitative synthesis. This process was guided by principles of person centredness and an a priori decision guide. 168 papers met the eligibility criteria; 1669 individual learning outcomes were extracted and refined using qualitative synthesis. A final refined set of 205 learning outcomes was constructed and is presented in 4 domains that include: (1) knowledge (eg, describe the importance of communication in healthcare), (2) content skills (eg, explore a healthcare seeker's motivation for seeking healthcare), (3) process skills (eg, respond promptly to a communication partner's questions) and (4) perceptual skills (eg, reflect on own ways of expressing emotion). This study provides a list of 205 communication skills learning outcomes that provide a foundation for further research and educational design in communication education across the health professions. Areas for future investigation include greater patient involvement in communication skills education design and further identification of learning outcomes that target knowledge and perceptual skills. This work may also prompt educators to be cognisant of the quality and scope of the learning outcomes they design and their application as goals for learning. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  11. Swarm: robust and fast clustering method for amplicon-based studies.

    PubMed

    Mahé, Frédéric; Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters' internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.
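
    The local-threshold idea can be illustrated with a toy sketch: an amplicon joins a cluster when it is within d differences of any sequence already in the cluster, and the cluster grows iteratively from each new member. This is only an illustration of the linking step under assumed inputs, not the Swarm implementation, which additionally refines clusters using their internal structure and abundances.

    ```python
    # Toy illustration of local-threshold (d=1) agglomeration in the spirit of
    # Swarm-like clustering; the real tool also refines clusters by abundance.
    def differences(a, b):
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

    def swarm_like_clusters(amplicons, d=1):
        unvisited = set(amplicons)
        clusters = []
        while unvisited:
            seed = unvisited.pop()
            cluster, frontier = {seed}, [seed]
            while frontier:                      # grow iteratively from each new member
                current = frontier.pop()
                neighbours = {a for a in unvisited if differences(current, a) <= d}
                unvisited -= neighbours
                cluster |= neighbours
                frontier.extend(neighbours)
            clusters.append(cluster)
        return clusters

    print(swarm_like_clusters(["ACGT", "ACGA", "ACTA", "TTTT"]))
    ```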

  12. Swarm: robust and fast clustering method for amplicon-based studies

    PubMed Central

    Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units. PMID:25276506

  13. Iteratively Refined Guide Trees Help Improving Alignment and Phylogenetic Inference in the Mushroom Family Bolbitiaceae

    PubMed Central

    Tóth, Annamária; Hausknecht, Anton; Krisai-Greilhuber, Irmgard; Papp, Tamás; Vágvölgyi, Csaba; Nagy, László G.

    2013-01-01

    Reconciling traditional classifications, morphology, and the phylogenetic relationships of brown-spored agaric mushrooms has proven difficult in many groups, due to extensive convergence in morphological features. Here, we address the monophyly of the Bolbitiaceae, a family with over 700 described species, and examine the higher-level relationships within the family using a newly constructed multilocus dataset (ITS, nrLSU rDNA and EF1-alpha). We tested whether the fast-evolving Internal Transcribed Spacer (ITS) sequences can be accurately aligned across the family, by comparing the outcome of two iterative alignment refining approaches (an automated and a manual) and various indel-treatment strategies. We used PRANK to align sequences in both cases. Our results suggest that – although PRANK successfully evades overmatching of gapped sites, previously referred to as alignment overmatching – it infers an unrealistically high number of indel events with natively generated guide-trees. This 'alignment undermatching' could be avoided by using more rigorous (e.g. ML) guide trees. The trees inferred in this study support the monophyly of the core Bolbitiaceae, with the exclusion of Panaeolus, Agrocybe, and some of the genera formerly placed in the family. Bolbitius and Conocybe were found to be monophyletic; however, Pholiotina and Galerella require redefinition. The phylogeny revealed that stipe coverage type is a poor predictor of phylogenetic relationships, indicating the need for a revision of the intrageneric relationships within Conocybe. PMID:23418526

  14. Clinical Decision Support System to Enhance Quality Control of Spirometry Using Information and Communication Technologies

    PubMed Central

    2014-01-01

    Background We recently demonstrated that quality of spirometry in primary care could markedly improve with remote offline support from specialized professionals. It is hypothesized that implementation of automatic online assessment of quality of spirometry using information and communication technologies may significantly enhance the potential for extensive deployment of a high quality spirometry program in integrated care settings. Objective The objective of the study was to elaborate and validate a Clinical Decision Support System (CDSS) for automatic online quality assessment of spirometry. Methods The CDSS was developed through a three-step process: (1) identification of optimal sampling frequency; (2) iterations to build up an initial version using the 24 standard spirometry curves recommended by the American Thoracic Society; and (3) iterations to refine the CDSS using 270 curves from 90 patients. In each of these steps, the results were checked against one expert. Finally, 778 spirometry curves from 291 patients were analyzed for validation purposes. Results The CDSS generated appropriate online classification and certification in 685/778 (88.1%) of spirometry testing, with 96% sensitivity and 95% specificity. Conclusions Consequently, only 93/778 (11.9%) of spirometry testing required offline remote classification by an expert, indicating a potential positive role of the CDSS in the deployment of a high quality spirometry program in an integrated care setting. PMID:25600957
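
    For readers unfamiliar with the reported metrics, the sketch below shows how sensitivity and specificity are computed from a confusion matrix of CDSS decisions against the expert reference; the counts are hypothetical, not the study's data.

    ```python
    # Illustrative computation of sensitivity/specificity against an expert reference.
    # The counts below are hypothetical, not the study's confusion matrix.
    def sens_spec(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)   # acceptable curves correctly certified
        specificity = tn / (tn + fp)   # unacceptable curves correctly rejected
        return sensitivity, specificity

    print(sens_spec(tp=480, fn=20, tn=190, fp=10))  # -> (0.96, 0.95)
    ```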

  15. Improving virtual screening of G protein-coupled receptors via ligand-directed modeling

    PubMed Central

    Simms, John; Christopoulos, Arthur; Wootten, Denise

    2017-01-01

    G protein-coupled receptors (GPCRs) play crucial roles in cell physiology and pathophysiology. There is increasing interest in using structural information for virtual screening (VS) of libraries and for structure-based drug design to identify novel agonist or antagonist leads. However, the sparse availability of experimentally determined GPCR/ligand complex structures with diverse ligands impedes the application of structure-based drug design (SBDD) programs directed to identifying new molecules with a select pharmacology. In this study, we apply ligand-directed modeling (LDM) to available GPCR X-ray structures to improve VS performance and selectivity towards molecules of specific pharmacological profile. The described method refines a GPCR binding pocket conformation using a single known ligand for that GPCR. The LDM method is a computationally efficient, iterative workflow consisting of protein sampling and ligand docking. We developed an extensive benchmark comparing LDM-refined binding pockets to GPCR X-ray crystal structures across seven different GPCRs bound to a range of ligands of different chemotypes and pharmacological profiles. LDM-refined models showed improvement in VS performance over the original X-ray crystal structures in 21 out of 24 cases. In all cases, the LDM-refined models had superior performance in enriching for the chemotype of the refinement ligand. This likely contributes to the LDM success in all cases of inhibitor-bound to agonist-bound binding pocket refinement, a key task for GPCR SBDD programs. Indeed, agonist ligands are required for a plethora of GPCRs for therapeutic intervention; however, GPCR X-ray structures are mostly restricted to their inactive inhibitor-bound state. PMID:29131821

  16. Increasing High School Student Interest in Science: An Action Research Study

    NASA Astrophysics Data System (ADS)

    Vartuli, Cindy A.

    An action research study was conducted to determine how to increase student interest in learning science and pursuing a STEM career. The study began by exploring 10th-grade student and teacher perceptions of student interest in science in order to design an instructional strategy for stimulating student interest in learning and pursuing science. Data for this study included responses from 270 students to an on-line science survey and interviews with 11 students and eight science teachers. The action research intervention included two iterations of the STEM Career Project. The first iteration introduced four chemistry classes to the intervention. The researcher used student reflections and a post-project survey to determine if the intervention had influence on the students' interest in pursuing science. The second iteration was completed by three science teachers who had implemented the intervention with their chemistry classes, using student reflections and post-project surveys, as a way to make further procedural refinements and improvements to the intervention and measures. Findings from the exploratory phase of the study suggested students generally had interest in learning science but increasing that interest required including personally relevant applications and laboratory experiences. The intervention included a student-directed learning module in which students investigated three STEM careers and presented information on one of their chosen careers. The STEM Career Project enabled students to explore career possibilities in order to increase their awareness of STEM careers. Findings from the first iteration of the intervention suggested a positive influence on student interest in learning and pursuing science. The second iteration included modifications to the intervention resulting in support for the findings of the first iteration. Results of the second iteration provided modifications that would allow the project to be used for different academic levels. Insights from conducting the action research study provided the researcher with effective ways to make positive changes in her own teaching praxis and the tools used to improve student awareness of STEM career options.

  17. Automated main-chain model building by template matching and iterative fragment extension.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and beta-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and beta-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more C(alpha) positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 A. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.
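
    The helix- and strand-location step relies on FFT-based template matching, i.e. cross-correlating a template against the map via the convolution theorem. A minimal 1-D sketch of that idea (not the RESOLVE code) follows.

    ```python
    # Minimal 1-D sketch of FFT-based template matching: cross-correlation of a
    # template with a map via the convolution theorem (not the RESOLVE implementation).
    import numpy as np

    def fft_match(density, template):
        n = len(density)
        pad = np.zeros(n)
        pad[:len(template)] = template
        # correlation = IFFT( FFT(map) * conj(FFT(template)) )
        corr = np.fft.ifft(np.fft.fft(density) * np.conj(np.fft.fft(pad))).real
        return int(np.argmax(corr))             # offset where the template fits best

    rng = np.random.default_rng(0)
    template = np.array([0.2, 1.0, 0.2])
    density = rng.normal(0, 0.05, 64)
    density[40:43] += template                  # bury the motif at offset 40
    print(fft_match(density, template))         # expected: 40
    ```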

  18. An efficient method for model refinement in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem that necessitates regularization. Bayesian methods are also well suited because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations in which model-retrieval criteria, especially total least squares (TLS), must be used to refine the model error. TLS, however, is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes the abnormality well.
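
    As a point of reference for the linearized, regularized update the abstract builds on, the following is a generic Tikhonov-regularized least-squares step on a toy linear forward model; it is ordinary regularized least squares, not the paper's RTLS/MAP formulation.

    ```python
    # Generic Tikhonov-regularized update for a linearized inverse problem:
    #   min_x ||J x - r||^2 + lam ||L x||^2  =>  (J^T J + lam L^T L) x = J^T r
    # Ordinary regularized least squares, not the paper's RTLS formulation.
    import numpy as np

    def tikhonov_step(J, r, lam=1e-2, L=None):
        n = J.shape[1]
        L = np.eye(n) if L is None else L
        A = J.T @ J + lam * (L.T @ L)
        return np.linalg.solve(A, J.T @ r)

    # Tiny synthetic example: recover x_true from a noisy linear forward model.
    rng = np.random.default_rng(1)
    J = rng.normal(size=(20, 5))
    x_true = np.array([1.0, 0.0, -2.0, 0.5, 3.0])
    r = J @ x_true + rng.normal(0, 0.01, 20)
    print(np.round(tikhonov_step(J, r, lam=1e-3), 2))
    ```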

  19. Intermetallic Growth and Interfacial Properties of the Grain Refiners in Al Alloys.

    PubMed

    Li, Chunmei; Cheng, Nanpu; Chen, Zhiqian; Xie, Zhongjing; Hui, Liangliang

    2018-04-20

    Al₃TM(TM = Ti, Zr, Hf, Sc) particles acting as effective grain refiners for Al alloys have been receiving extensive attention these days. In order to judge their nucleation behaviors, first-principles calculations are used to investigate their intermetallic and interfacial properties. Based on energy analysis, Al₃Zr and Al₃Sc are more suitable for use as grain refiners than the other two intermetallic compounds. Interfacial properties show that Al/Al₃TM(TM = Ti, Zr, Hf, Sc) interfaces in I-ter interfacial mode exhibit better interface wetting effects due to larger Griffith rupture work and a smaller interface energy. Among these, Al/Al₃Sc achieves the lowest interfacial energy, which shows that Sc atoms should get priority for occupying interfacial sites. Additionally, Sc-doped Al/Al₃(Zr, Sc) interfacial properties show that Sc can effectively improve the Al/Al₃(Zr, Sc) binding strength with the Al matrix. By combining the characteristics of interfaces with the properties of intermetallics, the core-shell structure with Al₃Zr-core or Al₃Zr(Sc1-1)-core encircled with an Sc-rich shell forms.

  20. Agile Development of a Smartphone App for Perinatal Monitoring in a Resource-Constrained Setting

    PubMed Central

    Martinez, Boris; Hall-Clifford, Rachel; Coyote, Enma; Stroux, Lisa; Valderrama, Camilo E.; Aaron, Christopher; Francis, Aaron; Hendren, Cate; Rohloff, Peter; Clifford, Gari D.

    2017-01-01

    Technology provides the potential to empower frontline healthcare workers with low levels of training and literacy, particularly in low- and middle-income countries. An obvious platform for achieving this aim is the smartphone, a low cost, almost ubiquitous device with good supply chain infrastructure and a general cultural acceptance for its use. In particular, the smartphone offers the opportunity to provide augmented or procedural information through active audiovisual aids to illiterate or untrained users, as described in this article. In this article, the process of refinement and iterative design of a smartphone application prototype to support perinatal surveillance in rural Guatemala for indigenous Maya lay midwives with low levels of literacy and technology exposure is described. Following on from a pilot to investigate the feasibility of this system, a two-year project to develop a robust in-field system was initiated, culminating in a randomized controlled trial of the system, which is ongoing. The development required an agile approach, with the development team working both remotely and in country to identify and solve key technical and cultural issues in close collaboration with the midwife end-users. This article describes this process and intermediate results. The application prototype was refined in two phases, with expanding numbers of end-users. Some of the key weaknesses identified in the system during the development cycles were user error when inserting and assembling cables and interacting with the 1-D ultrasound-recording interface, as well as unexpectedly poor bandwidth for data uploads in the central healthcare facility. Safety nets for these issues were developed and the resultant system was well accepted and highly utilized by the end-users. To evaluate the effectiveness of the system after full field deployment, data quality, and corruption over time, as well as general usage of the system and the volume of application support for end-users required by the in-country team was analyzed. Through iterative review of data quality and consistent use of user feedback, the volume and percentage of high quality recordings was increased monthly. Final analysis of the impact of the system on obstetrical referral volume and maternal and neonatal clinical outcomes is pending conclusion of the ongoing clinical trial. PMID:28936111

  1. Agile Development of a Smartphone App for Perinatal Monitoring in a Resource-Constrained Setting.

    PubMed

    Martinez, Boris; Hall-Clifford, Rachel; Coyote, Enma; Stroux, Lisa; Valderrama, Camilo E; Aaron, Christopher; Francis, Aaron; Hendren, Cate; Rohloff, Peter; Clifford, Gari D

    2017-01-01

    Technology provides the potential to empower frontline healthcare workers with low levels of training and literacy, particularly in low- and middle-income countries. An obvious platform for achieving this aim is the smartphone, a low cost, almost ubiquitous device with good supply chain infrastructure and a general cultural acceptance for its use. In particular, the smartphone offers the opportunity to provide augmented or procedural information through active audiovisual aids to illiterate or untrained users, as described in this article. In this article, the process of refinement and iterative design of a smartphone application prototype to support perinatal surveillance in rural Guatemala for indigenous Maya lay midwives with low levels of literacy and technology exposure is described. Following on from a pilot to investigate the feasibility of this system, a two-year project to develop a robust in-field system was initiated, culminating in a randomized controlled trial of the system, which is ongoing. The development required an agile approach, with the development team working both remotely and in country to identify and solve key technical and cultural issues in close collaboration with the midwife end-users. This article describes this process and intermediate results. The application prototype was refined in two phases, with expanding numbers of end-users. Some of the key weaknesses identified in the system during the development cycles were user error when inserting and assembling cables and interacting with the 1-D ultrasound-recording interface, as well as unexpectedly poor bandwidth for data uploads in the central healthcare facility. Safety nets for these issues were developed and the resultant system was well accepted and highly utilized by the end-users. To evaluate the effectiveness of the system after full field deployment, data quality, and corruption over time, as well as general usage of the system and the volume of application support for end-users required by the in-country team was analyzed. Through iterative review of data quality and consistent use of user feedback, the volume and percentage of high quality recordings was increased monthly. Final analysis of the impact of the system on obstetrical referral volume and maternal and neonatal clinical outcomes is pending conclusion of the ongoing clinical trial.

  2. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xin

    Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to get an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. Results: The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. Conclusions: The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.
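
    The reported DSC and JSC values can be reproduced from binary masks with a few lines of code; the masks below are toy arrays, not CBCT segmentations.

    ```python
    # Dice (DSC) and Jaccard (JSC) similarity between a predicted and a reference
    # binary mask; the 4x4 masks below are toy examples, not CBCT segmentations.
    import numpy as np

    def dice_jaccard(pred, ref):
        pred, ref = pred.astype(bool), ref.astype(bool)
        inter = np.logical_and(pred, ref).sum()
        dsc = 2 * inter / (pred.sum() + ref.sum())
        jsc = inter / np.logical_or(pred, ref).sum()
        return dsc, jsc

    pred = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 0]])
    ref  = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0]])
    print(dice_jaccard(pred, ref))  # ~ (0.909, 0.833)
    ```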

  3. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.

    PubMed

    Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu

    2016-09-01

    Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is to get an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. And then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  4. The EU-project United4Health: User-centred design of an information system for a Norwegian telemedicine service.

    PubMed

    Smaradottir, Berglind; Gerdes, Martin; Martinez, Santiago; Fensli, Rune

    2016-10-01

    Organizational changes of health care services in Norway brought to light a need for new clinical pathways. This study presents the design and evaluation of an information system for a new telemedicine service for chronic obstructive pulmonary disease patients after hospital discharge. A user-centred design approach was employed composed of a workshop with end-users, two user tests and a field trial. For data collection, qualitative methods such as observations, semi-structured interviews and a questionnaire were used. User workshop's outcome informed the implementation of the system initial prototype, evaluated by end-users in a usability laboratory. Several usability and functionality issues were identified and solved, such as the interface between the initial colour scheme and the triage colours. Iterative refinements were made and a second user evaluation showed that the main issues were solved. The responses to a questionnaire presented a high score of user satisfaction. In the final phase, a field trial showed satisfactory use of the system. This study showed how the target end-users groups were actively involved in identifying the needs, suggestions and preferences. These aspects were addressed in the development of an information system through a user-centred design process. The process efficiently enabled users to give feedback about design and functionality. Continuous refinement of the system was the key to full development and suitability for the telemedicine service. This research was a result of the international cooperation between partners within the project United4Health, a part of the Seventh Framework Programme for Research of the European Union. © The Author(s) 2015.

  5. Development of ITER non-activation phase operation scenarios

    DOE PAGES

    Kim, S. H.; Poli, F. M.; Koechl, F.; ...

    2017-06-29

    Non-activation phase operations in ITER in hydrogen (H) and helium (He) will be important for commissioning of tokamak systems, such as diagnostics, heating and current drive (HCD) systems, coils and plasma control systems, and for validation of techniques necessary for establishing operations in DT. The assessment of feasible HCD schemes at various toroidal fields (2.65–5.3 T) has revealed that the previously applied assumptions need to be refined for the ITER non-activation phase H/He operations. A study of the ranges of plasma density and profile shape using the JINTRAC suite of codes has indicated that the hydrogen pellet fuelling into He plasmas should be utilized taking the optimization of IC power absorption, neutral beam shine-through density limit and H-mode access into account. The EPED1 estimation of the edge pedestal parameters has been extended to various H operation conditions, and the combined EPED1 and SOLPS estimation has provided guidance for modelling the edge pedestal in H/He operations. The availability of ITER HCD schemes, ranges of achievable plasma density and profile shape, and estimation of the edge pedestal parameters for H/He plasmas have been integrated into various time-dependent tokamak discharge simulations. In this paper, various H/He scenarios at a wide range of plasma current (7.5–15 MA) and field (2.65–5.3 T) have been developed for the ITER non-activation phase operation, and the sensitivity of the developed scenarios to the used assumptions has been investigated to provide guidance for further development.

  6. Development of ITER non-activation phase operation scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S. H.; Poli, F. M.; Koechl, F.

    Non-activation phase operations in ITER in hydrogen (H) and helium (He) will be important for commissioning of tokamak systems, such as diagnostics, heating and current drive (HCD) systems, coils and plasma control systems, and for validation of techniques necessary for establishing operations in DT. The assessment of feasible HCD schemes at various toroidal fields (2.65–5.3 T) has revealed that the previously applied assumptions need to be refined for the ITER non-activation phase H/He operations. A study of the ranges of plasma density and profile shape using the JINTRAC suite of codes has indicated that the hydrogen pellet fuelling into He plasmas should be utilized taking the optimization of IC power absorption, neutral beam shine-through density limit and H-mode access into account. The EPED1 estimation of the edge pedestal parameters has been extended to various H operation conditions, and the combined EPED1 and SOLPS estimation has provided guidance for modelling the edge pedestal in H/He operations. The availability of ITER HCD schemes, ranges of achievable plasma density and profile shape, and estimation of the edge pedestal parameters for H/He plasmas have been integrated into various time-dependent tokamak discharge simulations. In this paper, various H/He scenarios at a wide range of plasma current (7.5–15 MA) and field (2.65–5.3 T) have been developed for the ITER non-activation phase operation, and the sensitivity of the developed scenarios to the used assumptions has been investigated to provide guidance for further development.

  7. Geochemistry and the understanding of ground-water systems

    USGS Publications Warehouse

    Glynn, Pierre D.; Plummer, Niel

    2005-01-01

    Geochemistry has contributed significantly to the understanding of ground-water systems over the last 50 years. Historic advances include development of the hydrochemical facies concept, application of equilibrium theory, investigation of redox processes, and radiocarbon dating. Other hydrochemical concepts, tools, and techniques have helped elucidate mechanisms of flow and transport in ground-water systems, and have helped unlock an archive of paleoenvironmental information. Hydrochemical and isotopic information can be used to interpret the origin and mode of ground-water recharge, refine estimates of time scales of recharge and ground-water flow, decipher reactive processes, provide paleohydrological information, and calibrate ground-water flow models. Progress needs to be made in obtaining representative samples. Improvements are needed in the interpretation of the information obtained, and in the construction and interpretation of numerical models utilizing hydrochemical data. The best approach will ensure an optimized iterative process between field data collection and analysis, interpretation, and the application of forward, inverse, and statistical modeling tools. Advances are anticipated from microbiological investigations, the characterization of natural organics, isotopic fingerprinting, applications of dissolved gas measurements, and the fields of reaction kinetics and coupled processes. A thermodynamic perspective is offered that could facilitate the comparison and understanding of the multiple physical, chemical, and biological processes affecting ground-water systems.

  8. Knowledge exchange systems for youth health and chronic disease prevention: a tri-provincial case study.

    PubMed

    Murnaghan, D; Morrison, W; Griffith, E J; Bell, B L; Duffley, L A; McGarry, K; Manske, S

    2013-09-01

    The research teams undertook a case study design using a common analytical framework to investigate three provincial (Prince Edward Island, New Brunswick and Manitoba) knowledge exchange systems. These three knowledge exchange systems seek to generate and enhance the use of evidence in policy development, program planning and evaluation to improve youth health and chronic disease prevention. We applied a case study design to explore the lessons learned, that is, key conditions or processes contributing to the development of knowledge exchange capacity, using a multi-data collection method to gain an in-depth understanding. Data management, synthesis and analysis activities were concurrent, iterative and ongoing. The lessons learned were organized into seven "clusters." Key findings demonstrated that knowledge exchange is a complex process requiring champions, collaborative partnerships, regional readiness and the adaptation of knowledge exchange to diverse stakeholders. Overall, knowledge exchange systems can increase the capacity to exchange and use evidence by moving beyond collecting and reporting data. Areas of influence included development of new partnerships, expanded knowledge-sharing activities, and refinement of policy and practice approaches related to youth health and chronic disease prevention.

  9. The Iterative Design Process in Research and Development: A Work Experience Paper

    NASA Technical Reports Server (NTRS)

    Sullivan, George F. III

    2013-01-01

    The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.

  10. Using a Systematic Approach and Theoretical Framework to Design a Curriculum for the Shaping Healthy Choices Program.

    PubMed

    Linnell, Jessica D; Zidenberg-Cherr, Sheri; Briggs, Marilyn; Scherr, Rachel E; Brian, Kelley M; Hillhouse, Carol; Smith, Martin H

    2016-01-01

    To examine the use of a systematic approach and theoretical framework to develop an inquiry-based, garden-enhanced nutrition curriculum for the Shaping Healthy Choices Program. Curriculum development occurred in 3 steps: identification of learning objectives, determination of evidence of learning, and activity development. Curriculum activities were further refined through pilot-testing, which was conducted in 2 phases. Formative data collected during pilot-testing resulted in improvements to activities. Using a systematic, iterative process resulted in a curriculum called Discovering Healthy Choices, which has a strong foundation in Social Cognitive Theory and constructivist learning theory. Furthermore, the Backward Design method provided the design team with a systematic approach to ensure activities addressed targeted learning objectives and overall Shaping Healthy Choices Program goals. The process by which a nutrition curriculum is developed may have a direct effect on student outcomes. Processes by which nutrition curricula are designed and learning objectives are selected, and how theory and pedagogy are applied should be further investigated so that effective approaches to developing garden-enhanced nutrition interventions can be determined and replicated. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  11. Forward and inverse solutions for three-element Risley prism beam scanners.

    PubMed

    Li, Anhu; Liu, Xingsheng; Sun, Wansong

    2017-04-03

    Scan blind zone and control singularity are two adverse issues for the beam scanning performance in double-prism Risley systems. In this paper, a theoretical model which introduces a third prism is developed. The critical condition for a fully eliminated scan blind zone is determined through a geometric derivation, providing several useful formulae for three-Risley-prism system design. Moreover, inverse solutions for a three-prism system are established, based on the damped least-squares iterative refinement by a forward ray tracing method. It is shown that the efficiency of this iterative calculation of the inverse solutions can be greatly enhanced by a numerical differentiation method. In order to overcome the control singularity problem, the motion law of any one prism in a three-prism system needs to be conditioned, resulting in continuous and steady motion profiles for the other two prisms.
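
    The damped least-squares iteration with a numerically differentiated forward model, as named in the abstract, can be sketched generically as follows; the two-parameter forward model here is a stand-in, not the three-prism ray-tracing model.

    ```python
    # Damped least-squares inversion with a finite-difference Jacobian, the generic
    # scheme named in the abstract; forward() is a toy stand-in, not prism ray tracing.
    import numpy as np

    def forward(theta):                      # toy 2-parameter "pointing" model
        return np.array([np.cos(theta[0]) + np.cos(theta[1]),
                         np.sin(theta[0]) + np.sin(theta[1])])

    def num_jacobian(f, x, h=1e-6):
        J = np.zeros((len(f(x)), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = h
            J[:, j] = (f(x + dx) - f(x - dx)) / (2 * h)
        return J

    def damped_lsq(target, x0, mu=1e-2, iters=50):
        x = np.array(x0, float)
        for _ in range(iters):
            r = target - forward(x)
            J = num_jacobian(forward, x)
            x += np.linalg.solve(J.T @ J + mu * np.eye(len(x)), J.T @ r)
        return x

    goal = forward(np.array([0.3, 1.1]))
    print(np.round(damped_lsq(goal, x0=[0.0, 1.0]), 3))   # should approach [0.3, 1.1]
    ```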

  12. Intelligent cooperation: A framework of pedagogic practice in the operating room.

    PubMed

    Sutkin, Gary; Littleton, Eliza B; Kanter, Steven L

    2018-04-01

    Surgeons who work with trainees must address their learning needs without compromising patient safety. We used a constructivist grounded theory approach to examine videos of five teaching surgeries. Attending surgeons were interviewed afterward while watching cued videos of their cases. Codes were iteratively refined into major themes, and then constructed into a larger framework. We present a novel framework, Intelligent Cooperation, which accounts for the highly adaptive, iterative features of surgical teaching in the operating room. Specifically, we define Intelligent Cooperation as a sequence of coordinated exchanges between attending and trainee that accomplishes small surgical steps while simultaneously uncovering the trainee's learning needs. Intelligent Cooperation requires the attending to accurately determine learning needs, perform real-time needs assessment, provide critical scaffolding, and work with the learner to accomplish the next step in the surgery. This is achieved through intense, coordinated verbal and physical cooperation. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Linking the Long Tail of Data: A Bottoms-up Approach to Connecting Scientific Research

    NASA Astrophysics Data System (ADS)

    Jacob, B.; Arctur, D. K.

    2016-12-01

    Highly curated ontologies are often developed for big scientific data, but the long tail of research data rarely receives the same treatment. The learning curve for Semantic Web technology is steep, and the value of linking each long-tail data set to known taxonomies and ontologies in isolation rarely justifies the level of effort required to bring a Knowledge Engineer into the project. We present a bottom-up approach that produces a Linked Data model of datasets mechanically, inferring the shape and structure of the data from the original format, and adding derived variables and semantic linkages via iterative, interactive refinements of that model. In this way, the vast corpus of small but rich scientific data becomes part of the greater linked web of knowledge, and the connectivity of that data can be iteratively improved over time.

  14. WIND: Computer program for calculation of three dimensional potential compressible flow about wind turbine rotor blades

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1980-01-01

    A computer program is presented which numerically solves an exact, full potential equation (FPE) for three dimensional, steady, inviscid flow through an isolated wind turbine rotor. The program automatically generates a three dimensional, boundary conforming grid and iteratively solves the FPE while fully accounting for both the rotating cascade and Coriolis effects. The numerical techniques incorporated involve rotated, type dependent finite differencing, a finite volume method, artificial viscosity in conservative form, and a successive line overrelaxation combined with the sequential grid refinement procedure to accelerate the iterative convergence rate. Consequently, the WIND program is capable of accurately analyzing incompressible and compressible flows, including those that are locally transonic and terminated by weak shocks. The program can also be used to analyze the flow around isolated aircraft propellers and helicopter rotors in hover as long as the total relative Mach number of the oncoming flow is subsonic.
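
    The iteration strategy, successive (line) overrelaxation restarted on progressively finer grids, can be illustrated on a model Laplace problem; the sketch below uses plain point SOR on a toy potential field and is not the WIND full-potential solver.

    ```python
    # Successive overrelaxation (SOR) on a model Laplace problem, restarted on a finer
    # grid in the spirit of sequential grid refinement; not the WIND full-potential solver.
    import numpy as np

    def sor_laplace(phi, omega=1.7, sweeps=200):
        for _ in range(sweeps):
            for i in range(1, phi.shape[0] - 1):
                for j in range(1, phi.shape[1] - 1):
                    gs = 0.25 * (phi[i+1, j] + phi[i-1, j] + phi[i, j+1] + phi[i, j-1])
                    phi[i, j] += omega * (gs - phi[i, j])
        return phi

    def solve_with_refinement(n_coarse=9, n_fine=17):
        phi = np.zeros((n_coarse, n_coarse))
        phi[:, -1] = 1.0                       # Dirichlet boundary: one wall held at 1
        phi = sor_laplace(phi)
        fine = np.zeros((n_fine, n_fine))
        fine[:, -1] = 1.0
        fine[::2, ::2] = phi                   # inject the coarse solution as a starting guess
        return sor_laplace(fine, sweeps=100)

    print(solve_with_refinement()[8, 8].round(3))  # value at the centre of the fine grid
    ```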

  15. Counterrotating prop-fan simulations which feature a relative-motion multiblock grid decomposition enabling arbitrary time-steps

    NASA Technical Reports Server (NTRS)

    Janus, J. Mark; Whitfield, David L.

    1990-01-01

    Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.

  16. Item generation and pilot testing of the Comprehensive Professional Behaviours Development Log.

    PubMed

    Bartlett, Doreen J; Lucy, S Deborah; Bisbee, Leslie

    2006-01-01

    The purpose of this project was to generate and refine criteria for professional behaviors previously identified to be important for physical therapy practice and to develop and pilot test a new instrument, which we have called the Comprehensive Professional Behaviours Development Log (CPBDL). Items were generated from our previous work, the work of Warren May and his colleagues, a competency profile for entry-level physical therapists, our regulatory code of ethics, and an evaluation of clinical performance. A group of eight people, including recent graduates, clinical instructors and professional practice leaders, and faculty members, refined the items in two iterations using the Delphi process. The CPBDL contains nine key professional behaviors with a range of nine to 23 specific behavioral criteria for individuals to reflect on and to indicate the consistency of performance from a selection of "not at all," "sometimes," and "always" response options. Pilot testing with a group of 42 students in the final year of our entry-to-practice curriculum indicated that the criteria were clear, the measure was feasible to complete in a reasonable time frame, and there were no ceiling or floor effects. We believe that others, including health care educators and practicing professionals, might be interested in adapting the CPBDL in their own settings to enhance the professional behaviors of either students in preparation for entry to practice or clinicians wishing to demonstrate continuing competency to professional regulatory bodies.

  17. The Landscape Evolution Observatory: a large-scale controllable infrastructure to study coupled Earth-surface processes

    USGS Publications Warehouse

    Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferré, T.P.A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin

    2015-01-01

    Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m2 surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed are possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.

  18. Micro/nano composited tungsten material and its high thermal loading behavior

    NASA Astrophysics Data System (ADS)

    Fan, Jinglian; Han, Yong; Li, Pengfei; Sun, Zhiyu; Zhou, Qiang

    2014-12-01

    Tungsten (W) is considered a promising candidate material for plasma facing components (PFCs) in future fusion reactors owing to its many excellent properties. Current commercial pure tungsten material in accordance with the ITER specification can fulfil the performance requirements well; however, it has drawbacks such as coarse grains, a high ductile-brittle transition temperature (DBTT) and a relatively low recrystallization temperature compared with its operating temperature, which cannot meet the harsh wall-loading requirements of future fusion reactors. Grain refinement has been reported to be effective in improving the thermophysical and mechanical properties of W. In this work, rare earth oxides (Y2O3/La2O3) and carbides (TiC/ZrC) were used as dispersion phases to refine W grains, and a micro/nano composite technology with a process of "sol gel - heterogeneous precipitation - spray drying - hydrogen reduction - ordinary consolidation sintering" was invented to introduce these second-phase particles uniformly dispersed into W grains and grain boundaries. Via this technology, fine-grain W materials with near-full density and relatively high mechanical properties compared with traditional pure W material were manufactured. Preliminary transient high-heat-flux tests were performed to evaluate the thermal response under plasma disruption conditions, and the results show that the W materials prepared by micro/nano composite technology can endure a heat flux of 200 MW/m2 (5 ms).

  19. Deuterium results at the negative ion source test facility ELISE

    NASA Astrophysics Data System (ADS)

    Kraus, W.; Wünderlich, D.; Fantz, U.; Heinemann, B.; Bonomo, F.; Riedl, R.

    2018-05-01

    The ITER neutral beam system will be equipped with large radio frequency (RF) driven negative ion sources, with a cross section of 0.9 m × 1.9 m, which have to deliver extracted D- ion beams of 57 A at 1 MeV for 1 h. At the ELISE (Extraction from a Large Ion Source Experiment) test facility, a source of half this size has been in operation since 2013. The goal of this experiment is to demonstrate a high operational reliability and to achieve the extracted current densities and beam properties required for ITER. Technical improvements of the source design and the RF system were necessary to provide reliable operation in steady state with an RF power of up to 300 kW. While the required D- current density has almost been reached in short pulses, the performance in long pulses, particularly in deuterium, is limited by inhomogeneous and unstable currents of co-extracted electrons. By application of refined caesium evaporation and distribution procedures, and reduction and symmetrization of the electron currents, considerable progress has been made, and up to 190 A/m2 of D-, corresponding to 66% of the value required for ITER, has been extracted for 45 min.

  20. Joint Sparse Recovery With Semisupervised MUSIC

    NASA Astrophysics Data System (ADS)

    Wen, Zaidao; Hou, Biao; Jiao, Licheng

    2017-05-01

    Discrete multiple signal classification (MUSIC), with its low computational cost and mild condition requirements, has become a significant noniterative algorithm for joint sparse recovery (JSR). However, it fails in rank-defective problems caused by coherent or limited numbers of multiple measurement vectors (MMVs). In this letter, we provide a novel perspective on this problem by interpreting JSR as a binary classification problem with respect to atoms. In this view, MUSIC essentially constructs a supervised classifier based on the labeled MMVs, so its performance heavily depends on the quality and quantity of these training samples. From this viewpoint, we develop a semisupervised MUSIC (SS-MUSIC) in the spirit of machine learning, which holds that the insufficient supervised information in the training samples can be compensated by the unlabeled atoms. Instead of constructing a classifier in a fully supervised manner, we iteratively refine a semisupervised classifier by exploiting the labeled MMVs and some reliable unlabeled atoms simultaneously. In this way, the required conditions and iterations can be greatly relaxed and reduced. Numerical experimental results demonstrate that SS-MUSIC achieves much better recovery performance than other MUSIC-based extensions as well as some typical greedy algorithms for JSR in terms of iterations and recovery probability.
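
    The classification view of MUSIC can be illustrated with a toy joint sparse recovery example: an atom is declared part of the support when it lies (almost) entirely in the signal subspace spanned by the MMVs. The sketch shows only this supervised criterion, not the semisupervised SS-MUSIC refinement.

    ```python
    # Toy MUSIC-style support identification for joint sparse recovery: an atom is
    # kept when it lies (almost) entirely in the signal subspace spanned by the MMVs.
    # Illustrates the classification view only, not the semisupervised SS-MUSIC step.
    import numpy as np

    rng = np.random.default_rng(2)
    m, n, k, L = 20, 40, 3, 8                  # measurements, atoms, sparsity, snapshots
    A = rng.normal(size=(m, n))
    A /= np.linalg.norm(A, axis=0)             # unit-norm dictionary atoms
    support = [3, 17, 29]
    X = np.zeros((n, L))
    X[support] = rng.normal(size=(k, L))
    Y = A @ X                                  # noiseless multiple measurement vectors

    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :k]                              # signal subspace (rank k)
    scores = np.linalg.norm(Us.T @ A, axis=0)  # ||P_S a_i|| for each unit-norm atom
    print(sorted(np.argsort(scores)[-k:]))     # expected: [3, 17, 29]
    ```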

  1. Multilevel Iterative Methods in Nonlinear Computational Plasma Physics

    NASA Astrophysics Data System (ADS)

    Knoll, D. A.; Finn, J. M.

    1997-11-01

    Many applications in computational plasma physics involve the implicit numerical solution of coupled systems of nonlinear partial differential equations or integro-differential equations. Such problems arise in MHD, systems of Vlasov-Fokker-Planck equations, and edge plasma fluid equations. We have been developing matrix-free Newton-Krylov algorithms for such problems and have applied these algorithms to the edge plasma fluid equations [1,2] and to the Vlasov-Fokker-Planck equation [3]. Recently we have found that, with increasing grid refinement, the number of Krylov iterations required per Newton iteration has grown unmanageable [4]. This has led us to the study of multigrid methods as a means of preconditioning matrix-free Newton-Krylov methods. In this poster we will give details of the general multigrid-preconditioned Newton-Krylov algorithm, as well as algorithm performance details on problems of interest in the areas of magnetohydrodynamics and edge plasma physics. Work supported by US DoE. [1] Knoll and McHugh, J. Comput. Phys., 116, p. 281 (1995); [2] Knoll and McHugh, Comput. Phys. Comm., 88, p. 141 (1995); [3] Mousseau and Knoll, J. Comput. Phys. (1997, to appear); [4] Knoll and McHugh, SIAM J. Sci. Comput., 19 (1998, to appear).
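
    A minimal sketch of the matrix-free Newton-Krylov idea is shown below: Jacobian-vector products are approximated by finite differences of the residual and each Newton step is solved with a Krylov method (SciPy's GMRES here), without the multigrid preconditioner discussed in the abstract.

    ```python
    # Matrix-free Newton-Krylov sketch: Jacobian-vector products are approximated by
    # finite differences of the residual and each Newton step is solved with GMRES.
    # No multigrid preconditioner here; this only illustrates the basic JFNK loop.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):                       # small nonlinear test system F(u) = 0
        return np.array([u[0]**2 + u[1] - 3.0,
                         u[0] + u[1]**2 - 5.0])

    def jfnk(u, newton_iters=20, eps=1e-7):
        for _ in range(newton_iters):
            F = residual(u)
            if np.linalg.norm(F) < 1e-10:
                break
            # matrix-free J*v via a forward difference of the residual
            Jv = lambda v: (residual(u + eps * v) - F) / eps
            J = LinearOperator((len(u), len(u)), matvec=Jv)
            du, _ = gmres(J, -F)
            u = u + du
        return u

    print(np.round(jfnk(np.array([1.0, 1.0])), 4))   # converges to [1, 2] for this system
    ```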

  2. Peer feedback for examiner quality assurance on MRCGP International South Asia: a mixed methods study.

    PubMed

    Perera, D P; Andrades, Marie; Wass, Val

    2017-12-08

    The International Membership Examination (MRCGP[INT]) of the Royal College of General Practitioners UK is a unique collaboration between four South Asian countries with diverse cultures, epidemiology, clinical facilities and resources. In this setting, good quality assurance is imperative to achieve acceptable standards of inter-rater reliability. This study aims to explore the process of peer feedback for examiner quality assurance with regard to factors affecting the implementation and acceptance of the method. A sequential mixed-methods approach was used, based on focus group discussions with examiners (n = 12) and clinical examination convenors who acted as peer reviewers (n = 4). A questionnaire based on emerging themes and literature review was then completed by 20 examiners at the subsequent OSCE exam. Qualitative data were analysed using an iterative reflexive process. Quantitative data were integrated by interpretive analysis looking for convergence, complementarity or dissonance. The qualitative data helped in understanding the issues and informed the process of developing the questionnaire. The quantitative data allowed for further refining of issues, wider sampling of examiners and giving voice to different perspectives. Examiners stated specifically that peer feedback gave an opportunity for discussion, standardisation of judgments and improved discriminatory abilities. Interpersonal dynamics, hierarchy and perception of validity of feedback were major factors influencing acceptance of feedback. Examiners desired increased transparency, accountability and the opportunity for equal partnership within the process. The process was stressful for examiners and reviewers; however, acceptance increased with exposure to receiving feedback. The process could be refined to improve acceptability through scrupulous attention to the training and selection of those giving feedback, to improve its perceived validity, and through improved reviewer feedback skills, to enable better interpersonal dynamics and a more equitable feedback process. It is important to highlight to examiners during training the role of quality assurance and peer feedback as a tool for continuous improvement and maintenance of standards. Examiner quality assurance using peer feedback was generally a successful and accepted process. The findings highlight areas for improvement and guide the path towards a model of feedback that is responsive to examiner views and cultural sensibilities.

  3. User-oriented evaluation of a medical image retrieval system for radiologists.

    PubMed

    Markonis, Dimitrios; Holzer, Markus; Baroz, Frederic; De Castaneda, Rafael Luis Ruiz; Boyer, Célia; Langs, Georg; Müller, Henning

    2015-10-01

    This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goal of the tests is to assess the usability of the system and to identify system and interface aspects that need improvement, as well as useful additions. Another objective is to investigate the system's added value to radiology information retrieval. The study provides insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. User tests with a working retrieval system for images from the biomedical literature were performed in an iterative manner, where in each iteration the participants performed radiology information-seeking tasks, after which the system, as well as the user study design itself, was refined. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates were recorded and feedback was collected through survey forms. In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 min in both cases. Users quickly felt comfortable with the novel techniques and tools (after 5 to 15 min), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities, while the user feedback helped identify the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and searching for articles using image examples. The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects. Radiologists quickly became familiar with the functionalities but had several comments on desired functionalities. The analysis of the results can potentially assist system refinement for future medical information retrieval systems. Moreover, the methodology presented, as well as the discussion of the limitations and challenges of such studies, can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive systems is still only rarely performed. Such interactive evaluations can be limited in effort if done iteratively and can give many insights for developing better systems. Copyright © 2015. Published by Elsevier Ireland Ltd.

  4. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    PubMed

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  5. A user-centered model for designing consumer mobile health (mHealth) applications (apps).

    PubMed

    Schnall, Rebecca; Rojas, Marlene; Bakken, Suzanne; Brown, William; Carballo-Dieguez, Alex; Carry, Monique; Gelaude, Deborah; Mosley, Jocelyn Patterson; Travers, Jasmine

    2016-04-01

    Mobile technologies are a useful platform for the delivery of health behavior interventions. Yet little work has been done to create a rigorous and standardized process for the design of mobile health (mHealth) apps. This project sought to explore the use of the Information Systems Research (ISR) framework as a guide for the design of mHealth apps. Our work was guided by the ISR framework, which comprises three cycles: Relevance, Rigor and Design. In the Relevance cycle, we conducted 5 focus groups with 33 targeted end-users. In the Rigor cycle, we performed a review to identify technology-based interventions for meeting the health prevention needs of our target population. In the Design cycle, we employed usability evaluation methods to iteratively develop and refine mock-ups for an mHealth app. Through an iterative process, we identified barriers and facilitators to the use of mHealth technology for HIV prevention for high-risk MSM, developed 'use cases' and identified relevant functional content and features for inclusion in a design document to guide future app development. Findings from our work support the use of the ISR framework as a guide for designing future mHealth apps. Results from this work provide detailed descriptions of the user-centered design and system development and have heuristic value for those venturing into the area of technology-based intervention work. Findings from this study support the use of the ISR framework as a guide for future mHealth app development. Use of the ISR framework is a potentially useful approach for the design of a mobile app that incorporates end-users' design preferences. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Considerations for preparing a randomized population health intervention trial: lessons from a South African–Canadian partnership to improve the health of health workers

    PubMed Central

    Yassi, Annalee; O’Hara, Lyndsay Michelle; Engelbrecht, Michelle C.; Uebel, Kerry; Nophale, Letshego Elizabeth; Bryce, Elizabeth Ann; Buxton, Jane A; Siegel, Jacob; Spiegel, Jerry Malcolm

    2014-01-01

    Background Community-based cluster-randomized controlled trials (RCTs) are increasingly being conducted to address pressing global health concerns. Preparations for clinical trials are well-described, as are the steps for multi-component health service trials. However, guidance is lacking for addressing the ethical and logistic challenges in (cluster) RCTs of population health interventions in low- and middle-income countries. Objective We aimed to identify the factors that population health researchers must explicitly consider when planning RCTs within North–South partnerships. Design We reviewed our experiences and identified key ethical and logistic issues encountered during the pre-trial phase of a recently implemented RCT. This trial aimed to improve tuberculosis (TB) and Human Immunodeficiency Virus (HIV) prevention and care for health workers by enhancing workplace assessment capability, addressing concerns about confidentiality and stigma, and providing onsite counseling, testing, and treatment. An iterative framework was used to synthesize this analysis with lessons taken from other studies. Results The checklist of critical factors was grouped into eight categories: 1) Building trust and shared ownership; 2) Conducting feasibility studies throughout the process; 3) Building capacity; 4) Creating an appropriate information system; 5) Conducting pilot studies; 6) Securing stakeholder support, with a view to scale-up; 7) Continuously refining methodological rigor; and 8) Explicitly addressing all ethical issues both at the start and continuously as they arise. Conclusion Researchers should allow for the significant investment of time and resources required for successful implementation of population health RCTs within North–South collaborations, recognize the iterative nature of the process, and be prepared to revise protocols as challenges emerge. PMID:24802561

  7. Local Surface Reconstruction from MER images using Stereo Workstation

    NASA Astrophysics Data System (ADS)

    Shin, Dongjoe; Muller, Jan-Peter

    2010-05-01

    The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by Pancam or Navcam on systems such as the NASA Mars Exploration Rover (MER) Mission and, in the future, the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, followed by tiepoint refinement, stereo matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in Java, and the remaining processing blocks used in the reconstruction workflow have also been developed as a Java package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, an optional validity check and/or quality-enhancing process is often required. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm, so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion for assessing the quality of reconstruction is its density (or completeness), which is not addressed by the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL-HRSC reconstruction workflow. This algorithm's performance is reasonable even for close-range imagery, so long as the stereo pair does not have too large a baseline displacement. For post-processing, a Bundle Adjustment (BA) is used to optimise the initial calibration parameters, which bootstrap the reconstruction results. Amongst the many options for the non-linear optimisation, the LMA has been adopted due to its stability, so that the BA searches for the best calibration parameters whilst iteratively minimising the re-projection errors of the initial reconstruction points. For the evaluation of the proposed method, its result is compared with the reconstruction from a disparity map provided by JPL using their operational processing system. Visual and quantitative comparisons will be presented, as well as updated camera parameters. As part of future work, we will investigate a method for expediting the stereo region growing process and look into the possibility of extending the use of the stereo workstation to orbital image processing. Such an interactive stereo workstation can also be used to digitize point and line features, as well as to assess the accuracy of stereo-processed results produced by other stereo matching algorithms available from within the consortium and elsewhere. It can also provide "ground truth", when suitably refined, for stereo matching algorithms, as well as visual cues as to why these matching algorithms sometimes fail, so that this can be mitigated in the future.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".
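
    The following is a minimal sketch of the LMA-based reprojection-error minimization at the heart of the bundle adjustment step described above. For brevity it refines only a single 3-D point against fixed, assumed-known camera matrices, whereas the actual BA also updates the calibration parameters; the camera matrices and the point are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project a homogeneous 3-D point X (length 4) with a 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]

def reprojection_residuals(Xw, cameras, observations):
    X = np.append(Xw, 1.0)
    return np.concatenate([project(P, X) - obs for P, obs in zip(cameras, observations)])

# Two illustrative cameras (identity intrinsics, small stereo baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
obs = [project(P, np.append(X_true, 1.0)) + 0.001 * np.random.randn(2) for P in (P1, P2)]

# Start from a perturbed initial triangulation and refine with Levenberg-Marquardt.
X0 = X_true + np.array([0.3, -0.2, 0.5])
fit = least_squares(reprojection_residuals, X0, args=((P1, P2), obs), method="lm")
print("refined 3-D point:", fit.x)
```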

  8. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    NASA Astrophysics Data System (ADS)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It remains a very challenging task to efficiently produce planetary mapping products from orbital remote sensing images. Photogrammetric processing of planetary stereo images suffers from many difficulties, such as a lack of ground control information and of informative features; among these, image matching is the most difficult task in planetary photogrammetry. This paper presents a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme is adopted in the DTM generation process, which helps to reduce the search space of image matching and to improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  9. A Description of the "Crow's Foot" Tunnel Concept

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.; Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III; Norman, R. Michael

    2006-01-01

    NASA Langley Research Center has actively pursued the development and the use of pictorial or three-dimensional perspective displays of tunnel-, pathway- or highway-in-the-sky concepts for presenting flight path information to pilots in all aircraft categories (e.g., transports, General Aviation, rotorcraft) since the late 1970s. Prominent among these efforts has been the development of the crow's foot tunnel concept. The crow's foot tunnel concept emerged as the consensus pathway concept from a series of interactive workshops that brought together government and industry display designers, test pilots, and airline pilots to iteratively design, debate, and fly various pathway concepts. Over years of use in many simulation and flight test activities at NASA and elsewhere, modifications have refined and adapted the tunnel concept for different applications and aircraft categories (i.e., conventional transports, High Speed Civil Transport, General Aviation). A description of those refinements follows the definition of the original tunnel concept.

  10. Process for solvent refining of coal using a denitrogenated and dephenolated solvent

    DOEpatents

    Garg, Diwakar; Givens, Edwin N.; Schweighardt, Frank K.

    1984-01-01

    A process is disclosed for the solvent refining of non-anthracitic coal at elevated temperatures and pressure in a hydrogen atmosphere using a hydrocarbon solvent which before being recycled in the solvent refining process is subjected to chemical treatment to extract substantially all nitrogenous and phenolic constituents from the solvent so as to improve the conversion of coal and the production of oil in the solvent refining process. The solvent refining process can be either thermal or catalytic. The extraction of nitrogenous compounds can be performed by acid contact such as hydrogen chloride or fluoride treatment, while phenolic extraction can be performed by caustic contact or contact with a mixture of silica and alumina.

  11. Optimization of palm oil physical refining process for reduction of 3-monochloropropane-1,2-diol (3-MCPD) ester formation.

    PubMed

    Zulkurnain, Musfirah; Lai, Oi Ming; Tan, Soo Choon; Abdul Latip, Razam; Tan, Chin Ping

    2013-04-03

    The reduction of 3-monochloropropane-1,2-diol (3-MCPD) ester formation in refined palm oil was achieved by incorporation of additional processing steps in the physical refining process to remove chloroester precursors prior to the deodorization step. The modified refining process was optimized for the least 3-MCPD ester formation and acceptable refined palm oil quality using response surface methodology (RSM) with five processing parameters: water dosage, phosphoric acid dosage, degumming temperature, activated clay dosage, and deodorization temperature. The removal of chloroester precursors was largely accomplished by increasing the water dosage, while the reduction of 3-MCPD esters was a compromise in oxidative stability and color of the refined palm oil because some factors such as acid dosage, degumming temperature, and deodorization temperature showed contradictory effects. The optimization resulted in 87.2% reduction of 3-MCPD esters from 2.9 mg/kg in the conventional refining process to 0.4 mg/kg, with color and oil stability index values of 2.4 R and 14.3 h, respectively.

  12. Comparison of oil refining and biodiesel production process between screw press and n-hexane techniques from beauty leaf feedstock

    NASA Astrophysics Data System (ADS)

    Bhuiya, M. M. K.; Rasul, M. G.; Khan, M. M. K.; Ashwath, N.

    2016-07-01

    The beauty leaf tree (Calophyllum inophyllum) is regarded as an alternative energy source for producing second-generation biodiesel because of its potential and the high oil content of its seed kernels. A refining (treatment) step is indispensable in the biodiesel production process because it can increase both the yield and the quality of the product. Oil extracted by both mechanical screw press and solvent extraction using n-hexane was refined. Five replications of 25 g of crude oil each for the screw press and five replications of 25 g each for n-hexane were selected for the refining and biodiesel conversion processes. The oil refining process consists of degumming, neutralization and dewaxing, which were performed to remove the gums (phosphorus-based compounds), free fatty acids, and waxes, respectively, from the fresh crude oil before the biodiesel conversion was carried out. The results indicated that mass conversion efficiencies of the refined oil of up to 73% and 81% were obtained for the screw press and n-hexane refining processes, respectively. It was also found that up to 88% and 90% of biodiesel were yielded in terms of mass conversion efficiency in the transesterification process for the screw press and n-hexane techniques, respectively. When the entire process (refining and transesterification) was considered, the conversion of beauty leaf tree (BLT) refined oil into biodiesel yielded up to 65% and 73% mass conversion efficiency for the screw press and n-hexane techniques, respectively. The physico-chemical properties of the crude oil, refined oil and biodiesel were characterized according to ASTM standards. Overall, BLT has the potential to contribute as an alternative energy source because of its high mass conversion efficiency.

  13. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
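
    As an illustration of the iterative category discussed above, the following sketch implements one classic row-action scheme (ART / Kaczmarz): the imaging process is modeled as a linear system and the image estimate is corrected one projection at a time. The random system matrix is only a stand-in for a real CMT projection model.

```python
import numpy as np

def art(W, p, n_sweeps=100, relax=1.0):
    """Algebraic reconstruction technique: cyclic Kaczmarz row projections."""
    x = np.zeros(W.shape[1])
    row_norms = np.sum(W * W, axis=1)
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            if row_norms[i] == 0.0:
                continue
            # Project the current estimate onto the hyperplane of measurement i.
            x += relax * (p[i] - W[i] @ x) / row_norms[i] * W[i]
    return x

rng = np.random.default_rng(1)
W = rng.standard_normal((200, 100))     # stand-in system matrix (rays x pixels)
x_true = rng.random(100)                # stand-in image
p = W @ x_true                          # noiseless projections
x_rec = art(W, p)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```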

  14. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors that limit material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed in each iterative process could help to improve it. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring process and simulation, while experiments indicate that material removal accuracy decreases over long machining times, so removing only a small amount of material in each iterative process is suggested. However, this introduces more clamping and measuring steps, which also generate machining errors and limit the achievable material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ100 mm planar Zerodur workpiece is performed, which shows that, in a similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.
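
    The following schematic 1-D toy model illustrates why small per-iteration removal with intermediate re-measurement aids convergence: in each pass a dwell map proportional to the remaining error is convolved with a Gaussian removal function and the predicted removal is subtracted. The beam width, error profile and proportional dwell law are our own illustrative assumptions, not the correction procedure of the paper.

```python
import numpy as np

x = np.linspace(-50.0, 50.0, 501)                    # lateral coordinate (mm)
removal_fn = np.exp(-x**2 / (2.0 * 4.0**2))          # assumed Gaussian beam footprint
removal_fn /= removal_fn.sum()                       # normalized discrete kernel

error = 50e-9 * np.sin(2.0 * np.pi * x / 40.0) ** 2  # initial surface error map (m)

for it in range(3):                                  # three figuring iterations
    dwell = np.clip(error, 0.0, None)                # dwell proportional to local error
    predicted_removal = np.convolve(dwell, removal_fn, mode="same")
    error = error - predicted_removal                # "re-measure" the residual error
    rms_nm = np.sqrt(np.mean(error**2)) * 1e9
    print(f"iteration {it + 1}: RMS error = {rms_nm:.2f} nm")
```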

  15. Magnesium Recycling of Partially Oxidized, Mixed Magnesium-Aluminum Scrap through Combined Refining and Solid Oxide Membrane Electrolysis Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiaofei Guan; Peter A. Zink; Uday B. Pal

    2012-01-01

    Pure magnesium (Mg) is recycled from 19 g of partially oxidized 50.5 wt.% Mg-aluminum (Al) alloy. During the refining process, potentiodynamic scans (PDS) were performed to determine the electrorefining potential for magnesium. The PDS show that the electrorefining potential increases over time as the magnesium content inside the Mg-Al scrap decreases. Up to 100% of the magnesium is refined from the Mg-Al scrap by a novel refining process of dissolving magnesium and its oxide into a flux, followed by vapor-phase removal of the dissolved magnesium and subsequent condensation of the magnesium vapor. The solid oxide membrane (SOM) electrolysis process is employed in the refining system to enable additional recycling of magnesium from magnesium oxide (MgO) in the partially oxidized Mg-Al scrap. The combination of the refining and SOM processes yields 7.4 g of pure magnesium.

  16. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

    Iterative processing solutions, including multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. Remaining error sources are the measurement uncertainty and the repeatability of the material-removal process including clamping errors. Due to the lack of processing forces, process fluids and wear, pulsed-laser ablation has proven high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This way efficient iterative processing is enabled, which is precise, applicable for all tool materials including diamond and eliminates clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within 2 μm diameter tolerance.

  17. MeProRisk - a Joint Venture for Minimizing Risk in Geothermal Reservoir Development

    NASA Astrophysics Data System (ADS)

    Clauser, C.; Marquart, G.

    2009-12-01

    Exploration and development of geothermal reservoirs for the generation of electric energy involves high engineering and economic risks due to the need for 3-D geophysical surface surveys and deep boreholes. The MeProRisk project provides a strategy guideline for reducing these risks by combining cross-disciplinary information from different specialists: scientists from three German universities and two private companies contribute new methods in seismic modeling and interpretation, numerical reservoir simulation, estimation of petrophysical parameters, and 3-D visualization. The approach chosen in MeProRisk consists in treating the prospecting and development of geothermal reservoirs as an iterative process. A first conceptual model for fluid flow and heat transport simulation can be developed based on the limited initial information available on geology and rock properties. In the next step, additional data are incorporated, based on (a) new seismic interpretation methods designed for delineating fracture systems, (b) statistical studies on large numbers of rock samples for estimating reliable rock parameters, and (c) in situ estimates of the hydraulic conductivity tensor. This results in a continuous refinement of the reservoir model, where inverse modelling of fluid flow and heat transport allows inferring the uncertainty and resolution of the model at each iteration step. This finally yields a calibrated reservoir model which may be used to direct further exploration by optimizing additional borehole locations, to estimate the uncertainty of key operational and economic parameters, and to optimize the long-term operation of a geothermal reservoir.

  18. Participatory design in the development of the wheelchair convoy system

    PubMed Central

    Sharma, Vinod; Simpson, Richard C; LoPresti, Edmund F; Mostowy, Casimir; Olson, Joseph; Puhlman, Jeremy; Hayashi, Steve; Cooper, Rory A; Konarski, Ed; Kerley, Barry

    2008-01-01

    Background In long-term care environments, residents who have severe mobility deficits are typically transported by having another person push the individual in a manual wheelchair. This practice is inefficient and encourages staff to hurry to complete the process, thereby setting the stage for unsafe practices. Furthermore, the time involved in assembling multiple individuals with disabilities often deters their participation in group activities. Methods The Wheelchair Convoy System (WCS) is being developed to allow a single caregiver to move multiple individuals without removing them from their wheelchairs. The WCS will consist of a processor, and a flexible cord linking each wheelchair to the wheelchair in front of it. A Participatory Design approach – in which several iterations of design, fabrication and evaluation are used to elicit feedback from users – was used. Results An iterative cycle of development and evaluation was followed through five prototypes of the device. The third and fourth prototypes were evaluated in unmanned field trials at J. Iverson Riddle Development Center. The prototypes were used to form a convoy of three wheelchairs that successfully completed a series of navigation tasks. Conclusion A Participatory Design approach to the project allowed the design of the WCS to quickly evolve towards a viable solution. The design that emerged by the end of the fifth development cycle bore little resemblance to the initial design, but successfully met the project's design criteria. Additional development and testing is planned to further refine the system. PMID:18171465

  19. Learning normalized inputs for iterative estimation in medical image segmentation.

    PubMed

    Drozdzal, Michal; Chartrand, Gabriel; Vorontsov, Eugene; Shakeri, Mahsa; Di Jorio, Lisa; Tang, An; Romero, Adriana; Bengio, Yoshua; Pal, Chris; Kadoury, Samuel

    2018-02-01

    In this paper, we introduce a simple yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks and ResNets. Our approach focuses on the importance of trainable pre-processing when using FC-ResNets, and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of an FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that using this pipeline we achieve state-of-the-art performance on the challenging Electron Microscopy benchmark when compared to other 2D methods. We improve segmentation results on CT images of liver lesions when compared with standard FCN methods. Moreover, when applying our 2D pipeline to a challenging 3D MRI prostate segmentation challenge, we reach results that are competitive even when compared to 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline by achieving accurate segmentations on a variety of image modalities and different anatomical regions. Copyright © 2017 Elsevier B.V. All rights reserved.
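
    A schematic PyTorch-style sketch of the two-stage idea described above: a low-capacity FCN acts as a trainable pre-processor that normalizes the input, and a fully convolutional residual network produces the segmentation from the normalized image. The channel counts, depths and layer choices are illustrative placeholders, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class NormalizeThenRefine(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, ch=32, n_blocks=4):
        super().__init__()
        # Low-capacity FCN used as a trainable pre-processor / normalizer.
        self.fcn = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, in_ch, 3, padding=1),
        )
        # Fully convolutional residual network producing the segmentation logits.
        self.resnet = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1),
            *[ResidualBlock(ch) for _ in range(n_blocks)],
            nn.Conv2d(ch, n_classes, 1),
        )
    def forward(self, x):
        return self.resnet(self.fcn(x))

model = NormalizeThenRefine()
logits = model(torch.randn(2, 1, 128, 128))      # e.g. a batch of grayscale slices
print(logits.shape)                               # torch.Size([2, 2, 128, 128])
```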

  20. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation on 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254

  1. Segmentation of anterior cruciate ligament in knee MR images using graph cuts with patient-specific shape constraints and label refinement.

    PubMed

    Lee, Hansang; Hong, Helen; Kim, Junmo

    2014-12-01

    We propose a graph-cut-based segmentation method for the anterior cruciate ligament (ACL) in knee MRI with a novel shape prior and label refinement. As the initial seeds for graph cuts, candidates for the ACL and the background are extracted from knee MRI roughly by means of adaptive thresholding with Gaussian mixture model fitting. The extracted ACL candidate is segmented iteratively by graph cuts with patient-specific shape constraints. Two shape constraints termed fence and neighbor costs are suggested such that the graph cuts prevent any leakage into adjacent regions with similar intensity. The segmented ACL label is refined by means of superpixel classification. Superpixel classification makes the segmented label propagate into missing inhomogeneous regions inside the ACL. In the experiments, the proposed method segmented the ACL with Dice similarity coefficient of 66.47±7.97%, average surface distance of 2.247±0.869, and root mean squared error of 3.538±1.633, which increased the accuracy by 14.8%, 40.3%, and 37.6% from the Boykov model, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Modeling as an Anchoring Scientific Practice for Explaining Friction Phenomena

    NASA Astrophysics Data System (ADS)

    Neilson, Drew; Campbell, Todd

    2017-12-01

    Through examining the day-to-day work of scientists, researchers in science studies have revealed how models are a central sense-making practice of scientists as they construct and critique explanations about how the universe works. Additionally, they allow predictions to be made using the tenets of the model. Given this, alongside research suggesting that engaging students in developing and using models can have a positive effect on learning in science classrooms, the recent national standards documents in science education have identified developing and using models as an important practice students should engage in as they apply and refine their ideas with peers and teachers in explaining phenomena or solving problems in classrooms. This article details how students can be engaged in developing and using models to help them make sense of friction phenomena in a high school conceptual physics classroom in ways that align with visions for teaching and learning outlined in the Next Generation Science Standards. This particular unit has been refined over several years to build on what was initially an inquiry-based unit we have described previously. In this latest iteration of the friction unit, students developed and refined models through engaging in small group and whole class discussions and investigations.

  3. Intermetallic Growth and Interfacial Properties of the Grain Refiners in Al Alloys

    PubMed Central

    Li, Chunmei; Cheng, Nanpu; Chen, Zhiqian; Xie, Zhongjing; Hui, Liangliang

    2018-01-01

    Al3TM(TM = Ti, Zr, Hf, Sc) particles acting as effective grain refiners for Al alloys have been receiving extensive attention these days. In order to judge their nucleation behaviors, first-principles calculations are used to investigate their intermetallic and interfacial properties. Based on energy analysis, Al3Zr and Al3Sc are more suitable for use as grain refiners than the other two intermetallic compounds. Interfacial properties show that Al/Al3TM(TM = Ti, Zr, Hf, Sc) interfaces in I-ter interfacial mode exhibit better interface wetting effects due to larger Griffith rupture work and a smaller interface energy. Among these, Al/Al3Sc achieves the lowest interfacial energy, which shows that Sc atoms should get priority for occupying interfacial sites. Additionally, Sc-doped Al/Al3(Zr, Sc) interfacial properties show that Sc can effectively improve the Al/Al3(Zr, Sc) binding strength with the Al matrix. By combining the characteristics of interfaces with the properties of intermetallics, the core-shell structure with Al3Zr-core or Al3Zr(Sc1-1)-core encircled with an Sc-rich shell forms. PMID:29677155

  4. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset threshold. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
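
    Because the procedure is described step by step, it can be sketched directly; the version below uses the standard Otsu implementation from scikit-image, with the stopping tolerance and the handling of degenerate TBD regions chosen for illustration rather than taken from the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu

def iterative_triclass(image, tol=1e-3, max_iter=50):
    foreground = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)           # to-be-determined region
    prev_t = None
    for _ in range(max_iter):
        values = image[tbd]
        if values.size < 2 or values.min() == values.max():
            break
        t = threshold_otsu(values)                   # Otsu threshold on the TBD region
        mu_low = values[values <= t].mean()          # mean of the lower class
        mu_high = values[values > t].mean()          # mean of the upper class
        foreground |= tbd & (image > mu_high)        # settled foreground pixels
        new_tbd = tbd & (image > mu_low) & (image <= mu_high)
        if prev_t is not None and abs(t - prev_t) < tol:
            foreground |= new_tbd & (image > t)      # resolve the final TBD region
            break
        prev_t, tbd = t, new_tbd
    return foreground                                # background is the complement

rng = np.random.default_rng(0)
img = rng.normal(0.3, 0.1, (128, 128))
img[32:96, 32:96] += 0.4                             # a weak bright object
print("foreground pixels:", int(iterative_triclass(img).sum()))
```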

  5. Using Rapid Prototyping to Design a Smoking Cessation Website with End-Users.

    PubMed

    Ronquillo, Charlene; Currie, Leanne; Rowsell, Derek; Phillips, J Craig

    2016-01-01

    Rapid prototyping is an iterative approach to design involving cycles of prototype building, review by end-users and refinement, and can be a valuable tool in user-centered website design. Informed by various user-centered approaches, we used rapid prototyping as a tool to collaborate with users in building a peer-support focused smoking-cessation website for gay men living with HIV. Rapid prototyping was effective in eliciting feedback on the needs of this group of potential end-users from a smoking cessation website.

  6. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray W. S.

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.

  8. NREL Spectrum of Clean Energy Innovation (Brochure)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2011-09-01

    This brochure describes the NREL Spectrum of Clean Energy Innovation, which includes analysis and decision support, fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. Through deep technical expertise and an unmatched breadth of capabilities, the National Renewable Energy Laboratory (NREL) leads an integrated approach across the spectrum of renewable energy innovation. From scientific discovery to accelerating market deployment, NREL works in partnership with private industry to drive the transformation of our nation's energy systems. NREL integrates the entire spectrum of innovation, including fundamental science, market relevant research, systems integration, testing and validation, commercialization, and deployment. Our world-class analysis and decision support informs every point on the spectrum. The innovation process at NREL is inter-dependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies may come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.

  9. Feature selection with harmony search.

    PubMed

    Diao, Ren; Shen, Qiang

    2012-12-01

    Many search strategies have been exploited for the task of feature selection (FS), in an effort to identify more compact and better quality subsets. Such work typically involves the use of greedy hill climbing (HC), or nature-inspired heuristics, in order to discover the optimal solution without going through exhaustive search. In this paper, a novel FS approach based on harmony search (HS) is presented. It is a general approach that can be used in conjunction with many subset evaluation techniques. The simplicity of HS is exploited to reduce the overall complexity of the search process. The proposed approach is able to escape from local solutions and identify multiple solutions owing to the stochastic nature of HS. Additional parameter control schemes are introduced to reduce the effort and impact of parameter configuration. These can be further combined with the iterative refinement strategy, tailored to enforce the discovery of quality subsets. The resulting approach is compared with those that rely on HC, genetic algorithms, and particle swarm optimization, accompanied by in-depth studies of the suggested improvements.
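
    A minimal sketch of harmony search applied to feature selection, in the spirit described above: each harmony is a binary feature mask improvised from the harmony memory (with rate HMCR and pitch adjustment rate PAR) or at random, and the worst memory member is replaced whenever a better harmony is found. The least-squares subset evaluator, the synthetic data and all parameter values are illustrative stand-ins, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_informative = 200, 30, 5
X = rng.standard_normal((n_samples, n_features))
w = np.zeros(n_features)
w[:n_informative] = rng.standard_normal(n_informative)
y = X @ w + 0.1 * rng.standard_normal(n_samples)

def evaluate(mask):
    """Least-squares fit quality on the selected features, minus a size penalty."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r2 = 1.0 - np.sum((y - Xs @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
    return r2 - 0.01 * mask.sum()

def harmony_search_fs(hm_size=20, hmcr=0.9, par=0.3, n_iters=500):
    memory = rng.random((hm_size, n_features)) < 0.5      # random initial harmonies
    scores = np.array([evaluate(m) for m in memory])
    for _ in range(n_iters):
        new = np.empty(n_features, dtype=bool)
        for j in range(n_features):
            if rng.random() < hmcr:                        # take the bit from memory ...
                new[j] = memory[rng.integers(hm_size), j]
                if rng.random() < par:                     # ... with occasional pitch adjustment
                    new[j] = not new[j]
            else:                                          # ... or improvise it at random
                new[j] = rng.random() < 0.5
        score = evaluate(new)
        worst = np.argmin(scores)
        if score > scores[worst]:                          # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = memory[np.argmax(scores)]
    return np.flatnonzero(best), scores.max()

features, score = harmony_search_fs()
print("selected features:", features, "score:", round(score, 3))
```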

  10. Wideband Direction of Arrival Estimation in the Presence of Unknown Mutual Coupling

    PubMed Central

    Li, Weixing; Zhang, Yue; Lin, Jianzhi; Guo, Rui; Chen, Zengping

    2017-01-01

    This paper investigates a subarray-based algorithm for direction of arrival (DOA) estimation with a wideband uniform linear array (ULA) in the presence of frequency-dependent mutual coupling effects. Based on the Toeplitz structure of the mutual coupling matrices, the whole array is divided into a middle subarray and an auxiliary subarray. A two-sided correlation transformation is then applied to the correlation matrix of the middle subarray instead of the whole array. In this way, the mutual coupling effects can be eliminated. Finally, the multiple signal classification (MUSIC) method is utilized to derive the DOAs. For the case in which blind angles exist, we refine the DOA estimation by using a simple approach based on the frequency-dependent mutual coupling matrices (MCMs). The proposed method can achieve high estimation accuracy without any calibration sources. It has a low computational complexity because iterative processing is not required. Simulation results validate the effectiveness and feasibility of the proposed algorithm. PMID:28178177
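
    For context, the sketch below shows only the final narrowband MUSIC step for an ideal ULA without mutual coupling; the subarray construction and two-sided correlation transformation that handle the wideband and coupling effects in the paper are not reproduced, and the array geometry and source angles are illustrative.

```python
import numpy as np

def music_spectrum(R, n_sources, d_over_lambda, angles_deg):
    """Narrowband MUSIC pseudospectrum for an ideal ULA, from a covariance matrix R."""
    n_elements = R.shape[0]
    _, vecs = np.linalg.eigh(R)                       # eigenvalues in ascending order
    En = vecs[:, : n_elements - n_sources]            # noise subspace
    m = np.arange(n_elements)
    P = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * m * np.sin(theta))   # steering vector
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)

# Simulate two narrowband sources at -20 and 35 degrees on an 8-element half-wavelength ULA.
rng = np.random.default_rng(0)
N, M, snap = 8, 2, 400
true_deg = np.array([-20.0, 35.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(N), np.sin(np.deg2rad(true_deg))))
S = (rng.standard_normal((M, snap)) + 1j * rng.standard_normal((M, snap))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((N, snap)) + 1j * rng.standard_normal((N, snap)))
R = X @ X.conj().T / snap

angles = np.arange(-90.0, 90.5, 0.5)
P = music_spectrum(R, M, 0.5, angles)
peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1   # local maxima
top2 = peaks[np.argsort(P[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(angles[top2]))
```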

  11. A hybrid multiview stereo algorithm for modeling urban scenes.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first of segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second of sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  12. Development of a set of community-informed Ebola messages for Sierra Leone

    PubMed Central

    de Bruijne, Kars; Jalloh, Alpha M.; Harris, Muriel; Abdullah, Hussainatu; Boye-Thompson, Titus; Sankoh, Osman; Jalloh, Abdul K.; Jalloh-Vos, Heidi

    2017-01-01

    The West African Ebola epidemic of 2013–2016 was by far the largest outbreak of the disease on record. Sierra Leone suffered nearly half of the 28,646 reported cases. This paper presents a set of culturally contextualized Ebola messages that are based on the findings of qualitative interviews and focus group discussions conducted in 'hotspot' areas of rural Bombali District and urban Freetown in Sierra Leone, between January and March 2015. An iterative approach was taken in the message development process, whereby (i) data from formative research was subjected to thematic analysis to identify areas of community concern about Ebola and the national response; (ii) draft messages to address these concerns were produced; (iii) the messages were field tested; (iv) the messages were refined; and (v) a final set of messages on 14 topics was disseminated to relevant national and international stakeholders. Each message included details of its rationale, audience, dissemination channels, messengers, and associated operational issues that need to be taken into account. While developing the 14 messages, a set of recommendations emerged that could be adopted in future public health emergencies. These included the importance of embedding systematic, iterative qualitative research fully into the message development process; communication of the subsequent messages through a two-way dialogue with communities, using trusted messengers, and not only through a one-way, top-down communication process; provision of good, parallel operational services; and engagement with senior policy makers and managers as well as people in key operational positions to ensure national ownership of the messages, and to maximize the chance of their being utilised. The methodological approach that we used to develop our messages along with our suggested recommendations constitute a set of tools that could be incorporated into international and national public health emergency preparedness and response plans. PMID:28787444

  13. Linking the Congenital Heart Surgery Databases of the Society of Thoracic Surgeons and the Congenital Heart Surgeons’ Society: Part 1—Rationale and Methodology

    PubMed Central

    Jacobs, Jeffrey P.; Pasquali, Sara K.; Austin, Erle; Gaynor, J. William; Backer, Carl; Hirsch-Romano, Jennifer C.; Williams, William G.; Caldarone, Christopher A.; McCrindle, Brian W.; Graham, Karen E.; Dokholyan, Rachel S.; Shook, Gregory J.; Poteat, Jennifer; Baxi, Maulik V.; Karamlou, Tara; Blackstone, Eugene H.; Mavroudis, Constantine; Mayer, John E.; Jonas, Richard A.; Jacobs, Marshall L.

    2014-01-01

    Purpose The Society of Thoracic Surgeons Congenital Heart Surgery Database (STS-CHSD) is the largest Registry in the world of patients who have undergone congenital and pediatric cardiac surgical operations. The Congenital Heart Surgeons’ Society Database (CHSS-D) is an Academic Database designed for specialized detailed analyses of specific congenital cardiac malformations and related treatment strategies. The goal of this project was to create a link between the STS-CHSD and the CHSS-D in order to facilitate studies not possible using either individual database alone and to help identify patients who are potentially eligible for enrollment in CHSS studies. Methods Centers were classified on the basis of participation in the STS-CHSD, the CHSS-D, or both. Five matrices, based on CHSS inclusionary criteria and STS-CHSD codes, were created to facilitate the automated identification of patients in the STS-CHSD who meet eligibility criteria for the five active CHSS studies. The matrices were evaluated with a manual adjudication process and were iteratively refined. The sensitivity and specificity of the original matrices and the refined matrices were assessed. Results In January 2012, a total of 100 centers participated in the STS-CHSD and 74 centers participated in the CHSS. A total of 70 centers participate in both and 40 of these 70 agreed to participate in this linkage project. The manual adjudication process and the refinement of the matrices resulted in an increase in the sensitivity of the matrices from 93% to 100% and an increase in the specificity of the matrices from 94% to 98%. Conclusion Matrices were created to facilitate the automated identification of patients potentially eligible for the five active CHSS studies using the STS-CHSD. These matrices have a sensitivity of 100% and a specificity of 98%. In addition to facilitating identification of patients potentially eligible for enrollment in CHSS studies, these matrices will allow (1) estimation of the denominator of patients potentially eligible for CHSS studies and (2) comparison of eligible and enrolled patients to potentially eligible and not enrolled patients to assess the generalizability of CHSS studies. PMID:24668974

  14. Iteration and Prototyping in Creating Technical Specifications.

    ERIC Educational Resources Information Center

    Flynt, John P.

    1994-01-01

    Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)

  15. DART system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.

    2005-08-01

    The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
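
    A back-of-the-envelope model makes the trade-off concrete. Assume a step with once-through time t, backward-iteration probability p and rework fraction r, where each rework pass costs r·t and the number of rework passes is geometric; these modeling assumptions and the numbers below are ours, for illustration, and are not taken from the report.

```python
def expected_step_time(t, p, r):
    # E[T] = t + t*r*(p + p**2 + ...) = t * (1 + r * p / (1 - p))
    return t * (1.0 + r * p / (1.0 - p))

t, p, r = 10.0, 0.4, 0.6   # hypothetical once-through hours, iteration probability, rework fraction
print("baseline expected time:         ", expected_step_time(t, p, r))
print("10% shorter once-through time:  ", expected_step_time(0.9 * t, p, r))
print("10% lower iteration probability:", expected_step_time(t, 0.9 * p, r))
print("10% lower rework fraction:      ", expected_step_time(t, p, 0.9 * r))
```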

  16. Magnesium Recycling of Partially Oxidized, Mixed Magnesium-Aluminum Scrap Through Combined Refining and Solid Oxide Membrane (SOM) Electrolysis Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Xiaofei; Zink, Peter; Pal, Uday

    2012-03-11

    Pure magnesium (Mg) is recycled from 19 g of partially oxidized 50.5 wt.% Mg-aluminum (Al) alloy. During the refining process, potentiodynamic scans (PDS) were performed to determine the electrorefining potential for magnesium. The PDS show that the electrorefining potential increases over time as the Mg content inside the Mg-Al scrap decreases. Up to 100% of the magnesium is refined from the Mg-Al scrap by a novel refining process of dissolving magnesium and its oxide into a flux followed by vapor phase removal of dissolved magnesium and subsequently condensing the magnesium vapors in a separate condenser. The solid oxide membrane (SOM) electrolysis process is employed in the refining system to enable additional recycling of magnesium from magnesium oxide (MgO) in the partially oxidized Mg-Al scrap. The combination of the refining and SOM processes yields 7.4 g of pure magnesium, although not all of the recovered magnesium could be collected and weighed.

  17. Simplified lipid guidelines: Prevention and management of cardiovascular disease in primary care.

    PubMed

    Allan, G Michael; Lindblad, Adrienne J; Comeau, Ann; Coppola, John; Hudson, Brianne; Mannarino, Marco; McMinis, Cindy; Padwal, Raj; Schelstraete, Christine; Zarnke, Kelly; Garrison, Scott; Cotton, Candra; Korownyk, Christina; McCormack, James; Nickel, Sharon; Kolber, Michael R

    2015-10-01

    To develop clinical practice guidelines for a simplified approach to primary prevention of cardiovascular disease (CVD), concentrating on CVD risk estimation and lipid management for primary care clinicians and their teams; we sought increased contribution from primary care professionals with little or no conflict of interest and focused on the highest level of evidence available. Nine health professionals (4 family physicians, 2 internal medicine specialists, 1 nurse practitioner, 1 registered nurse, and 1 pharmacist) and 1 nonvoting member (pharmacist project manager) comprised the overarching Lipid Pathway Committee (LPC). Member selection was based on profession, practice setting, and location, and members disclosed any actual or potential conflicts of interest. The guideline process was iterative through online posting, detailed evidence review, and telephone and online meetings. The LPC identified 12 priority questions to be addressed. The Evidence Review Group answered these questions. After review of the answers, key recommendations were derived through consensus of the LPC. The guidelines were drafted, refined, and distributed to a group of clinicians (family physicians, other specialists, pharmacists, nurses, and nurse practitioners) and patients for feedback, then refined again and finalized by the LPC. Recommendations are provided on screening and testing, risk assessments, interventions, follow-up, and the role of acetylsalicylic acid in primary prevention. These simplified lipid guidelines provide practical recommendations for prevention and treatment of CVD for primary care practitioners. All recommendations are intended to assist with, not dictate, decision making in conjunction with patients. Copyright© the College of Family Physicians of Canada.

  18. Simplified lipid guidelines

    PubMed Central

    Allan, G. Michael; Lindblad, Adrienne J.; Comeau, Ann; Coppola, John; Hudson, Brianne; Mannarino, Marco; McMinis, Cindy; Padwal, Raj; Schelstraete, Christine; Zarnke, Kelly; Garrison, Scott; Cotton, Candra; Korownyk, Christina; McCormack, James; Nickel, Sharon; Kolber, Michael R.

    2015-01-01

    Abstract Objective To develop clinical practice guidelines for a simplified approach to primary prevention of cardiovascular disease (CVD), concentrating on CVD risk estimation and lipid management for primary care clinicians and their teams; we sought increased contribution from primary care professionals with little or no conflict of interest and focused on the highest level of evidence available. Methods Nine health professionals (4 family physicians, 2 internal medicine specialists, 1 nurse practitioner, 1 registered nurse, and 1 pharmacist) and 1 nonvoting member (pharmacist project manager) comprised the overarching Lipid Pathway Committee (LPC). Member selection was based on profession, practice setting, and location, and members disclosed any actual or potential conflicts of interest. The guideline process was iterative through online posting, detailed evidence review, and telephone and online meetings. The LPC identified 12 priority questions to be addressed. The Evidence Review Group answered these questions. After review of the answers, key recommendations were derived through consensus of the LPC. The guidelines were drafted, refined, and distributed to a group of clinicians (family physicians, other specialists, pharmacists, nurses, and nurse practitioners) and patients for feedback, then refined again and finalized by the LPC. Recommendations Recommendations are provided on screening and testing, risk assessments, interventions, follow-up, and the role of acetylsalicylic acid in primary prevention. Conclusion These simplified lipid guidelines provide practical recommendations for prevention and treatment of CVD for primary care practitioners. All recommendations are intended to assist with, not dictate, decision making in conjunction with patients. PMID:26472792

  19. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
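
    As a concrete illustration of the dynamic-programming machinery named above, the sketch below runs value iteration on a tiny synthetic MDP; the state/action space, rewards, and transition probabilities are invented for illustration and are far smaller than the Web-service composition problems in the abstract.

```python
import numpy as np

# Value iteration on a toy MDP (illustrative only; not the WSC formulation).
n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0, 1, size=(n_states, n_actions))                 # R[s, a]

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V        # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V_new = Q.max(axis=1)        # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)        # greedy policy w.r.t. the converged values
print("optimal value per state:", np.round(V, 3))
print("greedy policy          :", policy)
```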

  20. Separation of Lead from Crude Antimony by Pyro-Refining Process with NaPO3 Addition

    NASA Astrophysics Data System (ADS)

    Ye, Longgang; Hu, Yuejie; Xia, Zhimei; Chen, Yongming

    2016-06-01

    The main purpose of this study was to separate lead from crude antimony through an oxidation pyro-refining process and by using sodium metaphosphate as a lead elimination reagent. The process parameters that will affect the refining results were optimized experimentally under controlled conditions, such as the sodium metaphosphate charging dosage, the refining temperature and duration, and the air flow rate, to determine their effect on the lead content in refined antimony and the lead removal rate. A minimum lead content of 0.0522 wt.% and a 98.6% lead removal rate were obtained under the following optimal conditions: W_NaPO3 = 15% of W_Sb (where W represents weight), a refining temperature of 800°C, a refining time of 30 min, and an air flow rate of 3 L/min. X-ray diffractometry and scanning electron microscopy showed that high-purity antimony was obtained. The smelting operation is free from smoke or ammonia pollution when using monobasic sodium phosphate or ammonium dihydrogen phosphate as the lead elimination reagent. However, this refining process can also remove a certain amount of sulfur, cobalt, and silicon simultaneously, and smelting results also suggest that sodium metaphosphate can be used as a potential lead elimination reagent for bismuth and copper refining.

  1. Benefits of object-oriented models and ModeliChart: modern tools and methods for the interdisciplinary research on smart biomedical technology.

    PubMed

    Gesenhues, Jonas; Hein, Marc; Ketelhut, Maike; Habigt, Moriz; Rüschen, Daniel; Mechelinck, Mare; Albin, Thivaharan; Leonhardt, Steffen; Schmitz-Rode, Thomas; Rossaint, Rolf; Autschbach, Rüdiger; Abel, Dirk

    2017-04-01

    Computational models of biophysical systems generally constitute an essential component in the realization of smart biomedical technological applications. Typically, the development process of such models is characterized by a great extent of collaboration between different interdisciplinary parties. Furthermore, due to the fact that many underlying mechanisms and the necessary degree of abstraction of biophysical system models are unknown beforehand, the steps of the development process of the application are iteratively repeated when the model is refined. This paper presents some methods and tools to facilitate the development process. First, the principle of object-oriented (OO) modeling is presented and the advantages over classical signal-oriented modeling are emphasized. Second, our self-developed simulation tool ModeliChart is presented. ModeliChart was designed specifically for clinical users and allows independently performing in silico studies in real time including intuitive interaction with the model. Furthermore, ModeliChart is capable of interacting with hardware such as sensors and actuators. Finally, it is presented how optimal control methods in combination with OO models can be used to realize clinically motivated control applications. All methods presented are illustrated on an exemplary clinically oriented use case of the artificial perfusion of the systemic circulation.

  2. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
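
    For readers unfamiliar with the algorithm being analyzed, the sketch below shows one concrete instance of online composite mirror descent: the Euclidean mirror map with an l1 regularizer, for which the composite step reduces to soft-thresholding (online proximal gradient). The paper's analysis covers general strongly convex mirror maps and Hölder-smooth losses; the data, step sizes, and regularization strength here are assumptions.

```python
import numpy as np

# Online composite mirror descent with Phi(w) = 0.5*||w||^2 and an l1
# regularizer, i.e. online proximal gradient with soft-thresholding.
# Synthetic streaming least-squares data; all constants are illustrative.

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(1)
d, T, lam = 20, 500, 0.05
w_true = np.zeros(d); w_true[:3] = [1.0, -2.0, 0.5]      # sparse target
w = np.zeros(d)
for t in range(1, T + 1):
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    grad = (w @ x - y) * x                    # gradient of 0.5*(w.x - y)^2
    eta = 0.5 / np.sqrt(t)                    # polynomially decaying step size
    w = soft_threshold(w - eta * grad, eta * lam)  # composite mirror step
print("nonzero coordinates of the last iterate:", np.flatnonzero(np.abs(w) > 1e-3))
```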

  3. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    NASA Astrophysics Data System (ADS)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

    A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than the sequential one for low resolution cases, with an increasing speedup when the discretization mesh is refined. Moreover, it allows to perform simulations with higher resolutions, previously forbidden because of memory limitations.

  4. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image for two main reasons. The first one is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second one refers to the huge amount of operations involved in these time-consuming processes. This work proposes an algorithm to estimate the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantage of this algorithm is its high degree of parallelization and the mathematical simplicity of the operations implemented. This algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error, according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution. Given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
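
    A minimal sketch of this kind of iterative refinement is given below: joint gradient-descent updates of endmembers and abundances under the linear mixing model, with a simplified projection standing in for the abundance constraints. The data are synthetic, and all dimensions and step sizes are assumptions; the actual algorithm in the paper adds further restrictions and a highly parallel formulation.

```python
import numpy as np

# Jointly refine endmembers E and abundances A by gradient descent under the
# linear mixing model X ~= E @ A. The abundance projection (clip to >= 0,
# renormalize columns to sum to 1) is a simplification of the restrictions
# mentioned in the abstract; data and constants are illustrative.

rng = np.random.default_rng(2)
bands, pixels, p = 50, 400, 3
E_true = rng.uniform(0, 1, (bands, p))
A_true = rng.dirichlet(np.ones(p), size=pixels).T          # columns sum to 1
X = E_true @ A_true + 0.01 * rng.normal(size=(bands, pixels))

E = rng.uniform(0, 1, (bands, p))
A = np.full((p, pixels), 1.0 / p)
lr = 1e-3
for _ in range(2000):
    R = E @ A - X                        # residual of the linear mixing model
    E -= lr * R @ A.T                    # descent step on the endmembers
    A -= lr * E.T @ R                    # descent step on the abundances
    A = np.clip(A, 0.0, None)            # abundance non-negativity
    A /= A.sum(axis=0, keepdims=True)    # abundance sum-to-one
print("relative reconstruction error:",
      np.linalg.norm(E @ A - X) / np.linalg.norm(X))
```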

  5. Design principles for simulation games for learning clinical reasoning: A design-based research approach.

    PubMed

    Koivisto, J-M; Haavisto, E; Niemi, H; Haho, P; Nylund, S; Multisilta, J

    2018-01-01

    Nurses sometimes lack the competence needed for recognising deterioration in patient conditions and this is often due to poor clinical reasoning. There is a need to develop new possibilities for learning this crucial competence area. In addition, educators need to be future oriented; they need to be able to design and adopt new pedagogical innovations. The purpose of the study is to describe the development process and to generate principles for the design of nursing simulation games. A design-based research methodology is applied in this study. Iterative cycles of analysis, design, development, testing and refinement were conducted via collaboration among researchers, educators, students, and game designers. The study facilitated the generation of reusable design principles for simulation games to guide future designers when designing and developing simulation games for learning clinical reasoning. This study makes a major contribution to research on simulation game development in the field of nursing education. The results of this study provide important insights into the significance of involving nurse educators in the design and development process of educational simulation games for the purpose of nursing education. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.

  7. Development of a technical assistance framework for building organizational capacity of health programs in resource-limited settings.

    PubMed

    Reyes, E Michael; Sharma, Anjali; Thomas, Kate K; Kuehn, Chuck; Morales, José Rafael

    2014-09-17

    Little information exists on the technical assistance needs of local indigenous organizations charged with managing HIV care and treatment programs funded by the US President's Emergency Plan for AIDS Relief (PEPFAR). This paper describes the methods used to adapt the Primary Care Assessment Tool (PCAT) framework, which has successfully strengthened HIV primary care services in the US, into one that could strengthen the capacity of local partners to deliver priority health programs in resource-constrained settings by identifying their specific technical assistance needs. Qualitative methods and inductive reasoning approaches were used to conceptualize and adapt the new Clinical Assessment for Systems Strengthening (ClASS) framework. Stakeholder interviews, comparisons of existing assessment tools, and a pilot test helped determine the overall ClASS framework for use in low-resource settings. The framework was further refined one year post-ClASS implementation. Stakeholder interviews, assessment of existing tools, a pilot process and the one-year post-implementation assessment informed the adaptation of the ClASS framework for assessing and strengthening technical and managerial capacities of health programs at three levels: international partner, local indigenous partner, and local partner treatment facility. The PCAT focus on organizational strengths and systems strengthening was retained and implemented in the ClASS framework and approach. A modular format was chosen to allow the use of administrative, fiscal and clinical modules in any combination and to insert new modules as needed by programs. The pilot led to refined pre-visit planning, informed review team composition, increased visit duration, and restructured modules. A web-based toolkit was developed to capture three years of experiential learning; this kit can also be used for independent implementation of the ClASS framework. A systematic adaptation process has produced a qualitative framework that can inform implementation strategies in support of country-led HIV care and treatment programs. The framework, as a well-received iterative process focused on technical assistance, may have broader utility in other global programs.

  8. A developmental evaluation to enhance stakeholder engagement in a wide-scale interactive project disseminating quality improvement data: study protocol for a mixed-methods study

    PubMed Central

    Laycock, Alison; Bailie, Jodie; Matthews, Veronica; Cunningham, Frances; Harvey, Gillian; Percival, Nikki; Bailie, Ross

    2017-01-01

    Introduction Bringing together continuous quality improvement (CQI) data from multiple health services offers opportunities to identify common improvement priorities and to develop interventions at various system levels to achieve large-scale improvement in care. An important principle of CQI is practitioner participation in interpreting data and planning evidence-based change. This study will contribute knowledge about engaging diverse stakeholders in collaborative and theoretically informed processes to identify and address priority evidence-practice gaps in care delivery. This paper describes a developmental evaluation to support and refine a novel interactive dissemination project using aggregated CQI data from Aboriginal and Torres Strait Islander primary healthcare centres in Australia. The project aims to effect multilevel system improvement in Aboriginal and Torres Strait Islander primary healthcare. Methods and analysis Data will be gathered using document analysis, online surveys, interviews with participants and iterative analytical processes with the research team. These methods will enable real-time feedback to guide refinements to the design, reports, tools and processes as the interactive dissemination project is implemented. Qualitative data from interviews and surveys will be analysed and interpreted to provide in-depth understanding of factors that influence engagement and stakeholder perspectives about use of the aggregated data and generated improvement strategies. Sources of data will be triangulated to build up a comprehensive, contextualised perspective and integrated understanding of the project's development, implementation and findings. Ethics and dissemination The Human Research Ethics Committee (HREC) of the Northern Territory Department of Health and Menzies School of Health Research (Project 2015-2329), the Central Australian HREC (Project 15-288) and the Charles Darwin University HREC (Project H15030) approved the study. Dissemination will include articles in peer-reviewed journals, policy and research briefs. Results will be presented at conferences and quality improvement network meetings. Researchers, clinicians, policymakers and managers developing evidence-based system and policy interventions should benefit from this research. PMID:28710222

  9. Practical Advances in Petroleum Processing

    NASA Astrophysics Data System (ADS)

    Hsu, Chang S.; Robinson, Paul R.

    "This comprehensive book by Robinson and Hsu will certainly become the standard text book for the oil refining business...[A] must read for all who are associated with oil refining." - Dr. Walter Fritsch, Senior Vice President Refining, OMV "This book covers a very advanced horizon of petroleum processing technology. For all refiners facing regional and global environmental concerns, and for those who seek a more sophisticated understanding of the refining of petroleum resources, this book has been long in coming." - Mr. Naomasa Kondo, Cosmo Oil Company, Ltd.

  10. The three stages of building and testing mid-level theories in a realist RCT: a theoretical and methodological case-example.

    PubMed

    Jamal, Farah; Fletcher, Adam; Shackleton, Nichola; Elbourne, Diana; Viner, Russell; Bonell, Chris

    2015-10-15

    Randomised controlled trials (RCTs) of social interventions are often criticised as failing to open the 'black box' whereby they only address questions about 'what works' without explaining the underlying processes of implementation and mechanisms of action, and how these vary by contextual characteristics of person and place. Realist RCTs are proposed as an approach to evaluation science that addresses these gaps while preserving the strengths of RCTs in providing evidence with strong internal validity in estimating effects. In the context of growing interest in designing and conducting realist trials, there is an urgent need to offer a worked example to provide guidance on how such an approach might be practically taken forward. The aim of this paper is to outline a three-staged theoretical and methodological process of undertaking a realist RCT using the example of the evaluation of a whole-school restorative intervention aiming to reduce aggression and bullying in English secondary schools. First, informed by the findings of our initial pilot trial and sociological theory, we elaborate our theory of change and specific a priori hypotheses about how intervention mechanisms interact with context to produce outcomes. Second, we describe how we will use emerging findings from the integral process evaluation within the RCT to refine, and add to, these a priori hypotheses before the collection of quantitative, follow-up data. Third, we will test our hypotheses using a combination of process and outcome data via quantitative analyses of effect mediation (examining mechanisms) and moderation (examining contextual contingencies). The results are then used to refine and further develop the theory of change. The aim of the realist RCT approach is thus not merely to assess whether the intervention is effective or not, but to develop empirically informed mid-range theory through a three-stage process. There are important implications for those involved with reporting and reviewing RCTs, including the use of new, iterative protocols. Current Controlled Trials ISRCTN10751359 (Registered 11 March 2014).

  11. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the mesh generated is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value so that the error is below a predetermined tolerance. A posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Some use other criteria, such as strain energy density variation and stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a priori methods available until now use geometrical parameters such as element aspect ratio. Therefore, they are not adaptive by nature. An adaptive a priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence as opposed to when the procedure is not used. The mesh obtained is shown to have uniform distribution of stiffness among the nodes and elements, which, as a consequence, leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
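
    The criterion can be checked on a very small example. For a uniform 1-D bar discretized with two-node elements, the assembled stiffness-matrix trace is the sum of 2EA/L_i over the elements, so minimizing it over the interior node positions should recover equal element lengths; the sketch below (a 1-D illustration only, not the paper's general formulation) confirms this numerically.

```python
import numpy as np
from scipy.optimize import minimize

# 1-D check: minimize the assembled stiffness-matrix trace sum_i 2*E*A/L_i of
# a uniform bar over the interior node positions. All values are illustrative.
E, A, L_total, n_elem = 1.0, 1.0, 10.0, 5

def stiffness_trace(interior_nodes):
    nodes = np.concatenate(([0.0], np.sort(interior_nodes), [L_total]))
    lengths = np.diff(nodes)
    if np.any(lengths <= 1e-6):
        return np.inf                      # reject degenerate/inverted meshes
    return np.sum(2.0 * E * A / lengths)

x0 = np.sort(np.random.default_rng(3).uniform(0.5, L_total - 0.5, n_elem - 1))
res = minimize(stiffness_trace, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
nodes = np.concatenate(([0.0], np.sort(res.x), [L_total]))
print("element lengths:", np.round(np.diff(nodes), 3))  # approach uniform 2.0
```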

  12. Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model

    USGS Publications Warehouse

    Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.

    2012-01-01

    This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.

  13. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

    Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.

  14. Effect of abutment angulation on the strain on the bone around an implant in the anterior maxilla: a finite element study.

    PubMed

    Saab, Xavier E; Griggs, Jason A; Powers, John M; Engelmeier, Robert L

    2007-02-01

    Angled abutments are often used to restore dental implants placed in the anterior maxilla due to esthetic or spatial needs. The effect of abutment angulation on bone strain is unknown. The purpose of the current study was to measure and compare the strain distribution on the bone around an implant in the anterior maxilla using 2 different abutments by means of finite element analysis. Two-dimensional finite element models were designed using software (ANSYS) for 2 situations: (1) an implant with a straight abutment in the anterior maxilla, and (2) an implant with an angled abutment in the anterior maxilla. The implant used was 4x13 mm (MicroThread). The maxillary bone was modeled as type 3 bone with a cortical layer thickness of 0.5 mm. Oblique loads of 178 N were applied on the cingulum area of both models. Seven consecutive iterations of mesh refinement were performed in each model to observe the convergence of the results. The greatest strain was found on the cancellous bone, adjacent to the 3 most apical microthreads on the palatal side of the implant where tensile forces were created. The same strain distribution was observed around both the straight and angled abutments. After several iterations, the results converged to a value for the maximum first principal strain on the bone of both models, which was independent of element size. Most of the deformation occurred in the cancellous bone and ranged between 1000 and 3500 microstrain. Small areas of cancellous bone experienced strain above the physiologic limit (4000 microstrain). The model predicted a 15% higher maximum bone strain for the straight abutment compared with the angled abutment. The results converged after several iterations of mesh refinement, which confirmed the lack of dependence of the maximum strain at the implant-bone interface on mesh density. Most of the strain produced on the cancellous and cortical bone was within the range that has been reported to increase bone mass and mineralization.

  15. New developments in the diagnostics for the fusion products on JET in preparation for ITER (invited).

    PubMed

    Murari, A; Angelone, M; Bonheure, G; Cecil, E; Craciunescu, T; Darrow, D; Edlington, T; Ericsson, G; Gatu-Johnson, M; Gorini, G; Hellesen, C; Kiptily, V; Mlynar, J; Perez von Thun, C; Pillon, M; Popovichev, S; Syme, B; Tardocchi, M; Zoita, V L

    2010-10-01

    Notwithstanding the advances of the past decades, significant developments are still needed to satisfactorily diagnose “burning plasmas.” D–T plasmas indeed require a series of additional measurements for the optimization and control of the configuration: the 14 MeV neutrons, the isotopic composition of the main plasma, the helium ash, and the redistribution and losses of the alpha particles. Moreover a burning plasma environment is in general much more hostile for diagnostics than purely deuterium plasmas. Therefore, in addition to the development and refinement of new measuring techniques, technological advances are also indispensable for the proper characterization of the next generation of devices. On JET an integrated program of diagnostic developments, for JET future and in preparation for ITER, has been pursued and many new results are now available. In the field of neutron detection, the neutron spectra are now routinely measured in the energy range of 1–18 MeV by a time of flight spectrometer and they have allowed studying the effects of rf heating on the fast ions. A new analysis method for the interpretation of the neutron cameras measurements has been refined and applied to the data of the last trace tritium campaign (TTE). With regard to technological upgrades, chemical vapor deposition diamond detectors have been qualified both as neutron counters and as neutron spectrometers, with a potential energy resolution of about one percent. The in situ calibration of the neutron diagnostics, in preparation for the operation with the ITER-like wall, is also promoting important technological developments. With regard to the fast particles, for the first time the temperature of the fast particle tails has been obtained with a new high purity Germanium detector measuring the gamma emission spectrum from the plasma. The effects of toroidal Alfven eigenmodes and various MHD instabilities on the confinement of the fast particles have been determined with a combination of gamma ray cameras, neutral particle analyzers, scintillator probe, and Faraday cups. From a more technological perspective, various neutron filters have been tested to allow measurement of the gamma ray emission also at high levels of neutron yield.

  16. New developments in the diagnostics for the fusion products on JET in preparation for ITER (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murari, A.; Angelone, M.; Pillon, M.

    Notwithstanding the advances of the past decades, significant developments are still needed to satisfactorily diagnose "burning plasmas." D-T plasmas indeed require a series of additional measurements for the optimization and control of the configuration: the 14 MeV neutrons, the isotopic composition of the main plasma, the helium ash, and the redistribution and losses of the alpha particles. Moreover a burning plasma environment is in general much more hostile for diagnostics than purely deuterium plasmas. Therefore, in addition to the development and refinement of new measuring techniques, technological advances are also indispensable for the proper characterization of the next generation of devices. On JET an integrated program of diagnostic developments, for JET future and in preparation for ITER, has been pursued and many new results are now available. In the field of neutron detection, the neutron spectra are now routinely measured in the energy range of 1-18 MeV by a time of flight spectrometer and they have allowed studying the effects of rf heating on the fast ions. A new analysis method for the interpretation of the neutron cameras measurements has been refined and applied to the data of the last trace tritium campaign (TTE). With regard to technological upgrades, chemical vapor deposition diamond detectors have been qualified both as neutron counters and as neutron spectrometers, with a potential energy resolution of about one percent. The in situ calibration of the neutron diagnostics, in preparation for the operation with the ITER-like wall, is also promoting important technological developments. With regard to the fast particles, for the first time the temperature of the fast particle tails has been obtained with a new high purity Germanium detector measuring the gamma emission spectrum from the plasma. The effects of toroidal Alfven eigenmodes and various MHD instabilities on the confinement of the fast particles have been determined with a combination of gamma ray cameras, neutral particle analyzers, scintillator probe, and Faraday cups. From a more technological perspective, various neutron filters have been tested to allow measurement of the gamma ray emission also at high levels of neutron yield.

  17. Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings

    NASA Astrophysics Data System (ADS)

    Hussein Maibed, Zena

    2018-05-01

    In this paper, we introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive and general extended mappings. The existence of common fixed points for these processes in Hilbert spaces is also studied.
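
    The abstract does not reproduce the iteration scheme itself; as context, a generic Mann-type template commonly used for countable families of quasi-nonexpansive mappings in a Hilbert space is shown below. This is only the usual form of such processes, not necessarily the exact scheme introduced in the paper.

```latex
% Generic Mann-type iteration for a countable family (T_i) of quasi-nonexpansive
% (or general extended) mappings on a Hilbert space H; shown as a template only.
\[
  x_{n+1} \;=\; \alpha_n x_n \;+\; (1-\alpha_n)\sum_{i=1}^{\infty}\beta_{n,i}\,T_i x_n,
  \qquad \alpha_n \in (0,1),\quad \sum_{i=1}^{\infty}\beta_{n,i} = 1 .
\]
```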

  18. A multi-site cognitive task analysis for biomedical query mediation.

    PubMed

    Hruby, Gregory W; Rasmussen, Luke V; Hanauer, David; Patel, Vimla L; Cimino, James J; Weng, Chunhua

    2016-09-01

    To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: "Identify potential index phenotype," "If needed, request EHR database access rights," and "Perform query and present output to medical researcher", and 8 are invalid. We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. A Multi-Site Cognitive Task Analysis for Biomedical Query Mediation

    PubMed Central

    Hruby, Gregory W.; Rasmussen, Luke V.; Hanauer, David; Patel, Vimla; Cimino, James J.; Weng, Chunhua

    2016-01-01

    Objective To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. Materials and Methods We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. Results The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: “Identify potential index phenotype,” “If needed, request EHR database access rights,” and “Perform query and present output to medical researcher”, and 8 are invalid. Discussion We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. Conclusions We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. PMID:27435950

  20. Leveraging the NLM map from SNOMED CT to ICD-10-CM to facilitate adoption of ICD-10-CM.

    PubMed

    Cartagena, F Phil; Schaeffer, Molly; Rifai, Dorothy; Doroshenko, Victoria; Goldberg, Howard S

    2015-05-01

    Develop and test web services to retrieve and identify the most precise ICD-10-CM code(s) for a given clinical encounter. Facilitate creation of user interfaces that 1) provide an initial shortlist of candidate codes, ideally visible on a single screen; and 2) enable code refinement. To satisfy our high-level use cases, the analysis and design process involved reviewing available maps and crosswalks, designing the rule adjudication framework, determining necessary metadata, retrieving related codes, and iteratively improving the code refinement algorithm. The Partners ICD-10-CM Search and Mapping Services (PI-10 Services) are SOAP web services written using Microsoft's .NET 4.0 Framework, Windows Communications Framework, and SQL Server 2012. The services cover 96% of the Partners problem list subset of SNOMED CT codes that map to ICD-10-CM codes and can return up to 76% of the 69,823 billable ICD-10-CM codes prior to creation of custom mapping rules. We consider ways to increase 1) the coverage ratio of the Partners problem list subset of SNOMED CT codes and 2) the upper bound of returnable ICD-10-CM codes by creating custom mapping rules. Future work will investigate the utility of the transitive closure of SNOMED CT codes and other methods to assist in custom rule creation and, ultimately, to provide more complete coverage of ICD-10-CM codes. ICD-10-CM will be easier for clinicians to manage if applications display short lists of candidate codes from which clinicians can subsequently select a code for further refinement. The PI-10 Services support ICD-10 migration by implementing this paradigm and enabling users to consistently and accurately find the best ICD-10-CM code(s) without translation from ICD-9-CM. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Evolution of Tides and Tidal Dissipation Over the Past 26,000 Years Using a Multi-Scale Model of Global Barotropic Tides

    NASA Astrophysics Data System (ADS)

    Salehipour, H.; Peltier, W. R.

    2014-12-01

    In this paper we will describe the results obtained through integration of a further refined version of the truly global barotropic tidal model of Salehipour et al. (Ocean Modell., 69, 2013) using the most recent reconstruction of ice-age bathymetric conditions as embodied in the recently constructed ICE-6G_C (VM5a) model of Peltier et al. (JGR-Solid Earth, in press, 2014). Our interest is in the spatial and temporal evolution of tidal amplitude, phase and dissipation from the Last Glacial Maximum (LGM) 26,000 years ago until the present. The state-of-the-art higher order nonlinear tidal model of Salehipour et al. (2013) includes a highly parallelized multi-scale framework in which an unstructured tessellation of the global ocean enables extensive local refinement around regions of interest such as the Hawaiian Ridge, the Brazil Basin and the Southern Ocean. At LGM, features such as the Patagonian Shelf were fully exposed land which during the deglaciation process would have been flooded leading to significant changes of tidal range along the evolving coastline. In the further development of this model we have included the fully iterated treatment of the influence of gravitational self-attraction and loading as in, e.g. Egbert et al. (JGR-Oceans, 109, 2004). The treatment of the dissipation of the barotropic tide through dissipation of the internal tide has also been significantly improved. Our paleobathymetry and coastline data sets extend from LGM to present at 500 year intervals and constitute a significant refinement of the widely employed ICE-5G (VM2) model of Peltier (Annu. Rev. Earth Planet. Sci., 32, 2004). Our results will be compared with those recently published by Green & Nycander (JPO, 43, 2013) and Wilmes & Green (JGR-Oceans, 119, 2014) as well as with the earlier results of Griffiths & Peltier (GRL, 35, 2008; J. Clim., 22, 2009).

  2. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247

  3. Computer simulation of refining process of a high consistency disc refiner based on CFD

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Jianwei; Wang, Jiahui

    2017-08-01

    In order to reduce refining energy consumption, ANSYS CFX was used to simulate the refining process of a high-consistency disc refiner. First, the pulp in the disc refiner was assumed to be a uniform Newtonian fluid in a turbulent state described by the k-ɛ flow model; the 3-D model of the disc refiner was then meshed and the boundary conditions were set; the flow was then simulated and analyzed; finally, the viscosity of the pulp was measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, so as to calculate the refining power, and that streamlines and velocity vectors can also be observed. CFD simulation can optimize the parameters of the bars and grooves, which is of great significance for reducing experimental cost and cycle time.
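
    The post-processing step mentioned above (from the torque on the plate to the refining power) reduces to P = 2*pi*n*T; the short sketch below shows the calculation with assumed torque, speed, and throughput values.

```python
import math

# Refining power from CFD-reported torque: P = torque * angular speed.
# Torque, rotational speed, and throughput below are assumed example values.
torque_Nm = 1200.0          # torque on the rotor plate reported by the CFD run
speed_rpm = 1500.0          # disc rotational speed
throughput_t_per_h = 5.0    # oven-dry pulp throughput

omega = 2.0 * math.pi * speed_rpm / 60.0          # angular speed in rad/s
power_kW = torque_Nm * omega / 1000.0             # refining power
sre_kWh_per_t = power_kW / throughput_t_per_h     # specific refining energy
print(f"power = {power_kW:.1f} kW, specific energy = {sre_kWh_per_t:.1f} kWh/t")
```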

  4. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  5. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative methods and propose two mixed iterative algorithms. None of our algorithms requires prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
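
    To illustrate the difference between the parallel and cyclic iteration patterns, the sketch below applies them to metric projections onto half-spaces (projections onto closed convex sets are firmly nonexpansive, hence firmly quasi-nonexpansive). The specific sets, relaxation parameter, and iteration counts are assumptions of this illustration; the paper's algorithms address the more general split equality setting.

```python
import numpy as np

# Parallel vs. cyclic iterations using projections onto half-spaces
# {x : a.x <= b} as the firmly quasi-nonexpansive operators. Illustrative only.
halfspaces = [(np.array([1.0, 0.0]), 1.0),
              (np.array([0.0, 1.0]), 1.0),
              (np.array([1.0, 1.0]), 1.5)]

def project(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def parallel_iteration(x, lam=1.0, iters=100):
    for _ in range(iters):
        # relaxed average of all operator steps applied to the same iterate
        x = x + lam * np.mean([project(x, a, b) - x for a, b in halfspaces], axis=0)
    return x

def cyclic_iteration(x, iters=100):
    for _ in range(iters):
        for a, b in halfspaces:   # apply the operators one after another
            x = project(x, a, b)
    return x

x0 = np.array([5.0, 5.0])
print("parallel:", np.round(parallel_iteration(x0.copy()), 4))
print("cyclic  :", np.round(cyclic_iteration(x0.copy()), 4))
```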

  6. Refining a self-assessment of informatics competency scale using Mokken scaling analysis.

    PubMed

    Yoon, Sunmoo; Shaffer, Jonathan A; Bakken, Suzanne

    2015-01-01

    Healthcare environments are increasingly implementing health information technology (HIT) and those from various professions must be competent to use HIT in meaningful ways. In addition, HIT has been shown to enable interprofessional approaches to health care. The purpose of this article is to describe the refinement of the Self-Assessment of Nursing Informatics Competencies Scale (SANICS) using analytic techniques based upon item response theory (IRT) and discuss its relevance to interprofessional education and practice. In a sample of 604 nursing students, the 93-item version of SANICS was examined using non-parametric IRT. The iterative modeling procedure included 31 steps comprising: (1) assessing scalability, (2) assessing monotonicity, (3) assessing invariant item ordering, and (4) expert input. SANICS was reduced to an 18-item hierarchical scale with excellent reliability. Fundamental skills for team functioning and shared decision making among team members (e.g. "using monitoring systems appropriately," "describing general systems to support clinical care") had the highest level of difficulty, and "demonstrating basic technology skills" had the lowest difficulty level. Most items reflect informatics competencies relevant to all health professionals. Further, the approaches can be applied to construct a new hierarchical scale or refine an existing scale related to informatics attitudes or competencies for various health professions.

  7. Usability Testing of an Interactive Virtual Reality Distraction Intervention to Reduce Procedural Pain in Children and Adolescents With Cancer.

    PubMed

    Birnie, Kathryn A; Kulandaivelu, Yalinie; Jibb, Lindsay; Hroch, Petra; Positano, Karyn; Robertson, Simon; Campbell, Fiona; Abla, Oussama; Stinson, Jennifer

    2018-06-01

    Needle procedures are among the most distressing aspects of pediatric cancer-related treatment. Virtual reality (VR) distraction offers promise for needle-related pain and distress given its highly immersive and interactive virtual environment. This study assessed the usability (ease of use and understanding, acceptability) of a custom VR intervention for children with cancer undergoing implantable venous access device (IVAD) needle insertion. Three iterative cycles of mixed-method usability testing with semistructured interviews were undertaken to refine the VR. Participants included 17 children and adolescents (8-18 years old) with cancer who used the VR intervention prior to or during IVAD access. Most participants reported the VR as easy to use (82%) and understand (94%), and would like to use it during subsequent needle procedures (94%). Based on usability testing, refinements were made to VR hardware, software, and clinical implementation. Refinements focused on increasing responsiveness, interaction, and immersion of the VR program, reducing head movement for VR interaction, and enabling participant alerts to steps of the procedure by clinical staff. No adverse events of nausea or dizziness were reported. The VR intervention was deemed acceptable and safe. Next steps include assessing feasibility and effectiveness of the VR intervention for pain and distress.

  8. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction, performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi-type smoother is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaptation cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
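
    The smoothing/coarse-grid-correction pattern described above can be illustrated on a structured 1-D Poisson problem; the sketch below performs two-grid defect-correction cycles with a damped Jacobi smoother. The grid sizes, sweep counts, and damping factor are assumptions, and the paper's setting (unstructured triangular grids, Euler equations, multi-stage Jacobi) is considerably more involved.

```python
import numpy as np

# Two-grid defect correction for -u'' = f on (0,1) with a damped Jacobi smoother.
def poisson_matrix(n):
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, x, b, sweeps=3, omega=2.0 / 3.0):
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def restrict(r):                        # full weighting, fine -> coarse
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e_c, n_f):                  # linear interpolation, coarse -> fine
    e_f = np.zeros(n_f)
    e_f[1:-1:2] = e_c
    e_f[0:-2:2] += 0.5 * e_c
    e_f[2::2] += 0.5 * e_c
    return e_f

n_f = 63                                # 63 interior points on the fine grid
A_f, A_c = poisson_matrix(n_f), poisson_matrix((n_f - 1) // 2)
x_f = np.linspace(0, 1, n_f + 2)[1:-1]
b = np.pi**2 * np.sin(np.pi * x_f)      # right-hand side so that u = sin(pi x)

u = np.zeros(n_f)
for cycle in range(10):
    u = jacobi(A_f, u, b)                       # pre-smoothing
    r_c = restrict(b - A_f @ u)                 # restrict the residual (defect)
    e_c = np.linalg.solve(A_c, r_c)             # coarse-grid correction
    u = u + prolong(e_c, n_f)                   # prolongate and correct
    u = jacobi(A_f, u, b)                       # post-smoothing
    print(f"cycle {cycle}: residual norm = {np.linalg.norm(b - A_f @ u):.2e}")
```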

  9. Recent developments in the structural design and optimization of ITER neutral beam manifold

    NASA Astrophysics Data System (ADS)

    Chengzhi, CAO; Yudong, PAN; Zhiwei, XIA; Bo, LI; Tao, JIANG; Wei, LI

    2018-02-01

    This paper describes a new design of the neutral beam manifold based on a more optimized support system. An alternative scheme is proposed to replace the former complex manifold supports and internal pipe supports in the final design phase. Both the structural reliability and feasibility were confirmed with detailed analyses. Comparative analyses between two typical types of manifold support scheme were performed. All relevant results of mechanical analyses for typical operation scenarios and fault conditions are presented. Future optimization activities are described, which will give useful information for a refined arrangement of components in the next phase.

  10. Vortex breakdown simulation - A circumspect study of the steady, laminar, axisymmetric model

    NASA Technical Reports Server (NTRS)

    Salas, M. D.; Kuruvila, G.

    1989-01-01

    The incompressible axisymmetric steady Navier-Stokes equations are written using the streamfunction-vorticity formulation. The resulting equations are discretized using a second-order central-difference scheme. The discretized equations are linearized and then solved using an exact LU decomposition, Gaussian elimination, and Newton iteration. Solutions are presented for Reynolds numbers (based on vortex core radius) 100-1800 and swirl parameter 0.9-1.1. The effects of inflow boundary conditions, the location of farfield and outflow boundaries, and mesh refinement are examined. Finally, the stability of the steady solutions is investigated by solving the time-dependent equations.

  11. Language Evolution by Iterated Learning with Bayesian Agents

    ERIC Educational Resources Information Center

    Griffiths, Thomas L.; Kalish, Michael L.

    2007-01-01

    Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute…

  12. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high-resolution data supply. Within geoscience applications, and especially in the field of small-scale surface topography, high-resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing current states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. In this regard, the Iterative Closest Point (ICP) algorithm is an alignment tool which iteratively minimizes distances between corresponding points in two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. To this end, an agriculturally used field was surveyed twice, simultaneously, by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors (once covered with sparse vegetation and once as bare soil). Due to the different perspectives, the two data sets differ in their shadowed areas and thus gaps, so that merging them can provide a more consistent surface reconstruction. Although the photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of the registration, and therefore the data quality, depends particularly on the parameterization and the choice of error metric, especially for erroneous data sets as in the case of sparse vegetation cover. Here, the point-to-point metric is more sensitive to data "noise" than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. In conclusion, given the accuracy demands of high-resolution surface reconstruction and the fact that ground control surveys can reach their limits in both time expenditure and terrain accessibility, the ICP algorithm is a valuable tool for refining a rough initial alignment. Different variants of the registration modules allow the algorithm to be tailored to the quality of the input data.
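
    As an illustration of the sub-steps listed above, the following minimal Python sketch implements a basic point-to-point ICP loop (nearest-neighbour matching, quantile-based pair rejection, and an SVD-based rigid-transform fit). It is not the registration pipeline used in the study; the function names best_rigid_transform and icp_point_to_point, the rejection quantile and the convergence tolerance are illustrative assumptions.

      # Minimal point-to-point ICP sketch for two roughly pre-aligned N x 3 point clouds.
      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          # least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:            # guard against reflections
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          t = c_dst - R @ c_src
          return R, t

      def icp_point_to_point(moving, fixed, n_iter=50, rejection_quantile=0.9, tol=1e-6):
          tree = cKDTree(fixed)
          current = moving.copy()
          prev_err = np.inf
          for _ in range(n_iter):
              dist, idx = tree.query(current)                          # point matching
              keep = dist <= np.quantile(dist, rejection_quantile)     # reject the worst pairs
              R, t = best_rigid_transform(current[keep], fixed[idx[keep]])
              current = current @ R.T + t                              # minimization step
              err = dist[keep].mean()                                  # point-to-point error metric
              if abs(prev_err - err) < tol:
                  break
              prev_err = err
          return current, err

    Swapping the error metric for a point-to-plane variant additionally requires surface normals of the fixed cloud, which is exactly where the two metrics compared above differ.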

  13. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

    Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines prior information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network, and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank centralities. The Ing process converges in strongly connected networks, with a speed that depends on the first two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists at which node influence is best characterized. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
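
    The following short sketch is an interpretation of the description above, not the authors' code: a transformation matrix (here simply the adjacency matrix) repeatedly gathers neighbours' scores starting from a priori information (here a uniform vector); the matrix, prior and normalization actually used by the Ing algorithm may differ.

      import numpy as np

      def ing_scores(adjacency, prior=None, n_iter=5):
          # adjacency: n x n matrix of the (possibly weighted) network
          A = np.asarray(adjacency, dtype=float)
          n = A.shape[0]
          s = np.ones(n) / n if prior is None else np.asarray(prior, dtype=float)
          for _ in range(n_iter):
              s = A @ s                  # each node gathers its neighbours' current scores
              s /= np.linalg.norm(s)     # normalize to keep the iteration bounded
          return s

    With a uniform prior, a single iteration reduces to (normalized) degree, while letting the iteration run is the power method and approaches the principal eigenvector of the matrix, consistent with the degree and eigenvector-centrality limit cases noted above.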

  14. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied to the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient, mean and maximum distance errors. Accuracy and robustness were assessed from 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.

  15. Development and Evaluation of an Intuitive Operations Planning Process

    DTIC Science & Technology

    2006-03-01

    designed to be iterative and also prescribes the way in which iterations should occur. On the other hand, participants’ perceived level of trust and...

  16. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
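
    Galway's exact formulas are not given above, but the underlying fixed-point idea can be sketched generically as follows; the names, the simple linear calibration fit and the convergence test are illustrative assumptions rather than the balance data reduction actually used.

      import numpy as np

      def iterate_tare(outputs, applied, zero_outputs, n_iter=20, tol=1e-9):
          # outputs: calibration-point gage readings (n_points x n_gages)
          # applied: hand-applied loads (n_points x n_loads)
          # zero_outputs: gage readings taken with only the tare weight present (n_gages,)
          tare = np.zeros(applied.shape[1])
          for _ in range(n_iter):
              total = applied + tare                                  # current guess of the true loads
              C, *_ = np.linalg.lstsq(outputs, total, rcond=None)     # fit: total ~ outputs @ C
              new_tare = zero_outputs @ C                             # load implied by the zero readings
              if np.max(np.abs(new_tare - tare)) < tol:               # tare iteration has converged
                  return new_tare
              tare = new_tare
          return tare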

  17. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data is processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method does not need additional equipment and devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
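
    For orientation, the nine parameters of this kind of correction model are commonly a three-component offset plus a symmetric 3 x 3 scale/soft-iron matrix; the sketch below fits parameters of that form with a generic nonlinear least-squares solver so that the corrected readings have constant magnitude. It illustrates the parameterization only; the HHT pre-processing, sifter-based initialization and total least squares formulation of the method above are not reproduced.

      import numpy as np
      from scipy.optimize import least_squares

      def unpack(p):
          b = p[:3]                                   # hard-iron offset
          a11, a22, a33, a12, a13, a23 = p[3:]        # symmetric soft-iron/scale matrix
          A = np.array([[a11, a12, a13],
                        [a12, a22, a23],
                        [a13, a23, a33]])
          return b, A

      def residuals(p, m, field_norm=1.0):
          b, A = unpack(p)
          corrected = (m - b) @ A.T
          return np.linalg.norm(corrected, axis=1) - field_norm   # deviation from a constant field

      def calibrate(m):
          # m: N x 3 raw magnetometer readings collected over many orientations
          p0 = np.concatenate([np.zeros(3), [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]])
          sol = least_squares(residuals, p0, args=(m,))
          return unpack(sol.x)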

  18. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    PubMed

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The experiments showed a significant increase in performance in terms of mean absolute surface distance error (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Nested Krylov methods and preserving the orthogonality

    NASA Technical Reports Server (NTRS)

    Desturler, Eric; Fokkema, Diederik R.

    1993-01-01

    Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski and Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may re-enter the inner iteration process. Therefore we propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We will discuss some important properties, and we will show by experiments that, in terms of matrix-vector products, this modification (almost) always leads to better convergence. However, because we do more orthogonalizations, it does not always give an improved performance in CPU-time. Furthermore, we will discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course we can also use iteration schemes other than GMRES as the inner method; methods with short recurrences like Bi-CGSTAB are of interest.

  20. Integrated decontamination process for metals

    DOEpatents

    Snyder, Thomas S.; Whitlow, Graham A.

    1991-01-01

    An integrated process for decontamination of metals, particularly metals that are used in the nuclear energy industry contaminated with radioactive material. The process combines the processes of electrorefining and melt refining to purify metals that can be decontaminated using either electrorefining or melt refining processes.

  1. Robust Decision Making Approach to Managing Water Resource Risks (Invited)

    NASA Astrophysics Data System (ADS)

    Lempert, R.

    2010-12-01

    The IPCC and US National Academies of Science have recommended iterative risk management as the best approach for water management and many other types of climate-related decisions. Such an approach does not rely on a single set of judgments at any one time but rather actively updates and refines strategies as new information emerges. In addition, the approach emphasizes that a portfolio of different types of responses, rather than any single action, often provides the best means to manage uncertainty. Implementing an iterative risk management approach can however prove difficult in actual decision support applications. This talk will suggest that robust decision making (RDM) provides a particularly useful set of quantitative methods for implementing iterative risk management. This RDM approach is currently being used in a wide variety of water management applications. RDM employs three key concepts that differentiate it from most types of probabilistic risk analysis: 1) characterizing uncertainty with multiple views of the future (which can include sets of probability distributions) rather than a single probabilistic best-estimate, 2) employing a robustness rather than an optimality criterion to assess alternative policies, and 3) organizing the analysis with a vulnerability and response option framework, rather than a predict-then-act framework. This talk will summarize the RDM approach, describe its use in several different types of water management applications, and compare the results to those obtained with other methods.

  2. Not so Complex: Iteration in the Complex Plane

    ERIC Educational Resources Information Center

    O'Dell, Robin S.

    2014-01-01

    The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
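
    A linear iteration rule of the kind described can be sketched in a few lines: multiplying by a complex constant rotates and scales a point, so repeatedly applying z -> a*z + b and plotting the orbit traces a spiral toward the fixed point b/(1 - a) when |a| < 1 (the constants below are arbitrary choices for illustration).

      import cmath
      import matplotlib.pyplot as plt

      a = 0.95 * cmath.exp(1j * cmath.pi / 12)    # rotate by 15 degrees, shrink by 5 percent
      b = 1.0 + 0.0j
      z = 0.0 + 0.0j
      orbit = [z]
      for _ in range(200):
          z = a * z + b                           # the linear iteration rule
          orbit.append(z)

      plt.plot([w.real for w in orbit], [w.imag for w in orbit], marker=".")
      plt.gca().set_aspect("equal")
      plt.title("Orbit of z -> a*z + b")
      plt.show()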

  3. Developing Conceptual Understanding and Procedural Skill in Mathematics: An Iterative Process.

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Siegler, Robert S.; Alibali, Martha Wagner

    2001-01-01

    Proposes that conceptual and procedural knowledge develop in an iterative fashion and improved problem representation is one mechanism underlying the relations between them. Two experiments were conducted with 5th and 6th grade students learning about decimal fractions. Results indicate conceptual and procedural knowledge do develop, iteratively,…

  4. Integrating Theory and Practice: Applying the Quality Improvement Paradigm to Product Line Engineering

    NASA Technical Reports Server (NTRS)

    Stark, Michael; Hennessy, Joseph F. (Technical Monitor)

    2002-01-01

    My assertion is that not only are product lines a relevant research topic, but that the tools used by empirical software engineering researchers can address observed practical problems. Our experience at NASA has been there are often externally proposed solutions available, but that we have had difficulties applying them in our particular context. We have also focused on return on investment issues when evaluating product lines, and while these are important, one can not attain objective data on success or failure until several applications from a product family have been deployed. The use of the Quality Improvement Paradigm (QIP) can address these issues: (1) Planning an adoption path from an organization's current state to a product line approach; (2) Constructing a development process to fit the organization's adoption path; (3) Evaluation of product line development processes as the project is being developed. The QIP consists of the following six steps: (1) Characterize the project and its environment; (2) Set quantifiable goals for successful project performance; (3) Choose the appropriate process models, supporting methods, and tools for the project; (4) Execute the process, analyze interim results, and provide real-time feedback for corrective action; (5) Analyze the results of completed projects and recommend improvements; and (6) Package the lessons learned as updated and refined process models. A figure shows the QIP in detail. The iterative nature of the QIP supports an incremental development approach to product lines, and the project learning and feedback provide the necessary early evaluations.

  5. Reproducible research in palaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined, then saved as a self-contained configuration which can be re-run without human interaction. PuffinPlot can thus be used as a component of a larger scientific workflow, integrated with workflow management tools such as Kepler, without compromising its capabilities as an exploratory tool. Since both PuffinPlot and the platform it runs on (Java) are Free/Open Source software, even the most fundamental components of an analysis can be verified and reproduced.

  6. A Putative Multiple-Demand System in the Macaque Brain.

    PubMed

    Mitchell, Daniel J; Bell, Andrew H; Buckley, Mark J; Mitchell, Anna S; Sallet, Jerome; Duncan, John

    2016-08-17

    In humans, cognitively demanding tasks of many types recruit common frontoparietal brain areas. Pervasive activation of this "multiple-demand" (MD) network suggests a core function in supporting goal-oriented behavior. A similar network might therefore be predicted in nonhuman primates that readily perform similar tasks after training. However, an MD network in nonhuman primates has not been described. Single-cell recordings from macaque frontal and parietal cortex show some similar properties to human MD fMRI responses (e.g., adaptive coding of task-relevant information). Invasive recordings, however, come from limited prespecified locations, so they do not delineate a macaque homolog of the MD system and their positioning could benefit from knowledge of where MD foci lie. Challenges of scanning behaving animals mean that few macaque fMRI studies specifically contrast levels of cognitive demand, so we sought to identify a macaque counterpart to the human MD system using fMRI connectivity in 35 rhesus macaques. Putative macaque MD regions, mapped from frontoparietal MD regions defined in humans, were found to be functionally connected under anesthesia. To further refine these regions, an iterative process was used to maximize their connectivity cross-validated across animals. Finally, whole-brain connectivity analyses identified voxels that were robustly connected to MD regions, revealing seven clusters across frontoparietal and insular cortex comparable to human MD regions and one unexpected cluster in the lateral fissure. The proposed macaque MD regions can be used to guide future electrophysiological investigation of MD neural coding and in task-based fMRI to test predictions of similar functional properties to human MD cortex. In humans, a frontoparietal "multiple-demand" (MD) brain network is recruited during a wide range of cognitively demanding tasks. Because this suggests a fundamental function, one might expect a similar network to exist in nonhuman primates, but this remains controversial. Here, we sought to identify a macaque counterpart to the human MD system using fMRI connectivity. Putative macaque MD regions were functionally connected under anesthesia and were further refined by iterative optimization. The result is a network including lateral frontal, dorsomedial frontal, and insular and inferior parietal regions closely similar to the human counterpart. The proposed macaque MD regions can be useful in guiding electrophysiological recordings or in task-based fMRI to test predictions of similar functional properties to human MD cortex. Copyright © 2016 Mitchell et al.

  7. Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation

    NASA Astrophysics Data System (ADS)

    Litaker, Eric T.

    1994-12-01

    The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
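
    As a generic illustration of the relaxation step described (not the FVE stencils or the cylindrical geometry of the study), the sketch below applies Gauss-Seidel sweeps to one backward-Euler step of the one-dimensional heat equation on a uniform grid with fixed boundary values; a multilevel method accelerates exactly this kind of smoothing-limited iteration.

      import numpy as np

      def backward_euler_step_gs(u_old, alpha, dt, dx, sweeps=200, tol=1e-10):
          # solves (1 + 2r) u_i - r (u_{i-1} + u_{i+1}) = u_old_i, with r = alpha dt / dx^2,
          # by Gauss-Seidel relaxation; boundary values are held fixed
          r = alpha * dt / dx**2
          u = u_old.astype(float).copy()             # initial guess for the new time level
          for _ in range(sweeps):
              max_change = 0.0
              for i in range(1, len(u) - 1):
                  new = (u_old[i] + r * (u[i - 1] + u[i + 1])) / (1.0 + 2.0 * r)
                  max_change = max(max_change, abs(new - u[i]))
                  u[i] = new
              if max_change < tol:                   # relaxation has converged
                  break
          return u

      # usage sketch: a spike of heat diffusing on [0, 1]
      x = np.linspace(0.0, 1.0, 51)
      u0 = np.exp(-200.0 * (x - 0.5) ** 2)
      u1 = backward_euler_step_gs(u0, alpha=1.0, dt=1e-3, dx=x[1] - x[0])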

  8. Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning

    PubMed Central

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent

    2018-01-01

    Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer’s disease and Parkinson’s disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. PMID:28551556
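
    The conventional GTL step that pGTL builds on can be sketched as standard graph-based label propagation; this is the baseline idea only, not the pGTL refinement of the graph itself, and the similarity matrix W, the mixing parameter alpha and the clamping scheme are illustrative assumptions.

      import numpy as np

      def label_propagation(W, Y, labeled_mask, alpha=0.9, n_iter=100):
          # W: n x n non-negative subject-similarity matrix built from imaging features
          # Y: n x c one-hot labels for training subjects, zero rows for test subjects
          d = W.sum(axis=1) + 1e-12                   # node degrees (guard against isolates)
          S = W / np.sqrt(np.outer(d, d))             # symmetrically normalized graph
          F = Y.astype(float).copy()
          for _ in range(n_iter):
              F = alpha * (S @ F) + (1 - alpha) * Y   # spread label mass along graph edges
              F[labeled_mask] = Y[labeled_mask]       # clamp the known training labels
          return F.argmax(axis=1)                     # predicted class per subject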

  9. A Simple Iterative Model Accurately Captures Complex Trapline Formation by Bumblebees Across Spatial Scales and Flower Arrangements

    PubMed Central

    Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
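
    The published learning rule is not reproduced here, but the flavour of an iterative improvement heuristic for traplining can be sketched as follows: on each simulated foraging bout a random change to the visit order is proposed (here a segment reversal) and kept when it shortens the total travel distance; the move type and bout count are illustrative choices.

      import numpy as np

      def route_length(order, xy):
          pts = xy[order]
          return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

      def improve_trapline(xy, n_bouts=500, seed=0):
          # xy: n x 2 flower coordinates
          rng = np.random.default_rng(seed)
          order = rng.permutation(len(xy))
          best = route_length(order, xy)
          for _ in range(n_bouts):                      # one proposed change per foraging bout
              i, j = sorted(rng.choice(len(xy), size=2, replace=False))
              trial = order.copy()
              trial[i:j + 1] = trial[i:j + 1][::-1]     # reverse a segment of the route
              length = route_length(trial, xy)
              if length < best:                         # keep changes that shorten travel
                  order, best = trial, length
          return order, best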

  10. Developing and Pilot Testing a Spanish Translation of CollaboRATE for Use in the United States.

    PubMed

    Forcino, Rachel C; Bustamante, Nitzy; Thompson, Rachel; Percac-Lima, Sanja; Elwyn, Glyn; Pérez-Arechaederra, Diana; Barr, Paul J

    2016-01-01

    Given the need for access to patient-facing materials in multiple languages, this study aimed to develop and pilot test an accurate and understandable translation of CollaboRATE, a three-item patient-reported measure of shared decision-making, for Spanish-speaking patients in the United States (US). We followed the Translate, Review, Adjudicate, Pre-test, Document (TRAPD) survey translation protocol. Cognitive interviews were conducted with Spanish-speaking adults within an urban Massachusetts internal medicine clinic. For the pilot test, all patients with weekday appointments between May 1 and May 29, 2015 were invited to complete CollaboRATE in either English or Spanish upon exit. We calculated the proportion of respondents giving the best score possible on CollaboRATE and compared scores across key patient subgroups. Four rounds of cognitive interviews with 26 people were completed between January and April 2015. Extensive, iterative refinements to survey items between interview rounds led to final items that were generally understood by participants with diverse educational backgrounds. Pilot data collection achieved an overall response rate of 73 percent, with 606 (49%) patients completing Spanish CollaboRATE questionnaires and 624 (51%) patients completing English CollaboRATE questionnaires. The proportion of respondents giving the best score possible on CollaboRATE was the same (86%) for both the English and Spanish versions of the instrument. Our translation method, guided by emerging best practices in survey and health measurement translation, encompassed multiple levels of review. By conducting four rounds of cognitive interviews with iterative item refinement between each round, we arrived at a Spanish language version of CollaboRATE that was understandable to a majority of cognitive interview participants and was completed by more than 600 pilot questionnaire respondents.

  11. A protocol for the development of Mediterranean climate services based on the experiences of the CLIM-RUN case studies

    NASA Astrophysics Data System (ADS)

    Goodess, Clare; Ruti, Paolo; Rousset, Nathalie

    2014-05-01

    During the closing stages of the CLIM-RUN EU FP7 project on Climate Local Information in the Mediterranean region Responding to User Needs, the real-world experiences encountered by the case-study teams are being assessed and synthesised to identify examples of good practice and, in particular, to produce the CLIM-RUN protocol for the development of Mediterranean climate services. The specific case studies have focused on renewable energy (Morocco, Spain, Croatia, Cyprus), tourism (Savoie, Tunisia, Croatia, Cyprus) and wild fires (Greece) as well as one cross-cutting case study (Veneto region). They have been implemented following a common programme of local workshops, questionnaires and interviews, with Climate Expert Team and Stakeholder Expert Team members collaborating to identify and translate user needs and subsequently develop climate products and information. Feedback from stakeholders has been essential in assessing and refining these products. The protocol covers the following issues: the overall process and methodological key stages; identification and selection of stakeholders; communication with stakeholders; identification of user needs; translation of needs; producing products; assessing and refining products; methodologies for evaluating the economic value of climate services; and beyond CLIM-RUN - the lessons learnt. Particular emphasis is given to stakeholder analysis in the context of the participatory, bottom-up approach promoted by CLIM-RUN and to the iterative approach taken in the development of climate products. Recommendations are also made for an envisioned three-tier business model for the development of climate services involving climate, intermediary and stakeholder tiers.

  12. Adapting Poisson-Boltzmann to the self-consistent mean field theory: Application to protein side-chain modeling

    NASA Astrophysics Data System (ADS)

    Koehl, Patrice; Orland, Henri; Delarue, Marc

    2011-08-01

    We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein that accounts for vdW and electrostatic interactions and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of one hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
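
    The self-consistency loop itself can be sketched schematically as below; the PB solvation term, the variational free energy and the actual energy functions of the method are not included, and E_self (per-copy internal energies) and E_pair (copy-copy interaction energies) are assumed to be precomputed.

      import numpy as np

      def scmf(E_self, E_pair, kT=0.6, n_iter=100, tol=1e-6):
          # E_self[i][r]: energy of copy r at residue i
          # E_pair[i][j]: matrix of interaction energies between copies of residues i and j
          n_res = len(E_self)
          w = [np.full(len(E), 1.0 / len(E)) for E in E_self]     # uniform initial weights
          for _ in range(n_iter):
              max_shift = 0.0
              for i in range(n_res):
                  E_mf = E_self[i].copy()                         # mean-field energy of each copy,
                  for j in range(n_res):                          # averaged over the other residues' weights
                      if j != i:
                          E_mf += E_pair[i][j] @ w[j]
                  new_w = np.exp(-(E_mf - E_mf.min()) / kT)
                  new_w /= new_w.sum()                            # Boltzmann-normalized weights
                  max_shift = max(max_shift, float(np.abs(new_w - w[i]).max()))
                  w[i] = new_w
              if max_shift < tol:                                 # self-consistency reached
                  break
          return [int(np.argmax(wi)) for wi in w]                 # highest-weight copy per residue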

  13. Rapid reconstruction of 3D neuronal morphology from light microscopy images with augmented rayburst sampling.

    PubMed

    Ming, Xing; Li, Anan; Wu, Jingpeng; Yan, Cheng; Ding, Wenxiang; Gong, Hui; Zeng, Shaoqun; Liu, Qian

    2013-01-01

    Digital reconstruction of three-dimensional (3D) neuronal morphology from light microscopy images provides a powerful technique for analysis of neural circuits. It is time-consuming to manually perform this process. Thus, efficient computer-assisted approaches are preferable. In this paper, we present an innovative method for the tracing and reconstruction of 3D neuronal morphology from light microscopy images. The method uses a prediction and refinement strategy that is based on exploration of local neuron structural features. We extended the rayburst sampling algorithm to a marching fashion, which starts from a single or a few seed points and marches recursively forward along neurite branches to trace and reconstruct the whole tree-like structure. A local radius-related but size-independent hemispherical sampling was used to predict the neurite centerline and detect branches. Iterative rayburst sampling was performed in the orthogonal plane, to refine the centerline location and to estimate the local radius. We implemented the method in a cooperative 3D interactive visualization-assisted system named flNeuronTool. The source code in C++ and the binaries are freely available at http://sourceforge.net/projects/flneurontool/. We validated and evaluated the proposed method using synthetic data and real datasets from the Digital Reconstruction of Axonal and Dendritic Morphology (DIADEM) challenge. Then, flNeuronTool was applied to mouse brain images acquired with the Micro-Optical Sectioning Tomography (MOST) system, to reconstruct single neurons and local neural circuits. The results showed that the system achieves a reasonable balance between fast speed and acceptable accuracy, which is promising for interactive applications in neuronal image analysis.

  14. Development of a universal approach to increase physical activity among adolescents: the GoActive intervention

    PubMed Central

    Corder, Kirsten; Schiff, Annie; Kesten, Joanna M; van Sluijs, Esther M F

    2015-01-01

    Objectives To develop a physical activity (PA) promotion intervention for adolescents using a process addressing gaps in the literature while considering participant engagement. We describe the initial development stages; (1) existing evidence, (2) large scale opinion gathering and (3) developmental qualitative work, aiming (A) to gain insight into how to increase PA among the whole of year 9 (13–14 years-old) by identifying elements for intervention inclusion (B) to improve participant engagement and (C) to develop and refine programme design. Methods Relevant systematic reviews and longitudinal analyses of change were examined. An intervention was developed iteratively with older adolescents (17.3±0.5 years) and teachers, using the following process: (1) focus groups with (A) adolescents (n=26) and (B) teachers (n=4); (2) individual interviews (n=5) with inactive and shy adolescents focusing on engagement and programme acceptability. Qualitative data were analysed thematically. Results Limitations of the existing literature include lack of evidence on whole population approaches, limited adolescent involvement in intervention development, and poor participant engagement. Qualitative work suggested six themes which may encourage adolescents to do more PA; choice, novelty, mentorship, competition, rewards and flexibility. Teachers discussed time pressures as a barrier to encouraging adolescent PA and suggested between-class competition as a strategy. GoActive aims to increase PA through increased peer support, self-efficacy, group cohesion, self-esteem and friendship quality, and is implemented in tutor groups using a student-led tiered-leadership system. Conclusions We have followed an evidence-based iterative approach to translate existing evidence into an adolescent PA promotion intervention. Qualitative work with adolescents and teachers supported intervention design and addressed lack of engagement with health promotion programmes within this age group. Future work will examine the feasibility and effectiveness of GoActive to increase PA among adolescents while monitoring potential negative effects. The approach developed is applicable to other population groups and health behaviours. Trial registration number ISRCTN31583496. PMID:26307618

  15. A systematic review of patient safety in mental health: a protocol based on the inpatient setting.

    PubMed

    D'Lima, Danielle; Archer, Stephanie; Thibaut, Bethan Ines; Ramtale, Sonny Christian; Dewa, Lindsay H; Darzi, Ara

    2016-11-29

    Despite the growing international interest in patient safety as a discipline, there has been a lack of exploration of its application to mental health. It cannot be assumed that findings based upon physical health in acute care hospitals can be applied to mental health patients, disorders and settings. To the authors' knowledge, there has only been one review of the literature that focuses on patient safety research in mental health settings, conducted in Canada in 2008. We have identified a need to update this review and develop the methodology in order to strengthen the findings and disseminate internationally for advancement in the field. This systematic review will explore the existing research base on patient safety in mental health within the inpatient setting. To conduct this systematic review, a thorough search across multiple databases will be undertaken, based upon four search facets ("mental health", "patient safety", "research" and "inpatient setting"). The search strategy has been developed based upon the Canadian review accompanied with input from the National Reporting and Learning System (NRLS) taxonomy of patient safety incidents and the Diagnostic and Statistical Manual of Mental Disorders (fifth edition). The screening process will involve perspectives from at least two researchers at all stages with a third researcher invited to review when discrepancies require resolution. Initial inclusion and exclusion criteria have been developed and will be refined iteratively throughout the process. Quality assessment and data extraction of included articles will be conducted by at least two researchers. A data extraction form will be developed, piloted and iterated as necessary in accordance with the research question. Extracted information will be analysed thematically. We believe that this systematic review will make a significant contribution to the advancement of patient safety in mental health inpatient settings. The findings will enable the development and implementation of interventions to improve the quality of care experienced by patients and support the identification of future research priorities. PROSPERO CRD42016034057.

  16. Status of the ITER Cryodistribution

    NASA Astrophysics Data System (ADS)

    Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.

    2017-12-01

    Since the conceptual design of the ITER Cryodistribution, many modifications have been applied, driven both by system optimization and by improved knowledge of the clients' requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; in addition, the removal of one cold box was made possible by standardizing the component arrangement. Another cold box, planned for redundancy, has been removed due to a modification of the Tokamak in-Cryostat piping layout. In this proceeding we summarize the present design status and component configuration of the ITER Cryodistribution, with all implemented changes aimed at process optimization and simplification as well as operational reliability, stability and flexibility.

  17. A High Order, Locally-Adaptive Method for the Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Chan, Daniel

    1998-11-01

    I have extended the FOSLS method of Cai, Manteuffel and McCormick (1997) and implemented it within the framework of a spectral element formulation using Legendre polynomial basis functions. The FOSLS method solves the Navier-Stokes equations as a system of coupled first-order equations and provides the ellipticity that is needed for fast iterative matrix solvers like multigrid to operate efficiently. Each element is treated as an object and its properties are self-contained. Only C^0 continuity is imposed across element interfaces; this design allows local grid refinement and coarsening without the burden of having an elaborate data structure, since only information along element boundaries is needed. With the FORTRAN 90 programming environment, I can maintain a high computational efficiency by employing a hybrid parallel processing model. The OpenMP directives provide loop-level parallelism executed on a shared-memory SMP, and the MPI protocol allows the distribution of elements across a cluster of SMPs connected via a commodity network. This talk will provide timing results and a comparison with a second-order finite difference method.

  18. From bricks to buildings: adapting the Medical Research Council framework to develop programs of research in simulation education and training for the health professions.

    PubMed

    Haji, Faizal A; Da Silva, Celina; Daigle, Delton T; Dubrowski, Adam

    2014-08-01

    Presently, health care simulation research is largely conducted on a study-by-study basis. Although such "project-based" research generates a plethora of evidence, it can be chaotic and contradictory. A move toward sustained, thematic, theory-based programs of research is necessary to advance knowledge in the field. Recognizing that simulation is a complex intervention, we present a framework for developing research programs in simulation-based education adapted from the Medical Research Council (MRC) guidance. This framework calls for an iterative approach to developing, refining, evaluating, and implementing simulation interventions. The adapted framework guidance emphasizes: (1) identification of theory and existing evidence; (2) modeling and piloting interventions to clarify active ingredients and identify mechanisms linking the context, intervention, and outcomes; and (3) evaluation of intervention processes and outcomes in both the laboratory and real-world setting. The proposed framework will aid simulation researchers in developing more robust interventions that optimize simulation-based education and advance our understanding of simulation pedagogy.

  19. Depth-color fusion strategy for 3-D scene modeling with Kinect.

    PubMed

    Camplani, Massimo; Mantecon, Tomas; Salgado, Luis

    2013-12-01

    Low-cost depth cameras, such as Microsoft Kinect, have completely changed the world of human-computer interaction through controller-free gaming applications. Depth data provided by the Kinect sensor presents several noise-related problems that have to be tackled to improve the accuracy of the depth data, thus obtaining more reliable game control platforms and broadening its applicability. In this paper, we present a depth-color fusion strategy for 3-D modeling of indoor scenes with Kinect. Accurate depth and color models of the background elements are iteratively built, and used to detect moving objects in the scene. Kinect depth data is processed with an innovative adaptive joint-bilateral filter that efficiently combines depth and color by analyzing an edge-uncertainty map and the detected foreground regions. Results show that the proposed approach efficiently tackles the main Kinect data problems: distance-dependent depth maps, spatial noise, and temporal random fluctuations are dramatically reduced; object depth boundaries are refined, and non-measured depth pixels are interpolated. Moreover, a robust depth and color background model and accurate moving object silhouettes are generated.
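
    The adaptive filter above (with its edge-uncertainty map and foreground handling) is not reproduced here, but the joint-bilateral idea it builds on can be sketched as follows: depth values are averaged with spatial weights multiplied by range weights computed from the registered color (here grayscale) image, so the smoothing respects color edges, and non-measured (zero) depth pixels are filled from valid neighbours. The window radius and the two sigmas are illustrative settings.

      import numpy as np

      def joint_bilateral_depth(depth, gray, radius=4, sigma_s=3.0, sigma_r=0.1):
          # depth: H x W map with zeros where no measurement is available
          # gray: registered grayscale image scaled to [0, 1]
          h, w = depth.shape
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
          out = np.zeros_like(depth, dtype=float)
          for y in range(radius, h - radius):            # plain loops: slow but explicit
              for x in range(radius, w - radius):
                  d_patch = depth[y - radius:y + radius + 1, x - radius:x + radius + 1]
                  g_patch = gray[y - radius:y + radius + 1, x - radius:x + radius + 1]
                  rng_w = np.exp(-((g_patch - gray[y, x]) ** 2) / (2.0 * sigma_r**2))
                  weights = spatial * rng_w * (d_patch > 0)     # ignore non-measured pixels
                  if weights.sum() > 0:
                      out[y, x] = (weights * d_patch).sum() / weights.sum()
          return out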

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koniges, A.E.; Craddock, G.G.; Schnack, D.D.

    The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computation areas, for discussion regarding the issues of dynamically adaptive gridding. There were three invited talks related to adaptive gridding application experiences in various related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that uniformly disperses the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted tokamak SOL modeling and MHD simulation problems related to the highest-priority ITER-relevant issues. Individual papers are indexed separately in the energy databases.

  1. Glycidyl fatty acid esters in refined edible oils: A review on formation, occurrence, analysis, and elimination methods

    USDA-ARS?s Scientific Manuscript database

    Glycidyl fatty acid esters (GEs), one of the main contaminants in processed oil, are mainly formed during the deodorization step in the oil refining process of edible oils and therefore occur in almost all refined edible oils. GEs are potential carcinogens, due to the fact that they hydrolyze into t...

  2. Airborne Hyperspectral Imaging of Seagrass and Coral Reef

    NASA Astrophysics Data System (ADS)

    Merrill, J.; Pan, Z.; Mewes, T.; Herwitz, S.

    2013-12-01

    This talk presents the process of project preparation, airborne data collection, data pre-processing and comparative analysis of a series of airborne hyperspectral projects focused on the mapping of seagrass and coral reef communities in the Florida Keys. As part of a series of large collaborative projects funded by the NASA ROSES program and the Florida Fish and Wildlife Conservation Commission and administered by the NASA UAV Collaborative, a series of airborne hyperspectral datasets were collected over six sites in the Florida Keys in May 2012, October 2012 and May 2013 by Galileo Group, Inc. using a manned Cessna 172 and NASA's SIERRA Unmanned Aerial Vehicle. Precise solar and tidal data were used to calculate airborne collection parameters and develop flight plans designed to optimize data quality. Two independent Visible and Near-Infrared (VNIR) hyperspectral imaging systems covering 400-1000 nm were used to collect imagery over six Areas of Interest (AOIs). Multiple collections were performed over all sites across strict solar windows in the mornings and afternoons. Independently developed pre-processing algorithms were employed to radiometrically correct, synchronize and georectify individual flight lines, which were then combined into color-balanced mosaics for each Area of Interest. The use of two different hyperspectral sensors as well as environmental variations between collections allows for the comparative analysis of data quality as well as the iterative refinement of flight planning and collection parameters.

  3. Exploiting parallel computing with limited program changes using a network of microcomputers

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.

    1985-01-01

    Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.

  4. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE PAGES

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...

    2016-10-14

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
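
    A small sketch of the binned Debye sum described above follows: pair distances are histogrammed per atom-type pair and the intensity is then accumulated from bin populations rather than from individual pairs. Constant (q-independent) scattering factors and the bin width are simplifying assumptions made here for illustration.

      import numpy as np
      from itertools import combinations_with_replacement

      def debye_binned(coords, types, f, q, bin_width=0.05):
          # coords: N x 3 atomic positions; types: length-N atom-type labels
          # f: dict mapping atom type to a constant scattering factor; q: 1-D array of q values
          coords = np.asarray(coords, dtype=float)
          types = np.asarray(types)
          q = np.asarray(q, dtype=float)
          intensity = sum(f[t] ** 2 for t in types) * np.ones_like(q)   # self (i = j) terms
          for a, b in combinations_with_replacement(sorted(set(types)), 2):
              ia = np.where(types == a)[0]
              ib = np.where(types == b)[0]
              d = np.linalg.norm(coords[ia][:, None, :] - coords[ib][None, :, :], axis=-1)
              d = d[np.triu_indices(len(ia), k=1)] if a == b else d.ravel()
              if d.size == 0:
                  continue
              counts, edges = np.histogram(d, bins=np.arange(0.0, d.max() + 2 * bin_width, bin_width))
              r = 0.5 * (edges[:-1] + edges[1:])          # representative distance of each bin
              # sin(q r)/(q r) summed over bins, weighted by bin populations
              intensity += 2.0 * f[a] * f[b] * (counts * np.sinc(np.outer(q, r) / np.pi)).sum(axis=1)
          return intensity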

  5. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
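
    The fine-localization phase can be illustrated with a minimal NumPy sketch of the minimum-residual partitioning step, under an assumed log-distance path-loss model with known access-point positions; the compressive-sensing coarse phase and the least-squares step of the paper are not reproduced here:

      import numpy as np

      def rss_model(pos, aps, p0=-30.0, n=3.0):
          """Assumed log-distance path-loss model: RSS = p0 - 10*n*log10(d)."""
          d = np.linalg.norm(aps - pos, axis=1)
          return p0 - 10.0 * n * np.log10(np.maximum(d, 1e-3))

      def refine_in_grid(rss_meas, aps, cell_center, cell_size, levels=4, splits=3):
          """Minimum-residual refinement: repeatedly partition the candidate grid
          and keep the sub-cell whose predicted RSS best matches the measurements."""
          center, size = np.asarray(cell_center, float), float(cell_size)
          for _ in range(levels):
              offsets = (np.arange(splits) - (splits - 1) / 2) * (size / splits)
              candidates = np.array([[center[0] + dx, center[1] + dy]
                                     for dx in offsets for dy in offsets])
              residuals = [np.sum((rss_meas - rss_model(c, aps)) ** 2) for c in candidates]
              center = candidates[int(np.argmin(residuals))]
              size /= splits
          return center

      # Hypothetical scenario: 4 access points and noisy RSS readings from a target.
      aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      rss = rss_model(np.array([6.3, 2.7]), aps) + np.random.default_rng(1).normal(0, 0.5, 4)
      print(refine_in_grid(rss, aps, cell_center=[5.0, 5.0], cell_size=10.0))  # near (6.3, 2.7)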

  6. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. This algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.

  8. Quantification of 2D elemental distribution maps of intermediate-thick biological sections by low energy synchrotron μ-X-ray fluorescence spectrometry

    NASA Astrophysics Data System (ADS)

    Kump, P.; Vogel-Mikuš, K.

    2018-05-01

    Two fundamental-parameter (FP) based models for quantification of 2D elemental distribution maps of intermediate-thick biological samples by synchrotron low energy μ-X-ray fluorescence spectrometry (SR-μ-XRF) are presented and applied to elemental analysis in experiments with monochromatic focused photon beam excitation at two low energy X-ray fluorescence beamlines—TwinMic, Elettra Sincrotrone Trieste, Italy, and ID21, ESRF, Grenoble, France. The models assume intermediate-thick biological samples composed of the measured elements, which are the sources of the measurable spectral lines, and of a residual matrix, which affects the measured intensities through absorption. In the first model a fixed residual matrix of the sample is assumed, while in the second model the residual matrix is obtained by iterative refinement of the elemental concentrations and an adjusted residual matrix. The absorption of the incident focused beam in the biological sample at each scanned pixel position, determined from the output of a photodiode or a CCD camera, is applied as a control in the iterative quantification procedure.

  9. Finite volume multigrid method of the planar contraction flow of a viscoelastic fluid

    NASA Astrophysics Data System (ADS)

    Moatssime, H. Al; Esselaoui, D.; Hakim, A.; Raghay, S.

    2001-08-01

    This paper reports on a numerical algorithm for the steady flow of a viscoelastic fluid. The conservative and constitutive equations are solved using the finite volume method (FVM) with a hybrid scheme for the velocities and a first-order upwind approximation for the viscoelastic stress. A non-uniform staggered grid system is used. The iterative SIMPLE algorithm is employed to relax the coupled momentum and continuity equations. The non-linear algebraic equations over the flow domain are solved iteratively by the symmetrical coupled Gauss-Seidel (SCGS) method. In both cases, the full approximation storage (FAS) multigrid algorithm is used. An Oldroyd-B fluid model was selected for the calculations. Results are reported for a planar 4:1 abrupt contraction at various Weissenberg numbers. The solutions are found to be stable and smooth, and they show that at high Weissenberg numbers the computational domain must be sufficiently long. The convergence of the method has been verified with grid refinement. All the calculations were performed on a PC equipped with a Pentium III processor at 550 MHz.
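
    For reference, the sketch below shows a plain lexicographic Gauss-Seidel sweep on a 2D Poisson model problem — a generic illustration of the point-iterative relaxation used as a smoother inside an FAS multigrid cycle, not the symmetrical coupled momentum/continuity (SCGS) relaxation of the paper:

      import numpy as np

      def gauss_seidel(u, f, h, sweeps=1):
          """Gauss-Seidel sweeps for -laplace(u) = f on a unit square,
          with homogeneous Dirichlet boundaries (generic smoother sketch)."""
          for _ in range(sweeps):
              for i in range(1, u.shape[0] - 1):
                  for j in range(1, u.shape[1] - 1):
                      u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                        u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
          return u

      n = 33
      h = 1.0 / (n - 1)
      u = np.zeros((n, n))          # initial guess, zero on the boundary
      f = np.ones((n, n))           # constant source term
      u = gauss_seidel(u, f, h, sweeps=50)
      print(u[n // 2, n // 2])      # centre value after 50 smoothing sweeps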

  10. The relative pose estimation of aircraft based on contour model

    NASA Astrophysics Data System (ADS)

    Fu, Tai; Sun, Xiangyi

    2017-02-01

    This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model-based target, which are divided into 40 forms by clustering and LDA analysis. We then extract the target contour in each image and compute its Pseudo-Zernike Moments (PZM), so that a model library is constructed offline. Next, the projection contour in the model library that most closely resembles the target silhouette in the current image is identified with reference to the PZM; similarity transformation parameters are then generated by applying shape context matching to the silhouette sampling locations, from which the identification parameters of the target are further derived. The identification parameters are converted into relative pose parameters, which serve as the initial values for an iterative refinement algorithm, since they lie in the neighborhood of the actual pose. Finally, Distance Image Iterative Least Squares (DI-ILS) is employed to obtain the final relative pose parameters.

  11. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. First, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Small region of interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts rather than merely decreasing gray levels slightly. The OS-EM algorithm and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Both alternatives among the iterative reconstruction methods were effective for artifact reduction, and the OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
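
    A textbook OS-EM update, sketched below with a dense random system matrix standing in for the CT projector, shows the subset-wise multiplicative EM iteration; it is a generic illustration, not the authors' dental-CT implementation:

      import numpy as np

      def os_em(A, y, n_iters=10, n_subsets=4, eps=1e-12):
          """Ordered-subset expectation maximization for y ~ Poisson(A x).

          A : (n_measurements, n_pixels) nonnegative system matrix
          y : (n_measurements,) measured projection data
          """
          x = np.ones(A.shape[1])
          subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
          for _ in range(n_iters):
              for idx in subsets:
                  As = A[idx]
                  sens = As.sum(axis=0) + eps        # subset sensitivity A_s^T 1
                  ratio = y[idx] / (As @ x + eps)    # measured / predicted counts
                  x *= (As.T @ ratio) / sens         # multiplicative EM update
          return x

      # Hypothetical toy problem: random nonnegative projector and phantom.
      rng = np.random.default_rng(0)
      A = rng.random((64, 16))
      x_true = rng.random(16)
      y = rng.poisson(A @ x_true).astype(float)
      print(np.round(os_em(A, y, n_iters=20), 2))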

  12. Faster Evolution of More Multifunctional Logic Circuits

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo

    2005-01-01

    A modification in a method of automated evolutionary synthesis of voltage-controlled multifunctional logic circuits makes it possible to synthesize more circuits in less time. Prior to the modification, the computations for synthesizing a four-function logic circuit by this method took about 10 hours. Using the method as modified, it is possible to synthesize a six-function circuit in less than half an hour. The concepts of automated evolutionary synthesis and voltage-controlled multifunctional logic circuits were described in a number of prior NASA Tech Briefs articles. To recapitulate: A circuit is designed to perform one of several different logic functions, depending on the value of an applied control voltage. The circuit design is synthesized following an automated evolutionary approach that is so named because it is modeled partly after the repetitive trial-and-error process of biological evolution. In this process, random populations of integer strings that encode electronic circuits play a role analogous to that of chromosomes. An evolved circuit is tested by computational simulation (prior to testing in real hardware to verify a final design). Then, in a fitness-evaluation step, responses of the circuit are compared with specifications of target responses and circuits are ranked according to how close they come to satisfying specifications. The results of the evaluation provide guidance for refining designs through further iteration.
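
    The repetitive trial-and-error cycle described above can be conveyed with a generic evolutionary loop over integer strings; the stand-in fitness function below simply matches a fixed target string, whereas the article's fitness evaluation compares simulated circuit responses against target responses:

      import random

      def evolve(fitness, genome_len, pop_size=50, generations=100,
                 mutation_rate=0.05, gene_values=tuple(range(8))):
          """Generic evolutionary loop over integer strings ('chromosomes')."""
          pop = [[random.choice(gene_values) for _ in range(genome_len)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              ranked = sorted(pop, key=fitness, reverse=True)
              parents = ranked[: pop_size // 2]              # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, genome_len)      # one-point crossover
                  child = [random.choice(gene_values) if random.random() < mutation_rate
                           else g for g in a[:cut] + b[cut:]]  # per-gene mutation
                  children.append(child)
              pop = parents + children
          return max(pop, key=fitness)

      # Hypothetical stand-in fitness: distance to a fixed target string.
      target = [3, 1, 4, 1, 5, 2, 6, 5]
      best = evolve(lambda g: -sum(abs(x - t) for x, t in zip(g, target)),
                    genome_len=len(target))
      print(best)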

  13. Biomarker development targeting unmet clinical needs.

    PubMed

    Monaghan, Phillip J; Lord, Sarah J; St John, Andrew; Sandberg, Sverre; Cobbaert, Christa M; Lennartz, Lieselotte; Verhagen-Kamerbeek, Wilma D J; Ebert, Christoph; Bossuyt, Patrick M M; Horvath, Andrea R

    2016-09-01

    The introduction of new biomarkers can lead to inappropriate utilization of tests if they do not fill existing gaps in clinical care. We aimed to define a strategy and checklist for identifying unmet needs for biomarkers. A multidisciplinary working group used a 4-step process: 1/ a scoping literature review; 2/ face-to-face meetings to discuss scope, strategy and checklist items; 3/ an iterative process of feedback and consensus to develop the checklist; 4/ testing and refinement of checklist items using case scenarios. We used clinical pathway mapping to identify clinical management decisions linking biomarker testing to health outcomes and developed a 14-item checklist organized into 4 domains: 1/ identifying and 2/ verifying the unmet need; 3/ validating the intended use; and 4/ assessing the feasibility of the new biomarker to influence clinical practice and health outcomes. We present an outcome-focused approach that can be used by multiple stakeholders for any medical test, irrespective of the purpose and role of testing. The checklist is intended to achieve more efficient biomarker development and translation into practice. We propose that the checklist be field tested by stakeholders, and advocate the role of the clinical laboratory professional in fostering trans-sector collaboration in this regard. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. A clinical reasoning model focused on clients' behaviour change with reference to physiotherapists: its multiphase development and validation.

    PubMed

    Elvén, Maria; Hochwälder, Jacek; Dean, Elizabeth; Söderlund, Anne

    2015-05-01

    A biopsychosocial approach and behaviour change strategies have long been proposed to serve as a basis for addressing current multifaceted health problems. This emphasis has implications for clinical reasoning of health professionals. This study's aim was to develop and validate a conceptual model to guide physiotherapists' clinical reasoning focused on clients' behaviour change. Phase 1 consisted of the exploration of existing research and the research team's experiences and knowledge. Phases 2a and 2b consisted of validation and refinement of the model based on input from physiotherapy students in two focus groups (n = 5 per group) and from experts in behavioural medicine (n = 9). Phase 1 generated theoretical and evidence bases for the first version of a model. Phases 2a and 2b established the validity and value of the model. The final model described clinical reasoning focused on clients' behaviour change as a cognitive, reflective, collaborative and iterative process with multiple interrelated levels that included input from the client and physiotherapist, a functional behavioural analysis of the activity-related target behaviour and the selection of strategies for behaviour change. This unique model, theory- and evidence-informed, has been developed to help physiotherapists to apply clinical reasoning systematically in the process of behaviour change with their clients.

  15. Development of an evaluation framework for African-European hospital patient safety partnerships.

    PubMed

    Rutter, Paul; Syed, Shamsuzzoha B; Storr, Julie; Hightower, Joyce D; Bagheri-Nejad, Sepideh; Kelley, Edward; Pittet, Didier

    2014-04-01

    Patient safety is recognised as a significant healthcare problem worldwide, and healthcare-associated infections are an important aspect. African Partnerships for Patient Safety is a WHO programme that pairs hospitals in Africa with hospitals in Europe with the objective to work together to improve patient safety. To describe the development of an evaluation framework for hospital-to-hospital partnerships participating in the programme. The framework was structured around the programme's three core objectives: facilitate strong interhospital partnerships, improve in-hospital patient safety and spread best practices nationally. Africa-based clinicians, their European partners and experts in patient safety were closely involved in developing the evaluation framework in an iterative process. The process defined six domains of partnership strength, each with measurable subdomains. We developed a questionnaire to measure these subdomains. Participants selected six indicators of hospital patient safety improvement from a short-list of 22 based on their relevance, sensitivity to intervention and measurement feasibility. Participants proposed 20 measures of spread, which were refined into a two-part conceptual framework, and a data capture tool created. Taking a highly participatory approach that closely involved its end users, we developed an evaluation framework and tools to measure partnership strength, patient safety improvements and the spread of best practice.

  16. Development of a program theory for shared decision-making: a realist review protocol.

    PubMed

    Groot, Gary; Waldron, Tamara; Carr, Tracey; McMullen, Linda; Bandura, Lori-Ann; Neufeld, Shelley-May; Duncan, Vicky

    2017-06-17

    The practicality of applying evidence to healthcare systems with the aim of implementing change is an ongoing challenge for practitioners, policy makers, and academics. Shared decision-making (SDM), a method of medical decision-making that allows a balanced relationship between patients, physicians, and other key players in the medical decision process, is purported to improve patient and system outcomes. Despite the oft-mentioned benefits, there are gaps in the current literature between theory and implementation that would benefit from a realist approach, given the value of this methodology for analyzing complex interventions. In this protocol, we outline a study that will explore: "In which situations, how, why, and for whom does SDM between patients and health care providers contribute to improved decision making?" A seven-step iterative process will be described, including preliminary theory development, establishment of a search strategy, selection and appraisal of literature, data extraction, analysis and synthesis of extracted results from the literature, and formation of a revised program theory with the input of patients, physicians, nurse navigators, and policy makers from a stakeholder session. The goal of the realist review will be to identify and refine a program theory for SDM through the identification of the mechanisms which shape the characteristics of when, how, and why SDM will, and will not, work. PROSPERO CRD42017062609.

  17. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  18. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  19. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  20. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  1. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  2. Simultaneous and iterative weighted regression analysis of toxicity tests using a microplate reader.

    PubMed

    Galgani, F; Cadiou, Y; Gilbert, F

    1992-04-01

    A system is described for the determination of LC50 or IC50 by an iterative process based on data obtained from a plate reader using a marine unicellular alga as the target species. The esterase activity of Tetraselmis suecica on fluorescein diacetate as a substrate was measured using a fluorescence titerplate reader. Simultaneous analysis of the results was performed using an iterative process adopting the sigmoid function Y = y / (1 + (dose of toxicant/IC50)^slope) for the dose-response relationships. IC50 (+/- SEM) was estimated (P less than 0.05). An application with phosalone as a toxicant is presented.
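
    An illustrative fit of the sigmoid above using SciPy's iterative nonlinear least squares on hypothetical plate-reader data; the paper's simultaneous, weighted regression is not reproduced here (weights could be supplied through curve_fit's sigma argument):

      import numpy as np
      from scipy.optimize import curve_fit

      def dose_response(dose, top, ic50, slope):
          """Sigmoid model from the abstract: Y = top / (1 + (dose/IC50)**slope)."""
          return top / (1.0 + (dose / ic50) ** slope)

      # Hypothetical fluorescence readings from a microplate dilution series.
      dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])    # mg/L
      response = np.array([980, 955, 870, 640, 330, 120, 45.0])   # esterase activity (a.u.)

      params, cov = curve_fit(dose_response, dose, response, p0=[1000.0, 5.0, 1.0])
      top, ic50, slope = params
      ic50_sem = np.sqrt(np.diag(cov))[1]
      print(f"IC50 = {ic50:.2f} +/- {ic50_sem:.2f} mg/L (slope {slope:.2f})")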

  3. Experiments on water detritiation and cryogenic distillation at TLK; Impact on ITER fuel cycle subsystems interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristescu, I.; Cristescu, I. R.; Doerr, L.

    2008-07-15

    The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performance of various components for the WDS and ISS processes under various working conditions and configurations as needed for the ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data related to the relevant ITER operating modes. The operational availability and performance of the ISS-WDS have an impact on the ITER fuel cycle subsystems, with consequences for the design integration. Preliminary experimental data from the TRENTA facility are presented. (authors)

  4. Using Peer Feedback to Promote Reflection on Open-Ended Problems

    NASA Astrophysics Data System (ADS)

    Reinholz, Daniel L.; Dounas-Frazer, Dimitri R.

    2016-09-01

    This paper describes a new approach for learning from homework called Peer-Assisted Reflection (PAR). PAR involves students using peer feedback to improve their work on open-ended homework problems. Collaborating with peers and revising one's work based on the feedback of others are important aspects of doing and learning physics. While notable exceptions exist, homework and exams are generally individual activities that do not support collaboration and refinement, which misses important opportunities to use assessment for learning. In contrast, PAR provides students with a structure to iteratively engage with challenging, open-ended problems and solicit the input of their peers to improve their work.

  5. Progressive content-based retrieval of image and video with adaptive and iterative refinement

    NASA Technical Reports Server (NTRS)

    Li, Chung-Sheng (Inventor); Turek, John Joseph Edward (Inventor); Castelli, Vittorio (Inventor); Chen, Ming-Syan (Inventor)

    1998-01-01

    A method and apparatus for minimizing the time required to obtain results for a content-based query in a database. More specifically, with this invention, the database is partitioned into a plurality of groups. Then, a schedule or sequence of groups is assigned to each of the operations of the query, where the schedule represents the order in which an operation of the query will be applied to the groups in the schedule. Each schedule is arranged so that each application of the operation operates on the group which will yield intermediate results that are closest to the final results.

  6. Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement.

    PubMed

    McInnes, Matthew D F; Moher, David; Thombs, Brett D; McGrath, Trevor A; Bossuyt, Patrick M; Clifford, Tammy; Cohen, Jérémie F; Deeks, Jonathan J; Gatsonis, Constantine; Hooft, Lotty; Hunt, Harriet A; Hyde, Christopher J; Korevaar, Daniël A; Leeflang, Mariska M G; Macaskill, Petra; Reitsma, Johannes B; Rodin, Rachel; Rutjes, Anne W S; Salameh, Jean-Paul; Stevens, Adrienne; Takwoingi, Yemisi; Tonelli, Marcello; Weeks, Laura; Whiting, Penny; Willis, Brian H

    2018-01-23

    Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.

  7. Automated road network extraction from high spatial resolution multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Qiaoping

    For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
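
    The line-detection step can be approximated with a single-pass sketch built on scikit-image's Radon transform; this simplified, hypothetical version finds only the dominant straight line in one binary road tile and omits the paper's iterative extraction, width estimation and localization variants:

      import numpy as np
      from skimage.transform import radon

      def dominant_line(tile, angles=np.arange(180.0)):
          """Return (angle_deg, offset_px) of the strongest straight line in a tile.

          The Radon-sinogram peak gives the projection angle and the signed
          perpendicular offset of the line from the tile centre.
          """
          sino = radon(tile.astype(float), theta=angles, circle=True)
          rho_idx, theta_idx = np.unravel_index(np.argmax(sino), sino.shape)
          return angles[theta_idx], rho_idx - sino.shape[0] // 2

      # Hypothetical road component tile: a diagonal line of ones through the centre.
      tile = np.zeros((64, 64))
      idx = np.arange(12, 52)
      tile[idx, idx] = 1.0
      print(dominant_line(tile))   # diagonal orientation, offset near zero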

  8. Characterizing the orthodontic patient's purchase decision: A novel approach using netnography.

    PubMed

    Pittman, Joseph W; Bennett, M Elizabeth; Koroluk, Lorne D; Robinson, Stacey G; Phillips, Ceib L

    2017-06-01

    A deeper and more thorough characterization of why patients do or do not seek orthodontic treatment is needed for effective shared decision making about receiving treatment. Previous orthodontic qualitative research has identified important dimensions that influence treatment decisions, but our understanding of patients' decisions and how they interpret the benefits and barriers of treatment is lacking. The objectives of this study were to expand our current list of decision-making dimensions and to create a conceptual framework to describe the decision-making process. Discussion boards, rich in orthodontic decision-making data, were identified and analyzed with qualitative methods. An iterative process of data collection, dimension identification, and dimension refinement was performed to saturation. A conceptual framework was created to describe the decision-making process. Fifty-four dimensions captured the ideas discussed in regard to a patient's decision to receive orthodontic treatment. Ten domains were identified: function, esthetics, psychosocial benefits, diagnosis, finances, inconveniences, risks of treatment, individual aspects, societal attitudes, and child-specific influences, each containing specific descriptive and conceptual dimensions. A person's desires, self-perceptions, and viewpoints, the public's views on esthetics and orthodontics, and parenting philosophies impacted perceptions of the benefits and barriers associated with orthodontic treatment. We identified an expanded list of dimensions, created a conceptual framework describing the orthodontic patient's decision-making process, and identified dimensions associated with yes and no decisions, giving doctors a better understanding of patient attitudes and expectations. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  9. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to the recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated for each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero during recovery. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate resembles that of a reference flat.
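
    A simplified focus-diverse, Gerchberg-Saxton-style sketch of the inner loop is given below (NumPy FFTs, uniform weighting, a quadratic defocus diversity term); the adaptive outer loop and the aberration-fitting step are omitted, and this is an illustrative toy under those assumptions, not the flight algorithm:

      import numpy as np

      def defocus_phase(n, waves):
          """Quadratic (defocus) diversity phase over an n x n pupil, in radians."""
          y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
          return 2.0 * np.pi * waves * (2.0 * (x**2 + y**2) - 1.0)

      def inner_loop(images, aperture, diversities, iters=50):
          """Focus-diverse iterative-transform estimate of the pupil phase.

          images: measured PSF intensities, one per diversity defocus (in waves).
          """
          n = aperture.shape[0]
          phi = np.zeros((n, n))
          for _ in range(iters):
              estimates = []
              for img, waves in zip(images, diversities):
                  theta = defocus_phase(n, waves)
                  pupil = aperture * np.exp(1j * (phi + theta))
                  field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
                  field = np.sqrt(np.maximum(img, 0)) * np.exp(1j * np.angle(field))
                  back = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))
                  estimates.append(np.angle(back * np.exp(-1j * theta)))
              phi = aperture * np.mean(estimates, axis=0)   # uniform weighted average
          return phi

      # Hypothetical usage: simulate two defocus-diverse PSFs of a small aberration
      # and recover the phase (up to piston and wrapping) from intensities alone.
      n = 64
      y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      aperture = (x**2 + y**2 <= 1.0).astype(float)
      true_phi = aperture * 0.5 * (x**3 - y)
      divs = [-2.0, 2.0]
      imgs = [np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(
                  aperture * np.exp(1j * (true_phi + defocus_phase(n, w)))))))**2
              for w in divs]
      phi_est = inner_loop(imgs, aperture, divs)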

  10. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    PubMed

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.

  11. 40 CFR Table 1 of Subpart Aaaaaaa... - Emission Limits for Asphalt Processing (Refining) Operations

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Emission Limits for Asphalt Processing... Area Sources: Asphalt Processing and Asphalt Roofing Manufacturing Other Requirements and Information... of Part 63—Emission Limits for Asphalt Processing (Refining) Operations For * * * You must meet the...

  12. 40 CFR Table 1 of Subpart Aaaaaaa... - Emission Limits for Asphalt Processing (Refining) Operations

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Emission Limits for Asphalt Processing... Area Sources: Asphalt Processing and Asphalt Roofing Manufacturing Other Requirements and Information... of Part 63—Emission Limits for Asphalt Processing (Refining) Operations For * * * You must meet the...

  13. 40 CFR Table 1 of Subpart Aaaaaaa... - Emission Limits for Asphalt Processing (Refining) Operations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Emission Limits for Asphalt Processing... Area Sources: Asphalt Processing and Asphalt Roofing Manufacturing Other Requirements and Information... of Part 63—Emission Limits for Asphalt Processing (Refining) Operations For * * * You must meet the...

  14. 40 CFR Table 1 of Subpart Aaaaaaa... - Emission Limits for Asphalt Processing (Refining) Operations

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Emission Limits for Asphalt Processing... Area Sources: Asphalt Processing and Asphalt Roofing Manufacturing Other Requirements and Information... of Part 63—Emission Limits for Asphalt Processing (Refining) Operations For * * * You must meet the...

  15. 40 CFR Table 1 of Subpart Aaaaaaa... - Emission Limits for Asphalt Processing (Refining) Operations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Emission Limits for Asphalt Processing... Area Sources: Asphalt Processing and Asphalt Roofing Manufacturing Other Requirements and Information... of Part 63—Emission Limits for Asphalt Processing (Refining) Operations For * * * You must meet the...

  16. A methodology to event reconstruction from trace images.

    PubMed

    Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre

    2015-03-01

    The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous step. However, the methodology is not linear; it is a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending the use of images as evidence and, more generally, as clues in investigation and crime reconstruction processes. Copyright © 2015 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  17. US NDC Modernization Iteration E1 Prototyping Report: Processing Control Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Ryan; Hamlet, Benjamin R.

    2014-12-01

    During the first iteration of the US NDC Modernization Elaboration phase (E1), the SNL US NDC modernization project team developed an initial survey of applicable COTS solutions, and established exploratory prototyping related to the processing control framework in support of system architecture definition. This report summarizes these activities and discusses planned follow-on work.

  18. Foucauldian Iterative Learning Conversations--An Example of Organisational Change: Developing Conjoint-Work between EPS and Social Workers

    ERIC Educational Resources Information Center

    Apter, Brian

    2014-01-01

    An organisational change-process in a UK local authority (LA) over two years is examined using transcribed excerpts from three meetings. The change-process is analysed using a Foucauldian analytical tool--Iterative Learning Conversations (ILCS). An Educational Psychology Service was changed from being primarily an education-focussed…

  19. 40 CFR 409.21 - Specialized definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.21... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...

  20. 40 CFR 409.21 - Specialized definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.21... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...

  1. 40 CFR 409.21 - Specialized definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.21... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...

  2. 40 CFR 409.21 - Specialized definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.21... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...

  3. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large number of calculations required for full-FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which was over four times higher than with density compensation. The image sharpness index was improved by the implemented regularized reconstruction. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions to shorten the reconstruction duration.
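
    The block-wise idea can be conveyed with a toy sketch: assuming the normal equations are approximately block diagonal, each block is solved independently with a few conjugate-gradient iterations. Dense random matrices stand in for the PROPELLER gridding/Toeplitz operators, and the per-block solves are where parallelism over CPU processes or a GPU would enter:

      import numpy as np
      from scipy.sparse.linalg import cg, LinearOperator

      def rfov_reconstruct(AtA_blocks, Atb_blocks, lam=0.01, iters=10):
          """Solve (A^T A + lam I) x = A^T b block by block (Tikhonov-regularized)."""
          sols = []
          for AtA, Atb in zip(AtA_blocks, Atb_blocks):
              n = AtA.shape[0]
              op = LinearOperator((n, n), matvec=lambda v, M=AtA: M @ v + lam * v)
              x, _ = cg(op, Atb, maxiter=iters)
              sols.append(x)
          return np.concatenate(sols)

      # Hypothetical toy problem: two independent 100-pixel blocks.
      rng = np.random.default_rng(0)
      blocks, rhs = [], []
      for _ in range(2):
          A = rng.standard_normal((150, 100))
          x_true = rng.standard_normal(100)
          blocks.append(A.T @ A)
          rhs.append(A.T @ (A @ x_true))
      print(rfov_reconstruct(blocks, rhs).shape)   # (200,)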

  4. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain.

    PubMed

    Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh

    2017-04-01

    The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Study on the Control of Cleanliness for X90 Pipeline in the Secondary Refining Process

    NASA Astrophysics Data System (ADS)

    Chu, Ren Sheng; Liu, Jin Gang; Li, Zhan Jun

    X90 pipeline steel requires ultra-low sulfur and gas contents in the smelting process. Secondary refining is very important for X90 pipeline steel, and cleanliness control is the key to the secondary refining stage of the steelmaking route: pretreatment of hot metal → LD → LF refining → RH refining → calcium treatment → continuous casting (CC). In the current paper, the cleanliness control method of secondary refining is analyzed in terms of the evolution of non-metallic inclusions during secondary refining and the related changes in steel composition. The size, composition and type of the non-metallic inclusions were analyzed with an ASPEX Explorer automated scanning electron microscope on X90 pipeline samples of 20 mm × 25 mm × 25 mm prepared by wire cutting. The results show that the number of non-metallic inclusions in the steel decreases from the beginning of LF refining to RH refining. In terms of composition, the initial alumina inclusions are converted into two kinds of complex inclusions: most are composed of calcium aluminate and CaS, while the others have a spinel core surrounded by calcium aluminate. In terms of size, inclusions larger than 100 µm are converted into small inclusions of roughly 5-20 µm. The S content of the steel decreased from 0.012% to 0.0012% or less, the Al content was kept between 0.025% and 0.035%, and the quality of the cast slab satisfied the requirements for this steel. The ratings for the various types of non-metallic inclusions are 1.5 or less. The control strategy for inclusions in X90 pipeline steel is small size, diffuse distribution and little deformation after rolling; by contrast, the specific chemical composition of the inclusions is not important, although a single-component composition is preferable.

  6. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
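
    A minimal NumPy/SciPy sketch of the coarse-to-fine, median-based idea, assuming the subpixel shifts are already known (the paper estimates them by between-frame registration) and using synthetic frames in place of UAS video:

      import numpy as np
      from scipy.ndimage import zoom, shift

      def median_super_resolution(frames, shifts, factor=2):
          """Non-iterative SR: bicubically upsample each frame, align it to the
          reference with its registration shift (in low-resolution pixels), and
          take the pixel-wise median, which is robust to outliers."""
          upsampled = []
          for frame, (dy, dx) in zip(frames, shifts):
              hi = zoom(frame, factor, order=3)                    # bicubic upsampling
              hi = shift(hi, (dy * factor, dx * factor), order=3)  # align to reference
              upsampled.append(hi)
          return np.median(np.stack(upsampled), axis=0)

      # Hypothetical usage: four jittered, noisy copies of a synthetic scene.
      rng = np.random.default_rng(0)
      scene = rng.random((32, 32))
      offsets = [(0.0, 0.0), (0.25, -0.5), (-0.5, 0.25), (0.5, 0.5)]
      frames = [shift(scene, off, order=3) + rng.normal(0, 0.01, scene.shape)
                for off in offsets]
      sr = median_super_resolution(frames, [(-dy, -dx) for dy, dx in offsets])
      print(sr.shape)   # (64, 64)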

  7. Anisotropic scene geometry resampling with occlusion filling for 3DTV applications

    NASA Astrophysics Data System (ADS)

    Kim, Jangheon; Sikora, Thomas

    2006-02-01

    Image and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability at free viewpoints. However, two major limitations are ghosting and blurring, which arise from their sampling-based mechanism. Scene geometry, which supports the selection of accurate sampling positions, can be estimated using a global method (i.e., an approximate depth plane) or a local method (i.e., disparity estimation). This paper focuses on the local method since it can yield more accurate rendering quality without a large number of cameras. The local scene geometry presents two difficulties: its geometrical density and the uncovered areas containing hidden information. These are serious drawbacks when reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in the scene geometry, while anisotropic diffusion prevents the filtering from blurring the visual structures. Apertures in the coarse samples are estimated following diffusion on the pre-filtered space, where nonlinear weighting of the gradient directions suppresses the amount of diffusion. Aliasing artifacts from low sampling density are efficiently removed by the isotropic filtering, and edge blurring is addressed by the anisotropic method in the same process. Because the sampling gaps differ in size, the resampling condition is defined by considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling at a large scale is then iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.
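
    For illustration, the sketch below implements a generic Perona-Malik-style edge-stopping diffusion in NumPy; it conveys how a gradient-dependent conductance limits smoothing across edges, but it is not the tensor-based, scale-space resampling scheme of the paper:

      import numpy as np

      def anisotropic_diffusion(img, n_iters=20, kappa=0.1, step=0.2):
          """Edge-stopping diffusion: smooth flat regions while the conductance
          g = exp(-(|grad|/kappa)^2) suppresses diffusion across strong edges."""
          u = img.astype(float).copy()
          for _ in range(n_iters):
              # Differences to the four neighbours (zero flux at the borders).
              dn = np.zeros_like(u); dn[1:, :] = u[:-1, :] - u[1:, :]
              ds = np.zeros_like(u); ds[:-1, :] = u[1:, :] - u[:-1, :]
              de = np.zeros_like(u); de[:, :-1] = u[:, 1:] - u[:, :-1]
              dw = np.zeros_like(u); dw[:, 1:] = u[:, :-1] - u[:, 1:]
              u += step * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
          return u

      # Hypothetical usage: a noisy step edge keeps a sharp boundary after diffusion.
      rng = np.random.default_rng(0)
      img = np.zeros((64, 64)); img[:, 32:] = 1.0
      noisy = img + rng.normal(0, 0.1, img.shape)
      print(np.abs(anisotropic_diffusion(noisy) - img).mean())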

  8. Interacting domain-specific languages with biological problem solving environments

    NASA Astrophysics Data System (ADS)

    Cickovski, Trevor M.

    Iteratively developing a biological model and verifying results with lab observations has become standard practice in computational biology. This process is currently facilitated by biological Problem Solving Environments (PSEs), multi-tiered and modular software frameworks which traditionally consist of two layers: a computational layer written in a high-level language using design patterns, and a user interface layer which hides its details. Although PSEs have proven effective, they still impose some communication overhead between biologists refining their models through repeated comparison with experimental observations in vitro or in vivo, and programmers actually implementing model extensions and modifications within the computational layer. I illustrate the use of biological Domain-Specific Languages (DSLs) as a middle-level PSE tier to ameliorate this problem by providing experimentalists with the ability to iteratively test and develop their models with a higher degree of expressive power than a graphical interface offers, while removing the requirement for general-purpose programming knowledge. I develop two radically different biological DSLs: XML-based BIOLOGO will model biological morphogenesis using a cell-centered stochastic cellular automaton and translate into C++ modules for an object-oriented PSE COMPUCELL3D, and MDLab will provide a set of high-level Python libraries for running molecular dynamics simulations, using wrapped functionality from the C++ PSE PROTOMOL. I describe each language in detail, including its role within the larger PSE and its expressibility in terms of representable phenomena, together with a discussion of observations from users of the languages. Moreover, I will use these studies to draw general conclusions about biological DSL development, including dependencies upon the goals of the corresponding PSE, strategies, and tradeoffs.

  9. Early years interventions to improve child health and wellbeing: what works, for whom and in what circumstances? Protocol for a realist review.

    PubMed

    Coles, Emma; Cheyne, Helen; Daniel, Brigid

    2015-06-06

    Child health and wellbeing is influenced by multiple factors, all of which can impact on early childhood development. Adverse early life experiences can have lasting effects across the life course, sustaining inequalities and resulting in negative consequences for the health and wellbeing of individuals and society. The potential to influence future outcomes via early intervention is widely accepted; there are numerous policy initiatives, programmes and interventions clustered around the early years theme, resulting in a broad and disparate evidence base. Existing reviews have addressed the effectiveness of early years interventions, yet there is a knowledge gap regarding the mechanisms underlying why interventions work in given contexts. This realist review seeks to address the question 'what works, for whom and in what circumstances?' in terms of early years interventions to improve child health and wellbeing. The review will be conducted following Pawson's five-stage iterative realist methodology: (1) clarify scope, (2) search for evidence, (3) appraise primary studies and extract data, (4) synthesise evidence and draw conclusions and (5) disseminate findings. The reviewers will work with stakeholders in the early stages to refine the focus of the review, create a review framework and build programme theory. Searches for primary evidence will be conducted iteratively. Data will be extracted and tested against the programme theory. A review collaboration group will oversee the review process. The review will demonstrate how early years interventions do or do not work in different contexts and with what outcomes and effects. Review findings will be written up following the RAMESES guidelines and will be disseminated via a report, presentations and peer-reviewed publications. PROSPERO CRD42015017832.

  10. 40 CFR 409.31 - Specialized definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.31... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...

  11. GIRAF: a method for fast search and flexible alignment of ligand binding interfaces in proteins at atomic resolution

    PubMed Central

    Kinjo, Akira R.; Nakamura, Haruki

    2012-01-01

    Comparison and classification of protein structures are fundamental means to understand protein functions. Due to the computational difficulty and the ever-increasing amount of structural data, however, it is in general not feasible to perform exhaustive all-against-all structure comparisons necessary for comprehensive classifications. To efficiently handle such situations, we have previously proposed a method, now called GIRAF. We herein describe further improvements in the GIRAF protein structure search and alignment method. The GIRAF method achieves extremely efficient search of similar structures of ligand binding sites of proteins by exploiting database indexing of structural features of local coordinate frames. In addition, it produces refined atom-wise alignments by iterative applications of the Hungarian method to the bipartite graph defined for a pair of superimposed structures. By combining the refined alignments based on different local coordinate frames, it is made possible to align structures involving domain movements. We provide detailed accounts for the database design, the search and alignment algorithms as well as some benchmark results. PMID:27493524
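
    The iterative Hungarian-method step can be illustrated with a minimal sketch (made-up atom coordinates and a plain distance cost, not GIRAF's actual scoring): after two binding sites are superimposed, atoms are matched by solving a linear assignment problem on the pairwise distance matrix, and this matching can be recomputed after each re-superposition.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def match_atoms(coords_a, coords_b, cutoff=4.0):
        """Optimal one-to-one atom matching between two superimposed sites.

        Illustrates the Hungarian (linear assignment) step on a plain distance
        cost; GIRAF alternates such matching with re-superposition and uses
        richer, chemistry-aware costs.
        """
        cost = cdist(coords_a, coords_b)            # pairwise distances as costs
        rows, cols = linear_sum_assignment(cost)    # Hungarian method
        return [(int(r), int(c)) for r, c in zip(rows, cols)
                if cost[r, c] < cutoff]             # keep only close pairs

    # usage with tiny made-up coordinate sets (already superimposed)
    a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
    b = np.array([[3.1, 0.0, 0.0], [0.2, 0.0, 0.0], [1.4, 0.0, 0.0]])
    print(match_atoms(a, b))    # -> [(0, 1), (1, 2), (2, 0)]
    ```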

  12. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
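
    The continuation idea can be sketched on a toy problem (a one-dimensional stand-in with made-up parameters, not the paper's 3D finite element setting): solve a nonlinear system at a mild value of ɛ, reuse the solution as the initial guess for a smaller ɛ, and backtrack toward the last successful ɛ when the nonlinear solve fails.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    def residual(u, eps, x):
        """Finite-difference residual of -eps*u'' + u**3 - 1 = 0, u(0) = u(1) = 0."""
        h = x[1] - x[0]
        r = np.empty_like(u)
        r[0], r[-1] = u[0], u[-1]                      # Dirichlet boundary rows
        r[1:-1] = (-eps * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
                   + u[1:-1] ** 3 - 1.0)
        return r

    x = np.linspace(0.0, 1.0, 201)
    u = np.zeros_like(x)                 # crude guess for the easiest problem
    eps_ok, eps = 1.0, 1.0               # last successful eps, current target
    while eps > 1e-4:
        u_try, _, ok, _ = fsolve(residual, u, args=(eps, x), full_output=True)
        if ok == 1:
            u, eps_ok = u_try, eps       # accept, then push eps further down
            eps *= 0.3
        else:
            eps = np.sqrt(eps * eps_ok)  # backtrack toward the last success
    print("continuation reached eps =", eps_ok)
    ```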

  13. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    Sparse representation can deal with the lack-of-samples problem because it utilizes all the training samples. However, discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select representative training samples according to their contributions. We then further refine the training samples by an iterative procedure that, at each step, excludes the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and the two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we explore the principles governing the key parameters of the proposed algorithm, which helps to obtain high recognition accuracy.
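
    A hedged sketch of the refinement loop (an ordinary least-squares stand-in for the paper's representation-based scoring, on made-up data): represent the test sample with the current training set, score every training sample by the size of its contribution, and drop the weakest one per iteration.

    ```python
    import numpy as np

    def refine_training_set(X_train, y_test, keep=5):
        """Iteratively exclude the training sample contributing least to y_test.

        The contribution of sample j is |c_j| * ||x_j||, where c solves the
        least-squares representation y_test ~ X_train[idx].T @ c; this is a
        simplified stand-in for a sparse-representation contribution score.
        """
        idx = list(range(X_train.shape[0]))
        while len(idx) > keep:
            A = X_train[idx].T                         # columns are samples
            c, *_ = np.linalg.lstsq(A, y_test, rcond=None)
            contrib = np.abs(c) * np.linalg.norm(A, axis=0)
            idx.pop(int(np.argmin(contrib)))           # drop the weakest sample
        return idx

    rng = np.random.default_rng(1)
    X = rng.standard_normal((20, 64))                  # 20 training samples
    y = 0.7 * X[3] + 0.3 * X[11] + 0.05 * rng.standard_normal(64)
    print(refine_training_set(X, y))                   # samples 3 and 11 should survive
    ```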

  14. Formulation and Implementation of Inflow/Outflow Boundary Conditions to Simulate Propulsive Effects

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian

    2018-01-01

    Boundary conditions appropriate for simulating flow entering or exiting the computational domain to mimic propulsion effects have been implemented in an adaptive Cartesian simulation package. A robust iterative algorithm to control mass flow rate through an outflow boundary surface is presented, along with a formulation to explicitly specify mass flow rate through an inflow boundary surface. The boundary conditions have been applied within a mesh adaptation framework based on the method of adjoint-weighted residuals. This allows for proper adaptive mesh refinement when modeling propulsion systems. The new boundary conditions are demonstrated on several notional propulsion systems operating in flow regimes ranging from low subsonic to hypersonic. The examples show that the prescribed boundary state is more properly imposed as the mesh is refined. The mass-flowrate steering algorithm is shown to be an efficient approach in each example. To demonstrate the boundary conditions on a realistic complex aircraft geometry, two of the new boundary conditions are also applied to a modern low-boom supersonic demonstrator design with multiple flow inlets and outlets.
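
    As a hedged sketch of how a mass-flow steering loop can work (a simple secant iteration on back pressure against a made-up monotone flow model; the algorithm in the paper is formulated inside the flow solver and differs in detail):

    ```python
    def steer_back_pressure(flow_solver, mdot_target, p0, p1,
                            tol=1e-6, max_iter=30):
        """Secant iteration: find back pressure p giving mdot(p) = mdot_target.

        `flow_solver(p)` stands in for a CFD evaluation returning the mass flow
        through the outflow boundary at back pressure p (assumed monotone).
        """
        f0 = flow_solver(p0) - mdot_target
        f1 = flow_solver(p1) - mdot_target
        for _ in range(max_iter):
            if abs(f1) < tol:
                return p1
            p0, p1 = p1, p1 - f1 * (p1 - p0) / (f1 - f0)   # secant update
            f0, f1 = f1, flow_solver(p1) - mdot_target
        raise RuntimeError("mass-flow steering did not converge")

    # usage with a toy monotone flow model (higher back pressure -> lower flow)
    toy_mdot = lambda p: 10.0 - 0.08 * p
    print(steer_back_pressure(toy_mdot, mdot_target=4.0, p0=50.0, p1=60.0))
    ```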

  15. Automated protein structure modeling in CASP9 by I-TASSER pipeline combined with QUARK-based ab initio folding and FG-MD-based structure refinement

    PubMed Central

    Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang

    2011-01-01

    I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In CASP9 experiments, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline for improving the structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for structure modeling of proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as a reference structure for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from PDB structures to guide molecular dynamics simulation and improve the local structure of the predicted model, including hydrogen-bonding networks, torsion angles and steric clashes. Despite considerable progress in both template-based and template-free structure modeling, significant improvements in protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036

  16. 40 CFR 409.21 - Specialized definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409... raw material (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...

  17. Formation and mechanism of nanocrystalline AZ91 powders during HDDR processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yafen; Fan, Jianfeng, E-mail: fanjianfeng@tyu

    2017-03-15

    Grain sizes of AZ91 alloy powders were markedly refined from 100-160 μm to about 15 nm by an optimized hydrogenation-disproportionation-desorption-recombination (HDDR) process. The effects of temperature, hydrogen pressure and processing time on the phase and microstructure evolution of AZ91 alloy powders during the HDDR process were investigated systematically by X-ray diffraction, optical microscopy, scanning electron microscopy and transmission electron microscopy. The optimal HDDR process for preparing nanocrystalline Mg alloy powders is hydriding at 350 °C under 4 MPa hydrogen pressure for 12 h and dehydriding at 350 °C for 3 h in vacuum. A modified unreacted-core model was introduced to describe the mechanism of grain refinement during the HDDR process. - Highlights: • Grain size of the AZ91 alloy powders was significantly refined from 100 μm to 15 nm. • The optimal HDDR technology for nanocrystalline Mg alloy powders is obtained. • A modified unreacted-core model of the grain refinement mechanism was proposed.

  18. Post-treatment mechanical refining as a method to improve overall sugar recovery of steam pretreated hybrid poplar.

    PubMed

    Dou, Chang; Ewanick, Shannon; Bura, Renata; Gustafson, Rick

    2016-05-01

    This study investigates the effect of mechanical refining to improve the sugar yield from biomass processed under a wide range of steam pretreatment conditions. Hybrid poplar chips were steam pretreated using six different conditions with or without SO2. The resulting water insoluble fractions were subjected to mechanical refining. After refining, poplar pretreated at 205 °C for 10 min without SO2 obtained a 32% improvement in enzymatic hydrolysis and achieved similar overall monomeric sugar recovery (539 kg/tonne) to samples pretreated with SO2. Refining did not improve hydrolyzability of samples pretreated at more severe conditions, nor did it improve the overall sugar recovery. By maximizing overall sugar recovery, refining could partially decouple the pretreatment from other unit operations, and enable the use of low temperature, non-sulfur pretreatment conditions. The study demonstrates the possibility of using post-treatment refining to accommodate potential pretreatment process upsets without sacrificing sugar yields. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Reducing Design Cycle Time and Cost Through Process Resequencing

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.

  20. From Intent to Action: An Iterative Engineering Process

    ERIC Educational Resources Information Center

    Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain

    2015-01-01

    Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…

  1. Iterated learning and the evolution of language.

    PubMed

    Kirby, Simon; Griffiths, Tom; Smith, Kenny

    2014-10-01

    Iterated learning describes the process whereby an individual learns their behaviour by exposure to another individual's behaviour, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins. Copyright © 2014 Elsevier Ltd. All rights reserved.
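
    A toy sketch of an iterated-learning transmission chain (made-up parameters; the reviewed models are far richer): each learner samples its 'language', here a probability of producing a regular form, from the posterior given a handful of utterances generated by the previous learner. Over generations the chain drifts toward the learners' prior, illustrating how learning biases shape the evolving behaviour.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def iterated_learning(generations=2000, n_examples=5, prior_a=2.0, prior_b=1.0):
        """Chain of learners, each trained only on the previous learner's output.

        A 'language' is the probability p of producing a regular form; a learner
        hears n_examples utterances, forms a Beta posterior and samples its own p.
        """
        p, history = 0.5, []
        for _ in range(generations):
            regular = rng.binomial(n_examples, p)       # utterances heard
            p = rng.beta(prior_a + regular, prior_b + n_examples - regular)
            history.append(p)
        return np.array(history)

    chain = iterated_learning()
    print("long-run mean of p:", chain[500:].mean())    # drifts toward prior mean (~2/3)
    ```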

  2. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes make it possible to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
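
    The shift from iterative optimization to a fixed-complexity pursuit can be illustrated with a minimal numpy sketch (plain ISTA truncated to a fixed number of iterations on made-up data; the paper goes further and learns the matrices and thresholds of such unrolled iterations):

    ```python
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fixed_ista(D, y, lam=0.05, n_iter=50):
        """Truncated ISTA for min_z 0.5*||y - D z||^2 + lam*||z||_1.

        Running a small, fixed number of iterations gives a fixed-complexity
        pursuit; learned variants replace W, S and the threshold with trained
        parameters instead of the analytic ones below.
        """
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
        W = D.T / L                              # analytic "encoder"
        S = np.eye(D.shape[1]) - D.T @ D / L     # analytic "recurrent" matrix
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            z = soft_threshold(W @ y + S @ z, lam / L)
        return z

    rng = np.random.default_rng(0)
    D = rng.standard_normal((30, 60))
    D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
    z_true = np.zeros(60)
    z_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
    y = D @ z_true + 0.01 * rng.standard_normal(30)
    z_hat = fixed_ista(D, y, n_iter=200)
    print(np.nonzero(np.abs(z_hat) > 0.2)[0])    # expected to be close to [3, 17, 42]
    ```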

  3. Integrated process for the solvent refining of coal

    DOEpatents

    Garg, Diwakar

    1983-01-01

    A process is set forth for the integrated liquefaction of coal by the catalytic solvent refining of a feed coal in a first stage to liquid and solid products and the catalytic hydrogenation of the solid product in a second stage to produce additional liquid product. A fresh inexpensive, throw-away catalyst is utilized in the second stage hydrogenation of the solid product and this catalyst is recovered and recycled for catalyst duty in the solvent refining stage without any activation steps performed on the used catalyst prior to its use in the solvent refining of feed coal.

  4. Support patient search on pathology reports with interactive online learning based data extraction.

    PubMed

    Zheng, Shuai; Lu, James J; Appin, Christina; Brat, Daniel; Wang, Fusheng

    2015-01-01

    Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on extracted structured data. We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographic data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of tests. Extracting data from pathology reports could provide more accurate knowledge to support biomedical research and clinical diagnosis. IDEAL-X provides a bridge that takes advantage of online machine learning based data extraction and the knowledge from human feedback. By combining iterative online learning and adaptive controlled vocabularies, IDEAL-X can deliver highly adaptive and accurate data extraction to support patient search.
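
    A hedged sketch of the incremental-learning loop described above, using scikit-learn's partial_fit on made-up snippets and labels (IDEAL-X's actual features, vocabularies and models are not reproduced): each reviewed annotation immediately updates the classifier.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    # toy phrases and whether each denotes a diagnosis (1) or not (0); in the
    # real workflow the labels come from a user correcting proposed annotations
    snippets = ["invasive ductal carcinoma", "specimen received in formalin",
                "glioblastoma who grade iv", "margins are uninvolved"]
    labels = [1, 0, 1, 0]

    vectorizer = HashingVectorizer(n_features=2**12)   # stateless featurizer
    model = SGDClassifier()                            # supports partial_fit
    classes = np.array([0, 1])

    # online loop: the model is refined one correction at a time
    for text, y in zip(snippets, labels):
        X = vectorizer.transform([text])
        model.partial_fit(X, [y], classes=classes)

    print(model.predict(vectorizer.transform(["ductal carcinoma in situ"])))
    ```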

  5. Collective action for implementation: a realist evaluation of organisational collaboration in healthcare.

    PubMed

    Rycroft-Malone, Jo; Burton, Christopher R; Wilkinson, Joyce; Harvey, Gill; McCormack, Brendan; Baker, Richard; Dopson, Sue; Graham, Ian D; Staniszewska, Sophie; Thompson, Carl; Ariss, Steven; Melville-Richards, Lucy; Williams, Lynne

    2016-02-09

    Increasingly, it is being suggested that translational gaps might be eradicated or narrowed by bringing research users and producers closer together, a theory that is largely untested. This paper reports a national study to fill a gap in the evidence about the conditions, processes and outcomes related to collaboration and implementation. A longitudinal realist evaluation using multiple qualitative methods case studies was conducted with three Collaborations for Leadership in Applied Health Research in Care (England). Data were collected over four rounds of theory development, refinement and testing. Over 200 participants were involved in semi-structured interviews, non-participant observations of events and meetings, and stakeholder engagement. A combined inductive and deductive data analysis process was focused on proposition refinement and testing iteratively over data collection rounds. The quality of existing relationships between higher education and local health service, and views about whether implementation was a collaborative act, created a path dependency. Where implementation was perceived to be removed from service and there was a lack of organisational connections, this resulted in a focus on knowledge production and transfer, rather than co-production. The collaborations' architectures were counterproductive because they did not facilitate connectivity and had emphasised professional and epistemic boundaries. More distributed leadership was associated with greater potential for engagement. The creation of boundary spanning roles was the most visible investment in implementation, and credible individuals in these roles resulted in cross-boundary work, in facilitation and in direct impacts. The academic-practice divide played out strongly as a context for motivation to engage, in that 'what's in it for me' resulted in variable levels of engagement along a co-operation-collaboration continuum. Learning within and across collaborations was patchy depending on attention to evaluation. These collaborations did not emerge from a vacuum, and they needed time to learn and develop. Their life cycle started with their position on collaboration, knowledge and implementation. More impactful attempts at collective action in implementation might be determined by the deliberate alignment of a number of features, including foundational relationships, vision, values, structures and processes and views about the nature of the collaboration and implementation.

  6. Theory development for situational awareness in multi-casualty incidents.

    PubMed

    Busby, Steven; Witucki-Brown, Janet

    2011-09-01

    Nurses and other field-level providers will be increasingly called on to respond to both natural and manmade situations that involve multiple casualties. Situational Awareness (SA) is necessary for managing these complicated incidents. The purpose of the study was to create new knowledge by discovering the process of SA in multi-casualty incidents (MCI) and develop substantive theory with regard to field-level SA for use by emergency response nurses and other providers. A qualitative, grounded theory approach was used to develop the first substantive theory of SA for MCI. The sample included 15 emergency response providers from the Southeastern United States. One pilot interview was conducted to trial and refine the semi-structured interview questions. Following Institutional Review Board approval, data collection and analysis occurred from September 2008 through January 2009. The grounded theory methods of Corbin and Strauss (2008) and Charmaz (2006) informed this study. Transcribed participant interviews constituted the bulk of the data with additional data provided by field notes and extensive memos. Multiple levels of coding, theoretical sampling, and theoretical sensitivity were used to develop and relate concepts resulting in emerging theory. Multiple methods were used for maintaining the rigor of the study. The process of SA in MCI involves emergency responders establishing and maintaining control of dynamic, contextually-based situations. Against the backdrop of experience and other preparatory interval actions, responders handle various types of information and manage resources, roles, relationships and human emotion. The goal is to provide an environment of relative safety in which patient care is provided. SA in MCI is an on-going and iterative process with each piece of information informing new actions. Analysis culminated in the development of the Busby Theory of Situational Awareness in Multi-casualty Incidents. SA in MCI is a growing need at local, national and international levels. The newly developed theory provides a useful model for appreciating SA in the context of MCI thereby improving practice and providing a tool for education. The theory also provides a catalyst for further research refining and testing of the theory and for studying larger-scale incidents. Copyright © 2011 Emergency Nurses Association. Published by Mosby, Inc. All rights reserved.

  7. A developmental evaluation to enhance stakeholder engagement in a wide-scale interactive project disseminating quality improvement data: study protocol for a mixed-methods study.

    PubMed

    Laycock, Alison; Bailie, Jodie; Matthews, Veronica; Cunningham, Frances; Harvey, Gillian; Percival, Nikki; Bailie, Ross

    2017-07-13

    Bringing together continuous quality improvement (CQI) data from multiple health services offers opportunities to identify common improvement priorities and to develop interventions at various system levels to achieve large-scale improvement in care. An important principle of CQI is practitioner participation in interpreting data and planning evidence-based change. This study will contribute knowledge about engaging diverse stakeholders in collaborative and theoretically informed processes to identify and address priority evidence-practice gaps in care delivery. This paper describes a developmental evaluation to support and refine a novel interactive dissemination project using aggregated CQI data from Aboriginal and Torres Strait Islander primary healthcare centres in Australia. The project aims to effect multilevel system improvement in Aboriginal and Torres Strait Islander primary healthcare. Data will be gathered using document analysis, online surveys, interviews with participants and iterative analytical processes with the research team. These methods will enable real-time feedback to guide refinements to the design, reports, tools and processes as the interactive dissemination project is implemented. Qualitative data from interviews and surveys will be analysed and interpreted to provide in-depth understanding of factors that influence engagement and stakeholder perspectives about use of the aggregated data and generated improvement strategies. Sources of data will be triangulated to build up a comprehensive, contextualised perspective and integrated understanding of the project's development, implementation and findings. The Human Research Ethics Committee (HREC) of the Northern Territory Department of Health and Menzies School of Health Research (Project 2015-2329), the Central Australian HREC (Project 15-288) and the Charles Darwin University HREC (Project H15030) approved the study. Dissemination will include articles in peer-reviewed journals, policy and research briefs. Results will be presented at conferences and quality improvement network meetings. Researchers, clinicians, policymakers and managers developing evidence-based system and policy interventions should benefit from this research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  8. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of large system of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
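
    For orientation, a minimal Python finite-difference sketch of the same kind of problem (not the catalogued Matlab code, and without mesh refinement): a 5-point Laplacian on the unit square with Dirichlet conditions on three sides and a one-sided homogeneous Neumann condition on the fourth, solved with a sparse direct solve.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def poisson_mixed(n=50):
        """Solve -Laplace(u) = 1 on the unit square with u = 0 on x=0, x=1, y=0
        and du/dn = 0 (one-sided difference) on y=1."""
        h = 1.0 / (n - 1)
        A = sp.lil_matrix((n * n, n * n))
        b = np.ones(n * n)
        idx = lambda i, j: i * n + j                    # i indexes y, j indexes x
        for i in range(n):
            for j in range(n):
                k = idx(i, j)
                if j == 0 or j == n - 1 or i == 0:      # Dirichlet sides: u = 0
                    A[k, k], b[k] = 1.0, 0.0
                elif i == n - 1:                        # Neumann top: u_y = 0
                    A[k, k], A[k, idx(i - 1, j)], b[k] = 1.0, -1.0, 0.0
                else:                                   # interior 5-point stencil
                    A[k, k] = 4.0 / h**2
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        A[k, idx(i + di, j + dj)] = -1.0 / h**2
        return spsolve(A.tocsr(), b).reshape(n, n)

    u = poisson_mixed()
    print("peak value of u:", u.max())
    ```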

  9. Precipitation process in a Mg–Gd–Y alloy grain-refined by Al addition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Jichun; CAST Cooperative Research Centre, Department of Materials Engineering, Monash University, Victoria 3800; Zhu, Suming, E-mail: suming.zhu@monash.edu

    2014-02-15

    The precipitation process in Mg–10Gd–3Y (wt.%) alloy grain-refined by 0.8 wt.% Al addition has been investigated by transmission electron microscopy. The alloy was given a solution treatment at 520 °C for 6 h plus 550 °C for 7 h before ageing at 250 °C. Plate-shaped intermetallic particles with the 18R-type long-period stacking ordered structure were observed in the solution-treated state. Upon isothermal ageing at 250 °C, the following precipitation sequence was identified for the α-Mg supersaturated solution: β″ (D0₁₉) → β′ (bco) → β₁ (fcc) → β (fcc). The observed precipitation process and age hardening response in the Al grain-refined Mg–10Gd–3Y alloy are compared with those reported in the Zr grain-refined counterpart. - Highlights: • The precipitation process in Mg–10Gd–3Y–0.8Al (wt.%) alloy has been investigated. • Particles with the 18R-type LPSO structure were observed in the solution state. • Upon ageing at 250 °C, the precipitation sequence is: β″ → β′ → β₁ (fcc) → β. • The Al grain-refined alloy has a lower hardness than the Zr refined counterpart.

  10. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
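
    As a much-simplified, hedged illustration of predictor-corrector iteration (not the parallel twostep-by-twostep PIRKC schemes themselves): an explicit Euler predictor followed by a few fixed-point corrections of the implicit trapezoidal rule.

    ```python
    import numpy as np

    def pc_trapezoidal(f, y0, t0, t1, n_steps=100, n_corr=3):
        """Predictor-corrector integration of y' = f(t, y).

        Predictor: explicit Euler. Corrector: fixed-point iteration of the
        implicit trapezoidal rule, applied n_corr times per step. A generic
        PC scheme, far simpler than parallel-iterated RK methods.
        """
        h = (t1 - t0) / n_steps
        t, y = t0, np.asarray(y0, dtype=float)
        for _ in range(n_steps):
            f0 = f(t, y)
            y_new = y + h * f0                       # predictor (explicit Euler)
            for _ in range(n_corr):                  # corrector iterations
                y_new = y + 0.5 * h * (f0 + f(t + h, y_new))
            t, y = t + h, y_new
        return y

    # usage: y' = -2y, y(0) = 1; exact solution exp(-2) ≈ 0.1353 at t = 1
    print(pc_trapezoidal(lambda t, y: -2.0 * y, 1.0, 0.0, 1.0))
    ```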

  11. Aluminum-fly ash metal matrix composites for automotive parts. [Reports for October 1 to December 1998, and January 31 to March 31, 1999]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, David; Purgert, Robert; Rhudy, Richard

    1999-04-21

    Some highlights are: (1) Material development, process development, and part validation are occurring simultaneously on a fast track schedule. (2) Prior project activity has resulted in a program emphasis on three components--manifolds, mounting brackets, and motor mounts; and three casting techniques--squeeze casting, pressure die casting, and sand casting. (3) With the project focus, it appears possible to offer manifolds and mounting brackets for automotive qualification testing on a schedule in line with the PNGV Year 2004 goal. (4) Through an iterative process of fly ash treatment, MMC ingot preparation, foundry process refinement, and parts production, both foundries (Eck Industries and Thompson Aluminum Casting Company) are addressing the pre-competitive issues of: (a) Optimum castability with fly ash shapes and sizes; (b) Best mechanical properties derived from fly ash shapes and sizes; (c) Effective fly ash classification processes; (d) Mechanical properties resulting from various casting processes and fly ash formulations. Eck and TAC continued experiments with batch ingot provided by both Eck and the University of Wisconsin at Milwaukee. Castings were run that contained varying amounts of fly ash and different size fractions. Components were cast using cenosphere material to ascertain the effects of squeeze casting and to determine whether the pressure would break the cenospheres. Test parts are currently being machined into substandard test bars for mechanical testing. Also, the effect of heat treatments on ashalloy is being studied through comparison of two lots, one heat treated and one in the 'as cast' condition.

  12. An analytical and numerical study of Galton-Watson branching processes relevant to population dynamics

    NASA Astrophysics Data System (ADS)

    Jang, Sa-Han

    Galton-Watson branching processes of relevance to human population dynamics are the subject of this thesis. We begin with a historical survey of the invention of this model in the middle of the 19th century, for the purpose of modelling the extinction of unusual surnames in France and Britain. We then review the principal developments and refinements of this model, and their applications to a wide variety of problems in biology and physics. Next, we discuss in detail the case where the probability generating function for a Galton-Watson branching process is a geometric series, which can be summed in closed form to yield a fractional linear generating function that can be iterated indefinitely in closed form. We then describe the matrix method of Keyfitz and Tyree, and use it to determine how large a matrix must be chosen to model accurately a Galton-Watson branching process for a very large number of generations, of the order of hundreds or even thousands. Finally, we show that any attempt to explain the recent evidence for the existence thousands of generations ago of a 'mitochondrial Eve' and a 'Y-chromosomal Adam' in terms of the standard Galton-Watson branching process, or indeed any statistical model that assumes equality of probabilities of passing one's genes to one's descendants in later generations, is unlikely to be successful. We explain that such models take no account of the advantages that the descendants of the most successful individuals in earlier generations enjoy over their contemporaries, which must play a key role in human evolution.
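
    The closed-form-iterable case mentioned above can be sketched numerically (geometric offspring distribution with a made-up parameter): iterating the probability generating function at 0 converges to the extinction probability, which for the fractional linear pgf can be checked against the analytic fixed point.

    ```python
    def extinction_probability(pgf, n_iter=200):
        """Iterate q_{k+1} = pgf(q_k) from q_0 = 0; the limit is the extinction
        probability of the Galton-Watson process with that offspring pgf."""
        q = 0.0
        for _ in range(n_iter):
            q = pgf(q)
        return q

    # geometric offspring distribution P(k) = (1 - p) * p**k, mean p / (1 - p);
    # its pgf is fractional linear, so its iterates stay fractional linear
    p = 0.6
    geometric_pgf = lambda s: (1.0 - p) / (1.0 - p * s)
    q_numeric = extinction_probability(geometric_pgf)
    q_exact = (1.0 - p) / p          # smaller root of pgf(s) = s when mean > 1
    print(q_numeric, q_exact)        # both ≈ 0.667
    ```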

  13. 40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... eligible for the hardship provisions for small refiners: (a) A refiner with one or more refineries built... employees or crude capacity is due to operational changes at the refinery or a company sale or... refinery processing units. (e)(1) A small refiner approved under § 80.1340 that subsequently ceases...

  14. Multidisciplinary systems optimization by linear decomposition

    NASA Technical Reports Server (NTRS)

    Sobieski, J.

    1984-01-01

    In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.

  15. Integrating Low-Cost Rapid Usability Testing into Agile System Development of Healthcare IT: A Methodological Perspective.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    The development of more usable and effective healthcare information systems has become a critical issue. In the software industry methodologies such as agile and iterative development processes have emerged to lead to more effective and usable systems. These approaches highlight focusing on user needs and promoting iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and iterative processes for system design and re-design. However, the issue of how to effectively integrate usability testing methods into rapid and flexible agile design cycles has remained to be fully explored. In this paper we describe our application of an approach known as low-cost rapid usability testing as it has been applied within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.

  16. Negotiating Tensions Between Theory and Design in the Development of Mailings for People Recovering From Acute Coronary Syndrome

    PubMed Central

    Presseau, Justin; Nicholas Angl, Emily; Jokhio, Iffat; Schwalm, JD; Grimshaw, Jeremy M; Bosiak, Beth; Natarajan, Madhu K; Ivers, Noah M

    2017-01-01

    Background Taking all recommended secondary prevention cardiac medications and fully participating in a formal cardiac rehabilitation program significantly reduces mortality and morbidity in the year following a heart attack. However, many people who have had a heart attack stop taking some or all of their recommended medications prematurely and many do not complete a formal cardiac rehabilitation program. Objective The objective of our study was to develop a user-centered, theory-based, scalable intervention of printed educational materials to encourage and support people who have had a heart attack to use recommended secondary prevention cardiac treatments. Methods Prior to the design process, we conducted theory-based interviews and surveys with patients who had had a heart attack to identify key determinants of secondary prevention behaviors. Our interdisciplinary research team then partnered with a patient advisor and design firm to undertake an iterative, theory-informed, user-centered design process to operationalize techniques to address these determinants. User-centered design requires considering users’ needs, goals, strengths, limitations, context, and intuitive processes; designing prototypes adapted to users accordingly; observing how potential users respond to the prototype; and using those data to refine the design. To accomplish these tasks, we conducted user research to develop personas (archetypes of potential users), developed a preliminary prototype using behavior change theory to map behavior change techniques to identified determinants of medication adherence, and conducted 2 design cycles, testing materials via think-aloud and semistructured interviews with a total of 11 users (10 patients who had experienced a heart attack and 1 caregiver). We recruited participants at a single cardiac clinic using purposive sampling informed by our personas. We recorded sessions with users and extracted key themes from transcripts. We held interdisciplinary team discussions to interpret findings in the context of relevant theory-based evidence and iteratively adapted the intervention accordingly. Results Through our iterative development and testing, we identified 3 key tensions: (1) evidence from theory-based studies versus users’ feelings, (2) informative versus persuasive communication, and (3) logistical constraints for the intervention versus users’ desires or preferences. We addressed these by (1) identifying root causes for users’ feelings and addressing those to better incorporate theory- and evidence-based features, (2) accepting that our intervention was ethically justified in being persuasive, and (3) making changes to the intervention where possible, such as attempting to match imagery in the materials to patients’ self-images. Conclusions Theory-informed interventions must be operationalized in ways that fit with user needs. Tensions between users’ desires or preferences and health care system goals and constraints must be identified and addressed to the greatest extent possible. A cluster randomized controlled trial of the final intervention is currently underway. PMID:28249831

  17. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.

  18. Fetoscopic Open Neural Tube Defect Repair: Development and Refinement of a Two-Port, Carbon Dioxide Insufflation Technique.

    PubMed

    Belfort, Michael A; Whitehead, William E; Shamshirsaz, Alireza A; Bateni, Zhoobin H; Olutoye, Oluyinka O; Olutoye, Olutoyin A; Mann, David G; Espinoza, Jimmy; Williams, Erin; Lee, Timothy C; Keswani, Sundeep G; Ayres, Nancy; Cassady, Christopher I; Mehollin-Ray, Amy R; Sanz Cortes, Magdalena; Carreras, Elena; Peiro, Jose L; Ruano, Rodrigo; Cass, Darrell L

    2017-04-01

    To describe development of a two-port fetoscopic technique for spina bifida repair in the exteriorized, carbon dioxide-filled uterus and report early results of two cohorts of patients: the first 15 treated with an iterative technique and the latter 13 with a standardized technique. This was a retrospective cohort study (2014-2016). All patients met Management of Myelomeningocele Study selection criteria. The intraoperative approach was iterative in the first 15 patients and was then standardized. Obstetric, maternal, fetal, and early neonatal outcomes were compared. Standard parametric and nonparametric tests were used as appropriate. Data for 28 patients (22 endoscopic only, four hybrid, two abandoned) are reported, but only those with a complete fetoscopic repair were analyzed (iterative technique [n=10] compared with standardized technique [n=12]). Maternal demographics and gestational age (median [range]) at fetal surgery (25.4 [22.9-25.9] compared with 24.8 [24-25.6] weeks) were similar, but delivery occurred at 35.9 (26-39) weeks of gestation with the iterative technique compared with 39 (35.9-40) weeks of gestation with the standardized technique (P<.01). Duration of surgery (267 [107-434] compared with 246 [206-333] minutes), complication rates, preterm prelabor rupture of membranes rates (4/12 [33%] compared with 1/10 [10%]), and vaginal delivery rates (5/12 [42%] compared with 6/10 [60%]) were not statistically different in the iterative and standardized techniques, respectively. In 6 of 12 (50%) compared with 1 of 10 (10%), respectively (P=.07), there was leakage of cerebrospinal fluid from the repair site at birth. Management of Myelomeningocele Study criteria for hydrocephalus-death at discharge were met in 9 of 12 (75%) and 3 of 10 (30%), respectively, and 7 of 12 (58%) compared with 2 of 10 (20%) have been treated for hydrocephalus to date. These latter differences were not statistically significant. Fetoscopic open neural tube defect repair does not appear to increase maternal-fetal complications as compared with repair by hysterotomy, allows for vaginal delivery, and may reduce long-term maternal risks. ClinicalTrials.gov, https://clinicaltrials.gov, NCT02230072.

  19. Handbook of Petroleum Processing

    NASA Astrophysics Data System (ADS)

    Jones, David S. J.; Pujado, Peter P.

    This handbook describes and discusses the features that make up the petroleum refining industry. It begins with a description of the crude oils and their nature, and continues with the saleable products from the refining processes, with a review of the environmental impact. There is a complete overview of the processes that make up the refinery with a brief history of those processes.

  20. A quality-refinement process for medical imaging applications.

    PubMed

    Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I

    2009-01-01

    To introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and therefore more lightweight than traditional quality management processes. It focuses on quality criteria that are important at the given stage of the software life cycle and emphasizes tools that automate aspects of the process. To evaluate the additional effort that comes along with the process, it was applied, as an example, to eight prototypical software modules for medical image processing. The introduced process has been applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement yielded an average of 13 person days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the usage of automated process tools leads to a lightweight quality refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.

  1. Redesigning pictographs for patients with low health literacy and establishing preliminary steps for delivery via smart phones.

    PubMed

    Wolpin, Seth E; Nguyen, Juliet K; Parks, Jason J; Lam, Annie Y; Morisky, Donald E; Fernando, Lara; Chu, Adeline; Berry, Donna L

    2016-01-01

    Pictographs (or pictograms) have been widely utilized to convey medication-related messages and to address nonadherence among patients with low health literacy. Yet, patients do not always interpret the intended messages on commonly used pictographs correctly, and there are questions about how they may be delivered on mobile devices. Our objectives are to refine a set of pictographs to use as medication reminders and to establish preliminary steps for delivery via smart phones. Card sorting was used to identify existing pictographs that focus group members found "not easy" to understand. Participants then explored improvements to these pictographs while iterations were sketched in real-time by a graphic artist. Feedback was also solicited on how selected pictographs might be delivered via smart phones in a sequential reminder message. The study was conducted at a community learning center that provides literacy services to underserved populations in Seattle, WA. Participants aged 18 years and older who met the criteria for low health literacy using S-TOFHLA were recruited. Among the 45 participants screened for health literacy, 29 were eligible and consented to participate. Across four focus group sessions, participants examined 91 commonly used pictographs; 20 of these were ultimately refined to improve comprehensibility using participatory design approaches. All participants in the fifth focus group owned and used cell phones and provided feedback on preferred sequencing of pictographs to represent medication messages. Low-literacy adults found a substantial number of common medication label pictographs difficult to understand. Participative design processes helped generate new pictographs, as well as feedback on the sequencing of messages on cell phones, that may be evaluated in future research.

  2. A feature refinement approach for statistical interior CT reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong

    2016-07-01

    Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncated and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.
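
    As a rough, hedged illustration of the TV-minimization backbone only (smoothed-TV gradient descent on a made-up phantom; the statistical PWLS weighting and the feature-refinement step of the paper are not reproduced):

    ```python
    import numpy as np

    def tv_denoise(f, lam=0.15, n_iter=300, tau=0.1, eps=0.1):
        """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2).

        A smoothed-TV stand-in for the TV-minimization step of iterative CT
        reconstruction; no projection data or statistical weighting is modelled.
        """
        u = f.copy()
        for _ in range(n_iter):
            ux = np.roll(u, -1, axis=1) - u                  # forward differences
            uy = np.roll(u, -1, axis=0) - u
            mag = np.sqrt(ux**2 + uy**2 + eps**2)
            px, py = ux / mag, uy / mag
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= tau * ((u - f) - lam * div)                 # fidelity + TV gradient
        return u

    rng = np.random.default_rng(0)
    phantom = np.zeros((64, 64))
    phantom[20:44, 20:44] = 1.0
    noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)
    denoised = tv_denoise(noisy)
    print("RMS error before/after:",
          round(float(np.sqrt(((noisy - phantom) ** 2).mean())), 3),
          round(float(np.sqrt(((denoised - phantom) ** 2).mean())), 3))
    ```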

  3. A feature refinement approach for statistical interior CT reconstruction.

    PubMed

    Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong

    2016-07-21

    Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncated and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.

  4. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near 'one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
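
    To illustrate the dead-end elimination (DEE) idea underlying the algorithm, here is a minimal Goldstein-style single elimination pass over random self and pair energies (hypothetical data; Fitmunk's actual criterion, dense conformer libraries and electron-density terms are not reproduced):

    ```python
    import numpy as np

    def goldstein_pass(e_self, e_pair):
        """One Goldstein dead-end-elimination sweep.

        e_self[i, r]: self-energy of rotamer r at position i.
        e_pair[i, j, r, s]: pair energy of rotamer r at i with rotamer s at j.
        Rotamer r at i is eliminated if some competitor t always does better,
        even against the most favourable choices at the other positions.
        """
        n_pos, n_rot = e_self.shape
        alive = np.ones((n_pos, n_rot), dtype=bool)
        for i in range(n_pos):
            others = [j for j in range(n_pos) if j != i]
            for r in range(n_rot):
                for t in range(n_rot):
                    if t == r or not alive[i, t]:
                        continue
                    gap = e_self[i, r] - e_self[i, t]
                    gap += sum((e_pair[i, j, r] - e_pair[i, j, t]).min()
                               for j in others)
                    if gap > 0:            # r can never beat t: dead end
                        alive[i, r] = False
                        break
        return alive

    rng = np.random.default_rng(0)
    e_self = rng.normal(size=(6, 8))                         # 6 positions, 8 rotamers
    e_pair = rng.normal(size=(6, 6, 8, 8))
    e_pair = 0.5 * (e_pair + e_pair.transpose(1, 0, 3, 2))   # symmetric pair energies
    print("rotamers surviving one pass per position:",
          goldstein_pass(e_self, e_pair).sum(axis=1))
    ```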

  5. Border-oriented post-processing refinement on detected vehicle bounding box for ADAS

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Zhang, Zhaoning; Li, Minne; Li, Dongsheng

    2018-04-01

    We investigate a new approach for improving the localization accuracy of detected vehicles for object detection in advanced driver assistance systems (ADAS). Specifically, we implement a bounding box refinement as a post-processing step for state-of-the-art object detectors (Faster R-CNN, YOLOv2, etc.). The bounding box refinement is achieved by individually adjusting each border of the detected bounding box to its target location using a regression method. We use HOG features, which perform well for detecting vehicle edges, to train the regressor, and the regressor is independent of the CNN-based object detectors. Experimental results on the KITTI 2012 benchmark show improvements of up to 6% over the YOLOv2 and Faster R-CNN detectors at an IoU threshold of 0.8. Also, the proposed refinement framework is computationally light, allowing one bounding box to be processed within a few milliseconds on a CPU. Further, this refinement method can be added to any object detector, especially those that are fast but less accurate.
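
    The border-wise regression idea can be sketched as follows. This is an illustrative outline only: random vectors stand in for the HOG descriptors extracted around each border, a closed-form ridge regressor stands in for whatever regression model the authors trained, and the synthetic offsets replace KITTI ground truth.

      # Per-border bounding-box refinement by regression (illustrative placeholders).
      import numpy as np

      rng = np.random.default_rng(1)
      n_train, n_feat = 500, 36
      borders = ["left", "top", "right", "bottom"]

      def extract_border_features(box, border):
          # placeholder for a HOG descriptor of the image strip around one border
          return rng.normal(size=n_feat)

      # one ridge regressor per border, predicting the offset between the detected
      # border and the ground-truth border (targets here are synthetic)
      W = {}
      lam = 1.0
      for b in borders:
          Xb = np.array([extract_border_features(None, b) for _ in range(n_train)])
          tb = rng.normal(scale=3.0, size=n_train)          # synthetic target offsets in pixels
          W[b] = np.linalg.solve(Xb.T @ Xb + lam * np.eye(n_feat), Xb.T @ tb)

      def refine(box):
          # shift each border of (x1, y1, x2, y2) by its regressed offset
          offsets = [extract_border_features(box, b) @ W[b] for b in borders]
          x1, y1, x2, y2 = box
          return (x1 + offsets[0], y1 + offsets[1], x2 + offsets[2], y2 + offsets[3])

      print(refine((100.0, 80.0, 220.0, 180.0)))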

  6. Use of mechanical refining to improve the production of low-cost sugars from lignocellulosic biomass.

    PubMed

    Park, Junyeong; Jones, Brandon; Koo, Bonwook; Chen, Xiaowen; Tucker, Melvin; Yu, Ju-Hyun; Pschorn, Thomas; Venditti, Richard; Park, Sunkyu

    2016-01-01

    Mechanical refining is widely used in the pulp and paper industry to enhance the end-use properties of products by creating external fibrillation and internal delamination. This technology can be directly applied to biochemical conversion processes. By implementing mechanical refining technology, biomass recalcitrance to enzyme hydrolysis can be overcome and carbohydrate conversion can be enhanced with commercially attractive levels of enzymes. In addition, chemical and thermal pretreatment severity can be reduced to achieve the same level of carbohydrate conversion, which reduces pretreatment cost and results in lower concentrations of inhibitors. Refining is a versatile, commercially proven technology that can be operated at process flows of about 1500 dry tons of biomass per day. This paper reviews the utilization of mechanical refining in the pulp and paper industry and summarizes recent developments in applications for biochemical conversion, which could potentially make an overall biorefinery process more economically viable. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Study on the Removal of Gases in RH Refining Progress through Experiments Using Vacuum Induction Furnace

    NASA Astrophysics Data System (ADS)

    Niu, Deliang; Liu, Qingcai; Wang, Zhu; Ren, Shan; Lan, Yuanpei; Xu, Minren

    Gas removal is the major function of the RH degasser. To optimize RH refining practice at Chongqing Iron and Steel Co. Ltd., the degassing effect of the RH degasser at different degrees of vacuum was investigated using a vacuum induction furnace. In addition, the effect of processing time on the gas content dissolved in the molten steel was also studied. The results showed that the degree of vacuum is one of the important factors determining the degassing efficiency of the RH refining process. A high degree of vacuum aids gas removal, especially the removal of [H] dissolved in the molten steel. The processing time could be reduced from 25-30 min to 15 min while the gas content still met the requirements of RH refining.

  8. Techno-Economic Analysis of the Deacetylation and Disk Refining Process. Characterizing the Effect of Refining Energy and Enzyme Usage on Minimum Sugar Selling Price and Minimum Ethanol Selling Price

    DOE PAGES

    Chen, Xiaowen; Shekiro, Joseph; Pschorn, Thomas; ...

    2015-10-29

    A novel, highly efficient deacetylation and disk refining (DDR) process to liberate fermentable sugars from biomass was recently developed at the National Renewable Energy Laboratory (NREL). The DDR process consists of a mild, dilute alkaline deacetylation step followed by low-energy-consumption disk refining. The DDR corn stover substrates achieved high process sugar conversion yields, at low to modest enzyme loadings, and also produced high sugar concentration syrups at high initial insoluble solid loadings. The sugar syrups derived from corn stover are highly fermentable due to low concentrations of fermentation inhibitors. The objective of this work is to evaluate the economic feasibility of the DDR process through a techno-economic analysis (TEA). A large array of experiments designed using a response surface methodology was carried out to investigate the two major cost-driven operational parameters of the novel DDR process: refining energy and enzyme loadings. The boundary conditions for refining energy (128–468 kWh/ODMT), cellulase (Novozyme’s CTec3) loading (11.6–28.4 mg total protein/g of cellulose), and hemicellulase (Novozyme’s HTec3) loading (0–5 mg total protein/g of cellulose) were chosen to cover the most commercially practical operating conditions. The sugar and ethanol yields were modeled with good adequacy, showing a positive linear correlation between those yields and refining energy and enzyme loadings. The ethanol yields ranged from 77 to 89 gallons/ODMT of corn stover. The minimum sugar selling price (MSSP) ranged from $0.191 to $0.212 per lb of 50% concentrated monomeric sugars, while the minimum ethanol selling price (MESP) ranged from $2.24 to $2.54 per gallon of ethanol. The DDR process concept is evaluated for economic feasibility through TEA. The MSSP and MESP of the DDR process fall within a range similar to that found with the deacetylation/dilute acid pretreatment process modeled in NREL’s 2011 design report. The DDR process is a much simpler process that requires lower capital and maintenance costs when compared to conventional chemical pretreatments with pressure vessels. As a result, we feel the DDR process should be considered as an option for future biorefineries with great potential to be more cost-effective.

  10. Techno-economic analysis of the deacetylation and disk refining process: characterizing the effect of refining energy and enzyme usage on minimum sugar selling price and minimum ethanol selling price.

    PubMed

    Chen, Xiaowen; Shekiro, Joseph; Pschorn, Thomas; Sabourin, Marc; Tucker, Melvin P; Tao, Ling

    2015-01-01

    A novel, highly efficient deacetylation and disk refining (DDR) process to liberate fermentable sugars from biomass was recently developed at the National Renewable Energy Laboratory (NREL). The DDR process consists of a mild, dilute alkaline deacetylation step followed by low-energy-consumption disk refining. The DDR corn stover substrates achieved high process sugar conversion yields, at low to modest enzyme loadings, and also produced high sugar concentration syrups at high initial insoluble solid loadings. The sugar syrups derived from corn stover are highly fermentable due to low concentrations of fermentation inhibitors. The objective of this work is to evaluate the economic feasibility of the DDR process through a techno-economic analysis (TEA). A large array of experiments designed using a response surface methodology was carried out to investigate the two major cost-driven operational parameters of the novel DDR process: refining energy and enzyme loadings. The boundary conditions for refining energy (128-468 kWh/ODMT), cellulase (Novozyme's CTec3) loading (11.6-28.4 mg total protein/g of cellulose), and hemicellulase (Novozyme's HTec3) loading (0-5 mg total protein/g of cellulose) were chosen to cover the most commercially practical operating conditions. The sugar and ethanol yields were modeled with good adequacy, showing a positive linear correlation between those yields and refining energy and enzyme loadings. The ethanol yields ranged from 77 to 89 gallons/ODMT of corn stover. The minimum sugar selling price (MSSP) ranged from $0.191 to $0.212 per lb of 50% concentrated monomeric sugars, while the minimum ethanol selling price (MESP) ranged from $2.24 to $2.54 per gallon of ethanol. The DDR process concept is evaluated for economic feasibility through TEA. The MSSP and MESP of the DDR process fall within a range similar to that found with the deacetylation/dilute acid pretreatment process modeled in NREL's 2011 design report. The DDR process is a much simpler process that requires lower capital and maintenance costs when compared to conventional chemical pretreatments with pressure vessels. As a result, we feel the DDR process should be considered as an option for future biorefineries with great potential to be more cost-effective.

  11. Solidification Based Grain Refinement in Steels

    DTIC Science & Technology

    2009-07-24

    pearlite (See Figure 1). No evidence of the as-cast austenite dendrite structure was observed. The gating system for this sample resides at the thermal...possible nucleating compounds. 3) Extend grain refinement theory and solidification knowledge through experimental data. 4) Determine structure ...refine the structure of a casting through heat treatment. The energy required for grain refining via thermomechanical processes or heat treatment

  12. Automated Knowledge Discovery From Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William

    2007-01-01

    A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
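
    The active-learning loop described above can be sketched with a query-by-committee style ensemble of SVMs (scikit-learn). The toy "simulator", the hyperparameter grid and the disagreement criterion below are illustrative assumptions, not the actual SimLearn components.

      # Query-by-committee active learning with an SVM ensemble and a toy simulator.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)

      def simulator(x):
          # toy oracle standing in for an expensive simulation run:
          # does this input produce the output of interest?
          return int(np.sin(3 * x[0]) + 0.5 * x[1] > 0.8)

      # initial semi-random exploration of the 2-D input space (two points added
      # by hand so both outcomes are present in the first training set)
      X = np.vstack([rng.uniform(-2, 2, size=(30, 2)), [[1.0, 1.5], [-1.0, -1.0]]])
      y = np.array([simulator(x) for x in X])

      hyper = [dict(C=c, gamma=g) for c in (0.5, 5.0) for g in (0.2, 2.0)]   # ensemble members

      for rnd in range(10):
          ensemble = [SVC(**h).fit(X, y) for h in hyper]
          cand = rng.uniform(-2, 2, size=(400, 2))                 # candidate pool of possible trials
          votes = np.mean([m.predict(cand) for m in ensemble], axis=0)
          informative = np.argsort(np.abs(votes - 0.5))[:5]        # where the committee disagrees most
          picks = cand[informative]
          X = np.vstack([X, picks])                                # run the chosen trials through the "simulator"
          y = np.concatenate([y, [simulator(x) for x in picks]])

      print("labelled simulator runs:", len(y))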

  13. Intelligent multi-spectral IR image segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert

    2017-05-01

    This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects the local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.

  14. Profiles of electrified drops and bubbles

    NASA Technical Reports Server (NTRS)

    Basaran, O. A.; Scriven, L. E.

    1982-01-01

    Axisymmetric equilibrium shapes of conducting drops and bubbles, (1) pendant or sessile on one face of a circular parallel-plate capacitor or (2) free and surface-charged, are found by solving simultaneously the free boundary problem consisting of the augmented Young-Laplace equation for surface shape and the Laplace equation for electrostatic field, given the surface potential. The problem is nonlinear and the method is a finite element algorithm employing Newton iteration, a modified frontal solver, and triangular as well as quadrilateral tessellations of the domain exterior to the drop in order to facilitate refined analysis of sharply curved drop tips seen in experiments. The stability limit predicted by this computer-aided theoretical analysis agrees well with experiments.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Nathaniel Ray; Waltz, Jacob I.

    The level set method is commonly used to model dynamically evolving fronts and interfaces. In this work, we present new methods for evolving fronts with a specified velocity field or in the surface normal direction on 3D unstructured tetrahedral meshes with adaptive mesh refinement (AMR). The level set field is located at the nodes of the tetrahedral cells and is evolved using new upwind discretizations of Hamilton–Jacobi equations combined with a Runge–Kutta method for temporal integration. The level set field is periodically reinitialized to a signed distance function using an iterative approach with a new upwind gradient. We discuss the details of these level set and reinitialization methods. Results from a range of numerical test problems are presented.
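
    The reinitialization step referenced above can be illustrated on a simple structured 2-D grid; the paper's node-centred tetrahedral/AMR discretization is not reproduced. The sketch below iterates the usual pseudo-time reinitialization equation with a first-order Godunov upwind gradient until the field approximates a signed distance function.

      # Iterative reinitialization of a level-set field to a signed-distance function
      # on a structured 2-D grid with a first-order Godunov upwind gradient.
      import numpy as np

      n = 100
      h = 1.0 / (n - 1)
      x = np.linspace(0.0, 1.0, n)
      X, Y = np.meshgrid(x, x, indexing="ij")
      phi = (X - 0.5) ** 2 + (Y - 0.5) ** 2 - 0.25 ** 2     # a circle, but not a distance function

      sign = phi / np.sqrt(phi ** 2 + h ** 2)               # smoothed sign of the initial field
      dt = 0.5 * h
      for it in range(200):
          # one-sided differences in each direction
          dxm = (phi - np.roll(phi, 1, axis=0)) / h
          dxp = (np.roll(phi, -1, axis=0) - phi) / h
          dym = (phi - np.roll(phi, 1, axis=1)) / h
          dyp = (np.roll(phi, -1, axis=1) - phi) / h
          # Godunov upwind approximation of |grad phi|
          gp = np.sqrt(np.maximum(np.maximum(dxm, 0) ** 2, np.minimum(dxp, 0) ** 2) +
                       np.maximum(np.maximum(dym, 0) ** 2, np.minimum(dyp, 0) ** 2))
          gm = np.sqrt(np.maximum(np.minimum(dxm, 0) ** 2, np.maximum(dxp, 0) ** 2) +
                       np.maximum(np.minimum(dym, 0) ** 2, np.maximum(dyp, 0) ** 2))
          grad = np.where(sign > 0, gp, gm)
          phi = phi - dt * sign * (grad - 1.0)              # pseudo-time step toward |grad phi| = 1

      gx, gy = np.gradient(phi, h)
      print("mean |grad phi| after reinitialization:", float(np.mean(np.sqrt(gx ** 2 + gy ** 2))))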

  16. Recursive inverse factorization.

    PubMed

    Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N

    2008-03-14

    A recursive algorithm for the inverse factorization S(-1)=ZZ(*) of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A.M.N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
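
    A minimal, dense-matrix illustration of the iterative-refinement ingredient is given below: given an approximate inverse factor Z of a symmetric positive definite S with Z^T S Z = I - delta, the low-order update Z <- Z(I + delta/2) drives delta toward zero. The recursive, modularity-based decomposition and the sparse linear-scaling machinery of the paper are not reproduced; the test matrix and starting guess are arbitrary.

      # Low-order iterative refinement of an inverse factor Z so that Z^T S Z -> I.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 50
      B = rng.normal(size=(n, n))
      S = B @ B.T + n * np.eye(n)                # well-conditioned symmetric positive definite "overlap" matrix

      Z = np.eye(n) / np.sqrt(np.trace(S) / n)   # crude starting guess (scaled identity)
      for it in range(30):
          delta = np.eye(n) - Z.T @ S @ Z        # error matrix: how far Z^T S Z is from the identity
          Z = Z @ (np.eye(n) + 0.5 * delta)      # first-order refinement step
          if np.linalg.norm(delta) < 1e-12:
              break

      print("final ||I - Z^T S Z|| =", np.linalg.norm(np.eye(n) - Z.T @ S @ Z))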

  17. Region of interest processing for iterative reconstruction in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.

    2015-03-01

    The recent advancements in the graphics card technology raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. But for some clinical procedures, like cardiac CT, only a ROI is needed for diagnostics. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computation effort that results in a slower reconstruction than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. Especially improvements for the equalization between regions inside and outside of a ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with other previous proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.

  18. A 2D systems approach to iterative learning control for discrete linear processes with zero Markov parameters

    NASA Astrophysics Data System (ADS)

    Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.

    2011-07-01

    In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
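
    The trial-to-trial structure of iterative learning control for a plant whose first Markov parameter is zero can be sketched as below. The LMI-based 2D/repetitive-process design of the paper is not reproduced; the plant, reference and learning gain are illustrative choices, with the error simply advanced by the relative degree between trials.

      # Phase-lead ILC sketch for a discrete plant with CB = 0 (uniform rank two).
      import numpy as np

      A = np.array([[0.5, 1.0], [0.0, 0.5]])
      B = np.array([[0.0], [1.0]])
      C = np.array([[1.0, 0.0]])                # CB = 0, CAB = 1: first Markov parameter is zero

      N = 60
      ref = np.sin(2 * np.pi * np.arange(N) / N)           # reference to be tracked on every trial

      def run_trial(u):
          # simulate one trial (pass) of the plant from a zero initial state
          x = np.zeros((2, 1))
          y = np.zeros(N)
          for k in range(N):
              y[k] = (C @ x).item()
              x = A @ x + B * u[k]
          return y

      u = np.zeros(N)
      gamma = 0.3                               # learning gain, small enough for trial-to-trial contraction
      for trial in range(30):
          e = ref - run_trial(u)
          u[:-2] += gamma * e[2:]               # advance the error by the relative degree (two samples)

      print("tracking error norm after 30 trials:", float(np.linalg.norm(ref - run_trial(u))))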

  19. The purification process on scintillator material (SrI{sub 2}: Eu) by zone-refinement technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugam, Raja; Daniel, D. Joseph; Ramasamy, P., E-mail: ramasamyp@ssn.edu.in

    The thermal properties of europium-doped strontium iodide were analyzed through thermogravimetric (TG) and differential thermal analyses (DTA). The melting point of europium-doped strontium iodide is around 531°C. Hydrated and oxyhalide impurities were found below the melting temperature. In order to remove these impurities, purification was carried out by the zone-refinement technique. The effectiveness of the zone-refining purification was also observed through the segregation of impurities.

  20. New Insights on Degumming and Bleaching Process Parameters on The Formation of 3-Monochloropropane-1,2-Diol Esters and Glycidyl Esters in Refined, Bleached, Deodorized Palm Oil.

    PubMed

    Sim, Biow Ing; Muhamad, Halimah; Lai, Oi Ming; Abas, Faridah; Yeoh, Chee Beng; Nehdi, Imededdine Arbi; Khor, Yih Phing; Tan, Chin Ping

    2018-04-01

    This paper examines the interactions of the degumming and bleaching processes as well as their influence on the formation of 3-monochloropropane-1,2-diol esters (3-MCPDE) and glycidyl esters in refined, bleached and deodorized palm oil by using a D-optimal design. Water degumming effectively reduced the 3-MCPDE content by up to 50%. Acid-activated bleaching earth had a greater effect on 3-MCPDE reduction than natural bleaching earth and acid-activated bleaching earth with neutral pH, indicating that the performance and adsorption capacity of the bleaching earth, rather than its acidity profile, are the predominant factors in the removal of these esters. The combination of a high dosage of phosphoric acid during degumming with the use of acid-activated bleaching earth eliminated almost all glycidyl esters during refining. In addition, the effect of crude palm oil quality was assessed, and it was found that the quality of the crude palm oil determines the level of 3-MCPDE and glycidyl ester formation during the high-temperature deodorization step of the physical refining process. Poor-quality crude palm oil has a strong impact on 3-MCPDE and glycidyl ester formation due to the intrinsic components present within it. The findings are useful to the palm oil refining industry in choosing raw materials as input to the refining process.

  1. Model of Silicon Refining During Tapping: Removal of Ca, Al, and Other Selected Element Groups

    NASA Astrophysics Data System (ADS)

    Olsen, Jan Erik; Kero, Ida T.; Engh, Thorvald A.; Tranell, Gabriella

    2017-04-01

    A mathematical model for industrial refining of silicon alloys has been developed for the so-called oxidative ladle refining process. It is a lumped (zero-dimensional) model, based on the mass balances of metal, slag, and gas in the ladle, developed to operate with relatively short computational times for the sake of industrial relevance. The model accounts for a semi-continuous process which includes both the tapping and post-tapping refining stages. It predicts the concentrations of Ca, Al, and trace elements, most notably the alkaline metals, alkaline earth metal, and rare earth metals. The predictive power of the model depends on the quality of the model coefficients, the kinetic coefficient, τ, and the equilibrium partition coefficient, L for a given element. A sensitivity analysis indicates that the model results are most sensitive to L. The model has been compared to industrial measurement data and found to be able to qualitatively, and to some extent quantitatively, predict the data. The model is very well suited for alkaline and alkaline earth metals which respond relatively fast to the refining process. The model is less well suited for elements such as the lanthanides and Al, which are refined more slowly. A major challenge for the prediction of the behavior of the rare earth metals is that reliable thermodynamic data for true equilibrium conditions relevant to the industrial process is not typically available in literature.
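
    The lumped (zero-dimensional) balance idea can be illustrated with a toy relaxation model for a single element: the metal concentration relaxes toward the slag-metal equilibrium set by the partition coefficient L at a rate set by the kinetic coefficient tau, while tapping adds fresh metal. All parameter values below are made up, and the model is far simpler than the one in the paper.

      # Toy lumped ladle-refining balance for one element (illustrative values only).
      import numpy as np

      tau  = 300.0      # kinetic coefficient, s (hypothetical)
      L    = 50.0       # equilibrium partition coefficient C_slag / C_metal (hypothetical)
      C_in = 0.20       # wt% of the element in the tapped metal (hypothetical)
      mdot = 10.0       # tapping rate, kg/s, active for the first 1500 s (hypothetical)

      dt, t_end = 1.0, 3600.0
      M_metal, M_slag = 1000.0, 100.0          # initial masses in the ladle, kg
      C_metal, C_slag = C_in, 0.0              # element concentration in metal and slag, wt%

      for step in range(int(t_end / dt)):
          t = step * dt
          # mass transfer toward equilibrium: positive flux = element moving into the slag
          flux = (M_metal / tau) * (C_metal - C_slag / L)
          C_metal += dt * (-flux / M_metal)
          C_slag  += dt * (+flux / M_slag)
          if t < 1500.0:                       # tapping stage: fresh metal refreshes the ladle
              C_metal = (M_metal * C_metal + mdot * dt * C_in) / (M_metal + mdot * dt)
              M_metal += mdot * dt

      print("final concentration in metal: %.4f wt%%" % C_metal)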

  2. 'Isotopo' a database application for facile analysis and management of mass isotopomer data.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Huber, Claudia; Hensel, Michael; Schomburg, Dietmar; Münch, Richard; Eylert, Eva; Eisenreich, Wolfgang; Dandekar, Thomas

    2014-01-01

    The composition of stable-isotope labelled isotopologues/isotopomers in metabolic products can be measured by mass spectrometry and supports the analysis of pathways and fluxes. As a prerequisite, the original mass spectra have to be processed, managed and stored to rapidly calculate, analyse and compare isotopomer enrichments to study, for instance, bacterial metabolism in infection. For such applications, we provide here the database application 'Isotopo'. This software package includes (i) a database to store and process isotopomer data, (ii) a parser to upload and translate different data formats for such data and (iii) an improved application to process and convert signal intensities from mass spectra of (13)C-labelled metabolites such as tertbutyldimethylsilyl-derivatives of amino acids. Relative mass intensities and isotopomer distributions are calculated applying a partial least square method with iterative refinement for high precision data. The data output includes formats such as graphs for overall enrichments in amino acids. The package is user-friendly for easy and robust data management of multiple experiments. The 'Isotopo' software is available at the following web link (section Download): http://spp1316.uni-wuerzburg.de/bioinformatics/isotopo/. The package contains three additional files: software executable setup (installer), one data set file (discussed in this article) and one excel file (which can be used to convert data from excel to '.iso' format). The 'Isotopo' software is compatible only with the Microsoft Windows operating system. http://spp1316.uni-wuerzburg.de/bioinformatics/isotopo/. © The Author(s) 2014. Published by Oxford University Press.

  3. The segmented non-uniform dielectric module design for uniformity control of plasma profile in a capacitively coupled plasma chamber

    NASA Astrophysics Data System (ADS)

    Xia, Huanxiong; Xiang, Dong; Yang, Wang; Mou, Peng

    2014-12-01

    The low-temperature plasma technique is critical to IC manufacturing processes such as etching and thin-film deposition, and plasma uniformity greatly impacts process quality, so designing for plasma uniformity control is very important but difficult. It is hard to finely and flexibly regulate the spatial distribution of the plasma in the chamber by controlling the discharge parameters or modifying the structure in zero-dimensional space; such adjustments can only shift the overall level of the process factors. In view of this problem, a segmented non-uniform dielectric module design is proposed for regulating the plasma profile in a CCP chamber. The solution achieves refined and flexible regulation of the radial plasma profile by configuring the relative permittivity and the width of each segment. To solve this design problem, a novel simulation-based auto-design approach is proposed, which automatically designs the positional sequence of multiple independent variables so that the output target profile of the parameterized simulation model approximates the one that the user presets. This approach employs the idea of a quasi-closed-loop control system and works in an iterative mode. It starts from initial values of the design variable sequences and predicts better sequences via feedback of the profile error between the output target profile and the expected one. It stops only when the profile error is narrowed to within the preset tolerance.
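
    The quasi-closed-loop iteration can be sketched as follows; a toy analytic function stands in for the parameterized plasma simulation, and a simple sensitivity-scaled error-feedback update replaces the tool's predictor.

      # Quasi-closed-loop, simulation-driven design iteration (toy stand-in model).
      import numpy as np

      n_seg = 8
      r = np.linspace(0.1, 1.0, n_seg)             # radial positions of the dielectric segments
      target = np.full(n_seg, 1.0)                 # desired (flat) radial plasma profile

      def simulate(eps):
          # toy stand-in for the parameterized CCP simulation: maps per-segment
          # relative permittivities to a radial plasma profile (purely hypothetical)
          return eps / (1.0 + 2.0 * r ** 2)

      eps = np.full(n_seg, 2.0)                    # initial values of the design variable sequence
      tol, gain = 1e-4, 0.8
      for it in range(100):
          err = target - simulate(eps)             # feedback: profile error against the preset target
          if np.max(np.abs(err)) < tol:
              break
          # predict a better sequence from the error, scaled by an approximate local
          # sensitivity (the real tool derives this prediction from previous simulations)
          eps = eps + gain * err * (1.0 + 2.0 * r ** 2)

      print("iterations:", it, " max profile error:", float(np.max(np.abs(target - simulate(eps)))))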

  4. Catalytic coal liquefaction with treated solvent and SRC recycle

    DOEpatents

    Garg, Diwakar; Givens, Edwin N.; Schweighardt, Frank K.

    1986-01-01

    A process for the solvent refining of coal to distillable, pentane soluble products using a dephenolated and denitrogenated recycle solvent and a recycled, pentane-insoluble, solvent-refined coal material, which process provides enhanced oil-make in the conversion of coal.

  5. Muon reconstruction in the Daya Bay water pools

    DOE PAGES

    Hackenburg, R. W.

    2017-08-12

    Muon reconstruction in the Daya Bay water pools would serve to verify the simulated muon fluxes and offer the possibility of studying cosmic muons in general. This reconstruction is, however, complicated by many optical obstacles and the small coverage of photomultiplier tubes (PMTs) as compared to other large water Cherenkov detectors. The PMTs’ timing information is useful only in the case of direct, unreflected Cherenkov light. This requires PMTs to be added and removed as an hypothesized muon trajectory is iteratively improved, to account for the changing effects of obstacles and direction of light. Therefore, muon reconstruction in the Daya Bay water pools does not lend itself to a general fitting procedure employing smoothly varying functions with continuous derivatives. Here, we describe an algorithm which overcomes these complications. It employs the method of Least Mean Squares to determine an hypothesized trajectory from the PMTs’ charge-weighted positions. This initially hypothesized trajectory is then iteratively refined using the PMTs’ timing information. Reconstructions with simulated data reproduce the simulated trajectory to within about 5° in direction and about 45 cm in position at the pool surface, with a bias that tends to pull tracks away from the vertical by about 3°.
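
    The seed-track step described above (before the timing-based refinement) amounts to a weighted principal-axis fit through the charge-weighted PMT positions, as sketched below with made-up hit data; the timing refinement and the add/remove handling of obstructed PMTs are not reproduced.

      # Seed track from charge-weighted PMT positions (weighted principal-axis fit).
      import numpy as np

      rng = np.random.default_rng(4)
      true_dir = np.array([0.3, -0.2, -1.0]); true_dir /= np.linalg.norm(true_dir)
      true_point = np.array([1.0, 2.0, 5.0])

      s = rng.uniform(-4.0, 4.0, size=40)                       # positions along the track
      pmt_pos = true_point + s[:, None] * true_dir + rng.normal(scale=0.5, size=(40, 3))
      charge = rng.uniform(1.0, 10.0, size=40)                  # collected charge per PMT (made up)

      # charge-weighted centroid and covariance; the dominant eigenvector gives the
      # hypothesized track direction, the centroid a point on the track
      w = charge / charge.sum()
      centroid = (w[:, None] * pmt_pos).sum(axis=0)
      d = pmt_pos - centroid
      cov = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0)
      eigval, eigvec = np.linalg.eigh(cov)
      direction = eigvec[:, -1]                                 # eigenvector of the largest eigenvalue
      if direction[2] > 0:                                      # orient the track downward (cosmic muons)
          direction = -direction

      angle = np.degrees(np.arccos(min(1.0, abs(float(direction @ true_dir)))))
      print("seed point:", centroid, " angle to true direction: %.1f deg" % angle)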

  6. Development of a Tool to Recreate the Mars Science Laboratory Aerothermal Environment

    NASA Technical Reports Server (NTRS)

    Beerman, A. F.; Lewis, M. J.; Santos, J. A.; White, T. R.

    2010-01-01

    The Mars Science Laboratory will enter the Martian atmosphere in 2012 with multiple char depth sensors and in-depth thermocouples in its heatshield. The aerothermal environment experienced by MSL may be computationally recreated using the data from the sensors and a material response program, such as the Fully Implicit Ablation and Thermal (FIAT) response program, through the matching of the char depth and thermocouple predictions of the material response program to the sensor data. A tool, CHanging Inputs from the Environment of FIAT (CHIEF), was developed to iteratively change different environmental conditions such that FIAT predictions match within certain criteria applied to an external data set. The computational environment is changed by iterating on the enthalpy, pressure, or heat transfer coefficient at certain times in the trajectory. CHIEF was initially compared against arc-jet test data from the development of the MSL heatshield and then against simulated sensor data derived from design trajectories for MSL. CHIEF was able to match char depth and in-depth thermocouple temperatures within the bounds placed upon it for these cases. Further refinement of CHIEF to compare multiple time points and assign convergence criteria may improve accuracy.

  7. Muon reconstruction in the Daya Bay water pools

    NASA Astrophysics Data System (ADS)

    Hackenburg, R. W.

    2017-11-01

    Muon reconstruction in the Daya Bay water pools would serve to verify the simulated muon fluxes and offer the possibility of studying cosmic muons in general. This reconstruction is, however, complicated by many optical obstacles and the small coverage of photomultiplier tubes (PMTs) as compared to other large water Cherenkov detectors. The PMTs' timing information is useful only in the case of direct, unreflected Cherenkov light. This requires PMTs to be added and removed as an hypothesized muon trajectory is iteratively improved, to account for the changing effects of obstacles and direction of light. Therefore, muon reconstruction in the Daya Bay water pools does not lend itself to a general fitting procedure employing smoothly varying functions with continuous derivatives. Here, an algorithm is described which overcomes these complications. It employs the method of Least Mean Squares to determine an hypothesized trajectory from the PMTs' charge-weighted positions. This initially hypothesized trajectory is then iteratively refined using the PMTs' timing information. Reconstructions with simulated data reproduce the simulated trajectory to within about 5°in direction and about 45 cm in position at the pool surface, with a bias that tends to pull tracks away from the vertical by about 3°.

  8. Systematic Domain Swaps of Iterative, Nonreducing Polyketide Synthases Provide a Mechanistic Understanding and Rationale For Catalytic Reprogramming

    PubMed Central

    2015-01-01

    Iterative, nonreducing polyketide synthases (NR-PKSs) are multidomain enzymes responsible for the construction of the core architecture of aromatic polyketide natural products in fungi. Engineering these enzymes for the production of non-native metabolites has been a long-standing goal. We conducted a systematic survey of in vitro “domain swapped” NR-PKSs using an enzyme deconstruction approach. The NR-PKSs were dissected into mono- to multidomain fragments and recombined as noncognate pairs in vitro, reconstituting enzymatic activity. The enzymes used in this study produce aromatic polyketides that are representative of the four main chemical features set by the individual NR-PKS: starter unit selection, chain-length control, cyclization register control, and product release mechanism. We found that boundary conditions limit successful chemistry, which are dependent on a set of underlying enzymatic mechanisms. Crucial for successful redirection of catalysis, the rate of productive chemistry must outpace the rate of spontaneous derailment and thioesterase-mediated editing. Additionally, all of the domains in a noncognate system must interact efficiently if chemical redirection is to proceed. These observations refine and further substantiate current understanding of the mechanisms governing NR-PKS catalysis. PMID:24815013

  9. Impact of extrinsic factors on fine motor performance of children attending day care.

    PubMed

    Corsi, Carolina; Santos, Mariana Martins Dos; Marques, Luísa de Andrade Perez; Rocha, Nelci Adriana Cicuto Ferreira

    2016-12-01

    To assess the impact of extrinsic factors on the fine motor performance of two-year-old children, 73 children attending public day care centers and 21 attending private day care centers were assessed. The day care environment was evaluated using the Infant/Toddler Environment Rating Scale - Revised Edition (ITERS-R); fine motor performance was assessed with the Bayley Scales of Infant and Toddler Development - III (BSITD-III); and socioeconomic data, maternal education and the time of starting at the day care center were collected through interviews. Spearman's correlation coefficient was calculated to assess the associations between the studied variables. Time at the day care center was positively correlated with the children's performance in some fine motor tasks of the BSITD-III, showing that the activities developed in day care centers were important for the refinement of specific motor skills, while overall fine motor performance on the scale was associated with maternal education and the ITERS-R sub-item "language and understanding". Extrinsic factors such as higher maternal education and the quality of day care centers are associated with fine motor performance in children attending day care. Copyright © 2016 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.

  10. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

    The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the noisy measured responses are refined by a fractional-order accumulation filter based on a dynamic data-refresh strategy. Second, a transfer function generated from the filtered measured responses is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to mitigate the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the practical advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to other configurations of the FARF method and to traditional regularization methods for dynamic force reconstruction.
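
    Two of the ingredients combined above can be sketched as follows: a fractional-order accumulation filter applied to the measured response, and a Tikhonov-regularized inversion whose regularization parameter is selected by GCV. The exact FARF transfer-function construction and the iterated Landweber filter factors are not reproduced, and all signals and parameters below are illustrative.

      # Fractional-order accumulation pre-filter + Tikhonov inversion with GCV.
      import numpy as np
      from math import gamma as Gamma

      def frac_accumulate(x, r):
          # r-order accumulative operation; weights are generalized binomial coefficients
          n = len(x)
          w = np.array([Gamma(k + r) / (Gamma(k + 1) * Gamma(r)) for k in range(n)])
          return np.array([np.dot(w[:k + 1][::-1], x[:k + 1]) for k in range(n)])

      rng = np.random.default_rng(5)
      n = 100
      t = np.arange(n) * 0.01
      h = np.exp(-5 * t) * np.sin(40 * t)                       # hypothetical impulse response
      H = 0.01 * np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
      f_true = np.exp(-((t - 0.5) ** 2) / 0.005)                # "true" force to reconstruct
      y = H @ f_true + 0.02 * rng.normal(size=n)                # noisy measured response

      r = 0.1
      y_f = frac_accumulate(y, r)                               # pre-filtered response
      H_f = np.apply_along_axis(lambda col: frac_accumulate(col, r), 0, H)   # filter the model consistently

      U, s, Vt = np.linalg.svd(H_f)
      b = U.T @ y_f

      def gcv(lam):
          filt = s ** 2 / (s ** 2 + lam ** 2)
          return n * np.sum(((1 - filt) * b) ** 2) / (n - filt.sum()) ** 2

      lam = min(np.logspace(-6, 0, 40), key=gcv)                # GCV-selected regularization parameter
      f_rec = Vt.T @ (s / (s ** 2 + lam ** 2) * b)              # Tikhonov solution

      print("lambda = %.2e, relative error = %.3f"
            % (lam, np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)))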

  11. A Real-Time Data Acquisition and Processing Framework Based on FlexRIO FPGA and ITER Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.

    2016-06-01

    Real-time measurement and control of the plasma are critical for advanced tokamak operation and require high-speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At the J-TEXT tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time system and FPGA-based FlexRIO devices. With FlexRIO devices, data can be processed by the FPGA in real time before being passed to the CPU. The software elements are based on a real-time framework that runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuration, which keeps the framework consistent with ITER FPSC standard technology. With this framework, any FlexRIO FPGA data acquisition and processing program can be configured as part of an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application extracts phase-shift information from the intermediate-frequency signal produced by the polarimeter-interferometer diagnostic and calculates the plasma density profile in real time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.

  12. Rater variables associated with ITER ratings.

    PubMed

    Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin

    2013-10-01

    Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4 % of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log transformed time taken to complete the ITER [β = -0.06, 95 % confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.

  13. eNOSHA, a Free, Open and Flexible Learning Object Repository--An Iterative Development Process for Global User-Friendliness

    ERIC Educational Resources Information Center

    Mozelius, Peter; Hettiarachchi, Enosha

    2012-01-01

    This paper describes the iterative development process of a Learning Object Repository (LOR), named eNOSHA. Discussions on a project for a LOR started at the e-Learning Centre (eLC) at The University of Colombo, School of Computing (UCSC) in 2007. The eLC has during the last decade been developing learning content for a nationwide e-learning…

  14. Apex Reference Manual 3.0 Beta

    NASA Technical Reports Server (NTRS)

    Freed, Michael A.

    2005-01-01

    Apex is a toolkit for constructing software that behaves intelligently and responsively in demanding task environments. Reflecting its origin at NASA where Apex continues to be developed, current applications include: a) Providing autonomous mission management and tactical control capabilities for unmanned aerial vehicles including an autonomous surveillance helicopter and a simulation prototype of an unmanned fixed-wing aircraft to be used for wildfire mapping; b) Simulating human air traffic controllers, pilots and astronauts to help predict how people might respond to changes in equipment or procedures; and c) Predicting the precise duration and sequence of routine human behaviors based on a human-computer interaction engineering technique called CPM-GOMS. Among Apex s components are a set of implemented reasoning services, such as those for reactive planning and temporal pattern recognition; a software architecture that embeds and integrates these services and allows additional reasoning elements to be added as extensions; a formal language for specifying agent knowledge; a simulation environment to facilitate prototyping and analysis; and Sherpa, a set of tools for visualizing autonomy logic and runtime behavior. In combination, these are meant to provide a flexible and usable framework for creating, testing, and deploying intelligent agent software. Overall, our goal in developing Apex is to lower economic barriers to developing intelligent software agents. New ideas about how to extend or modify the system are evaluated in terms of their impact in reducing the time, expertise, and inventiveness required to build and maintain applications. For example, potential enhancements to the AI reasoning capabilities in the system are reviewed not only for usefulness and distinctiveness, but also for their impact on the readability and general usability of Apex s behavior representation language (PDL) and on the transparency of resulting behavior. A second central part of our approach is to iteratively refine Apex based on lessons learned from as diverse a set of applications as possible. Many applications have been developed by users outside the core development team including engineers, researchers, and students. Usability is thus a central concern for every aspect of Apex visible to a user, including PDL, Sherpa, the Apex installation process, APIs, and user documentation. Apex users vary in their areas of expertise and in their familiarity with autonomy technology. Focusing on usability, a development philosophy summarized by the project motto "Usable Autonomy," has been important part of enabling diverse users to employ Apex successfully and to provide feedback needed to guide iterative, user-centered refinement.

  15. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
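
    The genetic-algorithm ordering idea can be sketched with a permutation-encoded GA that minimizes the number of feedback couplings (above-diagonal entries) of a design structure matrix. DeMAID's actual cost/time/iteration objective and knowledge-based rules are not reproduced, and the coupling matrix below is random.

      # Permutation-encoded GA that orders coupled processes to minimize feedbacks.
      import random
      import numpy as np

      rng = np.random.default_rng(6)
      random.seed(6)
      n = 12
      dsm = (rng.random((n, n)) < 0.25).astype(int)        # hypothetical coupling (design structure) matrix
      np.fill_diagonal(dsm, 0)

      def feedbacks(order):
          # number of couplings that point "backwards" (above the diagonal) for this ordering
          m = dsm[np.ix_(order, order)]
          return int(np.triu(m, 1).sum())

      def ordered_crossover(p1, p2):
          # classic OX crossover for permutations
          a, b = sorted(random.sample(range(n), 2))
          child = [None] * n
          child[a:b] = p1[a:b]
          fill = [g for g in p2 if g not in child[a:b]]
          idx = 0
          for i in range(n):
              if child[i] is None:
                  child[i] = fill[idx]; idx += 1
          return child

      pop = [random.sample(range(n), n) for _ in range(40)]
      for gen in range(200):
          pop.sort(key=feedbacks)
          elite = pop[:10]
          children = []
          while len(children) < 30:
              c = ordered_crossover(random.choice(elite), random.choice(elite))
              if random.random() < 0.3:                    # swap mutation
                  i, j = random.sample(range(n), 2)
                  c[i], c[j] = c[j], c[i]
              children.append(c)
          pop = elite + children

      best = min(pop, key=feedbacks)
      print("best ordering:", best, " feedback couplings:", feedbacks(best))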

  16. Evolutionary Software Development (Developpement Evolutionnaire de Logiciels)

    DTIC Science & Technology

    2008-08-01

    development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO /IEC... 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as

  18. Quickprop method to speed up learning process of Artificial Neural Network in money's nominal value recognition case

    NASA Astrophysics Data System (ADS)

    Swastika, Windra

    2017-03-01

    A system for recognizing the nominal value of money (banknotes) has been developed using an Artificial Neural Network (ANN). An ANN with back propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the numbers of iterations, weights and samples are large. One way to speed up the learning process is the Quickprop method. The Quickprop method is based on Newton's method and is able to speed up learning by assuming that the error (E) is a parabolic function of each weight adjustment; the goal is to minimize the error by driving its gradient (E') to zero. In our system, we use 5 nominal values, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each nominal value was scanned and digitally processed, giving 40 patterns to be used as the training set for the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by two factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy of predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the back propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the back propagation method required 2000 iterations. The prediction accuracy of both methods is higher than 90%.
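
    The Quickprop update itself can be sketched on a toy least-squares problem: each weight takes the secant ("parabola") step computed from the current and previous gradients, limited by a maximum growth factor, with plain gradient descent as a fallback. This is the generic rule, not the banknote-recognition network from the paper.

      # Quickprop-style weight update on a toy least-squares problem.
      import numpy as np

      rng = np.random.default_rng(7)
      X = rng.normal(size=(200, 8))
      w_true = rng.normal(size=8)
      y = X @ w_true + 0.05 * rng.normal(size=200)

      def grad(w):
          # gradient of the mean squared error of a linear model
          return X.T @ (X @ w - y) / len(y)

      w = np.zeros(8)
      lr, mu = 0.01, 1.75                      # learning rate and maximum growth factor
      prev_g = grad(w)
      prev_dw = -lr * prev_g                   # first step: plain gradient descent
      w += prev_dw
      for epoch in range(200):
          g = grad(w)
          denom = prev_g - g
          safe = np.where(np.abs(denom) > 1e-12, denom, 1.0)
          # secant ("parabola") step per weight; gradient descent where it is undefined
          step = np.where(np.abs(denom) > 1e-12, g / safe * prev_dw, -lr * g)
          limit = mu * np.abs(prev_dw) + lr * np.abs(g)    # maximum-growth-factor safeguard
          dw = np.clip(step, -limit, limit)
          w += dw
          prev_g, prev_dw = g, dw

      print("final mean squared error:", float(np.mean((X @ w - y) ** 2)))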

  19. Mixed Material Plasma-Surface Interactions in ITER: Recent Results from the PISCES Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tynan, George R.; Baldwin, Matthew; Doerner, Russell

    This paper summarizes recent PISCES studies focused on the effects associated with mixed species plasmas that are similar in composition to what one might expect in ITER. Formation of nanometer scale whiskerlike features occurs in W surfaces exposed to pure He and mixed D/He plasmas and appears to be associated with the formation of He nanometer-scaled bubbles in the W surface. Studies of Be-W alloy formation in Be-seeded D plasmas suggest that this process may be important in ITER all metal wall operational scenarios. Studies also suggest that BeD formation via chemical sputtering of Be walls may be an important first wall erosion mechanism. D retention in ITER mixed materials has also been studied. The D release behavior from beryllium co-deposits does not appear to be a diffusion dominated process, but instead is consistent with thermal release from a number of variable trapping energy sites. As a result, the amount of tritium remaining in codeposits in ITER after baking will be determined by the maximum temperature achieved, rather than by the duration of the baking cycle.

  20. Six sigma: process of understanding the control and capability of ranitidine hydrochloride tablet.

    PubMed

    Chabukswar, Ar; Jagdale, Sc; Kuchekar, Bs; Joshi, Vd; Deshmukh, Gr; Kothawade, Hs; Kuckekar, Ab; Lokhande, Pd

    2011-01-01

    The process of understanding the control and capability (PUCC) is an iterative closed loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of the process of understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of the six sigma study of Ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process, to find out ways to improve and modify the process, which will yield tablets that are defect-free and will give more customer satisfaction. The application of six sigma led to an improved process capability, due to the improved sigma level of the process from 1.5 to 4, a higher yield, due to reduced variation and reduction of thick tablets, reduction in packing line stoppages, reduction in re-work by 50%, a more standardized process, with smooth flow and change in coating suspension reconstitution level (8%w/w), a huge cost reduction of approximately Rs.90 to 95 lakhs per annum, an improved overall efficiency by 30% approximately, and improved overall quality of the product.

  1. Six Sigma: Process of Understanding the Control and Capability of Ranitidine Hydrochloride Tablet

    PubMed Central

    Chabukswar, AR; Jagdale, SC; Kuchekar, BS; Joshi, VD; Deshmukh, GR; Kothawade, HS; Kuckekar, AB; Lokhande, PD

    2011-01-01

    The process of understanding the control and capability (PUCC) is an iterative closed loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of the process of understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of the six sigma study of Ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process, to find out ways to improve and modify the process, which will yield tablets that are defect-free and will give more customer satisfaction. The application of six sigma led to an improved process capability, due to the improved sigma level of the process from 1.5 to 4, a higher yield, due to reduced variation and reduction of thick tablets, reduction in packing line stoppages, reduction in re-work by 50%, a more standardized process, with smooth flow and change in coating suspension reconstitution level (8%w/w), a huge cost reduction of approximately Rs.90 to 95 lakhs per annum, an improved overall efficiency by 30% approximately, and improved overall quality of the product. PMID:21607050

  2. Prediction, experimental results and analysis of the ITER TF insert coil quench propagation tests, using the 4C code

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.

    2018-07-01

    The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (former JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant for the actual ITER TF coils, inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; out of these, three used just one IH but with increasing delay times, up to 7.5 s, between the quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code capability to accurately predict the quench propagation, and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, as well as in the early evolution of the inlet and outlet He mass flow rate. Based on the lessons learned in the predictive exercise, the model is then refined to try and improve a posteriori (i.e. in interpretive, as opposed to predictive mode) the agreement between simulation and experiment.

  3. Tanks Focus Area Site Needs Assessment - FY 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Robert W.; Josephson, Gary B.; Westsik, Joseph H.

    2001-04-30

    The TFA uses a systematic process for developing its annual program that draws from the tanks science and technology development needs expressed by the five DOE tank waste sites. TFA's annual program development process is iterative and involves the following steps: Collection of site needs; Needs analysis; Development of technical responses and initial prioritization; Refinement of the program for the next fiscal year; Formulation of the Corporate Review Budget (CRB); Preparation of Program Execution Guidance (PEG) for the next FY; Revision of the Multiyear Program Plan (MYPP). This document describes the outcomes of the first phase of this process, from collection of site needs to the initial prioritization of technical activities. The TFA received site needs in October - December 2000. A total of 170 site needs were received, an increase of 30 over the previous year. The needs were analyzed and integrated, where appropriate. Sixty-six distinct technical responses were drafted and prioritized. In addition, seven strategic tasks were approved to compete for available funding in FY 2002 and FY 2003. Draft technical responses were prepared and provided to the TFA Site Representatives and the TFA User Steering Group (USG) for their review and comment. These responses were discussed at a March 15, 2001, meeting where the TFA Management Team established the priority listing in preparation for input to the DOE Office of Science and Technology (OST) budget process. At the time of publication of this document, the TFA continues to finalize technical responses as directed by the TFA Management Team and clarify the intended work scopes for FY 2002 and FY 2003.

  4. Processing of High Resolution, Multiparametric Radar Data for the Airborne Dual-Frequency Precipitation Radar APR-2

    NASA Technical Reports Server (NTRS)

    Tanelli, Simone; Meagher, Jonathan P.; Durden, Stephen L.; Im, Eastwood

    2004-01-01

    Following the successful Precipitation Radar (PR) of the Tropical Rainfall Measuring Mission, a new airborne, 14/35 GHz rain profiling radar, known as Airborne Precipitation Radar - 2 (APR-2), has been developed as a prototype for an advanced, dual-frequency spaceborne radar for a future spaceborne precipitation measurement mission. This airborne instrument is capable of making simultaneous measurements of rainfall parameters, including co-pol and cross-pol rain reflectivities and vertical Doppler velocities, at 14 and 35 GHz. Furthermore, it also features several advanced technologies for performance improvement, including real-time data processing, low-sidelobe dual-frequency pulse compression, and a dual-frequency scanning antenna. Since August 2001, APR-2 has been deployed on the NASA P3 and DC8 aircraft in four experiments, including CAMEX-4 and the Wakasa Bay Experiment. Raw radar data are first processed to obtain reflectivity, LDR (linear depolarization ratio), and Doppler velocity measurements. The dataset is then processed iteratively to accurately estimate the true aircraft navigation parameters and to classify the surface return. These intermediate products are then used to refine reflectivity and LDR calibrations (by analyzing clear-air ocean surface returns), and to correct Doppler measurements for the aircraft motion. Finally, the melting layer of precipitation is detected and its boundaries and characteristics are identified at the APR-2 range resolution of 30 m. The resulting 3D dataset will be used for validation of other airborne and spaceborne instruments, development of multiparametric rain/snow retrieval algorithms, and melting layer characterization and statistics.
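
    The platform-motion correction step mentioned in this abstract can be illustrated with a minimal sketch: the aircraft's contribution to the measured Doppler velocity is the projection of its navigation-derived velocity onto the beam pointing direction, and subtracting it leaves the hydrometeor motion. The function and variable names below are hypothetical and the geometry is deliberately simplified; this is not the APR-2 processing code.

        import numpy as np

        def correct_doppler_for_motion(v_measured, aircraft_velocity, beam_unit_vector):
            """Remove the aircraft's own motion from measured Doppler velocities.

            v_measured        : array of raw along-beam Doppler velocities (m/s)
            aircraft_velocity : 3-vector of aircraft velocity (m/s) from navigation data
            beam_unit_vector  : unit 3-vector pointing along the radar beam

            The platform contribution is the projection of the aircraft velocity
            onto the beam direction; subtracting it leaves the precipitation motion.
            """
            platform_component = np.dot(aircraft_velocity, beam_unit_vector)
            return np.asarray(v_measured) - platform_component

        # Illustrative numbers: beam tilted 10 deg forward of nadir, level flight at 150 m/s
        tilt = np.deg2rad(10.0)
        beam = np.array([np.sin(tilt), 0.0, -np.cos(tilt)])
        v_aircraft = np.array([150.0, 0.0, 0.0])
        raw_doppler = np.array([20.8, 21.2, 19.9])   # dominated by platform motion
        print(correct_doppler_for_motion(raw_doppler, v_aircraft, beam))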

  5. Initial Everglades Depth Estimation Network (EDEN) Digital Elevation Model Research and Development

    USGS Publications Warehouse

    Jones, John W.; Price, Susan D.

    2007-01-01

    Introduction: The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). To produce historic and near-real-time maps of water depths, the EDEN requires a system-wide digital elevation model (DEM) of the ground surface. Accurate Everglades wetland ground surface elevation data were non-existent before the U.S. Geological Survey (USGS) undertook the collection of highly accurate surface elevations at the regional scale. These form the foundation for EDEN DEM development. This development process is iterative as additional high accuracy elevation data (HAED) are collected, water surfacing algorithms improve, and additional ground-based ancillary data become available. Models are tested using withheld HAED and independently measured water depth data, and by using DEM data in EDEN adaptive management applications. Here, the collection of HAED is briefly described before the approach to DEM development and the current EDEN DEM are detailed. Finally, future research directions for continued model development, testing, and refinement are provided.

  6. Coiled-Coil Proteins Facilitated the Functional Expansion of the Centrosome

    PubMed Central

    Kuhn, Michael; Hyman, Anthony A.; Beyer, Andreas

    2014-01-01

    Repurposing existing proteins for new cellular functions is recognized as a main mechanism of evolutionary innovation, but its role in organelle evolution is unclear. Here, we explore the mechanisms that led to the evolution of the centrosome, an ancestral eukaryotic organelle that expanded its functional repertoire through the course of evolution. We developed a refined sequence alignment technique that is more sensitive to coiled coil proteins, which are abundant in the centrosome. For proteins with high coiled-coil content, our algorithm identified 17% more reciprocal best hits than BLAST. Analyzing 108 eukaryotic genomes, we traced the evolutionary history of centrosome proteins. In order to assess how these proteins formed the centrosome and adopted new functions, we computationally emulated evolution by iteratively removing the most recently evolved proteins from the centrosomal protein interaction network. Coiled-coil proteins that first appeared in the animal–fungi ancestor act as scaffolds and recruit ancestral eukaryotic proteins such as kinases and phosphatases to the centrosome. This process created a signaling hub that is crucial for multicellular development. Our results demonstrate how ancient proteins can be co-opted to different cellular localizations, thereby becoming involved in novel functions. PMID:24901223
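
    A minimal sketch of the network "peeling" step described in this abstract, i.e. iteratively removing the most recently evolved proteins from a protein interaction network, assuming networkx and a toy graph; the protein names and age values below are purely illustrative stand-ins for the full centrosomal interaction network and the inferred evolutionary ages.

        import networkx as nx

        def peel_network_by_age(graph, node_age):
            """Iteratively strip the youngest proteins from an interaction network.

            graph    : networkx.Graph of protein-protein interactions
            node_age : dict mapping each node to an integer evolutionary "age"
                       (higher = more ancient); values here are illustrative only

            Yields (age, subgraph) pairs: the full network first, then the
            progressively older cores left after removing younger proteins.
            """
            g = graph.copy()
            for age in sorted(set(node_age.values())):
                yield age, g.copy()
                g.remove_nodes_from([n for n in list(g) if node_age[n] == age])

        # Toy example: a hypothetical scaffold from the animal-fungi ancestor (age 1)
        # recruiting two ancestral eukaryotic enzymes (age 2).
        g = nx.Graph([("scaffoldA", "kinaseB"), ("scaffoldA", "phosphataseC")])
        ages = {"scaffoldA": 1, "kinaseB": 2, "phosphataseC": 2}
        for age, core in peel_network_by_age(g, ages):
            print(age, sorted(core.nodes()), core.number_of_edges())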

  7. Designing Colorectal Cancer Screening Decision Support: A Cognitive Engineering Enterprise.

    PubMed

    Militello, Laura G; Saleem, Jason J; Borders, Morgan R; Sushereba, Christen E; Haverkamp, Donald; Wolf, Steven P; Doebbeling, Bradley N

    2016-03-01

    Adoption of clinical decision support has been limited. Important barriers include an emphasis on algorithmic approaches to decision support that do not align well with clinical work flow and human decision strategies, and the expense and challenge of developing, implementing, and refining decision support features in existing electronic health records (EHRs). We applied decision-centered design to create a modular software application to support physicians in managing and tracking colorectal cancer screening. Using decision-centered design facilitates a thorough understanding of cognitive support requirements from an end user perspective as a foundation for design. In this project, we used an iterative design process, including ethnographic observation and cognitive task analysis, to move from an initial design concept to a working modular software application called the Screening & Surveillance App. The beta version is tailored to work with the Veterans Health Administration's EHR Computerized Patient Record System (CPRS). Primary care providers using the beta version Screening & Surveillance App more accurately answered questions about patients and found relevant information more quickly compared to those using CPRS alone. Primary care providers also reported reduced mental effort and rated the Screening & Surveillance App positively for usability.

  8. Designing Colorectal Cancer Screening Decision Support: A Cognitive Engineering Enterprise

    PubMed Central

    Militello, Laura G.; Saleem, Jason J.; Borders, Morgan R.; Sushereba, Christen E.; Haverkamp, Donald; Wolf, Steven P.; Doebbeling, Bradley N.

    2016-01-01

    Adoption of clinical decision support has been limited. Important barriers include an emphasis on algorithmic approaches to decision support that do not align well with clinical work flow and human decision strategies, and the expense and challenge of developing, implementing, and refining decision support features in existing electronic health records (EHRs). We applied decision-centered design to create a modular software application to support physicians in managing and tracking colorectal cancer screening. Using decision-centered design facilitates a thorough understanding of cognitive support requirements from an end user perspective as a foundation for design. In this project, we used an iterative design process, including ethnographic observation and cognitive task analysis, to move from an initial design concept to a working modular software application called the Screening & Surveillance App. The beta version is tailored to work with the Veterans Health Administration’s EHR Computerized Patient Record System (CPRS). Primary care providers using the beta version Screening & Surveillance App more accurately answered questions about patients and found relevant information more quickly compared to those using CPRS alone. Primary care providers also reported reduced mental effort and rated the Screening & Surveillance App positively for usability. PMID:26973441

  9. Development and evaluation of a new taxonomy of mobility-related assistive technology devices.

    PubMed

    Shoemaker, Laura L; Lenker, James A; Fuhrer, Marcus J; Jutai, Jeffrey W; Demers, Louise; DeRuyter, Frank

    2010-10-01

    This article reports on the development of a new taxonomy for mobility-related assistive technology devices. A prototype taxonomy was created based on the extant literature. Five mobility device experts were engaged in a modified Delphi process to evaluate and refine the taxonomy. Multiple iterations of expert feedback and revision yielded consensual agreement on the structure and terminology of a new mobility device taxonomy. The taxonomy uses a hierarchical framework to classify ambulation aids and wheeled mobility devices, including their key features that impact mobility. Five attributes of the new taxonomy differentiate it from previous mobility-related device classifications: (1) hierarchical structure, (2) primary device categories are grouped based on their intended mobility impact, (3) comprehensive inclusion of technical features, (4) a capacity to assimilate reimbursement codes, and (5) availability of a detailed glossary. The taxonomy is intended to support assistive technology outcomes research. The taxonomy will enable researchers to capture mobility-related assistive technology device interventions with precision and provide a common terminology that will allow comparisons among studies. The prominence of technical features within the new taxonomy will hopefully promote research that helps clinicians predict how devices will perform, thus aiding clinical decision making and supporting funding recommendations.

  10. Validation and Verification of LADEE Models and Software

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2013-01-01

    The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real-time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses is utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.

  11. Cancer Systems Biology: a peak into the future of patient care?

    PubMed Central

    Werner, Henrica M. J.; Mills, Gordon B.; Ram, Prahlad T.

    2015-01-01

    Traditionally, scientific research has focused on studying individual events, such as single mutations, gene function or the effect of the manipulation of one protein on a biological phenotype. A range of technologies, combined with the ability to develop robust and predictive mathematical models, is beginning to provide information that will enable a holistic view of how the genomic and epigenetic aberrations in cancer cells can alter the homeostasis of signalling networks within these cells, between cancer cells and the local microenvironment, at the organ and organism level. This systems biology process needs to be integrated with an iterative approach wherein hypotheses and predictions that arise from modelling are refined and constrained by experimental evaluation. Systems biology approaches will be vital for developing and implementing effective strategies to deliver personalized cancer therapy. Specifically, these approaches will be important to select those patients most likely to benefit from targeted therapies as well as for the development and implementation of rational combinatorial therapies. Systems biology can help to increase therapy efficacy or bypass the emergence of resistance, thus converting the current (often short term) effects of targeted therapies into durable responses, ultimately to improve quality of life and provide a cure. PMID:24492837

  12. Evolution of the phase 2 preparation and observation tools at ESO

    NASA Astrophysics Data System (ADS)

    Dorigo, D.; Amarand, B.; Bierwirth, T.; Jung, Y.; Santos, P.; Sogni, F.; Vera, I.

    2012-09-01

    Throughout many years of observations at the VLT, the phase 2 software applications supporting the specification, execution and reporting of observations have been continuously improved and refined. Specifically, the introduction of astronomical surveys propelled the creation of new tools to express more sophisticated, longer-term observing strategies, often consisting of several hundreds of observations. During the execution phase, such survey programs compete with other service and visitor mode observations and a number of constraints have to be considered. In order to maximize telescope utilization and execute all programs in a fair way, new algorithms have been developed to prioritize observable OBs (observation blocks), taking into account both current and future constraints (e.g. OB time constraints, technical telescope time), and to suggest the next OB to be executed. As a side effect, a higher degree of observation automation enables operators to run telescopes mostly autonomously with little supervision by a support astronomer. We describe the new tools that have been deployed and the iterative and incremental software development process applied to develop them. We present our key software technologies used so far and discuss potential future evolution both in terms of features as well as software technologies.
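
    The OB prioritization described in this abstract can be hinted at with a small sketch, assuming a toy scoring rule that ranks observable OBs by programme priority and then by the slack remaining before their time-constraint window closes; the class fields and values below are hypothetical, and the deployed ESO algorithms weigh many more current and future constraints.

        from dataclasses import dataclass
        from datetime import datetime, timedelta

        @dataclass
        class ObservationBlock:
            """Hypothetical OB record; the fields are illustrative, not ESO's schema."""
            name: str
            priority: int          # queue priority, 1 = highest
            deadline: datetime     # end of the OB's time-constraint window
            exec_time: timedelta   # expected execution time

        def rank_observable_obs(obs, now):
            """Order OBs by priority, then by how little slack they have left."""
            def score(ob):
                slack_h = (ob.deadline - now - ob.exec_time).total_seconds() / 3600.0
                return (ob.priority, slack_h)
            return sorted(obs, key=score)

        now = datetime(2012, 9, 1, 2, 0)
        queue = [
            ObservationBlock("survey_tile_42", 2, now + timedelta(hours=1), timedelta(minutes=30)),
            ObservationBlock("target_of_opportunity", 1, now + timedelta(hours=6), timedelta(minutes=20)),
        ]
        for ob in rank_observable_obs(queue, now):
            print(ob.name)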

  13. Adaptive finite element modelling of three-dimensional magnetotelluric fields in general anisotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Xu, Zhenhuan; Li, Yuguo

    2018-04-01

    We present a goal-oriented adaptive finite element (FE) modelling algorithm for 3-D magnetotelluric fields in generally anisotropic conductivity media. The model consists of a background layered structure containing anisotropic blocks. Each block and layer can be made anisotropic by assigning to it a 3 × 3 conductivity tensor. The second-order partial differential equations are solved using the adaptive finite element method (FEM). The computational domain is subdivided into unstructured tetrahedral elements, which allow for complex geometries including bathymetry and dipping interfaces. The grid refinement process is guided by a global a posteriori error estimator and is performed iteratively. The system of linear FE equations for the electric field E is solved with the direct solver MUMPS. The magnetic field H is then obtained, with the required derivatives computed numerically using cubic spline interpolation. The 3-D FE algorithm has been validated by comparisons with both a 3-D finite-difference solution and 2-D FE results. Two model types are used to demonstrate the effects of anisotropy upon 3-D magnetotelluric responses: horizontal and dipping anisotropy. Finally, a 3-D sea hill model is simulated to study the effect of oblique interfaces and dipping anisotropy.
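
    The last numerical step mentioned in this abstract, taking derivatives of the computed field via cubic spline interpolation, can be sketched with SciPy's CubicSpline; the sampled function below is a synthetic stand-in, not an actual magnetotelluric solution.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Fit a cubic spline to a field component sampled along one coordinate and
        # evaluate its first derivative, as needed when forming H from E via the curl.
        z = np.linspace(0.0, 1000.0, 51)              # sample depths in metres
        Ex = np.exp(-z / 400.0) * np.cos(z / 200.0)   # synthetic field component

        spline = CubicSpline(z, Ex)
        dEx_dz = spline(z, 1)                         # first derivative along z

        # Cross-check against a simple finite-difference estimate
        fd = np.gradient(Ex, z)
        print(np.max(np.abs(dEx_dz - fd)))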

  14. Response to Tibayrenc and Ayala: Reproductive clonality in protozoan pathogens--truth or artefact?

    PubMed

    Ramírez, J D; Llewellyn, M S

    2015-12-01

    Tibayrenc and Ayala raised several interesting objections to an opinion piece we recently published in Molecular Ecology (Ramirez & Llewellyn 2014). Our piece examined the value of an alternative perspective to their theory of predominant clonal evolution (PCE) on the prevalence and importance of genetic exchange in parasitic protozoa. In particular, our aim was to establish whether population genetic signatures of clonality in parasites were representative of true biological/evolutionary processes or artefacts of inadequate tools and inappropriate or inadequate sampling. We address Tibayrenc and Ayala's criticisms and make a detailed response. In doing so, we deny the consensus that Tibayrenc and Ayala claim around their views and dismiss much of the language which Tibayrenc and Ayala have introduced to this debate as either arbitrary or inaccurate. We strongly reject accusations that we misunderstood and misquoted the work of others. We do not think the PCE provides a useful framework for understanding existing parasite population structures. Furthermore, on the eve of the population genomic era, we strongly urge Tibayrenc and Ayala to wait for the forthcoming wealth of high-resolution data before considering whether it is appropriate to refine or re-iterate their PCE hypothesis. © 2015 John Wiley & Sons Ltd.

  15. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In the case of gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied at the initial level on the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different filtering methods applied to the Iterative Refined Lucas-Kanade, we identify the best filtering practice. With the Gaussian smoothing filter selected, an empirical approach to estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation value of the Gaussian function was established. Finally, we have found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
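
    A minimal sketch of Gaussian pre-filtering ahead of pyramidal Lucas-Kanade, the practice studied in this paper, using OpenCV's stock implementation rather than the authors' Iterative Refined Lucas-Kanade code; the value of sigma is hand-picked here, whereas the paper estimates it empirically from the image intensity, and the frames are synthetic placeholders.

        import cv2
        import numpy as np

        rng = np.random.default_rng(0)
        frame0 = (rng.random((240, 320)) * 255).astype(np.uint8)
        frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))   # synthetic 2-3 px motion

        sigma = 1.5                                            # hand-picked, illustrative
        f0 = cv2.GaussianBlur(frame0, (0, 0), sigmaX=sigma)    # pre-filtering step
        f1 = cv2.GaussianBlur(frame1, (0, 0), sigmaX=sigma)

        pts0 = cv2.goodFeaturesToTrack(f0, maxCorners=50, qualityLevel=0.01, minDistance=8)
        pts1, status, err = cv2.calcOpticalFlowPyrLK(f0, f1, pts0, None,
                                                     winSize=(21, 21), maxLevel=3)

        flow = (pts1 - pts0)[status.ravel() == 1]
        print("median flow (dx, dy):", np.median(flow.reshape(-1, 2), axis=0))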

  16. Development and validation of a new survey: Perceptions of Teaching as a Profession (PTaP)

    NASA Astrophysics Data System (ADS)

    Adams, Wendy

    2017-01-01

    To better understand the impact of efforts to train more science teachers such as the PhysTEC Project and to help with early identification of future teachers, we are developing the survey of Perceptions of Teaching as a Profession (PTaP) to measure students' views of teaching as a career, their interest in teaching and the perceived climate of physics departments towards teaching as a profession. The instrument consists of a series of statements which require a response using a 5-point Likert-scale and can be easily administered online. The survey items were drafted by a team of researchers and physics teacher candidates and then reviewed by an advisory committee of 20 physics teacher educators and practicing teachers. We conducted 27 interviews with both teacher candidates and non-teaching STEM majors. The survey was refined through an iterative process of student interviews and item clarification until all items were interpreted consistently and answered for consistent reasons. In this presentation the preliminary results from the student interviews as well as the results of item analysis and a factor analysis on 900 student responses will be shared.

  17. Collecting data along the continuum of prevention and care: a Continuous Quality Improvement approach.

    PubMed

    Indyk, Leonard; Indyk, Debbie

    2006-01-01

    For the past 14 years, a team of applied social scientists and system analysts has worked with a wide variety of Community-Based Organizations (CBOs), other grassroots agencies and networks, and Medical Center departments to support resource, program, staff and data development and evaluation for hospital- and community-based programs and agencies serving HIV at-risk and affected populations. A by-product of this work has been the development, elaboration and refinement of an approach to Continuous Quality Improvement (CQI) which is appropriate for diverse community-based providers and agencies. A key component of our CQI system involves the installation of a sophisticated relational database management and reporting system (DBMS) which is used to collect, analyze, and report data in an iterative process to provide feedback among the evaluators, agency administration and staff. The database system is designed for two purposes: (1) to support the agency's administrative internal and external reporting requirements; (2) to support the development of practice-driven health services and early intervention research. The body of work has fostered a unique opportunity for the development of exploratory service-driven research which serves both administrative and research needs.

  18. Interactive outlining: an improved approach using active contours

    NASA Astrophysics Data System (ADS)

    Daneels, Dirk; van Campenhout, David; Niblack, Carlton W.; Equitz, Will; Barber, Ron; Fierens, Freddy

    1993-04-01

    The purpose of our work is to outline objects on images in an interactive environment. We use an improved method based on energy-minimizing active contours, or "snakes." Kass et al. proposed a variational technique; Amini used dynamic programming; and Williams and Shah introduced a fast, greedy algorithm. We combine the advantages of the latter two methods in a two-stage algorithm. The first stage is a greedy procedure that provides fast initial convergence. It is enhanced with a cost term that extends over a large number of points to avoid oscillations. The second stage, when accuracy becomes important, uses dynamic programming. This step is accelerated by the use of alternating search neighborhoods and by dropping stable points from the iterations. We have also added several features for user interaction. First, the user can define points of high confidence. Mathematically, this results in an extra cost term and, in that way, the robustness in difficult areas (e.g., noisy edges, sharp corners) is improved. We also give the user the possibility of incremental contour tracking, thus providing feedback on the refinement process. The algorithm has been tested on numerous photographic clip art images, and extensive tests on medical images are in progress.
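
    The fast first stage described in this abstract, a greedy update in which each contour point moves to the position in a small search neighbourhood minimising a weighted sum of continuity, curvature and image energies, can be sketched as follows (Williams-Shah style); the weights, neighbourhood size and toy usage are illustrative, and neither the accuracy-oriented dynamic-programming stage nor the interactive features are reproduced here.

        import numpy as np

        def greedy_snake_step(points, energy_image, alpha=1.0, beta=1.0, gamma=1.0, radius=1):
            """One greedy active-contour iteration: move each point to the lowest-energy
            position in its (2*radius+1)^2 neighbourhood.

            points       : (N, 2) array of (row, col) contour coordinates
            energy_image : 2-D array of external energy (low values attract the contour)
            """
            pts = points.astype(float).copy()
            rows, cols = energy_image.shape
            spacing = np.mean(np.linalg.norm(np.diff(pts, axis=0, append=pts[:1]), axis=1))
            for i in range(len(pts)):
                prev_pt, next_pt = pts[i - 1], pts[(i + 1) % len(pts)]
                best, best_e = pts[i].copy(), np.inf
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        cand = pts[i] + (dr, dc)
                        r, c = int(round(cand[0])), int(round(cand[1]))
                        if not (0 <= r < rows and 0 <= c < cols):
                            continue
                        e_cont = abs(spacing - np.linalg.norm(cand - prev_pt))      # continuity
                        e_curv = np.linalg.norm(prev_pt - 2 * cand + next_pt) ** 2  # curvature
                        e = alpha * e_cont + beta * e_curv + gamma * energy_image[r, c]
                        if e < best_e:
                            best, best_e = cand, e
                pts[i] = best
            return pts

        # Toy usage: points around a dark square; the image intensity acts as energy,
        # so low (dark) pixels attract the contour.
        img = np.ones((64, 64))
        img[20:44, 20:44] = 0.0
        t = np.linspace(0, 2 * np.pi, 24, endpoint=False)
        circle = np.stack([32 + 18 * np.cos(t), 32 + 18 * np.sin(t)], axis=1)
        print(greedy_snake_step(circle, img)[:3])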

  19. Processing and refinement of steel microstructure images for assisting in computerized heat treatment of plain carbon steel

    NASA Astrophysics Data System (ADS)

    Gupta, Shubhank; Panda, Aditi; Naskar, Ruchira; Mishra, Dinesh Kumar; Pal, Snehanshu

    2017-11-01

    Steels are alloys of iron and carbon, widely used in construction and other applications. The evolution of steel microstructure through various heat treatment processes is an important factor in controlling the properties and performance of steel. Extensive experimentation has been performed to enhance the properties of steel by customizing heat treatment processes. However, experimental analyses are always associated with high resource requirements in terms of cost and time. As an alternative solution, we propose an image processing-based technique for refining raw plain carbon steel microstructure images into a digital form usable in experiments related to heat treatment processes of steel in diverse applications. The proposed work follows the conventional steps practiced by materials engineers in manual refinement of steel images, and it appropriately utilizes basic image processing techniques (including filtering, segmentation, opening, and clustering) to automate the whole process. The proposed refinement of steel microstructure images is aimed at enabling computer-aided simulations of heat treatment of plain carbon steel in a timely and cost-efficient manner; hence it is beneficial for the materials and metallurgy industry. Our experimental results prove the efficiency and effectiveness of the proposed technique.
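
    A minimal sketch of the generic filter-segment-open-cluster chain named in this abstract, assuming OpenCV building blocks; the parameters, the synthetic micrograph and the function name are illustrative, not the authors' implementation.

        import cv2
        import numpy as np

        def refine_microstructure(image_gray):
            """Illustrative refinement chain for a grey-level micrograph."""
            # 1. Filtering: suppress acquisition noise
            smoothed = cv2.GaussianBlur(image_gray, (5, 5), 0)
            # 2. Segmentation: Otsu threshold separates dark phases from the matrix
            _, binary = cv2.threshold(smoothed, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # 3. Opening: remove small artefacts while preserving grain shapes
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
            opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
            # 4. Clustering: k-means on intensity groups pixels into two classes
            data = smoothed.reshape(-1, 1).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
            _, labels, _ = cv2.kmeans(data, 2, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
            clustered = labels.reshape(image_gray.shape).astype(np.uint8) * 255
            return opened, clustered

        # Synthetic stand-in for a micrograph: dark blobs on a bright matrix
        img = np.full((128, 128), 200, np.uint8)
        cv2.circle(img, (40, 40), 12, 30, -1)
        cv2.circle(img, (90, 80), 18, 40, -1)
        opened, clustered = refine_microstructure(img)
        print(opened.shape, clustered.shape)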

  20. B Removal by Zr Addition in Electromagnetic Solidification Refinement of Si with Si-Al Melt

    NASA Astrophysics Data System (ADS)

    Lei, Yun; Ma, Wenhui; Sun, Luen; Dai, Yongnian; Morita, Kazuki

    2016-02-01

    This study investigated a new process of enhancing B removal by adding small amounts of Zr in the electromagnetic solidification refinement of Si with Si-Al melt. B in Si was removed by as much as 97.2 pct by adding less than 1057 ppma Zr, and the added Zr was removed by as much as 99.7 pct. In addition, Zr is more effective in enhancing B removal than Ti in the same electromagnetic solidification refining process.
