Sample records for refinement scheme based

  1. ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallberg, J.; Stagg, A.

    2000-10-01

    A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
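    As a rough illustration of the edge-bisection bookkeeping this record describes, the sketch below bisects the longest edge of each marked triangle and uses a Python dict as the hash table keyed by an edge's node pair, so that elements sharing an edge reuse the same midpoint node. This is a serial, 2-D toy with illustrative names; the closure refinement of unmarked neighbours sharing a bisected edge and the MPI exchange along inter-processor boundaries are omitted.

```python
def midpoint_id(edge_midpoints, nodes, a, b):
    """Return the midpoint node id of edge (a, b), creating it only once."""
    key = (min(a, b), max(a, b))            # hash-table key: sorted node pair
    if key not in edge_midpoints:
        xa, ya = nodes[a]
        xb, yb = nodes[b]
        nodes.append(((xa + xb) / 2.0, (ya + yb) / 2.0))
        edge_midpoints[key] = len(nodes) - 1
    return edge_midpoints[key]

def bisect_marked(nodes, triangles, marked):
    """Bisect the longest edge of every triangle whose index is in `marked`."""
    edge_midpoints = {}                     # (node, node) -> midpoint node id
    new_triangles = []

    def length2(p, q):
        (xp, yp), (xq, yq) = nodes[p], nodes[q]
        return (xp - xq) ** 2 + (yp - yq) ** 2

    for t, (a, b, c) in enumerate(triangles):
        if t not in marked:
            new_triangles.append((a, b, c))
            continue
        # choose the longest edge (p, q); r is the opposite vertex
        p, q, r = max([(a, b, c), (b, c, a), (c, a, b)],
                      key=lambda e: length2(e[0], e[1]))
        m = midpoint_id(edge_midpoints, nodes, p, q)
        new_triangles += [(p, m, r), (m, q, r)]
    return new_triangles

# two triangles sharing edge (1, 2): both bisections reuse the same midpoint node
nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(bisect_marked(nodes, [(0, 1, 2), (1, 3, 2)], marked={0, 1}))
```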

  2. The Portsmouth-based glaucoma refinement scheme: a role for virtual clinics in the future?

    PubMed

    Trikha, S; Macgregor, C; Jeffery, M; Kirwan, J

    2012-10-01

    Glaucoma referrals continue to impart a significant burden on Hospital Eye Services (HES), with a large proportion of these being false positives. To evaluate the Portsmouth glaucoma scheme, utilising virtual clinics, digital technology, and community optometrists to streamline glaucoma referrals. The stages of the patient trail were mapped and, at each step of the process, 100 consecutive patient decisions were identified. The diagnostic outcomes of 50 consecutive patients referred from the refinement scheme to the HES were identified. A total of 76% of 'glaucoma' referrals were suitable for the refinement scheme. Overall, 94% of disc images were gradeable in the virtual clinic. In all, 11% of patients 'attending' the virtual clinic were accepted into HES, with 89% being discharged for community follow-up. Of referrals accepted into HES, the positive predictive value (glaucoma/ocular hypertension/suspect) was 0.78 vs 0.37 in the predating 'unrefined' scheme (95% CI 0.65-0.87). The scheme has released 1400 clinic slots/year for HES, and has produced a £244 200/year cost saving for Portsmouth Hospitals' Trust. The refinement scheme is streamlining referrals and increasing the positive predictive rate in the diagnosis of glaucoma, glaucoma suspect or ocular hypertension. This consultant-led practice-based commissioning scheme, if adopted widely, is likely to yield a significant cost saving while maintaining high quality of care within the NHS.

  3. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation high-order accurate hybrid up-winding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model, and the flow feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolutions. The overset grid assembly (OGA) process based on collection detection theory and implicit hole-cutting algorithm achieves an automatic coupling for the near-body and off-body solvers, and a trial-and-error method is used for obtaining a globally balanced load distribution among the composed multiple codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of high-order hybrid scheme, advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.

  4. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

    We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).

  5. The Portsmouth-based glaucoma refinement scheme: a role for virtual clinics in the future?

    PubMed Central

    Trikha, S; Macgregor, C; Jeffery, M; Kirwan, J

    2012-01-01

    Background Glaucoma referrals continue to impart a significant burden on Hospital Eye Services (HES), with a large proportion of these being false positives. Aims To evaluate the Portsmouth glaucoma scheme, utilising virtual clinics, digital technology, and community optometrists to streamline glaucoma referrals. Method The stages of the patient trail were mapped and, at each step of the process, 100 consecutive patient decisions were identified. The diagnostic outcomes of 50 consecutive patients referred from the refinement scheme to the HES were identified. Results A total of 76% of ‘glaucoma' referrals were suitable for the refinement scheme. Overall, 94% of disc images were gradeable in the virtual clinic. In all, 11% of patients ‘attending' the virtual clinic were accepted into HES, with 89% being discharged for community follow-up. Of referrals accepted into HES, the positive predictive value (glaucoma/ocular hypertension/suspect) was 0.78 vs 0.37 in the predating ‘unrefined' scheme (95% CI 0.65–0.87). The scheme has released 1400 clinic slots/year for HES, and has produced a £244 200/year cost saving for Portsmouth Hospitals' Trust. Conclusion The refinement scheme is streamlining referrals and increasing the positive predictive rate in the diagnosis of glaucoma, glaucoma suspect or ocular hypertension. This consultant-led practice-based commissioning scheme, if adopted widely, is likely to yield a significant cost saving while maintaining high quality of care within the NHS. PMID:22766539

  6. A new adaptive mesh refinement strategy for numerically solving evolutionary PDE's

    NASA Astrophysics Data System (ADS)

    Burgarelli, Denise; Kischinhevsky, Mauricio; Biezuner, Rodney Josue

    2006-11-01

    A graph-based implementation of quadtree meshes for dealing with adaptive mesh refinement (AMR) in the numerical solution of evolutionary partial differential equations is discussed using finite volume methods. The technique displays a plug-in feature that allows replacement of a group of cells in any region of interest for another one with arbitrary refinement, and with only local changes occurring in the data structure. The data structure is also specially designed to minimize the number of operations needed in the AMR. Implementation of the new scheme allows flexibility in the levels of refinement of adjacent regions. Moreover, storage requirements and computational cost compare competitively with mesh refinement schemes based on hierarchical trees. Low storage is achieved because only the children nodes are stored when a refinement takes place. These nodes become part of a graph structure, thus motivating the denomination autonomous leaves graph (ALG) for the new scheme. Neighbors can then be reached without accessing their parent nodes. Additionally, linear-system solvers based on the minimization of functionals can be easily employed. ALG was not conceived with any particular problem or geometry in mind and can thus be applied to the study of several phenomena. Some test problems are used to illustrate the effectiveness of the technique.
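    The abstract's two central claims, that only leaf cells are stored and that neighbours are reached without going through parent nodes, can be illustrated with a flat dictionary-of-leaves toy. This is not the authors' graph-linked ALG structure; the keys and helper names are hypothetical.

```python
def refine(leaves, key):
    """Replace a leaf cell (level, i, j) by its four children; only leaves are kept."""
    level, i, j = key
    leaves.discard(key)
    for di in (0, 1):
        for dj in (0, 1):
            leaves.add((level + 1, 2 * i + di, 2 * j + dj))

def east_neighbors(leaves, key):
    """Find the leaf (or leaves) touching the east face, with no parent traversal."""
    level, i, j = key
    if (level, i + 1, j) in leaves:                       # same-level neighbour
        return [(level, i + 1, j)]
    coarse = (level - 1, (i + 1) // 2, j // 2)            # one level coarser
    if coarse in leaves:
        return [coarse]
    fine = [(level + 1, 2 * (i + 1), 2 * j + dj) for dj in (0, 1)]
    return [k for k in fine if k in leaves]               # one level finer

leaves = {(0, 0, 0)}
refine(leaves, (0, 0, 0))            # split the root into four level-1 leaves
refine(leaves, (1, 1, 0))            # refine one of them again (a local "plug-in")
print(east_neighbors(leaves, (1, 0, 0)))   # -> the two finer level-2 leaves to its east
```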

  7. Grid refinement in Cartesian coordinates for groundwater flow models using the divergence theorem and Taylor's series.

    PubMed

    Mansour, M M; Spink, A E F

    2013-01-01

    Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement suffered certain drawbacks, for example, deficiencies in the implemented interpolation technique; the non-reciprocity in head calculations or flow calculations; lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor's expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor's series to improve the numerical solution. In this scheme, flow reciprocity is maintained and high order of refinement was achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.

  8. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0, we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
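    A minimal 1-D analogue of the detect-and-recompute idea is sketched below, using a second-order Lax-Wendroff candidate with a first-order upwind fallback rather than the paper's third-order reconstruction on an AMR mesh; the detection criteria and tolerance are illustrative.

```python
import numpy as np

def lax_wendroff(u, c):
    """Second-order candidate update for u_t + a u_x = 0 (periodic grid, CFL number c)."""
    return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * c * c * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))

def upwind(u, c):
    """First-order, more dissipative fallback scheme."""
    return u - c * (u - np.roll(u, 1))

def step_a_posteriori(u_old, c, eps=1e-8):
    """Advance one step: accept the high-order candidate everywhere except in
    'troubled' cells (NaN, or a new extremum outside the local old bounds),
    which are recomputed from u_old with the fallback scheme."""
    cand = lax_wendroff(u_old, c)
    lo = np.minimum.reduce([np.roll(u_old, 1), u_old, np.roll(u_old, -1)])
    hi = np.maximum.reduce([np.roll(u_old, 1), u_old, np.roll(u_old, -1)])
    troubled = np.isnan(cand) | (cand < lo - eps) | (cand > hi + eps)
    return np.where(troubled, upwind(u_old, c), cand)

u = np.where(np.linspace(0.0, 1.0, 200) < 0.5, 1.0, 0.0)   # discontinuous profile
for _ in range(100):
    u = step_a_posteriori(u, c=0.5)
```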

  9. Developing and Modifying Behavioral Coding Schemes in Pediatric Psychology: A Practical Guide

    PubMed Central

    McMurtry, C. Meghan; Chambers, Christine T.; Bakeman, Roger

    2015-01-01

    Objectives To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. Methods This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. Results A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Conclusions Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. PMID:25416837

  10. Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Colella, Phillip

    2007-11-01

    We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov’s method for hydrodynamics; a symmetric, time centered modified symplectic scheme for collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.

  11. Aerodynamic design optimization via reduced Hessian SQP with solution refining

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    An all-at-once reduced Hessian Successive Quadratic Programming (SQP) scheme has been shown to be efficient for solving aerodynamic design optimization problems with a moderate number of design variables. This paper extends this scheme to allow solution refining. In particular, we introduce a reduced Hessian refining technique that is critical for making a smooth transition of the Hessian information from coarse grids to fine grids. Test results on a nozzle design using quasi-one-dimensional Euler equations show that through solution refining the efficiency and the robustness of the all-at-once reduced Hessian SQP scheme are significantly improved.

  12. Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.

    PubMed

    Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger

    2015-01-01

    To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. A Cartesian grid approach with hierarchical refinement for compressible flows

    NASA Technical Reports Server (NTRS)

    Quirk, James J.

    1994-01-01

    Many numerical studies of flows that involve complex geometries are limited by the difficulties in generating suitable grids. We present a Cartesian boundary scheme for two-dimensional, compressible flows that is unfettered by the need to generate a computational grid and so it may be used, routinely, even for the most awkward of geometries. In essence, an arbitrary-shaped body is allowed to blank out some region of a background Cartesian mesh and the resultant cut-cells are singled out for special treatment. This is done within a finite-volume framework and so, in principle, any explicit flux-based integration scheme can take advantage of this method for enforcing solid boundary conditions. For best effect, the present Cartesian boundary scheme has been combined with a sophisticated, local mesh refinement scheme, and a number of examples are shown in order to demonstrate the efficacy of the combined algorithm for simulations of shock interaction phenomena.

  14. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
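    The two ingredients named here, a mesh-convergence indicator built from a coarse/fine comparison and a threshold taken from the curvature of the "cells flagged vs. threshold" curve, might look roughly like the following 1-D sketch; it is illustrative only, whereas the paper works with the conical Euler solution on a cell-vertex grid.

```python
import numpy as np

def coarse_fine_indicator(u):
    """Mesh-convergence style indicator: difference between the fine-grid values
    and a re-injected, locally coarsened (pairwise-averaged) copy."""
    coarse = 0.5 * (u[0::2] + u[1::2])
    return np.abs(u - np.repeat(coarse, 2)[: u.size])

def pick_threshold(indicator, n_samples=50):
    """Sweep candidate thresholds, count flagged cells for each, and return the
    threshold at the point of maximum curvature of that curve."""
    taus = np.linspace(indicator.min(), indicator.max(), n_samples)
    counts = np.array([(indicator > t).sum() for t in taus], dtype=float)
    curvature = np.abs(np.gradient(np.gradient(counts, taus), taus))
    return taus[np.argmax(curvature)]

u = np.sin(np.linspace(0.0, 6.0, 400)) + (np.linspace(0.0, 1.0, 400) > 0.5)  # smooth + jump
ind = coarse_fine_indicator(u)
flagged = np.where(ind > pick_threshold(ind))[0]    # cells to mark for refinement
```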

  15. Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
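    A compact 2-D sketch of the recursive-subdivision idea follows, using a quadtree rather than the binary tree mentioned in the record, a circle standing in for the body, and "cell is cut by the boundary" as the only refinement criterion; the real code adds polygon clipping, solution adaptivity and the flow solver.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Cell:
    x0: float
    y0: float
    size: float
    children: list = field(default_factory=list)

def cut_by_circle(cell, cx=0.5, cy=0.5, r=0.3):
    """Rough cut-cell test: corner distances to the body centre straddle the radius."""
    d = [math.hypot(cell.x0 + i * cell.size - cx, cell.y0 + j * cell.size - cy)
         for i in (0, 1) for j in (0, 1)]
    return min(d) < r < max(d)

def subdivide(cell, depth):
    """Recursively split every cell that is cut by the body, down to `depth` levels."""
    if depth == 0 or not cut_by_circle(cell):
        return
    h = cell.size / 2.0
    cell.children = [Cell(cell.x0 + i * h, cell.y0 + j * h, h)
                     for i in (0, 1) for j in (0, 1)]
    for child in cell.children:
        subdivide(child, depth - 1)

root = Cell(0.0, 0.0, 1.0)      # single Cartesian cell encompassing the flow domain
subdivide(root, depth=6)
```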

  16. NMRe: a web server for NMR protein structure refinement with high-quality structure validation scores.

    PubMed

    Ryu, Hyojung; Lim, GyuTae; Sung, Bong Hyun; Lee, Jinhyuk

    2016-02-15

    Protein structure refinement is a necessary step for the study of protein function. In particular, some nuclear magnetic resonance (NMR) structures are of lower quality than X-ray crystallographic structures. Here, we present NMRe, a web-based server for NMR structure refinement. The previously developed knowledge-based energy function STAP (Statistical Torsion Angle Potential) was used for NMRe refinement. With STAP, NMRe provides two refinement protocols using two types of distance restraints. If a user provides NOE (Nuclear Overhauser Effect) data, the refinement is performed with the NOE distance restraints as a conventional NMR structure refinement. Additionally, NMRe generates NOE-like distance restraints based on the inter-hydrogen distances derived from the input structure. The efficiency of NMRe refinement was validated on 20 NMR structures. Most of the quality assessment scores of the refined NMR structures were better than those of the original structures. The refinement results are provided as a three-dimensional structure view, a secondary structure scheme, and numerical and graphical structure validation scores. NMRe is available at http://psb.kobic.re.kr/nmre/. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
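    The step of generating NOE-like restraints from the inter-hydrogen distances of the input structure could, in spirit, look like the sketch below; the cutoff and tolerance values are made up for illustration, since NMRe's actual parameters and restraint format are not given in the abstract.

```python
import numpy as np

def noe_like_restraints(coords, atom_names, cutoff=5.0, tol=0.5):
    """Emit (i, j, lower, upper) distance bounds for every hydrogen pair closer
    than `cutoff` angstroms in the input model; 1.8 A is used as a hard floor."""
    h = [i for i, name in enumerate(atom_names) if name.startswith("H")]
    restraints = []
    for a in range(len(h)):
        for b in range(a + 1, len(h)):
            i, j = h[a], h[b]
            d = float(np.linalg.norm(coords[i] - coords[j]))
            if d <= cutoff:
                restraints.append((i, j, max(d - tol, 1.8), d + tol))
    return restraints

coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 1.2, 0.0]])
atom_names = ["H1", "H2", "CA"]
print(noe_like_restraints(coords, atom_names))    # one restraint, for the H1-H2 pair
```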

  17. Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
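    The remedy described, repartitioning after edges are targeted for refinement but before the mesh is actually adapted, amounts to balancing a predicted post-adaption load. A toy greedy version is sketched below; it is not the partitioner used on the SP2, and the weights are hypothetical.

```python
import heapq

def repartition(cell_weights, n_procs):
    """Greedy weighted partitioning: heaviest cells first, each assigned to the
    currently least-loaded processor."""
    loads = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(loads)
    assignment = {}
    for cell, w in sorted(cell_weights.items(), key=lambda kv: -kv[1]):
        load, proc = heapq.heappop(loads)
        assignment[cell] = proc
        heapq.heappush(loads, (load + w, proc))
    return assignment

# cells already flagged for refinement are given a larger predicted workload,
# so the partition balances the load the mesh will have *after* adaption
flags = {0: True, 1: False, 2: True, 3: False, 4: True, 5: False}
weights = {c: 4.0 if refined else 1.0 for c, refined in flags.items()}
print(repartition(weights, n_procs=2))
```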

  18. Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  19. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
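    The "undivided second-difference" refinement sensor mentioned here is simple to state; a 1-D sketch follows (OVERFLOW evaluates it per flow variable on 3-D off-body grids, and the threshold below is arbitrary).

```python
import numpy as np

def second_difference_flags(q, threshold):
    """Flag cells where the undivided second difference |q[i+1] - 2 q[i] + q[i-1]|
    exceeds the threshold (undivided: no division by dx**2)."""
    flags = np.zeros(q.size, dtype=bool)
    flags[1:-1] = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2]) > threshold
    return flags

x = np.linspace(0.0, 1.0, 200)
q = np.tanh((x - 0.5) / 0.01)                 # sharp feature mimicking a vortex core
print(np.where(second_difference_flags(q, threshold=0.05))[0])
```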

  20. Automatic Approach for Lung Segmentation with Juxta-Pleural Nodules from Thoracic CT Based on Contour Tracing and Correction.

    PubMed

    Wang, Jinke; Guo, Haoyan

    2016-01-01

    This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operations, and connective region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum cost path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm³, volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and average time-cost 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.

  1. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and medium DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
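    Stripped of the registration machinery, the two-stage selection logic reads as "cheap ranking of everything, expensive ranking of the survivors". A toy sketch with stand-in similarity metrics (downsampled vs. full-resolution correlation) follows; the paper's relevance metrics are registration-based and the augmented-subset size comes from its inference model, which is simply taken as a fixed number here.

```python
import numpy as np

def two_stage_selection(target, atlases, cheap_sim, costly_sim, n_aug, n_fuse):
    """Stage 1: keep the n_aug atlases that score best under the cheap metric.
    Stage 2: re-rank only those with the costly metric and keep the n_fuse best."""
    ranked = sorted(range(len(atlases)),
                    key=lambda i: cheap_sim(target, atlases[i]), reverse=True)
    augmented = ranked[:n_aug]
    refined = sorted(augmented,
                     key=lambda i: costly_sim(target, atlases[i]), reverse=True)
    return refined[:n_fuse]

def corr(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

cheap = lambda t, a: corr(t[::4, ::4], a[::4, ::4])   # stand-in for low-cost registration
costly = corr                                         # stand-in for full-fledged registration
rng = np.random.default_rng(0)
target = rng.normal(size=(64, 64))
atlases = [target + rng.normal(scale=s, size=(64, 64)) for s in np.linspace(0.2, 2.0, 30)]
print(two_stage_selection(target, atlases, cheap, costly, n_aug=10, n_fuse=5))
```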

  2. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper, the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refine the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and represent a promising performance in adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  3. A Viable Scheme for Elemental Extraction and Purification Using In-Situ Planetary Resources

    NASA Technical Reports Server (NTRS)

    Sen, S.; Schofield, E.; ODell, S.; Ray, C. S.

    2005-01-01

    NASA's new strategic direction includes establishing a self-sufficient, affordable and safe human and robotic presence outside low earth orbit. Some of the items required for a self-sufficient extra-terrestrial habitat will include materials for power generation (e.g. Si for solar cells) and habitat construction (e.g. Al, Fe, and Ti). In this paper we will present a viable elemental extraction and refining process from in-situ regolith which would be optimally continuous, robotically automated, and require a minimum amount of astronaut supervision and containment facilities. The approach is based on using a concentrated heat source and translating sample geometry to enable simultaneous oxide reduction and elemental refining. Preliminary results will be presented to demonstrate that the proposed zone refining process is capable of segregating or refining important elements such as Si (for solar cell fabrication) and Fe (for habitat construction). A conceptual scheme will be presented whereby such a process could be supported by use of solar energy and a precursor robotic mission on the surface of the moon.
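    As background on the zone-refining step this record proposes, the standard single-pass segregation profile (Pfann's relation) can be evaluated directly; the distribution coefficient and zone length below are illustrative values, not parameters from the paper.

```python
import numpy as np

def single_pass_profile(x, c0=1.0, k=0.1, zone_length=0.1):
    """Pfann's single-pass zone-refining solute profile,
    C(x) = C0 * (1 - (1 - k) * exp(-k * x / l)),
    valid ahead of the last zone length of the ingot."""
    return c0 * (1.0 - (1.0 - k) * np.exp(-k * x / zone_length))

x = np.linspace(0.0, 0.9, 10)          # positions along a unit-length ingot
print(single_pass_profile(x))           # impurity left behind after one molten-zone pass
```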

  4. An Adaptively-Refined, Cartesian, Cell-Based Scheme for the Euler and Navier-Stokes Equations. Ph.D. Thesis - Michigan Univ.

    NASA Technical Reports Server (NTRS)

    Coirier, William John

    1994-01-01

    A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a different formulation of the viscous terms are shown to be necessary. A hybrid Cartesian/body-fitted grid generation approach is demonstrated. In addition, a grid-generation procedure based on body-aligned cell cutting coupled with a viscous stencil-construction procedure based on quadratic programming is presented.

  5. A conservation and biophysics guided stochastic approach to refining docked multimeric proteins.

    PubMed

    Akbal-Delibas, Bahar; Haspel, Nurit

    2013-01-01

    We introduce a protein docking refinement method that accepts complexes consisting of any number of monomeric units. The method uses a scoring function based on a tight coupling between evolutionary conservation, geometry and physico-chemical interactions. Understanding the role of protein complexes in the basic biology of organisms heavily relies on the detection of protein complexes and their structures. Different computational docking methods are developed for this purpose, however, these methods are often not accurate and their results need to be further refined to improve the geometry and the energy of the resulting complexes. Also, despite the fact that complexes in nature often have more than two monomers, most docking methods focus on dimers since the computational complexity increases exponentially due to the addition of monomeric units. Our results show that the refinement scheme can efficiently handle complexes with more than two monomers by biasing the results towards complexes with native interactions, filtering out false positive results. Our refined complexes have better IRMSDs with respect to the known complexes and lower energies than those initial docked structures. Evolutionary conservation information allows us to bias our results towards possible functional interfaces, and the probabilistic selection scheme helps us to escape local energy minima. We aim to incorporate our refinement method in a larger framework which also enables docking of multimeric complexes given only monomeric structures.
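    The two ideas highlighted here, a score that couples conservation, geometry and physico-chemical energy, and a probabilistic selection step that lets the search escape local minima, can be sketched generically; the weights, terms and temperature below are illustrative, not the authors' scoring function.

```python
import math
import random

def combined_score(energy, conservation, complementarity, w=(1.0, 0.5, 0.5)):
    """Lower is better: physico-chemical energy penalized, conserved-interface
    contacts and geometric complementarity rewarded."""
    return w[0] * energy - w[1] * conservation - w[2] * complementarity

def accept_move(old_score, new_score, temperature=1.0, rng=random):
    """Metropolis-style acceptance: always keep improvements, occasionally keep
    worse refinement moves so the search can leave a local energy minimum."""
    if new_score <= old_score:
        return True
    return rng.random() < math.exp(-(new_score - old_score) / temperature)

random.seed(0)
current = combined_score(energy=-20.0, conservation=0.9, complementarity=0.6)
trial = combined_score(energy=-18.5, conservation=0.4, complementarity=0.5)
print(accept_move(current, trial))
```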

  6. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
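    The first half of the scheme, building an initial MSM from simulation trajectories, is standard and compact; a count-and-normalize sketch is given below. The paper's machine-learning refinement against experimental measurements is not reproduced.

```python
import numpy as np

def estimate_msm(dtrajs, n_states, lag):
    """Count transitions at lag `lag` over discrete-state trajectories, row-normalize
    to a transition matrix T, and take the stationary distribution from the leading
    left eigenvector of T."""
    counts = np.zeros((n_states, n_states))
    for traj in dtrajs:
        for t in range(len(traj) - lag):
            counts[traj[t], traj[t + lag]] += 1.0
    T = counts / counts.sum(axis=1, keepdims=True)
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return T, pi / pi.sum()

# toy two-state trajectory with asymmetric dwell times
rng = np.random.default_rng(1)
traj = [0]
for _ in range(5000):
    stay = 0.95 if traj[-1] == 0 else 0.90
    traj.append(traj[-1] if rng.random() < stay else 1 - traj[-1])
T, pi = estimate_msm([traj], n_states=2, lag=1)
print(T, pi)    # pi weights state 0 more heavily, as the dwell times suggest
```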

  7. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and medium Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.

  8. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
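    The Richardson-extrapolation error estimate used as the refinement indicator is easy to state in isolation; a generic 1-D sketch follows, with the order, tolerance and data all illustrative.

```python
import numpy as np

def richardson_error(u_h, u_h2, order=2):
    """Estimated discretization error at coarse-grid points, from solutions on a
    grid of spacing h (u_h) and h/2 (u_h2, restricted to the same points):
    err ~ (u_h2 - u_h) / (2**order - 1)."""
    return (u_h2 - u_h) / (2.0 ** order - 1.0)

def cells_to_refine(u_h, u_h2, tol, order=2):
    """Flag the cells whose estimated error exceeds the prescribed tolerance."""
    return np.abs(richardson_error(u_h, u_h2, order)) > tol

x = np.linspace(0.0, 1.0, 101)
u_h  = np.tanh((x - 0.5) / 0.05) + 0.02 * np.sin(40.0 * x)   # coarse-grid solution
u_h2 = np.tanh((x - 0.5) / 0.05)                             # finer-grid values at the same points
print(np.where(cells_to_refine(u_h, u_h2, tol=0.01))[0])
```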

  9. The false negative rate and the role for virtual review in a nationally evaluated glaucoma referral refinement scheme.

    PubMed

    Ratnarajan, Gokulan; Kean, Jane; French, Karen; Parker, Mike; Bourne, Rupert

    2015-09-01

    To establish the safety of the CHANGES glaucoma referral refinement scheme (GRRS). The CHANGES scheme risk stratifies glaucoma referrals, with low risk referrals seen by a community based specialist optometrist (OSI) while high risk referrals are referred directly to the hospital. In this study, those patients discharged by the OSI were reviewed by the consultant ophthalmologist to establish a 'false negative' rate (Study 1). Virtual review of optic disc photographs was carried out both by a hospital-based specialist optometrist as well as the consultant ophthalmologist (Study 2). None of these 34 discharged patients seen by the consultant were found to have glaucoma or started on treatment to lower the intra-ocular pressure. Five of the 34 (15%) were classified as 'glaucoma suspect' based on the appearance of the optic disc and offered a follow-up appointment. Virtual review by both the consultant and optometrist had a sensitivity of 80%, whilst the false positive rate for the optometrist was 3.4%, and 32% for the consultant (p < 0.05). The false negative rate of the OSIs in the CHANGES scheme was 15%, however there were no patients where glaucoma was missed. Virtual review in experienced hands can be as effective as clinical review by a consultant, and is a valid method to ensure glaucoma is not missed in GRRS. The CHANGES scheme, which includes virtual review, is effective at reducing referrals to the hospital whilst not compromising patient safety. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  10. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a 5-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with "gold-standard" manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.

  11. An upwind method for the solution of the 3D Euler and Navier-Stokes equations on adaptively refined meshes

    NASA Astrophysics Data System (ADS)

    Aftosmis, Michael J.

    1992-10-01

    A new node based upwind scheme for the solution of the 3D Navier-Stokes equations on adaptively refined meshes is presented. The method uses a second-order upwind TVD scheme to integrate the convective terms, and discretizes the viscous terms with a new compact central difference technique. Grid adaptation is achieved through directional division of hexahedral cells in response to evolving features as the solution converges. The method is advanced in time with a multistage Runge-Kutta time stepping scheme. Two- and three-dimensional examples establish the accuracy of the inviscid and viscous discretization. These investigations highlight the ability of the method to produce crisp shocks, while accurately and economically resolving viscous layers. The representation of these and other structures is shown to be comparable to that obtained by structured methods. Further 3D examples demonstrate the ability of the adaptive algorithm to effectively locate and resolve multiple scale features in complex 3D flows with many interacting, viscous, and inviscid structures.

  12. An advanced teaching scheme for integrating problem-based learning in control education

    NASA Astrophysics Data System (ADS)

    Juuso, Esko K.

    2018-03-01

    Engineering education needs to provide both theoretical knowledge and problem-solving skills. Many topics can be presented in lectures and computer exercises are good tools in teaching the skills. Learning by doing is combined with lectures to provide additional material and perspectives. The teaching scheme includes lectures, computer exercises, case studies, seminars and reports organized as a problem-based learning process. In the gradually refining learning material, each teaching method has its own role. The scheme, which has been used in teaching two 4th year courses, is beneficial for overall learning progress, especially in bilingual courses. The students become familiar with new perspectives and are ready to use the course material in application projects.

  13. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  14. Some observations on mesh refinement schemes applied to shock wave phenomena

    NASA Technical Reports Server (NTRS)

    Quirk, James J.

    1995-01-01

    This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.

  15. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on an adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on an initial model guess. Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  16. An Efficient Means of Adaptive Refinement Within Systems of Overset Grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.

  17. Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh

    NASA Astrophysics Data System (ADS)

    Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru

    2017-11-01

    We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed simple data structure of the building cube method (BCM) where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and the parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement where a non-dimensional wall distance y+ near the sphere is used for a criterion of mesh refinement. The result showed the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested a cube-wise algorithm switching where an explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
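    The cube-wise algorithm switching mentioned at the end reduces to a per-cube decision on the local CFL number; a toy sketch follows, where the cube data, time step and CFL limit are made-up values rather than the BCM framework's interface.

```python
def choose_integrator(cubes, dt, cfl_limit=1.0):
    """Per-cube switching: explicit time integration where the local CFL number
    u_max * dt / dx stays under the limit, implicit where it does not."""
    choice = {}
    for name, cube in cubes.items():
        cfl = cube["u_max"] * dt / cube["dx"]
        choice[name] = "explicit" if cfl <= cfl_limit else "implicit"
    return choice

cubes = {
    "far_field": {"dx": 0.10,  "u_max": 1.0},   # coarse cube: small local CFL
    "near_wall": {"dx": 0.001, "u_max": 1.0},   # wall-refined cube: large local CFL
}
print(choose_integrator(cubes, dt=0.01))   # {'far_field': 'explicit', 'near_wall': 'implicit'}
```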

  18. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries shows a computational speedup of 52 on 64 processors using the run-time mesh partitioner.

  19. Two-stage atlas subset selection in multi-atlas based image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and medium Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.

  20. Spatial correlation-based side information refinement for distributed video coding

    NASA Astrophysics Data System (ADS)

    Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin

    2013-12-01

    Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefited from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the video sequence temporal correlation properties during the generation of side information (SI). In fact, the decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as these groups are decoded, thus providing more accurate SI for subsequent groups. It is shown that the proposed progressive SIR method leads to significant improvements over the Discover DVC codec as well as other SIR schemes recently introduced in the literature.

  1. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks.

    PubMed

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-05-12

    Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach has a simple construction which provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which extends the µ-PBIBD construction from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while retaining good key connectivity. Theoretical analysis and comparison with the related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while it achieves a trade-off between network resilience and network connectivity.
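
    The µ-PBIBD construction itself is not reproduced here; as a minimal illustration of how a combinatorial design yields deterministic key sharing, the sketch below uses a much simpler transversal-design-style construction over a prime field (an assumption for illustration, not the paper's design).

    ```python
    # Minimal design-based key pre-distribution sketch (NOT the paper's mu-PBIBD):
    # node (a, b) receives the key ring { (x, (a*x + b) mod q) : x = 0..q-1 }.
    q = 7  # prime; supports q*q nodes, each holding q key identifiers

    def key_ring(a, b):
        return {(x, (a * x + b) % q) for x in range(q)}

    # Two nodes with different slopes a share exactly one key identifier, because
    # a1*x + b1 = a2*x + b2 (mod q) has a unique solution x; nodes with the same
    # slope and different intercepts share none.
    r1, r2 = key_ring(1, 3), key_ring(4, 5)
    print(r1 & r2)   # {(4, 0)} -- the single shared key identifier
    ```

    The resilience-versus-connectivity trade-off discussed in the abstract comes from how many rings each key identifier appears in: capturing one node exposes its q identifiers and therefore a predictable fraction of the links between other nodes.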

  2. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks

    PubMed Central

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-01-01

    Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach has a simple construction which provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which extends the µ-PBIBD construction from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while retaining good key connectivity. Theoretical analysis and comparison with the related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while it achieves a trade-off between network resilience and network connectivity. PMID:29757244

  3. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

    We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve more similar results to the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.

  4. Structure-based coarse-graining for inhomogeneous liquid polymer systems.

    PubMed

    Fukuda, Motoo; Zhang, Hedong; Ishiguro, Takahiro; Fukuzawa, Kenji; Itoh, Shintaro

    2013-08-07

    The iterative Boltzmann inversion (IBI) method is used to derive interaction potentials for coarse-grained (CG) systems by matching structural properties of a reference atomistic system. However, because it depends on such thermodynamic conditions as density and pressure of the reference system, the derived CG nonbonded potential is probably not applicable to inhomogeneous systems containing different density regimes. In this paper, we propose a structure-based coarse-graining scheme to devise CG nonbonded potentials that are applicable to different density bulk systems and inhomogeneous systems with interfaces. Similar to the IBI, the radial distribution function (RDF) of a reference atomistic bulk system is used for iteratively refining the CG nonbonded potential. In contrast to the IBI, however, our scheme employs an appropriately estimated initial guess and a small amount of refinement to suppress transfer of the many-body interaction effects included in the reference RDF into the CG nonbonded potential. To demonstrate the application of our approach to inhomogeneous systems, we perform coarse-graining for a liquid perfluoropolyether (PFPE) film coated on a carbon surface. The constructed CG PFPE model favorably reproduces structural and density distribution functions, not only for bulk systems, but also at the liquid-vacuum and liquid-solid interfaces, demonstrating that our CG scheme offers an easy and practical way to accurately determine nonbonded potentials for inhomogeneous systems.
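
    For reference, the generic IBI update that the scheme builds on can be written in one line; the sketch below shows only that standard step (the paper's contribution, a better initial guess and a deliberately small number of refinement iterations, is not reproduced), and the value of kT is an assumed example at 300 K.

    ```python
    # Standard iterative Boltzmann inversion update:
    #   V_{i+1}(r) = V_i(r) + kT * ln( g_i(r) / g_target(r) )
    import numpy as np

    def ibi_update(V, g_current, g_target, kT=2.494):   # kT in kJ/mol at 300 K
        eps = 1e-12   # guard against log(0) where the RDF vanishes
        return V + kT * np.log((g_current + eps) / (g_target + eps))
    ```

    Where the current RDF overshoots the target the potential is made more repulsive, and vice versa; iterating drives the coarse-grained RDF toward the reference RDF.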

  5. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.

  6. Gaussian entanglement distribution via satellite

    NASA Astrophysics Data System (ADS)

    Hosseinidehaj, Nedasadat; Malaney, Robert

    2015-02-01

    In this work we analyze three quantum communication schemes for the generation of Gaussian entanglement between two ground stations. Communication occurs via a satellite over two independent atmospheric fading channels dominated by turbulence-induced beam wander. In our first scheme, the engineering complexity remains largely on the ground transceivers, with the satellite acting simply as a reflector. Although the channel state information of the two atmospheric channels remains unknown in this scheme, the Gaussian entanglement generation between the ground stations can still be determined. On the ground, distillation and Gaussification procedures can be applied, leading to a refined Gaussian entanglement generation rate between the ground stations. We compare the rates produced by this first scheme with two competing schemes in which quantum complexity is added to the satellite, thereby illustrating the tradeoff between space-based engineering complexity and the rate of ground-station entanglement generation.

  7. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)

    2000-01-01

    This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.

  8. Moving overlapping grids with adaptive mesh refinement for high-speed reactive and non-reactive flow

    NASA Astrophysics Data System (ADS)

    Henshaw, William D.; Schwendeman, Donald W.

    2006-08-01

    We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows that demonstrate the use and accuracy of the numerical approach.

  9. Modeling flow at the nozzle of a solid rocket motor

    NASA Technical Reports Server (NTRS)

    Chow, Alan S.; Jin, Kang-Ren

    1991-01-01

    The mechanical behavior of a rocket motor internal flow field is governed by a system of nonlinear partial differential equations which can be solved numerically. The accuracy and the convergence of the solution of the system of equations depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of numerical modeling. With this scheme, the grid is refined as the solution evolves. This scheme significantly improves the methodology for solving flow problems in rocket nozzles by putting the refinement part of grid generation into the computer algorithm.
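
    The abstract does not give the refinement rule; as a toy, hypothetical illustration of solution-adaptive refinement of the kind described, the 1D sketch below bisects any interval across which the solution jumps by more than a tolerance.

    ```python
    # Toy gradient-based grid refinement in 1D (illustration only, not the
    # nozzle-flow algorithm): bisect intervals where the solution jump is large.
    import numpy as np

    def refine_grid(x, u, tol):
        """x: sorted node coordinates, u: callable solution; returns refined x."""
        new_nodes = [0.5 * (a + b)
                     for a, b in zip(x[:-1], x[1:])
                     if abs(u(b) - u(a)) > tol]         # sharp gradient detected
        return np.sort(np.concatenate([x, new_nodes]))

    x = np.linspace(0.0, 1.0, 11)
    u = lambda s: np.tanh(50.0 * (s - 0.5))             # steep profile near s = 0.5
    for _ in range(3):                                   # a few refinement passes
        x = refine_grid(x, u, tol=0.2)
    print(len(x), "nodes after refinement")
    ```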

  10. Hierarchical Volume Representation with ∛2 Subdivision and Trivariate B-Spline Wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linsen, L; Gray, JT; Pascucci, V

    2002-01-11

    Multiresolution methods provide a means for representing data at multiple levels of detail. They are typically based on a hierarchical data organization scheme and update rules needed for data value computation. We use a data organization that is based on what we call ⁿ√2 subdivision. The main advantage of ⁿ√2 subdivision, compared to quadtree (n = 2) or octree (n = 3) organizations, is that the number of vertices is only doubled in each subdivision step instead of multiplied by a factor of four or eight, respectively. To update data values we use n-variate B-spline wavelets, which yield better approximations for each level of detail. We develop a lifting scheme for n = 2 and n = 3 based on the ⁿ√2-subdivision scheme. We obtain narrow masks that could also provide a basis for view-dependent visualization and adaptive refinement.

  11. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  12. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunney, Brian T.N.; Anderson, Robert W.

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  13. Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method

    PubMed Central

    Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan

    2012-01-01

    The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine low-resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested a constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm, for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint-free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. For the eight proteins studied, with three decoys each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and is possibly applicable for high-throughput protein structure refinement. PMID:22260550

  14. High Order Schemes in Bats-R-US for Faster and More Accurate Predictions

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Toth, G.; Gombosi, T. I.

    2014-12-01

    BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented the fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. The high order schemes have now been extended to generalized coordinates, including spherical grids, and also to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of free-stream solutions and high-order accuracy as well as robust, oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second order scheme with far fewer computational resources. This is especially important for space weather prediction, which requires faster than real time code execution.
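
    For orientation, the classical fifth-order WENO reconstruction of Jiang and Shu on a uniform grid is sketched below. This is only an illustration of the class of schemes mentioned, not the CWENO5 or MP5 implementation used in BATS-R-US.

    ```python
    # Classical WENO5-JS left-biased reconstruction of u at the interface i+1/2
    # on a uniform grid (illustrative; not the BATS-R-US CWENO5/MP5 code).
    def weno5_reconstruct(um2, um1, u0, up1, up2, eps=1e-6):
        # Third-order candidate reconstructions on the three sub-stencils
        p0 = (2.0*um2 - 7.0*um1 + 11.0*u0) / 6.0
        p1 = (-um1 + 5.0*u0 + 2.0*up1) / 6.0
        p2 = (2.0*u0 + 5.0*up1 - up2) / 6.0
        # Jiang-Shu smoothness indicators
        b0 = 13.0/12.0*(um2 - 2.0*um1 + u0)**2 + 0.25*(um2 - 4.0*um1 + 3.0*u0)**2
        b1 = 13.0/12.0*(um1 - 2.0*u0 + up1)**2 + 0.25*(um1 - up1)**2
        b2 = 13.0/12.0*(u0 - 2.0*up1 + up2)**2 + 0.25*(3.0*u0 - 4.0*up1 + up2)**2
        # Nonlinear weights built from the ideal weights (1/10, 6/10, 3/10)
        a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
        return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

    print(weno5_reconstruct(-2.0, -1.0, 0.0, 1.0, 2.0))   # 0.5 for linear data
    ```

    On smooth data the nonlinear weights approach the ideal weights and the scheme is fifth-order accurate; near a discontinuity the smoothness indicators suppress the stencil that crosses it, which is what gives the oscillation-free behavior referred to above.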

  15. Local finite element enrichment strategies for 2D contact computations and a corresponding post-processing scheme

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2013-08-01

    Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum, as reported by Sauer (Int J Numer Meth Eng, 87: 593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.

  16. Developing a risk-based trading scheme for cattle in England: farmer perspectives on managing trading risk for bovine tuberculosis.

    PubMed

    Little, R; Wheeler, K; Edge, S

    2017-02-11

    This paper examines farmer attitudes towards the development of a voluntary risk-based trading scheme for cattle in England as a risk mitigation measure for bovine tuberculosis (bTB). The research reported here was commissioned to gather evidence on the type of scheme that would have a good chance of success in improving the information farmers receive about the bTB risk of cattle they buy. Telephone interviews were conducted with a stratified random sample of 203 cattle farmers in England, splitting the interviews equally between respondents in the high-risk and low-risk areas for bTB. Supplementary interviews and focus groups with farmers were also carried out across the risk areas. Results suggest a greater enthusiasm for a risk-based trading scheme in low-risk areas compared with high-risk areas and among members of breed societies and cattle health schemes. Third-party certification of herds by private vets or the Animal and Plant Health Agency was regarded as the most credible source, with farmer self-certification being favoured by sellers, but being regarded as least credible by buyers. Understanding farmers' attitudes towards voluntary risk-based trading is important to gauge likely uptake, understand preferences for information provision and to assist in monitoring, evaluating and refining the scheme once established. British Veterinary Association.

  17. Simulation of violent free surface flow by AMR method

    NASA Astrophysics Data System (ADS)

    Hu, Changhong; Liu, Cheng

    2018-05-01

    A novel CFD approach based on the adaptive mesh refinement (AMR) technique is being developed for the numerical simulation of violent free surface flows. The CIP method is applied in the flow solver, and the tangent of hyperbola for interface capturing with slope weighting (THINC/SW) scheme is implemented as the free surface capturing scheme. The PETSc library is adopted to solve the linear system. The linear solver is redesigned and modified to satisfy the requirements of the AMR mesh topology. In this paper, our CFD method is outlined and newly obtained results on the numerical simulation of violent free surface flows are presented.

  18. Recent developments in the structural design and optimization of ITER neutral beam manifold

    NASA Astrophysics Data System (ADS)

    Chengzhi, CAO; Yudong, PAN; Zhiwei, XIA; Bo, LI; Tao, JIANG; Wei, LI

    2018-02-01

    This paper describes a new design of the neutral beam manifold based on a more optimized support system. An alternative scheme is proposed to replace the former complex manifold supports and internal pipe supports in the final design phase. Both the structural reliability and feasibility were confirmed with detailed analyses. Comparative analyses between two typical types of manifold support scheme were performed. All relevant results of mechanical analyses for typical operation scenarios and fault conditions are presented. Future optimization activities are described, which will give useful information for a refined setting of components in the next phase.

  19. Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes

    DOE PAGES

    Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...

    2016-08-18

    In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for the efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate nested hierarchies from an initial coarse mesh that can be used for a variety of purposes, such as multigrid solvers/preconditioners, solution convergence and verification studies, and improving overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and refining in memory). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection, in order to minimize and provide more control over the geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB; Tautges et al., 2004). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study the convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.
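
    One level of the uniform refinement that builds such a nested hierarchy is easy to state for triangles: each element is split into four by its edge midpoints. The sketch below is a self-contained illustration of that rule, not the MOAB implementation (which works with array-based data structures and in parallel).

    ```python
    # One level of uniform 1-to-4 triangle refinement via edge midpoints
    # (illustration only, not the MOAB array-based implementation).
    def refine_uniform(vertices, triangles):
        vertices = list(vertices)
        midpoint_of = {}                      # edge (i, j) -> new vertex index
        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in midpoint_of:
                vi, vj = vertices[i], vertices[j]
                vertices.append(tuple(0.5 * (a + b) for a, b in zip(vi, vj)))
                midpoint_of[key] = len(vertices) - 1
            return midpoint_of[key]
        fine = []
        for a, b, c in triangles:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            fine += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        return vertices, fine

    verts, tris = refine_uniform([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [(0, 1, 2)])
    print(len(tris), "triangles after one level")   # 4
    ```

    Applying the rule repeatedly yields the nested coarse-to-fine hierarchy that multigrid solvers and convergence studies need; shared-edge midpoints are cached so neighbouring elements agree on the new vertices.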

  20. Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made for the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping. We could think of a mechanism that activates the split form of the equations only at some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as the sensor. Here, the method is briefly described with selected numerical experiments.

  1. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  2. Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Fitzsimons, Joseph; Kashefi, Elham

    2012-02-01

    Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's inputs, outputs and computation remain private. Recently we proposed a universal unconditionally secure BQC scheme, based on the conceptual framework of the measurement-based quantum computing model, where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Here we present a refinement of the scheme which vastly expands the class of quantum circuits which can be directly implemented as a blind computation, by introducing a new class of resource states which we term dotted-complete graph states and expanding the set of single qubit states the client is required to prepare. These two modifications significantly simplify the overall protocol and remove the previously present restriction that only nearest-neighbor circuits could be implemented as blind computations directly. As an added benefit, the refined protocol admits a substantially more intuitive and simplified verification mechanism, allowing the correctness of a blind computation to be verified with arbitrarily small probability of error.

  3. Modified Mean-Pyramid Coding Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Romer, Richard

    1996-01-01

    The modified mean-pyramid coding scheme requires transmission of slightly less data, reducing the data-expansion factor from 1/3 to 1/12. Mean-pyramid coding schemes support progressive transmission of image data in a sequence of frames, such that a coarse version of the image is reconstructed after receipt of the first frame and an increasingly refined version is reconstructed after receipt of each subsequent frame.
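
    A toy version of the underlying mean pyramid is easy to write down; the sketch below only illustrates progressive coarse-to-fine transmission and does not reproduce the modified scheme's data-expansion bookkeeping.

    ```python
    # Toy mean pyramid for progressive image transmission: level 0 is the full
    # image and each coarser level stores 2x2 block means; the coarsest frame is
    # sent first and finer levels refine it. (Illustration only.)
    import numpy as np

    def mean_pyramid(img, levels):
        pyr = [img.astype(float)]
        for _ in range(levels):
            a = pyr[-1]
            a = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
            pyr.append(a)
        return pyr          # pyr[-1] is the coarse frame transmitted first

    img = np.arange(64, dtype=float).reshape(8, 8)
    print([p.shape for p in mean_pyramid(img, levels=3)])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
    ```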

  4. The role of muscle synergies in myoelectric control: trends and challenges for simultaneous multifunction control

    NASA Astrophysics Data System (ADS)

    Ison, Mark; Artemiadis, Panagiotis

    2014-10-01

    Myoelectric control is filled with potential to significantly change human-robot interaction due to the ability to non-invasively measure human motion intent. However, current control schemes have struggled to achieve the robust performance that is necessary for use in commercial applications. As demands in myoelectric control trend toward simultaneous multifunctional control, multi-muscle coordinations, or synergies, play larger roles in the success of the control scheme. Detecting and refining patterns in muscle activations robust to the high variance and transient changes associated with surface electromyography is essential for efficient, user-friendly control. This article reviews the role of muscle synergies in myoelectric control schemes by dissecting each component of the scheme with respect to associated challenges for achieving robust simultaneous control of myoelectric interfaces. Electromyography recording details, signal feature extraction, pattern recognition and motor learning based control schemes are considered, and future directions are proposed as steps toward fulfilling the potential of myoelectric control in clinically and commercially viable applications.

  5. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  6. GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation.

    PubMed

    Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos

    2016-01-01

    We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.

  7. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load-balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  8. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, W.; Almgren, A.; Bell, J.

    We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.

  9. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lomov, I; Pember, R; Greenough, J

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore allows a sparse representation of the data to be obtained, saving memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space, where the development of thin filaments, strongly amplified by relativistic effects, requires a substantial increase in the total number of points of the phase-space grid as they become finer over time. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to increase the local character in phase space of the numerical scheme, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with more local (in space) numerical schemes such as compact finite difference schemes, the discontinuous Galerkin method, or finite element residual schemes, which are well suited for parallel domain decomposition techniques.
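
    The multiresolution analysis used by the solver is more elaborate, but the essence of a wavelet-based refinement criterion can be shown with plain Haar detail coefficients: where the detail is large the local approximation error is large and the region must stay refined, elsewhere it can be coarsened. The tolerance and test function below are illustrative assumptions.

    ```python
    # Minimal Haar-wavelet refinement criterion (illustration of the idea only).
    import numpy as np

    def refinement_flags(f, tol):
        f = np.asarray(f, dtype=float)
        detail = 0.5 * (f[0::2] - f[1::2])      # Haar detail coefficients
        return np.abs(detail) > tol             # True where fine resolution is needed

    x = np.linspace(-1.0, 1.0, 64)
    f = np.tanh(20.0 * x)                        # sharp, filament-like feature
    flags = refinement_flags(f, tol=1e-3)
    print(np.count_nonzero(flags), "of", flags.size, "coarse cells flagged")
    ```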

  11. The comparison of approaches to the solid-state NMR-based structural refinement of vitamin B1 hydrochloride and of its monohydrate

    NASA Astrophysics Data System (ADS)

    Czernek, Jiří; Pawlak, Tomasz; Potrzebowski, Marek J.; Brus, Jiří

    2013-01-01

    The 13C and 15N CPMAS SSNMR measurements were accompanied by the proper theoretical description of the solid-phase environment, as provided by the density functional theory in the pseudopotential plane-wave scheme, and employed in refining the atomic coordinates of the crystal structures of thiamine chloride hydrochloride and of its monohydrate. Thus, using the DFT functionals PBE, PW91 and RPBE, the SSNMR-consistent solid-phase structures of these compounds are derived from the geometrical optimization, which is followed by an assessment of the fits of the GIPAW-predicted values of the chemical shielding parameters to their experimental counterparts.

  12. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.

  13. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with a multilevel preconditioner, for finding several eigenvalues of generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. The presented numerical experiments confirm that the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to the wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance compared to methods which rely on exact linear solves, indicating the tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
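
    The inexact inner solves and the multilevel preconditioner are the paper's contribution and are not reproduced here; as a point of reference, the sketch below runs the standard (exact-solve) shift-invert Lanczos for a generalized symmetric eigenproblem with SciPy, which is the baseline that the inexact scheme accelerates. The matrices are toy stand-ins.

    ```python
    # Standard shift-invert Lanczos for A x = lambda B x with SciPy (exact inner
    # solves); the paper replaces these solves with preconditioned-CG "inexact"
    # solves, which is not shown here. Toy tridiagonal matrices for illustration.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 2000
    A = sp.diags([-np.ones(n - 1), 2.0*np.ones(n), -np.ones(n - 1)],
                 [-1, 0, 1], format="csc")      # stiffness-like, symmetric
    B = sp.identity(n, format="csc")            # mass-like (identity for simplicity)

    sigma = 0.01                                 # shift near the eigenvalues of interest
    vals, vecs = eigsh(A, k=5, M=B, sigma=sigma, which="LM")
    print(np.sort(vals))                         # the 5 eigenvalues closest to sigma
    ```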

  14. Stability and natural vibration analysis of laminated plates by using a mixed element based on a refined plate theory

    NASA Technical Reports Server (NTRS)

    Putcha, N. S.; Reddy, J. N.

    1986-01-01

    A mixed shear flexible finite element, with relaxed continuity, is developed for the geometrically linear and nonlinear analysis of layered anisotropic plates. The element formulation is based on a refined higher order theory which satisfies the zero transverse shear stress boundary conditions on the top and bottom faces of the plate and requires no shear correction coefficients. The mixed finite element developed herein consists of eleven degrees of freedom per node which include three displacements, two rotations and six moment resultants. The element is evaluated for its accuracy in the analysis of the stability and vibration of anisotropic rectangular plates with different lamination schemes and boundary conditions. The mixed finite element described here for the higher order theory gives very accurate results for buckling loads and natural frequencies.

  15. A novel transferable individual tree crown delineation model based on Fishing Net Dragging and boundary classification

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Im, Jungho; Quackenbush, Lindi J.

    2015-12-01

    This study provides a novel approach to individual tree crown delineation (ITCD) using airborne Light Detection and Ranging (LiDAR) data in dense natural forests, using two main steps: crown boundary refinement based on a proposed Fishing Net Dragging (FiND) method, and segment merging based on boundary classification. FiND starts with approximate tree crown boundaries derived using a traditional watershed method with Gaussian filtering and refines these boundaries using an algorithm that mimics how a fisherman drags a fishing net. Random forest machine learning is then used to classify boundary segments into two classes: boundaries between trees and boundaries between branches that belong to a single tree. Three groups of LiDAR-derived features, two from the pseudo waveform generated along the crown boundaries and one from a canopy height model (CHM), were used in the classification. The proposed ITCD approach was tested using LiDAR data collected over a mountainous region in the Adirondack Park, NY, USA. Overall accuracy of boundary classification was 82.4%. Features derived from the CHM were generally more important in the classification than the features extracted from the pseudo waveform. A comprehensive accuracy assessment scheme for ITCD was also introduced by considering both area of crown overlap and crown centroids. Accuracy assessment using this new scheme shows the proposed ITCD achieved overall accuracies of 74% and 78% for deciduous and mixed forest, respectively.

  16. Noncontrast magnetic resonance angiography of the hand: improved arterial conspicuity by multidirectional flow-sensitive dephasing magnetization preparation in 3D balanced steady-state free precession imaging.

    PubMed

    Fan, Zhaoyang; Hodnett, Philip A; Davarpanah, Amir H; Scanlon, Timothy G; Sheehan, John J; Varga, John; Carr, James C; Li, Debiao

    2011-08-01

    Objective: To develop a flow-sensitive dephasing (FSD) preparative scheme to facilitate multidirectional flow-signal suppression in 3-dimensional balanced steady-state free precession imaging and to validate the feasibility of the refined sequence for noncontrast magnetic resonance angiography (NC-MRA) of the hand. Materials and Methods: A new FSD preparative scheme was developed that combines 2 conventional FSD modules. Studies using a flow phantom (gadolinium-doped water, 15 cm/s) and the hands of 11 healthy volunteers (6 males and 5 females) were performed to compare the proposed FSD scheme with its conventional counterpart with respect to the signal suppression of multidirectional flow. In 9 of the 11 healthy subjects and 2 patients with suspected vasculitis and documented Raynaud phenomenon, respectively, 3-dimensional balanced steady-state free precession imaging coupled with the new FSD scheme was compared with spatial-resolution-matched (0.94 × 0.94 × 0.94 mm) contrast-enhanced magnetic resonance angiography (0.15 mmol/kg gadopentetate dimeglumine) in terms of overall image quality, venous contamination, motion degradation, and arterial conspicuity. Results: The proposed FSD scheme was able to suppress 2-dimensional flow signal in the flow phantom and hands and yielded significantly higher arterial conspicuity scores than the conventional scheme did on NC-MRA at the regions of the common and proper digital arteries. Compared with contrast-enhanced magnetic resonance angiography, the refined NC-MRA technique yielded comparable overall image quality and motion degradation, significantly less venous contamination, and a significantly higher arterial conspicuity score at the digital arteries. Conclusions: The FSD-based NC-MRA technique is improved in the depiction of multidirectional flow by applying a 2-module FSD preparation, which enhances its potential to serve as an alternative magnetic resonance angiography technique for the assessment of hand vascular abnormalities.

  17. High speed inviscid compressible flow by the finite element method

    NASA Technical Reports Server (NTRS)

    Zienkiewicz, O. C.; Loehner, R.; Morgan, K.

    1984-01-01

    The finite element method and an explicit time stepping algorithm which is based on Taylor-Galerkin schemes with an appropriate artificial viscosity is combined with an automatic mesh refinement process which is designed to produce accurate steady state solutions to problems of inviscid compressible flow in two dimensions. The results of two test problems are included which demonstrate the excellent performance characteristics of the proposed procedures.

  18. xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures.

    PubMed

    McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus

    2014-09-01

    X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of D-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.

  19. Thoughts on the chimera method of simulation of three-dimensional viscous flow

    NASA Technical Reports Server (NTRS)

    Steger, Joseph L.

    1991-01-01

    The chimera overset grid is reviewed and discussed relative to other procedures for simulating flow about complex configurations. It is argued that while more refinement of the technique is needed, current schemes are competitive to unstructured grid schemes and should ultimately prove more useful.

  20. A refined mixed shear flexible finite element for the nonlinear analysis of laminated plates

    NASA Technical Reports Server (NTRS)

    Putcha, N. S.; Reddy, J. N.

    1986-01-01

    The present study is concerned with the development of a mixed shear flexible finite element with relaxed continuity for the geometrically linear and nonlinear analysis of laminated anisotropic plates. The formulation of the element is based on a refined higher-order theory. This theory satisfies the zero transverse shear stress boundary conditions on the top and bottom faces of the plate. Shear correction coefficients are not needed. The developed element consists of 11 degrees-of-freedom per node, taking into account three displacements, two rotations, and six moment resultants. An evaluation of the element is conducted with respect to the accuracy obtained in the bending of laminated anisotropic rectangular plates with different lamination schemes, loadings, and boundary conditions.

  1. Oxygen-Induced Cracking Distillation of Oil in the Continuous Flow Tank Reactor

    ERIC Educational Resources Information Center

    Shvets, Valeriy F.; Kozlovskiy, Roman A.; Luganskiy, Artur I.; Gorbunov, Andrey V.; Suchkov, Yuriy P.; Ushin, Nikolay S.; Cherepanov, Alexandr A.

    2016-01-01

    The article analyses problems of processing black oil fuel and addresses the possibility of increasing the depth of oil refining by a new processing scheme. The study examines various methods of increasing the depth of oil refining, reveals their inadequacies, and highlights the need to introduce a new method of processing atmospheric and vacuum…

  2. A fast and accurate dihedral interpolation loop subdivision scheme

    NASA Astrophysics Data System (ADS)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
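
    As an illustration of the adaptive criterion described above, the following sketch (Python/NumPy) marks a triangle for refinement whenever the dihedral angle between its normal and an adjacent face normal exceeds a threshold; the function names, the angle_threshold_deg parameter, and the precomputed face_adjacency pair list are assumptions for illustration, not the authors' implementation.

      import numpy as np

      def face_normals(vertices, faces):
          """Unit normals of triangular faces (faces: (m, 3) array of vertex indices)."""
          v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
          n = np.cross(v1 - v0, v2 - v0)
          return n / np.linalg.norm(n, axis=1, keepdims=True)

      def mark_faces_for_refinement(vertices, faces, face_adjacency, angle_threshold_deg=15.0):
          """Mark a face when the dihedral angle to any adjacent face exceeds the
          threshold, so flat regions stay coarse and curved regions get refined."""
          normals = face_normals(vertices, faces)
          marked = np.zeros(len(faces), dtype=bool)
          cos_thresh = np.cos(np.radians(angle_threshold_deg))
          for f, g in face_adjacency:          # pairs of indices of adjacent faces
              if np.dot(normals[f], normals[g]) < cos_thresh:
                  marked[f] = marked[g] = True
          return marked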

  3. A Lagrangian particle method with remeshing for tracer transport on the sphere

    DOE PAGES

    Bosler, Peter Andrew; Kent, James; Krasny, Robert; ...

    2017-03-30

    A Lagrangian particle method (called LPM) based on the flow map is presented for tracer transport on the sphere. The particles carry tracer values and are located at the centers and vertices of triangular Lagrangian panels. Remeshing is applied to control particle disorder and two schemes are compared, one using direct tracer interpolation and another using inverse flow map interpolation with sampling of the initial tracer density. Test cases include a moving-vortices flow and reversing-deformational flow with both zero and nonzero divergence, as well as smooth and discontinuous tracers. We examine the accuracy of the computed tracer density and tracer integral, and preservation of nonlinear correlation in a pair of tracers. Here, we compare results obtained using LPM and the Lin–Rood finite-volume scheme. An adaptive particle/panel refinement scheme is demonstrated.

  4. A Lagrangian particle method with remeshing for tracer transport on the sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter Andrew; Kent, James; Krasny, Robert

    A Lagrangian particle method (called LPM) based on the flow map is presented for tracer transport on the sphere. The particles carry tracer values and are located at the centers and vertices of triangular Lagrangian panels. Remeshing is applied to control particle disorder and two schemes are compared, one using direct tracer interpolation and another using inverse flow map interpolation with sampling of the initial tracer density. Test cases include a moving-vortices flow and reversing-deformational flow with both zero and nonzero divergence, as well as smooth and discontinuous tracers. We examine the accuracy of the computed tracer density and tracer integral, and preservation of nonlinear correlation in a pair of tracers. Here, we compare results obtained using LPM and the Lin–Rood finite-volume scheme. An adaptive particle/panel refinement scheme is demonstrated.

  5. Parametric Human Body Reconstruction Based on Sparse Key Points.

    PubMed

    Cheng, Ke-Li; Tong, Ruo-Feng; Tang, Min; Qian, Jing-Ye; Sarkis, Michel

    2016-11-01

    We propose an automatic parametric human body reconstruction algorithm which can efficiently construct a model using a single Kinect sensor. A user needs to stand still in front of the sensor for a couple of seconds to measure the range data. The user's body shape and pose will then be automatically constructed in several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our proposed scheme relies on sparse key points for the reconstruction. It employs regression to find the corresponding key points between the scanned range data and some annotated training data. We design two kinds of feature descriptors as well as corresponding regression stages to make the regression robust and accurate. Our scheme concludes with a dense refinement step, in which a pre-factorization method is applied to improve the computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.

  6. On some Approximation Schemes for Steady Compressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.

    This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12], to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special ``effective pressure'' in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence, and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2], in an ongoing investigation.

  7. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
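
    To make the primal-dual active set strategy concrete, the minimal sketch below (Python/NumPy) applies it to a discrete obstacle problem, a simplified stand-in for the contact problem discussed above; the stiffness matrix A, load vector f, obstacle g, and complementarity parameter c are assumed inputs, and the stopping rule is the usual "active set unchanged" test.

      import numpy as np

      def pdas_obstacle(A, f, g, c=1.0, max_iter=50):
          """Primal-dual active set method for  min 1/2 u'Au - f'u  s.t. u >= g,
          i.e. the KKT system  A u - lam = f,  lam >= 0,  u >= g,  lam'(u - g) = 0."""
          n = len(f)
          u = np.maximum(np.linalg.solve(A, f), g)      # feasible initial guess
          lam = np.zeros(n)
          for _ in range(max_iter):
              active = lam + c * (g - u) > 0            # predicted contact set
              inactive = ~active
              u_new, lam_new = np.empty(n), np.zeros(n)
              u_new[active] = g[active]                 # enforce the constraint there
              if inactive.any():                        # solve the reduced linear system
                  u_new[inactive] = np.linalg.solve(
                      A[np.ix_(inactive, inactive)],
                      f[inactive] - A[np.ix_(inactive, active)] @ g[active])
              lam_new[active] = (A @ u_new - f)[active] # recover the multiplier
              if np.array_equal(active, lam_new + c * (g - u_new) > 0):
                  return u_new, lam_new                 # active set has converged
              u, lam = u_new, lam_new
          return u, lam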

  8. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.

  9. A qualitative study of stakeholder views regarding participation in locally commissioned enhanced optometric services.

    PubMed

    Konstantakopoulou, E; Harper, R A; Edgar, D F; Lawrenson, J G

    2014-05-29

    To explore the views of optometrists, general practitioners (GPs) and ophthalmologists regarding the development and organisation of community-based enhanced optometric services. Qualitative study using free-text questionnaires and telephone interviews. A minor eye conditions scheme (MECS) and a glaucoma referral refinement scheme (GRRS) are based on accredited community optometry practices. 41 optometrists, 6 ophthalmologists and 25 GPs. The most common reason given by optometrists for participation in enhanced schemes was to further their professional development; however, as providers of 'for-profit' healthcare, it was clear that participants had also considered the impact of the schemes on their business. Lack of fit with the 'retail' business model of optometry was a frequently given reason for non-participation. The methods used for training and accreditation were generally thought to be appropriate, and participating optometrists welcomed the opportunities for ongoing training. The ophthalmologists involved in the MECS and GRRS expressed very positive views regarding the schemes and widely acknowledged that the new care pathways would reduce unnecessary referrals and shorten patient waiting times. GPs involved in the MECS were also very supportive. They felt that the scheme provided an 'expert' local opinion that could potentially reduce the number of secondary care referrals. The results of this study demonstrated strong stakeholder support for the development of community-based enhanced optometric services. Although optometrists welcomed the opportunity to develop their professional skills and knowledge, enhanced schemes must also provide a sufficient financial incentive so as not to compromise the profitability of their business. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  10. Boundary fitting based segmentation of fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2015-03-01

    Segmentation is a fundamental step in quantifying characteristics, such as volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z direction refinement, branch pruning, end point matching, and boundary fitting methods to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active contours based scheme.

  11. Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity

    NASA Astrophysics Data System (ADS)

    Hamon, F. P.; Mallison, B.; Tchelepi, H.

    2016-12-01

    In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.

  12. Self-adaptive relevance feedback based on multilevel image content analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yongying; Zhang, Yujin; Fu, Yu

    2001-01-01

    In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is a key to improving the querying. Among the related techniques, relevance feedback has become a hot research topic because it combines information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multilevel image content analysis, the relevant images from the user can be automatically analyzed at different levels and the querying can be modified according to the different analysis results. Secondly, to make it more convenient for the user, the relevance feedback procedure can be conducted either with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results obtained by our self-adaptive relevance feedback are given.

  13. Self-adaptive relevance feedback based on multilevel image content analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yongying; Zhang, Yujin; Fu, Yu

    2000-12-01

    In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is a key to improving the querying. Among the related techniques, relevance feedback has become a hot research topic because it combines information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multilevel image content analysis, the relevant images from the user can be automatically analyzed at different levels and the querying can be modified according to the different analysis results. Secondly, to make it more convenient for the user, the relevance feedback procedure can be conducted either with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results obtained by our self-adaptive relevance feedback are given.

  14. Refinements to SSiB with an Emphasis on Snow-Physics: Evaluation and Validation Using GSWP and Valdai Data

    NASA Technical Reports Server (NTRS)

    Mocko, David M.; Sud, Y. C.

    2000-01-01

    Refinements to the snow-physics scheme of SSiB (Simplified Simple Biosphere Model) are described and evaluated. The upgrades include a partial redesign of the conceptual architecture to better simulate the diurnal temperature of the snow surface. For a deep snowpack, there are two separate prognostic temperature snow layers - the top layer responds to diurnal fluctuations in the surface forcing, while the deep layer exhibits a slowly varying response. In addition, the use of a very deep soil temperature and a treatment of snow aging with its influence on snow density are parameterized and evaluated. The upgraded snow scheme produces better timing of snow melt in GSWP-style simulations using ISLSCP Initiative I data for 1987-1988 in the Russian Wheat Belt region. To simulate more realistic runoff in regions with high orographic variability, additional improvements are made to SSiB's soil hydrology. These improvements include an orography-based surface runoff scheme as well as interaction with a water table below SSiB's three soil layers. The addition of these parameterizations further helps to simulate more realistic runoff and accompanying prognostic soil moisture fields in the GSWP-style simulations. In intercomparisons of the performance of the new snow-physics SSiB with its earlier versions using an 18-year single-site dataset from Valdai, Russia, the version of SSiB described in this paper again produces the earliest onset of snow melt. Soil moisture and deep soil temperatures also compare favorably with observations.

  15. Optimal guidance with obstacle avoidance for nap-of-the-earth flight

    NASA Technical Reports Server (NTRS)

    Pekelsma, Nicholas J.

    1988-01-01

    The development of automatic guidance is discussed for helicopter Nap-of-the-Earth (NOE) and near-NOE flight. It deals with algorithm refinements relating to automated real-time flight path planning and to mission planning. With regard to path planning, it relates rotorcraft trajectory characteristics to the NOE computation scheme and addresses real-time computing issues and both ride quality issues and pilot-vehicle interfaces. The automated mission planning algorithm refinements include route optimization, automatic waypoint generation, interactive applications, and provisions for integrating the results into the real-time path planning software. A microcomputer based mission planning workstation was developed and is described. Further, the application of Defense Mapping Agency (DMA) digital terrain to both the mission planning workstation and to automatic guidance is both discussed and illustrated.

  16. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's Extrapolation. This method is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty method to approximate errors.
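
    For reference, the standard three-grid Richardson procedure that such an interpolation scheme would feed can be sketched as follows (Python); the safety factor and the example values are illustrative, not taken from the paper.

      import numpy as np

      def richardson_estimate(f_fine, f_med, f_coarse, r=2.0, safety_factor=1.25):
          """Observed order of accuracy, Richardson-extrapolated value, and a
          grid-convergence-index style error band from three grid solutions,
          assuming a constant refinement ratio r and monotone convergence."""
          p = np.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / np.log(r)
          f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)   # extrapolated value
          rel_err = abs((f_med - f_fine) / f_fine)
          gci_fine = safety_factor * rel_err / (r**p - 1.0)    # fractional error band
          return p, f_exact, gci_fine

      # illustrative (made-up) values of an output computed on three refined grids
      print(richardson_estimate(0.02134, 0.02172, 0.02261))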

  17. First Prismatic Building Model Reconstruction from Tomosar Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Shahzad, M.; Zhu, X.

    2016-06-01

    This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height and polygon complexity constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using a quadtree-based regularization plus zig-zag line simplification scheme. Finally, height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using Tomo-GENESIS software developed at DLR.

  18. Refining and end use study of coal liquids II - linear programming analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowe, C.; Tam, S.

    1995-12-31

    A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, then the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.

  19. Sparse spikes super-resolution on thin grids II: the continuous basis pursuit

    NASA Astrophysics Data System (ADS)

    Duval, Vincent; Peyré, Gabriel

    2017-09-01

    This article analyzes the performance of the continuous basis pursuit (C-BP) method for sparse super-resolution. The C-BP has been recently proposed by Ekanadham, Tranchina and Simoncelli as a refined discretization scheme for the recovery of spikes in inverse problems regularization. One of the most well-known discretization schemes, the basis pursuit (BP, also known as…

  20. Interface conditions for domain decomposition with radical grid refinement

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1991-01-01

    Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problem via a domain decomposition. The method is derived and justified via singular perturbation techniques.

  1. A survey of particle contamination in electronic devices

    NASA Technical Reports Server (NTRS)

    Adolphsen, J. W.; Kagdis, W. A.; Timmins, A. R.

    1976-01-01

    The experiences of a number of National Aeronautics and Space Administration (NASA) and Space and Missile System Organization (SAMSO) contractors with particle contamination are given, along with the methods used for its prevention and detection. The survey evaluates the bases for the different schemes, assesses their effectiveness, and identifies the problems associated with each. It recommends specific short-range tests or approaches appropriate to individual part-type categories and recommends that specific tasks be initiated to refine techniques and to resolve technical and application facets of promising solutions.

  2. [Reform and practice of teaching methods for culture of medicinal plant].

    PubMed

    Si, Jinping; Zhu, Yuqiu; Liu, Jingjing; Bai, Yan; Zhang, Xinfeng

    2012-02-01

    Culture of medicinal plants is a comprehensive, multidisciplinary subject with a long history of application. In order to improve the quality of this course, several reforms have been carried out, including stimulating enthusiasm for learning, refining the basic concepts and theories, promoting case studies, emphasizing the latest achievements, enhancing exercises in the laboratory and at the planting base, and guiding students in scientific and technological innovation. Meanwhile, the authors point out some remaining teaching problems of this course.

  3. Collaborative Localization and Location Verification in WSNs

    PubMed Central

    Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang

    2015-01-01

    Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948

  4. Efficient energy stable schemes for isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven

    2018-07-01

    We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
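
    For the isotropic case with the standard quartic potential, a generic Eyre-type convex-splitting step of the kind referred to above reads as follows (LaTeX sketch; the Willmore-regularized system treated in the paper carries additional terms):

      % convex (contractive) part treated implicitly, concave (expansive) part explicitly
      \frac{\phi^{n+1}-\phi^{n}}{\Delta t} \;=\; \Delta \mu^{n+1}, \qquad
      \mu^{n+1} \;=\; \bigl(\phi^{n+1}\bigr)^{3} \;-\; \phi^{n} \;-\; \varepsilon^{2}\Delta\phi^{n+1}

    With this splitting the discrete free energy satisfies E(phi^{n+1}) <= E(phi^{n}) for any time step, which is the kind of unconditional energy stability the paper establishes for its scheme.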

  5. Fourier Collocation Approach With Mesh Refinement Method for Simulating Transit-Time Ultrasonic Flowmeters Under Multiphase Flow Conditions.

    PubMed

    Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny

    2018-02-01

    A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.
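
    The two numerical ingredients named above, FFT-based spatial derivatives and an explicit third-order Runge-Kutta step, can be sketched as follows (Python/NumPy); a standard SSP RK3 is shown, which may differ in detail from the scheme used in the paper, and the advection test at the end uses illustrative parameters.

      import numpy as np

      def spectral_dx(u, L):
          """Spectral x-derivative of a periodic field via the FFT (Fourier collocation)."""
          n = u.size
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers
          return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

      def rk3_step(u, dt, rhs):
          """One explicit third-order strong-stability-preserving Runge-Kutta step."""
          u1 = u + dt * rhs(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
          return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

      # toy check: advect a smooth pulse at speed c on a periodic domain of length L
      L, n, c, dt = 2.0 * np.pi, 128, 1.0, 1.0e-3
      x = np.linspace(0.0, L, n, endpoint=False)
      u = np.exp(-10.0 * (x - np.pi) ** 2)
      for _ in range(1000):
          u = rk3_step(u, dt, lambda v: -c * spectral_dx(v, L))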

  6. Deterministic implementations of single-photon multi-qubit Deutsch-Jozsa algorithms with linear optics

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Liu, Ji-Zhen

    2017-02-01

    It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch-Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely using linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the number of photons required is reduced from three to one. Our linear-optic schemes work in a deterministic way, and they are feasible with current experimental technology.

  7. Deterministic implementations of single-photon multi-qubit Deutsch–Jozsa algorithms with linear optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Hai-Rui, E-mail: hrwei@ustb.edu.cn; Liu, Ji-Zhen

    2017-02-15

    It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch–Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely using linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the number of photons required is reduced from three to one. Our linear-optic schemes work in a deterministic way, and they are feasible with current experimental technology.

  8. Multistage Estimation Of Frequency And Phase

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1991-01-01

    Conceptual two-stage software scheme serves as prototype of multistage scheme for digital estimation of phase, frequency, and rate of change of frequency ("Doppler rate") of a possibly phase-modulated received sinusoidal signal in a communication system in which the transmitter and/or receiver is traveling rapidly, accelerating, and/or jerking severely. Each additional stage of the multistage scheme provides an increasingly refined estimate of the frequency and phase of the signal. Conceived for use in estimating parameters of signals from spacecraft and high-dynamic GPS signal parameters; also applicable to terrestrial stationary/mobile (e.g., cellular radio) and land-mobile/satellite communication systems.
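
    A generic two-stage estimator in the spirit described above (not the flight implementation) might look like the sketch below (Python/NumPy): a coarse FFT-peak frequency estimate is refined by a linear fit to the residual phase of the demodulated signal; the sampling rate and test-signal values are illustrative.

      import numpy as np

      def two_stage_freq_phase(x, fs):
          """Coarse FFT-based frequency estimate refined from the slope of the
          unwrapped residual phase (x: complex baseband samples, fs: sample rate)."""
          n = x.size
          t = np.arange(n) / fs
          # stage 1: coarse estimate from the FFT magnitude peak
          freqs = np.fft.fftfreq(n, d=1.0 / fs)
          f_coarse = freqs[np.argmax(np.abs(np.fft.fft(x)))]
          # stage 2: demodulate and fit a line to the residual phase
          resid_phase = np.unwrap(np.angle(x * np.exp(-2j * np.pi * f_coarse * t)))
          slope, intercept = np.polyfit(t, resid_phase, 1)
          return f_coarse + slope / (2.0 * np.pi), intercept

      # illustrative test: a noiseless complex exponential at 123.4 Hz, phase 0.7 rad
      fs, n = 1000.0, 4096
      t = np.arange(n) / fs
      print(two_stage_freq_phase(np.exp(2j * np.pi * 123.4 * t + 1j * 0.7), fs))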

  9. Simulations of viscous and compressible gas-gas flows using high-order finite difference schemes

    NASA Astrophysics Data System (ADS)

    Capuano, M.; Bogey, C.; Spelt, P. D. M.

    2018-05-01

    A computational method for the simulation of viscous and compressible gas-gas flows is presented. It consists of solving the Navier-Stokes equations associated with a convection equation governing the motion of the interface between two gases using high-order finite-difference schemes. A discontinuity-capturing methodology based on sensors and a spatial filter enables capturing shock waves and deformable interfaces. One-dimensional test cases are performed as validation and to justify choices in the numerical method. The results compare well with analytical solutions. Shock waves and interfaces are accurately propagated, and remain sharp. Subsequently, two-dimensional flows are considered including viscosity and thermal conductivity. In a Richtmyer-Meshkov instability generated on an air-SF6 interface, the influence of the mesh refinement on the instability shape is studied, and the temporal variations of the instability amplitude are compared with experimental data. Finally, for a plane shock wave propagating in air and impacting a cylindrical bubble filled with helium or R22, numerical Schlieren pictures obtained using different grid refinements are found to compare well with experimental shadow-photographs. The mass conservation is verified from the temporal variations of the mass of the bubble. The mean velocities of pressure waves and bubble interface are similar to those obtained experimentally.

  10. Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  11. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    The sparse representation can deal with the lack-of-samples problem because it utilizes all of the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method. We aim to find a compromise between the discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to the contributions of the samples. Then we further refine the training samples by an iterative procedure, excluding the training sample with the least contribution to the test sample at each step. Experiments on the PolyU multispectral palmprint database and a two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the principles governing the usage of the key parameters in the proposed algorithm, which facilitates obtaining high recognition accuracy.
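
    A minimal sketch of the iterative exclusion step described above follows (Python/NumPy); the contribution measure used here (coefficient magnitude times column norm) and the target size n_keep are assumptions for illustration, since the paper's exact definitions are not reproduced in the abstract.

      import numpy as np

      def refine_representative_samples(X, y, n_keep):
          """Iteratively drop the training sample (column of X) whose contribution
          to the least-squares representation of the test sample y is smallest."""
          keep = list(range(X.shape[1]))
          while len(keep) > n_keep:
              A = X[:, keep]
              coef, *_ = np.linalg.lstsq(A, y, rcond=None)
              contrib = np.abs(coef) * np.linalg.norm(A, axis=0)
              keep.pop(int(np.argmin(contrib)))          # exclude the least useful sample
          return keep                                    # indices of retained samples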

  12. Parallel Cartesian grid refinement for 3D complex flow simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2013-11-01

    A second order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need to synchronize the solution between different levels of refinement and the use of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized by using a novel second order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equations are also discretized with second order accuracy and the high performance Newton-Krylov method is used for integrating them in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables the performance of accurate multi-resolution real life simulations. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.

  13. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time-discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived for a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  14. A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Dambach, M.

    1998-01-01

    A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation is originated with a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from enforcing explicitly a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.

  15. An economic prediction of the finer resolution level wavelet coefficients in electronic structure calculations.

    PubMed

    Nagy, Szilvia; Pipek, János

    2015-12-21

    In wavelet-based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to calculate with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer resolution coefficients in a very economical way. In the present contribution, we would like to determine whether the method can be applied for predicting not only the first, but also the other, higher resolution level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine resolution limit.

  16. Toward Automatic Verification of Goal-Oriented Flow Simulations

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2014-01-01

    We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
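
    In generic form, the adjoint-weighted-residual estimate referred to above is the expression below (LaTeX sketch; sign and prolongation conventions vary between implementations):

      J(u) - J(u_H) \;\approx\; \psi^{T} R(u_H), \qquad
      \left(\frac{\partial R}{\partial u}\right)^{\!T}\!\psi \;=\; \left(\frac{\partial J}{\partial u}\right)^{\!T}

    Here R(u_H) is the residual of the discrete flow equations evaluated with the current solution (prolonged to a finer space), psi is the discrete adjoint associated with the output J, and cells contributing large values of |psi^T R| are flagged for refinement.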

  17. An upwind multigrid method for solving viscous flows on unstructured triangular meshes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl Lawrence

    1993-01-01

    A multigrid algorithm is combined with an upwind scheme for solving the two dimensional Reynolds averaged Navier-Stokes equations on triangular meshes resulting in an efficient, accurate code for solving complex flows around multiple bodies. The relaxation scheme uses a backward-Euler time difference and relaxes the resulting linear system using a red-black procedure. Roe's flux-splitting scheme is used to discretize convective and pressure terms, while a central difference is used for the diffusive terms. The multigrid scheme is demonstrated for several flows around single and multi-element airfoils, including inviscid, laminar, and turbulent flows. The results show an appreciable speed up of the scheme for inviscid and laminar flows, and dramatic increases in efficiency for turbulent cases, especially those on increasingly refined grids.

  18. Cultivation of students' engineering designing ability based on optoelectronic system course project

    NASA Astrophysics Data System (ADS)

    Cao, Danhua; Wu, Yubin; Li, Jingping

    2017-08-01

    We carry out teaching based on an optoelectronics-related course group, aimed at junior students majoring in Optoelectronic Information Science and Engineering. "Optoelectronic System Course Project" is product-design-oriented and lasts for a whole semester. It provides a chance for students to experience the whole process of product design and to improve their abilities to search the literature, evaluate design schemes, and design and implement their own solutions. In the teaching process, each project topic is carefully selected and repeatedly refined to guarantee knowledge integrity, engineering relevance, and enjoyment. Moreover, we set up a strong team of professional and experienced teachers and build a learning community. Meanwhile, communication between students and teachers, as well as interaction among students, is taken seriously in order to improve the students' teamwork ability and communication skills. As a result, students not only have a chance to review the knowledge hierarchy of optics, electronics, and computer science, but also improve their engineering mindset and innovation consciousness.

  19. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    PubMed

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.

  20. A High Order Finite Difference Scheme with Sharp Shock Resolution for the Euler Equations

    NASA Technical Reports Server (NTRS)

    Gerritsen, Margot; Olsson, Pelle

    1996-01-01

    We derive a high-order finite difference scheme for the Euler equations that satisfies a semi-discrete energy estimate, and present an efficient strategy for the treatment of discontinuities that leads to sharp shock resolution. The formulation of the semi-discrete energy estimate is based on a symmetrization of the Euler equations that preserves the homogeneity of the flux vector, a canonical splitting of the flux derivative vector, and the use of difference operators that satisfy a discrete analogue to the integration by parts procedure used in the continuous energy estimate. Around discontinuities or sharp gradients, refined grids are created on which the discrete equations are solved after adding a newly constructed artificial viscosity. The positioning of the sub-grids and computation of the viscosity are aided by a detection algorithm which is based on a multi-scale wavelet analysis of the pressure grid function. The wavelet theory provides easy to implement mathematical criteria to detect discontinuities, sharp gradients and spurious oscillations quickly and efficiently.

  1. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's Extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty method to approximate errors.

  2. Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Chacón, Luis; Pernice, Michael

    2008-10-01

    An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.

  3. A new parallelization scheme for adaptive mesh refinement

    DOE PAGES

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...

    2016-05-06

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.

  4. A new parallelization scheme for adaptive mesh refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.

  5. Application of a 5-tiered scheme for standardized classification of 2,360 unique mismatch repair gene variants in the InSiGHT locus-specific database.

    PubMed

    Thompson, Bryony A; Spurdle, Amanda B; Plazzer, John-Paul; Greenblatt, Marc S; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P; Farrington, Susan M; Frayling, Ian M; Frebourg, Thierry; Goldgar, David E; Heinen, Christopher D; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J; Sijmons, Rolf; Tavtigian, Sean V; Tops, Carli M; Weber, Thomas; Wijnen, Juul; Woods, Michael O; Macrae, Finlay; Genuardi, Maurizio

    2014-02-01

    The clinical classification of hereditary sequence variants identified in disease-related genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch syndrome-associated genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist in variant classification and was recognized through microattribution. The scheme was refined by multidisciplinary expert committee review of the clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants that were not obviously protein truncating from nomenclature. This large-scale endeavor will facilitate the consistent management of families suspected to have Lynch syndrome and demonstrates the value of multidisciplinary collaboration in the curation and classification of variants in public locus-specific databases.

  6. Mixed finite-difference scheme for analysis of simply supported thick plates.

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1973-01-01

    A mixed finite-difference scheme is presented for the stress and free vibration analysis of simply supported nonhomogeneous and layered orthotropic thick plates. The analytical formulation is based on the linear, three-dimensional theory of orthotropic elasticity and a Fourier approach is used to reduce the governing equations to six first-order ordinary differential equations in the thickness coordinate. The governing equations possess a symmetric coefficient matrix and are free of derivatives of the elastic characteristics of the plate. In the finite difference discretization two interlacing grids are used for the different fundamental unknowns in such a way as to reduce both the local discretization error and the bandwidth of the resulting finite-difference field equations. Numerical studies are presented for the effects of reducing the interior and boundary discretization errors and of mesh refinement on the accuracy and convergence of solutions. It is shown that the proposed scheme, in addition to a number of other advantages, leads to highly accurate results, even when a small number of finite difference intervals is used.

  7. Application of a five-tiered scheme for standardized classification of 2,360 unique mismatch repair gene variants lodged on the InSiGHT locus-specific database

    PubMed Central

    Plazzer, John-Paul; Greenblatt, Marc S.; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T.; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P.; Farrington, Susan M.; Frayling, Ian M.; Frebourg, Thierry; Goldgar, David E.; Heinen, Christopher D.; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J.; Sijmons, Rolf; Tavtigian, Sean V.; Tops, Carli M.; Weber, Thomas; Wijnen, Juul; Woods, Michael O.; Macrae, Finlay; Genuardi, Maurizio

    2015-01-01

    Clinical classification of sequence variants identified in hereditary disease genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch Syndrome genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist variant classification, and recognized by microattribution. The scheme was refined by multidisciplinary expert committee review of clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants not obviously protein-truncating from nomenclature. This large-scale endeavor will facilitate consistent management of suspected Lynch Syndrome families, and demonstrates the value of multidisciplinary collaboration for curation and classification of variants in public locus-specific databases. PMID:24362816

  8. Adaptive elimination of synchronization in coupled oscillator

    NASA Astrophysics Data System (ADS)

    Zhou, Shijie; Ji, Peng; Zhou, Qing; Feng, Jianfeng; Kurths, Jürgen; Lin, Wei

    2017-08-01

    We present here an adaptive control scheme with a feedback delay to achieve elimination of synchronization in a large population of coupled and synchronized oscillators. We validate the feasibility of this scheme not only in the coupled Kuramoto’s oscillators with a unimodal or bimodal distribution of natural frequency, but also in two representative models of neuronal networks, namely, the FitzHugh-Nagumo spiking oscillators and the Hindmarsh-Rose bursting oscillators. More significantly, we analytically illustrate the feasibility of the proposed scheme with a feedback delay and reveal how the exact topological form of the bimodal natural frequency distribution influences the scheme performance. We anticipate that our developed scheme will deepen the understanding and refinement of those controllers, e.g. techniques of deep brain stimulation, which have been implemented in remedying some synchronization-induced mental disorders including Parkinson disease and epilepsy.
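    The general idea of suppressing synchronization with a delayed mean-field feedback term can be illustrated on a plain Kuramoto population, as in the sketch below. This is not the authors' adaptive control law; the coupling strength, feedback gain, delay, and frequency distribution are assumed for illustration.

      import numpy as np

      # Kuramoto oscillators with a delayed mean-field feedback term intended to
      # suppress synchronization.  Parameters are illustrative, not from the paper.
      rng = np.random.default_rng(0)
      N, K, gain, tau, dt, T = 500, 2.0, 1.5, 0.5, 0.01, 50.0
      omega = rng.normal(0.0, 0.5, N)                  # unimodal natural frequencies
      theta = rng.uniform(0.0, 2 * np.pi, N)

      delay_steps = int(round(tau / dt))
      order_hist = [np.mean(np.exp(1j * theta))] * (delay_steps + 1)

      for step in range(int(T / dt)):
          z = np.mean(np.exp(1j * theta))              # instantaneous mean field
          z_delayed = order_hist[-delay_steps - 1]     # mean field tau seconds ago
          coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)
          feedback = -gain * np.abs(z_delayed) * np.sin(np.angle(z_delayed) - theta)
          theta = theta + dt * (omega + coupling + feedback)
          order_hist.append(np.mean(np.exp(1j * theta)))

      print("final order parameter |z| =", abs(order_hist[-1]))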

  9. Computerized Liver Volumetry on MRI by Using 3D Geodesic Active Contour Segmentation

    PubMed Central

    Huynh, Hieu Trung; Karademir, Ibrahim; Oto, Aytekin; Suzuki, Kenji

    2014-01-01

    OBJECTIVE Our purpose was to develop an accurate automated 3D liver segmentation scheme for measuring liver volumes on MRI. SUBJECTS AND METHODS Our scheme for MRI liver volumetry consisted of three main stages. First, the preprocessing stage was applied to T1-weighted MRI of the liver in the portal venous phase to reduce noise and produce the boundary-enhanced image. This boundary-enhanced image was used as a speed function for a 3D fast-marching algorithm to generate an initial surface that roughly approximated the shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the initial surface to precisely determine the liver boundaries. The liver volumes determined by our scheme were compared with those manually traced by a radiologist, used as the reference standard. RESULTS The two volumetric methods reached excellent agreement (intraclass correlation coefficient, 0.98) without statistical significance (p = 0.42). The average (± SD) accuracy was 99.4% ± 0.14%, and the average Dice overlap coefficient was 93.6% ± 1.7%. The mean processing time for our automated scheme was 1.03 ± 0.13 minutes, whereas that for manual volumetry was 24.0 ± 4.4 minutes (p < 0.001). CONCLUSION The MRI liver volumetry based on our automated scheme agreed excellently with reference-standard volumetry, and it required substantially less completion time. PMID:24370139

  10. Computerized liver volumetry on MRI by using 3D geodesic active contour segmentation.

    PubMed

    Huynh, Hieu Trung; Karademir, Ibrahim; Oto, Aytekin; Suzuki, Kenji

    2014-01-01

    Our purpose was to develop an accurate automated 3D liver segmentation scheme for measuring liver volumes on MRI. Our scheme for MRI liver volumetry consisted of three main stages. First, the preprocessing stage was applied to T1-weighted MRI of the liver in the portal venous phase to reduce noise and produce the boundary-enhanced image. This boundary-enhanced image was used as a speed function for a 3D fast-marching algorithm to generate an initial surface that roughly approximated the shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the initial surface to precisely determine the liver boundaries. The liver volumes determined by our scheme were compared with those manually traced by a radiologist, used as the reference standard. The two volumetric methods reached excellent agreement (intraclass correlation coefficient, 0.98) without statistical significance (p = 0.42). The average (± SD) accuracy was 99.4% ± 0.14%, and the average Dice overlap coefficient was 93.6% ± 1.7%. The mean processing time for our automated scheme was 1.03 ± 0.13 minutes, whereas that for manual volumetry was 24.0 ± 4.4 minutes (p < 0.001). The MRI liver volumetry based on our automated scheme agreed excellently with reference-standard volumetry, and it required substantially less completion time.

  11. Modeling of shock wave propagation in large amplitude ultrasound.

    PubMed

    Pinton, Gianmarco F; Trahey, Gregg E

    2008-01-01

    The Rankine-Hugoniot relation for shock wave propagation describes the shock speed of a nonlinear wave. This paper investigates time-domain numerical methods that solve the nonlinear parabolic wave equation, or the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the conditions they require to satisfy the Rankine-Hugoniot relation. Two numerical methods commonly used in hyperbolic conservation laws are adapted to solve the KZK equation: Godunov's method and the monotonic upwind scheme for conservation laws (MUSCL). It is shown that they satisfy the Rankine-Hugoniot relation regardless of attenuation. These two methods are compared with the current implicit solution based method. When the attenuation is small, such as in water, the current method requires a degree of grid refinement that is computationally impractical. All three numerical methods are compared in simulations for lithotripters and high intensity focused ultrasound (HIFU) where the attenuation is small compared to the nonlinearity because much of the propagation occurs in water. The simulations are performed on grid sizes that are consistent with present-day computational resources but are not sufficiently refined for the current method to satisfy the Rankine-Hugoniot condition. It is shown that satisfying the Rankine-Hugoniot conditions has a significant impact on metrics relevant to lithotripsy (such as peak pressures) and HIFU (intensity). Because the Godunov and MUSCL schemes satisfy the Rankine-Hugoniot conditions on coarse grids, they are particularly advantageous for three-dimensional simulations.
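    The slope-limited MUSCL/Godunov machinery mentioned above can be sketched on the inviscid Burgers equation, a standard stand-in for a shock-forming nonlinear term; the sketch below is not the authors' KZK solver, and the grid size, CFL number, and initial condition are assumed for illustration.

      import numpy as np

      def minmod(a, b):
          """Minmod slope limiter."""
          return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      def godunov_flux(uL, uR):
          """Exact Godunov flux for the convex flux f(u) = u^2 / 2."""
          fL, fR = 0.5 * uL**2, 0.5 * uR**2
          flux = np.where(uL > uR, np.maximum(fL, fR), np.minimum(fL, fR))
          return np.where((uL < 0.0) & (uR > 0.0), 0.0, flux)   # transonic rarefaction

      nx, cfl = 400, 0.4
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      dx = x[1] - x[0]
      u = np.sin(2 * np.pi * x) + 0.5          # steepens into a shock

      t, t_end = 0.0, 0.3
      while t < t_end:
          dt = cfl * dx / np.max(np.abs(u))
          slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
          uL = u + 0.5 * slope                 # left state at interface i+1/2
          uR = np.roll(u - 0.5 * slope, -1)    # right state at interface i+1/2
          F = godunov_flux(uL, uR)
          u = u - dt / dx * (F - np.roll(F, 1))
          t += dt

      print("min/max u after shock formation:", u.min(), u.max())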

  12. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  13. Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model

    USGS Publications Warehouse

    Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.

    2012-01-01

    This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.

  14. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program logical structure and some illustrated results are discussed. The blade-loss test vehicles (rig full engine), the type of measured data, and the engine structural model are described.

  15. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    NASA Technical Reports Server (NTRS)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement for the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.

  16. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  17. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    NASA Astrophysics Data System (ADS)

    Xiao, Xudong

    Three LES/RANS hybrid schemes have been proposed for the prediction of high speed separated flows. Each method couples the k-zeta (Enstrophy) RANS model with an LES subgrid scale one-equation model by using a blending function that is coordinate system independent. Two of these functions are based on turbulence dissipation length scale and grid size, while the third one has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25 degree compression-expansion ramp and a Mach 2.79 flow over a 20 degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle-plane. The code is parallelized using Message Passing Interface (MPI) and is optimized for running on an IBM-SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent the RANS region from appearing in the LES region. In the 25 deg ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and showed further improvement in agreement with experiment in the recovery region. In the 20 deg ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function well predicted the flow field in both the separation region and the recovery region. Therefore, with an "appropriately" fine grid, current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.

  18. Assessing the feasibility of using uniaxial accelerometers with an online support platform in the delivery of a community-based exercise referral scheme.

    PubMed

    Hawkins, Jemma L; Oliver, Emily J; Wyatt-Williams, Jeannie; Scale, Elaine; van Woerden, Hugo C

    2014-10-01

    Exercise referral schemes are established within community-based health care; however, they have been criticized for failing to evidence long-term behavior change relative to usual care. As such, recent reviews have called for refinement of their delivery with a focus on embedded strategies targeting client motivation. This research letter presents findings from an initial pilot trial conducted within Wales' National Exercise Referral Scheme (NERS), examining the feasibility of using validated physical activity monitoring devices and an accompanying online platform within standard scheme delivery. 30 individuals referred to generic or cardiovascular pathways were offered the system; of these 17 agreed to participate. Common reasons for declining were clustered into lack of technology literacy or access, condition severity, or fear of costs associated with losing the device. Analysis of follow-up interviews after 4 weeks of use indicated that while participants found the monitoring devices practical and informative, only a minority (n = 4) were using the system in full. Crucially, the system element most aligned with contemporary theories of motivation (the online portal) was not used as expected. In addition, feedback from exercise referral professionals indicated that there were demands for support from clients, which might be mitigated by more effective independent system use. Recommendations for larger scale trials using similar systems include consideration of targeted patient groups, equity of access, and providing adequate technological support that is currently beyond the capacity of the NERS system. © The Author(s) 2014.

  19. A new numerical approach to solve Thomas-Fermi model of an atom using bio-inspired heuristics integrated with sequential quadratic programming.

    PubMed

    Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid

    2016-01-01

    In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE differential equations into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean square sense and is formulated as a minimization problem. Optimization of parameters for the system is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with the state of the art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated through consistently getting optimal values of statistical performance indices calculated for a sufficiently large number of independent runs to establish its significance.
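    The residual-based fitness idea can be sketched as follows: the Thomas-Fermi equation is discretized with finite differences, the mean-squared residual is minimized by a global evolutionary search (scipy's differential evolution is used here as a stand-in for the GA), and the result is refined with SLSQP (standing in for SQP). The interval, grid size, and variable bounds are illustrative assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      # Discretize y'' = y^(3/2)/sqrt(x), y(0)=1, y(x_max)~0, and minimize the
      # mean-squared residual of the resulting nonlinear system.
      n, x_max = 10, 5.0
      x = np.linspace(0.0, x_max, n + 2)
      h = x[1] - x[0]
      y0, yN = 1.0, 0.0        # y(x_max)=0 approximates the far-field decay

      def residual(y_interior):
          y = np.concatenate(([y0], y_interior, [yN]))
          d2y = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2
          rhs = np.clip(y[1:-1], 0.0, None) ** 1.5 / np.sqrt(x[1:-1])
          return d2y - rhs

      def fitness(y_interior):
          r = residual(y_interior)
          return float(np.mean(r**2))

      bounds = [(0.0, 1.0)] * n
      coarse = differential_evolution(fitness, bounds, maxiter=200, seed=1, polish=False)
      refined = minimize(fitness, coarse.x, method="SLSQP", bounds=bounds)
      print("global-search fitness:", coarse.fun, " refined fitness:", refined.fun)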

  20. The impact of glaucoma referral refinement criteria on referral to, and first-visit discharge rates from, the hospital eye service: the Health Innovation & Education Cluster (HIEC) Glaucoma Pathways project.

    PubMed

    Ratnarajan, Gokulan; Newsom, Wendy; French, Karen; Kean, Jane; Chang, Lydia; Parker, Mike; Garway-Heath, David F; Bourne, Rupert R A

    2013-03-01

    To assess the impact of referral refinement criteria on the number of patients referred to, and first-visit discharges from, the Hospital Eye Service (HES) in relation to the National Institute for Health & Clinical Excellence (NICE) Glaucoma Guidelines, Joint College Group Guidance (JCG) and the NICE commissioning guidance. All low-risk (one risk factor: suspicious optic disc, abnormal visual field (VF), raised intra-ocular pressure (IOP) (22-28 mmHg) or IOP asymmetry (>5 mmHg) and high-risk (more than one risk factor, shallow anterior chamber or IOP >28 mmHg) referrals to the HES from 2006 to 2011 were analysed. Low-risk referrals were seen by Optometrists with a specialist interest in glaucoma and high-risk referrals were referred directly to the HES. Two thousand nine hundred and twelve patient records were analysed. The highest Consultant first-visit discharge rates were for referrals based on IOP alone (45% for IOP 22-28 mmHg) and IOP asymmetry (53%), VF defect alone (46%) and for abnormal IOP and VF (54%). The lowest first-visit discharge rates were for referrals for suspicious optic disc (19%) and IOP >28 mmHg (22%). 73% of patients aged 65-80 and 60% of patients aged >80 who were referred by the OSI due to an IOP between 22-28 mmHg would have satisfied the JCG criteria for non-referral. For patients referred with an IOP >28 mmHg and an otherwise normal examination, adherence to the NICE commissioning guidance would have resulted in 6% fewer referrals. In 2010 this scheme reduced the number of patients attending the HES by 15%, which resulted in a saving of £16 258 (13%). The results support that referrals for a raised IOP alone or in combination with an abnormal VF be classified as low-risk and undergo referral refinement. Adherence to the JCG and the NICE commissioning guidance as onward referral criteria for specialist optometrists in this referral refinement scheme would result in fewer referrals. Ophthalmic & Physiological Optics © 2013 The College of Optometrists.

  1. Intelligent multi-spectral IR image segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert

    2017-05-01

    This article presents a neural-network-based multi-spectral image segmentation method. A neural network is trained on the selected features of both the objects and the background in the longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects the local discontinuities and refines the accuracy of the local boundaries. This article compares the neural-network-based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.

  2. Parallel-Vector Algorithm For Rapid Structural Analysis

    NASA Technical Reports Server (NTRS)

    Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

    1993-01-01

    New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.

  3. The block adaptive multigrid method applied to the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Pantelelis, Nikos

    1993-01-01

    In the present study, a scheme capable of solving very fast and robust complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.

  4. Vortex breakdown simulation - A circumspect study of the steady, laminar, axisymmetric model

    NASA Technical Reports Server (NTRS)

    Salas, M. D.; Kuruvila, G.

    1989-01-01

    The incompressible axisymmetric steady Navier-Stokes equations are written using the streamfunction-vorticity formulation. The resulting equations are discretized using a second-order central-difference scheme. The discretized equations are linearized and then solved using an exact LU decomposition, Gaussian elimination, and Newton iteration. Solutions are presented for Reynolds numbers (based on vortex core radius) 100-1800 and swirl parameter 0.9-1.1. The effects of inflow boundary conditions, the location of farfield and outflow boundaries, and mesh refinement are examined. Finally, the stability of the steady solutions is investigated by solving the time-dependent equations.

  5. The Predatory Bird Monitoring Scheme: identifying chemical risks to top predators in Britain.

    PubMed

    Walker, Lee A; Shore, Richard F; Turk, Anthony; Pereira, M Glória; Best, Jennifer

    2008-09-01

    The Predatory Bird Monitoring Scheme (PBMS) is a long term (>40 y), UK-wide, exposure monitoring scheme that determines the concentration of selected pesticides and pollutants in the livers and eggs of predatory birds. This paper describes how the PBMS works, and in particular highlights some of the key scientific and policy drivers for monitoring contaminants in predatory birds and describes the specific aims, scope, and methods of the PBMS. We also present previously unpublished data that illustrates how the PBMS has been used to demonstrate the success of mitigation measures in reversing chemical-mediated impacts; identify and evaluate chemical threats to species of high conservation value; and finally to inform and refine monitoring methodologies. In addition, we discuss how such schemes can also address wider conservation needs.

  6. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  7. Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Uh, Jinsoo; Ayaz Khan, M.; Hua, Chiaho

    2016-11-01

    This study aimed to develop a practical and accurate 4-dimensional (4D) magnetic resonance imaging (MRI) method using a non-navigator, image-based internal respiratory surrogate derived by dimensionality reduction (DR). The use of DR has been previously suggested but not implemented for reconstructing 4D MRI, despite its practical advantages. We compared multiple image-acquisition schemes and refined a retrospective-sorting process to optimally implement a DR-derived surrogate. The comparison included an unconventional scheme that acquires paired slices alternately to mitigate the internal surrogate’s dependency on a specific slice location. We introduced ‘target-oriented sorting’, as opposed to conventional binning, to quantify the coherence in retrospectively sorted images, thereby determining the minimal scan time needed for sufficient coherence. This study focused on evaluating the proposed method using digital phantoms which provided unequivocal gold standard. The evaluation indicated that the DR-based respiratory surrogate is highly accurate: the error in amplitude percentile of the surrogate signal was less than 5% with the optimal scheme. Acquiring alternating paired slices was superior to the conventional scheme of acquiring individual slices; the advantage of the unconventional scheme was more pronounced when a substantial phase shift occurred across slice locations. The analysis of coherence across sorted images confirmed the advantage of higher sampling efficiencies in non-navigator respiratory surrogates. We determined that a scan time of 20 s per imaging slice was sufficient to achieve a mean coherence error of less than 1% for the tested respiratory patterns. The clinical applicability of the proposed 4D MRI has been demonstrated with volunteers and patients. The diaphragm motion in 4D MRI was consistent with that in dynamic 2D imaging which was regarded as the gold standard (difference within 1.8 mm on average).
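    The dimensionality-reduction step can be illustrated with ordinary PCA on a synthetic 2-D+time slice series: the leading principal-component score per frame serves as the surrogate, and 'target-oriented sorting' picks, for each target amplitude, the frame whose surrogate value is closest. The synthetic image series and the target percentiles below are invented and do not reproduce the acquisition schemes compared in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      n_frames, ny, nx = 200, 32, 32
      t = np.arange(n_frames) * 0.3                    # frame times [s]
      breathing = np.sin(2 * np.pi * t / 4.0)          # hidden respiratory signal

      # Synthetic slices: a bright band whose position follows the breathing signal.
      rows = np.arange(ny)[None, :, None]
      centre = 16 + 6 * breathing[:, None, None]
      frames = np.exp(-0.5 * ((rows - centre) / 3.0) ** 2) \
               + 0.05 * rng.normal(size=(n_frames, ny, nx))

      X = frames.reshape(n_frames, -1)
      X -= X.mean(axis=0)                              # centre before PCA
      U, S, Vt = np.linalg.svd(X, full_matrices=False)
      surrogate = U[:, 0] * S[0]                       # leading principal-component score

      # Target-oriented sorting: pick the closest frame per target amplitude, not a bin.
      targets = np.percentile(surrogate, [10, 30, 50, 70, 90])
      picked = [int(np.argmin(np.abs(surrogate - target))) for target in targets]
      corr = np.corrcoef(surrogate, breathing)[0, 1]
      print("correlation with hidden breathing signal:", round(abs(corr), 3))
      print("frames picked for the target amplitudes:", picked)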

  8. MCL-CAw: a refinement of MCL for detecting yeast complexes from weighted PPI networks by incorporating core-attachment structure

    PubMed Central

    2010-01-01

    Background The reconstruction of protein complexes from the physical interactome of organisms serves as a building block towards understanding the higher level organization of the cell. Over the past few years, several independent high-throughput experiments have helped to catalogue enormous amount of physical protein interaction data from organisms such as yeast. However, these individual datasets show lack of correlation with each other and also contain substantial number of false positives (noise). Over these years, several affinity scoring schemes have also been devised to improve the qualities of these datasets. Therefore, the challenge now is to detect meaningful as well as novel complexes from protein interaction (PPI) networks derived by combining datasets from multiple sources and by making use of these affinity scoring schemes. In the attempt towards tackling this challenge, the Markov Clustering algorithm (MCL) has proved to be a popular and reasonably successful method, mainly due to its scalability, robustness, and ability to work on scored (weighted) networks. However, MCL produces many noisy clusters, which either do not match known complexes or have additional proteins that reduce the accuracies of correctly predicted complexes. Results Inspired by recent experimental observations by Gavin and colleagues on the modularity structure in yeast complexes and the distinctive properties of "core" and "attachment" proteins, we develop a core-attachment based refinement method coupled to MCL for reconstruction of yeast complexes from scored (weighted) PPI networks. We combine physical interactions from two recent "pull-down" experiments to generate an unscored PPI network. We then score this network using available affinity scoring schemes to generate multiple scored PPI networks. The evaluation of our method (called MCL-CAw) on these networks shows that: (i) MCL-CAw derives larger number of yeast complexes and with better accuracies than MCL, particularly in the presence of natural noise; (ii) Affinity scoring can effectively reduce the impact of noise on MCL-CAw and thereby improve the quality (precision and recall) of its predicted complexes; (iii) MCL-CAw responds well to most available scoring schemes. We discuss several instances where MCL-CAw was successful in deriving meaningful complexes, and where it missed a few proteins or whole complexes due to affinity scoring of the networks. We compare MCL-CAw with several recent complex detection algorithms on unscored and scored networks, and assess the relative performance of the algorithms on these networks. Further, we study the impact of augmenting physical datasets with computationally inferred interactions for complex detection. Finally, we analyse the essentiality of proteins within predicted complexes to understand a possible correlation between protein essentiality and their ability to form complexes. Conclusions We demonstrate that core-attachment based refinement in MCL-CAw improves the predictions of MCL on yeast PPI networks. We show that affinity scoring improves the performance of MCL-CAw. PMID:20939868
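    For readers unfamiliar with MCL itself, the sketch below shows the plain expansion/inflation loop on a small weighted network. The core-attachment refinement of MCL-CAw is not reproduced; the toy adjacency matrix and the inflation parameter are illustrative assumptions.

      import numpy as np

      def mcl(adjacency, inflation=2.0, max_iter=100, tol=1e-6):
          """Plain Markov Clustering on a weighted, symmetric adjacency matrix."""
          M = adjacency + np.eye(adjacency.shape[0])       # add self-loops
          M = M / M.sum(axis=0, keepdims=True)             # column-stochastic
          for _ in range(max_iter):
              expanded = M @ M                             # expansion
              inflated = expanded ** inflation             # inflation
              inflated /= inflated.sum(axis=0, keepdims=True)
              if np.max(np.abs(inflated - M)) < tol:
                  M = inflated
                  break
              M = inflated
          clusters = {}                                    # nodes sharing an attractor row
          for node in range(M.shape[0]):
              clusters.setdefault(int(np.argmax(M[:, node])), []).append(node)
          return list(clusters.values())

      # Two weighted "complexes" joined by one weak edge.
      A = np.array([[0.0, 0.9, 0.8, 0.0, 0.0, 0.0],
                    [0.9, 0.0, 0.7, 0.0, 0.0, 0.0],
                    [0.8, 0.7, 0.0, 0.1, 0.0, 0.0],
                    [0.0, 0.0, 0.1, 0.0, 0.9, 0.8],
                    [0.0, 0.0, 0.0, 0.9, 0.0, 0.7],
                    [0.0, 0.0, 0.0, 0.8, 0.7, 0.0]])
      print(mcl(A))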

  9. AN EXACT PEAK CAPTURING AND OSCILLATION-FREE SCHEME TO SOLVE ADVECTION-DISPERSION TRANSPORT EQUATIONS

    EPA Science Inventory

    An exact peak capturing and essentially oscillation-free (EPCOF) algorithm, consisting of advection-dispersion decoupling, backward method of characteristics, forward node tracking, and adaptive local grid refinement, is developed to solve transport equations. This algorithm repr...

  10. Cryo-EM image alignment based on nonuniform fast Fourier transform.

    PubMed

    Yang, Zhengfan; Penczek, Pawel A

    2008-08-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.

  11. Cryo-EM Image Alignment Based on Nonuniform Fast Fourier Transform

    PubMed Central

    Yang, Zhengfan; Penczek, Pawel A.

    2008-01-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform Fast Fourier Transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis. PMID:18499351

  12. Hybrid simulation combining two space-time discretization of the discrete-velocity Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Horstmann, Jan Tobias; Le Garrec, Thomas; Mincu, Daniel-Ciprian; Lévêque, Emmanuel

    2017-11-01

    Despite the efficiency and low dissipation of the stream-collide scheme of the discrete-velocity Boltzmann equation, which is nowadays implemented in many lattice Boltzmann solvers, a major drawback exists over alternative discretization schemes, i.e. finite-volume or finite-difference, that is the limitation to Cartesian uniform grids. In this paper, an algorithm is presented that combines the positive features of each scheme in a hybrid lattice Boltzmann method. In particular, the node-based streaming of the distribution functions is coupled with a second-order finite-volume discretization of the advection term of the Boltzmann equation under the Bhatnagar-Gross-Krook approximation. The algorithm is established on a multi-domain configuration, with the individual schemes being solved on separate sub-domains and connected by an overlapping interface of at least 2 grid cells. A critical parameter in the coupling is the CFL number equal to unity, which is imposed by the stream-collide algorithm. Nevertheless, a semi-implicit treatment of the collision term in the finite-volume formulation allows us to obtain a stable solution for this condition. The algorithm is validated in the scope of three different test cases on a 2D periodic mesh. It is shown that the accuracy of the combined discretization schemes agrees with the order of each separate scheme involved. The overall numerical error of the hybrid algorithm in the macroscopic quantities is contained between the error of the two individual algorithms. Finally, we demonstrate how such a coupling can be used to adapt to anisotropic flows with some gradual mesh refinement in the FV domain.
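    The 'stream-collide' half of such a hybrid can be sketched as a bare-bones D2Q9 BGK update on a periodic uniform grid, as below. The finite-volume sub-domain and the two-cell overlap coupling described above are not reproduced, and the relaxation time and initial perturbation are illustrative.

      import numpy as np

      nx, ny, tau = 64, 64, 0.8
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])          # D2Q9 velocities
      w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)          # D2Q9 weights

      def equilibrium(rho, ux, uy):
          cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
          usq = ux**2 + uy**2
          return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

      rho = 1.0 + 0.01 * np.sin(2 * np.pi * np.arange(nx)[:, None] / nx) * np.ones((nx, ny))
      ux, uy = np.zeros((nx, ny)), np.zeros((nx, ny))
      f = equilibrium(rho, ux, uy)

      for step in range(100):
          rho = f.sum(axis=0)                                     # macroscopic moments
          ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
          uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
          f += -(f - equilibrium(rho, ux, uy)) / tau              # BGK collision
          for i in range(9):                                      # periodic streaming
              f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

      print("total mass after 100 steps:", f.sum())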

  13. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

    In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including projector calibration, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model has been put forward to obtain the initial calibration parameters, and a feature-point detection scheme for the calibration-board image is developed. Then, an almost perfect match of the two point clouds is achieved by performing a first alignment using the Sample Consensus Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter extraction experiment shows the feasibility of the extracting algorithm. Compared with the traditional measurement method, the system can be more portable, accurate and robust.

  14. Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.

  15. The Refinement of The Preliminary Genetic Decomposition of Group

    NASA Astrophysics Data System (ADS)

    Wijayanti, K.

    2017-04-01

    Mathematical proof is one of the characteristics of advanced mathematical thinking, and proof plays an important role in learning abstract algebra, including group theory. The depth and complexity of an individual's understanding of a concept depend on his/her ability to establish connections among the mental structures that constitute it. One dimension of cognitive style is field dependence/independence, and field independent (FI) and field dependent (FD) learners have different characteristics. Our research questions are: (1) How does the proposed refinement of the preliminary genetic decomposition of group, designed from a preliminary study of learning with APOS, work? (2) What understanding of group is generated by students (field independent, field neutral, and field dependent) when learning through the designed material? This study was a descriptive qualitative study. The participants were nine (9) undergraduate students who were taking Introduction of Algebraic Structure 1, which included group, in the even semester of academic year 2015/2016 at Universitas Negeri Semarang; each cognitive style was represented by three participants. Two instruments were used to gather data: a written examination in the course and a set of interviews. The FD and FN participants generated Action for a binary operation, while the FI participant generated Action and Process for a binary operation. The FD, FN and FI participants generated Action, Process, Object, and Scheme for the set. The FD and FN participants did not generate a mental structure for axiom, whereas the FI participant generated Scheme for axiom. The FD, FN and FI participants tended to have no coherence of the Scheme of group. Not all mental structures in the refinement of the preliminary genetic decomposition could be constructed by the participants, so obstacles remain in the process of proving.

  16. Modelling Detailed-Chemistry Effects on Turbulent Diffusion Flames using a Parallel Solution-Adaptive Scheme

    NASA Astrophysics Data System (ADS)

    Jha, Pradeep Kumar

    Capturing the effects of detailed-chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-w turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate of methane-air combustion is represented here by using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state of the art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral mesh for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretizations. Numerical predictions of three different diffusion flames configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow. Comparisons are made between the predicted results of the present FPI scheme and Steady Laminar Flamelet Model (SLFM) approach for diffusion flames. The effects of grid resolution on the predicted overall flame solutions are also assessed. Other non-reacting flows have also been considered to further validate other aspects of the numerical scheme. The present schemes predict results which are in good agreement with published experimental results and reduces the computational cost involved in modelling turbulent diffusion flames significantly, both in terms of storage and processing time.
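    The presumed beta-PDF (PCM) averaging step can be sketched independently of the FPI table: a tabulated laminar quantity phi(Z) is averaged over mixture-fraction fluctuations with a beta PDF whose shape follows from the prescribed mean and variance. The phi(Z) curve and the moment values below are invented for illustration and are not taken from the thesis.

      import numpy as np
      from scipy.stats import beta
      from scipy.integrate import simpson

      def beta_pdf_average(phi_table, z_table, z_mean, z_var):
          """Presumed-PDF average of a tabulated quantity over mixture fraction Z."""
          z_var = min(z_var, z_mean * (1.0 - z_mean) * 0.999)     # keep the PDF realisable
          gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
          a, b = z_mean * gamma, (1.0 - z_mean) * gamma
          pdf = beta.pdf(z_table, a, b)
          return simpson(phi_table * pdf, x=z_table) / simpson(pdf, x=z_table)

      z = np.linspace(1e-4, 1.0 - 1e-4, 401)
      phi = 300.0 + 1700.0 * np.exp(-((z - 0.055) / 0.05) ** 2)   # toy "temperature" table

      for var in (1e-4, 1e-3, 1e-2):
          print(f"variance {var:.0e} -> averaged phi = {beta_pdf_average(phi, z, 0.055, var):7.1f}")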

  17. Efficient parallel seismic simulations including topography and 3-D material heterogeneities on locally refined composite grids

    NASA Astrophysics Data System (ADS)

    Petersson, Anders; Rodgers, Arthur

    2010-05-01

    The finite difference method on a uniform Cartesian grid is a highly efficient and easy to implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P, which relates the grid spacing h to the wave length L, and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wave lengths in the earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth) than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the small wave lengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to depth 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface, to keep the local grid size in approximate parity with the local wave lengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points. WPP implements a new, energy conserving, coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which seldom are appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer-over-half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes.
WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion DOF) and 41,000 time steps. WPP is an open-source code that is available under the GNU General Public License.
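    The rule h ≤ L/P quoted above translates directly into the spacing each composite-grid level needs, as in the small sketch below; the wave-speed profile, frequency, points per wavelength, and refinement ratio are illustrative numbers, not WPP defaults.

      # Pick the coarsest admissible composite-grid level per depth range from h <= L/P = c/(P*f).
      f_max = 2.0     # highest frequency to resolve [Hz]
      P = 8           # grid points per shortest wavelength
      ratio = 2       # refinement ratio between composite-grid levels

      # (top depth [km], bottom depth [km], minimum shear speed [km/s]) -- illustrative profile
      layers = [(0.0, 1.0, 0.5), (1.0, 10.0, 1.5), (10.0, 30.0, 3.0), (30.0, 50.0, 4.5)]

      h_surface = layers[0][2] / (P * f_max)                      # spacing required at the surface
      for top, bottom, c_min in layers:
          h_required = c_min / (P * f_max)                        # h <= L/P with L = c_min/f_max
          level = 0
          while h_surface * ratio ** (level + 1) <= h_required:   # coarsen while still admissible
              level += 1
          print(f"{top:4.0f}-{bottom:2.0f} km: need h <= {h_required * 1000:6.1f} m "
                f"-> level {level} (h = {h_surface * ratio**level * 1000:6.1f} m)")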

  18. A mass and momentum conserving unsplit semi-Lagrangian framework for simulating multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owkes, Mark, E-mail: mark.owkes@montana.edu; Desjardins, Olivier

    In this work, we present a computational methodology for convection and advection that handles discontinuities with second order accuracy and maintains conservation to machine precision. This method can transport a variety of discontinuous quantities and is used in the context of an incompressible gas–liquid flow to transport the phase interface, momentum, and scalars. The proposed method provides a modification to the three-dimensional, unsplit, second-order semi-Lagrangian flux method of Owkes & Desjardins (JCP, 2014). The modification adds a refined grid that provides consistent fluxes of mass and momentum defined on a staggered grid and discrete conservation of mass and momentum, even for flows with large density ratios. Additionally, the refined grid doubles the resolution of the interface without significantly increasing the computational cost over previous non-conservative schemes. This is possible due to a novel partitioning of the semi-Lagrangian fluxes into a small number of simplices. The proposed scheme is tested using canonical verification tests, rising bubbles, and an atomizing liquid jet.

  19. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.

  20. Adaptive h -refinement for reduced-order models: ADAPTIVE h -refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
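    The offline part of the splitting scheme, recursive k-means clustering of the state variables from snapshot data and the splitting of a basis vector into children with disjoint support, can be sketched as below. The snapshot matrix and tree depth are synthetic, and the online dual-weighted-residual selection step is not shown.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      n_state, n_snap = 200, 40
      snapshots = rng.normal(size=(n_state, n_snap))
      snapshots[:100] += np.linspace(0.0, 3.0, n_snap)               # two crudely distinct groups of states
      phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, 0]   # one POD basis vector

      def build_tree(indices, depth):
          """Recursively partition state indices by 2-means on their snapshot rows."""
          node = {"indices": indices, "children": []}
          if depth == 0 or len(indices) < 4:
              return node
          labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(snapshots[indices])
          for k in (0, 1):
              child = indices[labels == k]
              if len(child):
                  node["children"].append(build_tree(child, depth - 1))
          return node

      def split_vector(vec, node):
          """Split a basis vector into children whose supports follow the tree partition."""
          children = []
          for child in node["children"]:
              v = np.zeros_like(vec)
              v[child["indices"]] = vec[child["indices"]]            # restrict support to this child
              children.append(v)
          return children

      tree = build_tree(np.arange(n_state), depth=2)
      kids = split_vector(phi, tree)
      print("children:", len(kids), " disjoint supports:",
            all(np.count_nonzero(kids[0] * other) == 0 for other in kids[1:]))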

  1. On Spurious Numerics in Solving Reactive Equations

    NASA Technical Reports Server (NTRS)

    Kotov, D. V.; Yee, H. C.; Wang, W.; Shu, C.-W.

    2013-01-01

    The objective of this study is to gain a deeper understanding of the behavior of high order shock-capturing schemes for problems with stiff source terms and discontinuities and of the corresponding numerical prediction strategies. The studies by Yee et al. (2012) and Wang et al. (2012) focus only on solving the reactive system by the fractional step method using the Strang splitting (Strang 1968). It is a common practice by developers in computational physics and engineering simulations to include a cutoff safeguard if densities are outside the permissible range. Here we compare the spurious behavior of the same schemes by solving the fully coupled reactive system without the Strang splitting vs. using the Strang splitting. The comparison between the two procedures and the effects of a cutoff safeguard is the focus of the present study. The comparison of the performance of these schemes is largely based on the degree to which each method captures the correct location of the reaction front for coarse grids. Here "coarse grids" means the standard mesh density required for accurate simulation of typical non-reacting flows with a similar problem setup. It is remarked that, in order to resolve the sharp reaction front, local refinement beyond standard mesh density is still needed.
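
    For readers unfamiliar with the fractional step method mentioned above, the sketch below applies Strang splitting to a scalar advection equation with a stiff model reaction term; the grid, time step, and reaction rate are illustrative choices, not the test cases of the study.

        import numpy as np

        def upwind_step(u, a, dt, dx):
            """First-order upwind update for u_t + a*u_x = 0 (a > 0, periodic)."""
            return u - a * dt / dx * (u - np.roll(u, 1))

        def reaction_step(u, dt, k=1000.0):
            """Advance the stiff model reaction u_t = -k*u*(u-1)*(u-0.5)
            with many small explicit substeps (illustrative only)."""
            nsub = 50
            for _ in range(nsub):
                u = u + (dt / nsub) * (-k * u * (u - 1.0) * (u - 0.5))
            return u

        def strang_step(u, a, dt, dx):
            """One Strang-split step: half reaction, full convection, half reaction."""
            u = reaction_step(u, 0.5 * dt)
            u = upwind_step(u, a, dt, dx)
            return reaction_step(u, 0.5 * dt)

        # usage: a right-moving front on a deliberately coarse grid
        x = np.linspace(0.0, 1.0, 101)
        u = (x < 0.2).astype(float)
        dx, dt, a = x[1] - x[0], 0.004, 1.0
        for _ in range(100):
            u = strang_step(u, a, dt, dx)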

  2. Application of high-order numerical schemes and Newton-Krylov method to two-phase drift-flux model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    This study concerns the application and solver robustness of the Newton-Krylov method in solving two-phase flow drift-flux model problems using high-order numerical schemes. In our previous studies, the Newton-Krylov method has been proven a promising solver for two-phase flow drift-flux model problems. However, these studies were limited to first-order numerical schemes. Moreover, the previous approach to treating the drift-flux closure correlations was later found to degrade solver convergence when the mesh was highly refined and when higher-order numerical schemes were employed. In this study, a second-order spatial discretization scheme that had been tested with the two-fluid two-phase flow model was extended to solve drift-flux model problems. In order to improve solver robustness, and therefore efficiency, a new approach was proposed that treats the mean drift velocity of the gas phase as a primary nonlinear variable of the equation system. With this new approach, significant improvement in solver robustness was achieved. With highly refined meshes, the proposed treatment, along with the Newton-Krylov solver, was extensively tested with two-phase flow problems that cover a wide range of thermal-hydraulics conditions. Satisfactory convergence performance was observed for all test cases. Numerical verification was then performed in the form of mesh convergence studies, from which the expected orders of accuracy were obtained for both the first-order and the second-order spatial discretization schemes. Finally, the drift-flux model, along with the numerical methods presented, was validated with three sets of flow boiling experiments that cover different flow channel geometries (round tube, rectangular tube, and rod bundle) and a wide range of test conditions (pressure, mass flux, wall heat flux, inlet subcooling and outlet void fraction).
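
    A minimal sketch of the Jacobian-free Newton-Krylov solution strategy on a small stand-in nonlinear system (not the drift-flux model itself), using scipy.optimize.newton_krylov; the residual function and tolerances are illustrative assumptions.

        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(u):
            """Residual of a simple nonlinear two-point boundary-value problem,
            -u'' + u**3 = 1 with u(0) = u(1) = 0, discretized with central
            differences; a stand-in for the drift-flux residual vector."""
            n = u.size
            h = 1.0 / (n + 1)
            upad = np.concatenate(([0.0], u, [0.0]))
            return (-(upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2
                    + upad[1:-1] ** 3 - 1.0)

        u0 = np.zeros(50)
        sol = newton_krylov(residual, u0, f_tol=1e-10, verbose=False)
        print(sol[:5])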

  3. Application of high-order numerical schemes and Newton-Krylov method to two-phase drift-flux model

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2017-08-07

    This study concerns the application and solver robustness of the Newton-Krylov method in solving two-phase flow drift-flux model problems using high-order numerical schemes. In our previous studies, the Newton-Krylov method has been proven a promising solver for two-phase flow drift-flux model problems. However, these studies were limited to first-order numerical schemes. Moreover, the previous approach to treating the drift-flux closure correlations was later found to degrade solver convergence when the mesh was highly refined and when higher-order numerical schemes were employed. In this study, a second-order spatial discretization scheme that had been tested with the two-fluid two-phase flow model was extended to solve drift-flux model problems. In order to improve solver robustness, and therefore efficiency, a new approach was proposed that treats the mean drift velocity of the gas phase as a primary nonlinear variable of the equation system. With this new approach, significant improvement in solver robustness was achieved. With highly refined meshes, the proposed treatment, along with the Newton-Krylov solver, was extensively tested with two-phase flow problems that cover a wide range of thermal-hydraulics conditions. Satisfactory convergence performance was observed for all test cases. Numerical verification was then performed in the form of mesh convergence studies, from which the expected orders of accuracy were obtained for both the first-order and the second-order spatial discretization schemes. Finally, the drift-flux model, along with the numerical methods presented, was validated with three sets of flow boiling experiments that cover different flow channel geometries (round tube, rectangular tube, and rod bundle) and a wide range of test conditions (pressure, mass flux, wall heat flux, inlet subcooling and outlet void fraction).

  4. Bridge deficiency metric refinement : longer-term planning for state and locally-owned bridges.

    DOT National Transportation Integrated Search

    2010-06-01

    The focus of this study was to devise a prioritization scheme for locally-owned bridges and to extend the planning time horizon for state-owned bridges. The inherent nature of the local and county structures prevents a direct application of the formu...

  5. A morphing-based scheme for large deformation analysis with stereo-DIC

    NASA Astrophysics Data System (ADS)

    Genovese, Katia; Sorgente, Donato

    2018-05-01

    A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization can prove very challenging and possibly fail when dealing with pairs of largely deformed images, such as those obtained from two angled views of non-flat objects or from the temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a proper image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step-by-step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out by taking as a benchmark a standard reference-updating approach. Finally, the morphing scheme is extended to the most general case of the correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.
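
    The sketch below illustrates the intermediate-image idea in its simplest form: a dense displacement field (standing in for the feature-matching plus NURBS interpolation described in the paper) is scaled by a morphing parameter and used to resample the reference image; the field, step count, and function name are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def morph_sequence(image, disp_y, disp_x, n_steps=5):
            """Generate intermediate images by applying the full displacement field
            scaled by t = 1/n, 2/n, ..., 1 (backward warping, bilinear sampling).

            disp_y, disp_x give, for each output pixel, the offset of its source
            location in the reference image.
            """
            rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
            frames = []
            for t in np.linspace(1.0 / n_steps, 1.0, n_steps):
                coords = np.array([rows + t * disp_y, cols + t * disp_x])
                frames.append(map_coordinates(image, coords, order=1, mode='nearest'))
            return frames

        # usage: a synthetic speckle image and a smooth stretching field
        img = np.random.rand(128, 128)
        yy, xx = np.mgrid[0:128, 0:128]
        frames = morph_sequence(img, 0.05 * yy, 0.10 * xx, n_steps=4)
        print(len(frames), frames[-1].shape)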

  6. High-order central ENO finite-volume scheme for hyperbolic conservation laws on three-dimensional cubed-sphere grids

    NASA Astrophysics Data System (ADS)

    Ivan, L.; De Sterck, H.; Susanto, A.; Groth, C. P. T.

    2015-02-01

    A fourth-order accurate finite-volume scheme for hyperbolic conservation laws on three-dimensional (3D) cubed-sphere grids is described. The approach is based on a central essentially non-oscillatory (CENO) finite-volume method that was recently introduced for two-dimensional compressible flows and is extended to 3D geometries with structured hexahedral grids. Cubed-sphere grids feature hexahedral cells with nonplanar cell surfaces, which are handled with high-order accuracy using trilinear geometry representations in the proposed approach. Varying stencil sizes and slope discontinuities in grid lines occur at the boundaries and corners of the six sectors of the cubed-sphere grid where the grid topology is unstructured, and these difficulties are handled naturally with high-order accuracy by the multidimensional least-squares based 3D CENO reconstruction with overdetermined stencils. A rotation-based mechanism is introduced to automatically select appropriate smaller stencils at degenerate block boundaries, where fewer ghost cells are available and the grid topology changes, requiring stencils to be modified. Combining these building blocks results in a finite-volume discretization for conservation laws on 3D cubed-sphere grids that is uniformly high-order accurate in all three grid directions. While solution-adaptivity is natural in the multi-block setting of our code, high-order accurate adaptive refinement on cubed-sphere grids is not pursued in this paper. The 3D CENO scheme is an accurate and robust solution method for hyperbolic conservation laws on general hexahedral grids that is attractive because it is inherently multidimensional by employing a K-exact overdetermined reconstruction scheme, and it avoids the complexity of considering multiple non-central stencil configurations that characterizes traditional ENO schemes. Extensive numerical tests demonstrate fourth-order convergence for stationary and time-dependent Euler and magnetohydrodynamic flows on cubed-sphere grids, and robustness against spurious oscillations at 3D shocks. Performance tests illustrate efficiency gains that can be potentially achieved using fourth-order schemes as compared to second-order methods for the same error level. Applications on extended cubed-sphere grids incorporating a seventh root block that discretizes the interior of the inner sphere demonstrate the versatility of the spatial discretization method.
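
    The least-squares reconstruction with an overdetermined stencil that underlies CENO-type methods can be illustrated, in a much reduced form, by estimating a k-exact (k = 1, i.e. linear) reconstruction gradient from neighbouring cell averages; the sketch below is a generic illustration, not the cubed-sphere implementation.

        import numpy as np

        def lsq_gradient(centroids, averages, i, neighbors):
            """Least-squares estimate of the solution gradient in cell i from an
            overdetermined stencil of neighbour cell averages (k-exact, k = 1).

            Solves min ||A g - b|| with rows A = (x_j - x_i) and b = u_j - u_i.
            """
            A = centroids[neighbors] - centroids[i]
            b = averages[neighbors] - averages[i]
            g, *_ = np.linalg.lstsq(A, b, rcond=None)
            return g

        # usage: a 3x3 patch of cells with u = 2x + 3y; the gradient should be (2, 3)
        xc, yc = np.meshgrid(np.arange(3.0), np.arange(3.0))
        centroids = np.column_stack([xc.ravel(), yc.ravel()])
        averages = 2.0 * centroids[:, 0] + 3.0 * centroids[:, 1]
        print(lsq_gradient(centroids, averages, 4, [0, 1, 2, 3, 5, 6, 7, 8]))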

  7. A complete system for head tracking using motion-based particle filter and randomly perturbed active contour

    NASA Astrophysics Data System (ADS)

    Bouaynaya, N.; Schonfeld, Dan

    2005-03-01

    Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an accurate elastic contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian Importance Sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique. We construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics. In particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.

  8. Recent advances in high-order WENO finite volume methods for compressible multiphase flows

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael

    2013-10-01

    We present two new families of better than second order accurate Godunov-type finite volume methods for the solution of nonlinear hyperbolic partial differential equations with nonconservative products. One family is based on a high order Arbitrary-Lagrangian-Eulerian (ALE) formulation on moving meshes, which allows the material contact wave to be resolved very sharply when the mesh is moved at the speed of the material interface. The other family of methods is based on a high order Adaptive Mesh Refinement (AMR) strategy, where the mesh can be strongly refined in the vicinity of the material interface. Both classes of schemes have several building blocks in common, in particular: a high order WENO reconstruction operator to obtain high order of accuracy in space; the use of an element-local space-time Galerkin predictor step, which evolves the reconstruction polynomials in time and reaches high order of accuracy in time in a single step; and the use of a path-conservative approach to treat the nonconservative terms of the PDE. We show applications of both methods to the Baer-Nunziato model for compressible multiphase flows.
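
    As background for the WENO reconstruction operator mentioned above, the sketch below implements the classical fifth-order Jiang-Shu reconstruction of a left interface state from five cell averages; it is a simplified scalar illustration, not the ADER-WENO machinery of the paper.

        import numpy as np

        def weno5_left(v):
            """Fifth-order WENO reconstruction of the left state at interface i+1/2
            from the five cell averages v = (v_{i-2}, ..., v_{i+2})."""
            eps = 1e-6
            # candidate third-order reconstructions on the three sub-stencils
            q0 = ( 2*v[0] - 7*v[1] + 11*v[2]) / 6.0
            q1 = (  -v[1] + 5*v[2] +  2*v[3]) / 6.0
            q2 = ( 2*v[2] + 5*v[3] -    v[4]) / 6.0
            # smoothness indicators
            b0 = 13/12*(v[0] - 2*v[1] + v[2])**2 + 0.25*(v[0] - 4*v[1] + 3*v[2])**2
            b1 = 13/12*(v[1] - 2*v[2] + v[3])**2 + 0.25*(v[1] - v[3])**2
            b2 = 13/12*(v[2] - 2*v[3] + v[4])**2 + 0.25*(3*v[2] - 4*v[3] + v[4])**2
            # nonlinear weights built from the optimal linear weights (1/10, 6/10, 3/10)
            a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
            w = a / a.sum()
            return w[0]*q0 + w[1]*q1 + w[2]*q2

        # smooth data reproduces the exact value; a jump keeps the result bounded
        print(weno5_left(np.array([1.0, 1.0, 1.0, 1.0, 1.0])))   # 1.0
        print(weno5_left(np.array([0.0, 0.0, 0.0, 1.0, 1.0])))   # stays near 0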

  9. An adaptive interpolation scheme for molecular potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
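
    A greatly simplified sketch of adaptive node refinement driven by an error indicator, using scipy.interpolate.RBFInterpolator with a thin-plate (polyharmonic) kernel; for illustration the error is evaluated against a known model function, whereas the paper relies on a local error estimate, and the partition-of-unity step is omitted.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def f(x):
            """Model potential-like function on [0, 1]^2 (illustrative stand-in)."""
            return np.exp(-20.0 * np.sum((x - 0.5)**2, axis=1))

        rng = np.random.default_rng(0)
        nodes = rng.random((20, 2))           # initial coarse node set
        candidates = rng.random((400, 2))     # pool of possible new sample points
        tol = 1e-3

        for _ in range(10):
            interp = RBFInterpolator(nodes, f(nodes), kernel='thin_plate_spline')
            err = np.abs(interp(candidates) - f(candidates))   # error indicator
            worst = np.argmax(err)
            if err[worst] < tol:
                break
            # refine: add the candidate with the largest indicated error
            nodes = np.vstack([nodes, candidates[worst]])

        print(nodes.shape[0], err.max())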

  10. Asynchronous variational integration using continuous assumed gradient elements.

    PubMed

    Wolff, Sebastian; Bucher, Christian

    2013-03-01

    Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length with each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) (Liu, 2009) [1], exemplified by continuous assumed gradient elements (Wolff and Bucher, 2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes, and provides a recipe for estimating the critical time step.

  11. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  12. Investigation of supersonic jet plumes using an improved two-equation turbulence model

    NASA Technical Reports Server (NTRS)

    Lakshmanan, B.; Abdol-Hamid, Khaled S.

    1994-01-01

    Supersonic jet plumes were studied using a two-equation turbulence model employing corrections for compressible dissipation and pressure-dilatation. A space-marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that two-equation models employing corrections for compressible dissipation and pressure-dilatation yield improved agreement with the experimental data. In addition, the numerical study demonstrates that the computed results are sensitive to the effect of grid refinement and insensitive to the type of velocity profiles used at the inflow boundary for the cases considered in the present study.

  13. Use of diagnostic information submitted to the United Kingdom Central Cardiac Audit Database: development of categorisation and allocation algorithms.

    PubMed

    Brown, Kate L; Crowe, Sonya; Pagel, Christina; Bull, Catherine; Muthialu, Nagarajan; Gibbs, John; Cunningham, David; Utley, Martin; Tsang, Victor T; Franklin, Rodney

    2013-08-01

    To categorise records according to primary cardiac diagnosis in the United Kingdom Central Cardiac Audit Database in order to add this information to a risk adjustment model for paediatric cardiac surgery. Codes from the International Paediatric Congenital Cardiac Code were mapped to recognisable primary cardiac diagnosis groupings, allocated using a hierarchy and less refined diagnosis groups, based on the number of functional ventricles and presence of aortic obstruction. Setting: A National Clinical Audit Database. Patients: Children undergoing cardiac interventions; the proportions for each diagnosis scheme are presented for 13,551 first patient surgical episodes since 2004. In Scheme 1, the most prevalent diagnoses nationally were ventricular septal defect (13%), patent ductus arteriosus (10.4%), and tetralogy of Fallot (9.5%). In Scheme 2, the prevalence of a biventricular heart without aortic obstruction was 64.2% and with aortic obstruction was 14.1%; the prevalence of a functionally univentricular heart without aortic obstruction was 4.3% and with aortic obstruction was 4.7%; the prevalence of unknown (ambiguous) number of ventricles was 8.4%; and the prevalence of acquired heart disease only was 2.2%. Diagnostic groups added to procedural information: of the 17% of all operations classed as "not a specific procedure", 97.1% had a diagnosis identified in Scheme 1 and 97.2% in Scheme 2. Diagnostic information adds to surgical procedural data when the complexity of case mix is analysed in a national database. These diagnostic categorisation schemes may be used for future investigation of the frequency of conditions and evaluation of long-term outcome over a series of procedures.

  14. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
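
    A worked miniature of the sensitivity-equation idea for a single ODE-constrained objective: the state and its sensitivity are integrated together and the gradient follows from the chain rule; the model problem and parameter values are illustrative assumptions, not the aerodynamic design problems of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # State equation: u' = -p*u, u(0) = 1.  Objective: J(p) = u(T)^2.
        # Sensitivity equation: s' = dF/du * s + dF/dp = -p*s - u, s(0) = 0,
        # so dJ/dp = 2*u(T)*s(T).  Analytic check: u(T) = exp(-p*T).

        def rhs(t, y, p):
            u, s = y
            return [-p * u, -p * s - u]

        p, T = 0.7, 2.0
        sol = solve_ivp(rhs, [0.0, T], [1.0, 0.0], args=(p,), rtol=1e-10, atol=1e-12)
        u_T, s_T = sol.y[:, -1]
        grad = 2.0 * u_T * s_T
        exact = -2.0 * T * np.exp(-2.0 * p * T)   # d/dp of exp(-2*p*T)
        print(grad, exact)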

  15. Matching by linear programming and successive convexification.

    PubMed

    Jiang, Hao; Drew, Mark S; Li, Ze-Nian

    2007-06-01

    We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Yulong; Shu, Chi-wang; Noelle, Sebastian

    This note aims at demonstrating the advantage of moving-water well-balanced schemes over still-water well-balanced schemes for the shallow water equations. We concentrate on numerical examples with solutions near a moving-water equilibrium. For such examples, still-water well-balanced methods are not capable of capturing the small perturbations of the moving-water equilibrium and may generate significant spurious oscillations, unless an extremely refined mesh is used. On the other hand, moving-water well-balanced methods perform well in these tests. The numerical examples in this note clearly demonstrate the importance of utilizing moving-water well-balanced methods for solutions near a moving-water equilibrium.
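
    For reference, a one-dimensional moving-water (smooth steady-state) equilibrium of the shallow water equations, with water depth h, velocity u, gravitational acceleration g, and bottom topography b, is characterized by constant discharge and constant energy,

        q = h\,u = \text{const}, \qquad E = \frac{u^{2}}{2} + g\,(h + b) = \text{const},

    whereas still-water (lake-at-rest) well-balancing only covers the special case u = 0, h + b = const, which is why still-water well-balanced schemes struggle with small perturbations of a moving-water equilibrium unless the mesh is heavily refined.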

  17. MYCIN II: design and implementation of a therapy reference with complex content-based indexing.

    PubMed Central

    Kim, D. K.; Fagan, L. M.; Jones, K. T.; Berrios, D. C.; Yu, V. L.

    1998-01-01

    We describe the construction of MYCIN II, a prototype system that provides for content-based markup and search of a forthcoming clinical therapeutics textbook, Antimicrobial Therapy and Vaccines. Existing commercial search technology for digital references utilizes generic tools such as textword-based searches with geographical or statistical refinements. We suggest that the drawbacks of such systems significantly restrict their use in everyday clinical practice. This is in spite of the fact that there is a great need for the information contained within these same references. The system we describe is intended to supplement keyword searching so that certain important questions can be asked easily and can be answered reliably (in terms of precision and recall). Our method attacks this problem in a restricted domain of knowledge: clinical infectious disease. For example, we would like to be able to answer the class of questions exemplified by the following query: "What antimicrobial agents can be used to treat endocarditis caused by Eikenella corrodens?" We have compiled and analyzed a list of such questions to develop a concept-based markup scheme. This scheme was then applied within an HTML markup to electronically "highlight" passages from three textbook chapters. We constructed a functioning web-based search interface. Our system also provides semi-automated querying of PubMed using our concept markup and the user's actions as a guide. PMID:9929205

  18. MYCIN II: design and implementation of a therapy reference with complex content-based indexing.

    PubMed

    Kim, D K; Fagan, L M; Jones, K T; Berrios, D C; Yu, V L

    1998-01-01

    We describe the construction of MYCIN II, a prototype system that provides for content-based markup and search of a forthcoming clinical therapeutics textbook, Antimicrobial Therapy and Vaccines. Existing commercial search technology for digital references utilizes generic tools such as textword-based searches with geographical or statistical refinements. We suggest that the drawbacks of such systems significantly restrict their use in everyday clinical practice. This is in spite of the fact that there is a great need for the information contained within these same references. The system we describe is intended to supplement keyword searching so that certain important questions can be asked easily and can be answered reliably (in terms of precision and recall). Our method attacks this problem in a restricted domain of knowledge: clinical infectious disease. For example, we would like to be able to answer the class of questions exemplified by the following query: "What antimicrobial agents can be used to treat endocarditis caused by Eikenella corrodens?" We have compiled and analyzed a list of such questions to develop a concept-based markup scheme. This scheme was then applied within an HTML markup to electronically "highlight" passages from three textbook chapters. We constructed a functioning web-based search interface. Our system also provides semi-automated querying of PubMed using our concept markup and the user's actions as a guide.

  19. Development of a stationary carbon emission inventory for Shanghai using pollution source census data

    NASA Astrophysics Data System (ADS)

    Li, Xianzhe; Jiang, Ping; Zhang, Yan; Ma, Weichun

    2016-12-01

    This study utilizes 521,631 activity data points from the 2007 Shanghai Pollution Source Census to compile a stationary carbon emission inventory for Shanghai. The inventory generated from our dataset shows that a large portion of Shanghai's total energy use consists of coal-oriented energy consumption. The electricity and heat production industries, iron and steel mills, and the petroleum refining industry are the main carbon emitters. In addition, most of these industries are located in Baoshan District, which is Shanghai's largest contributor of carbon emissions. Policy makers can use the enterprise-level carbon emission inventory and the method designed in this study to construct sound carbon emission reduction policies. The carbon trading scheme to be established in Shanghai based on the developed carbon inventory is also introduced in this paper with the aim of promoting the monitoring, reporting and verification of carbon trading. Moreover, we believe that it might be useful to consider the participation of industries, such as those for food processing, beverage, and tobacco, in Shanghai's carbon trading scheme. Based on the results contained herein, we recommend establishing a comprehensive carbon emission inventory by inputting data from the pollution source census used in this study.

  20. A weakly-compressible Cartesian grid approach for hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.

    2017-11-01

    The present article aims at proposing an original strategy to solve hydrodynamic flows. In the introduction, the motivations for this strategy are developed. It aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR compatible treatment. The method proposed uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.

  1. Hierarchical adaptation scheme for multiagent data fusion and resource management in situation analysis

    NASA Astrophysics Data System (ADS)

    Benaskeur, Abder R.; Roy, Jean

    2001-08-01

    Sensor Management (SM) has to do with how to best manage, coordinate and organize the use of sensing resources in a manner that synergistically improves the process of data fusion. Based on the contextual information, SM develops options for collecting further information, allocates and directs the sensors towards the achievement of the mission goals and/or tunes the parameters for the realtime improvement of the effectiveness of the sensing process. Conscious of the important role that SM has to play in modern data fusion systems, we are currently studying advanced SM Concepts that would help increase the survivability of the current Halifax and Iroquois Class ships, as well as their possible future upgrades. For this purpose, a hierarchical scheme has been proposed for data fusion and resource management adaptation, based on the control theory and within the process refinement paradigm of the JDL data fusion model, and taking into account the multi-agent model put forward by the SASS Group for the situation analysis process. The novelty of this work lies in the unified framework that has been defined for tackling the adaptation of both the fusion process and the sensor/weapon management.

  2. Creation of parallel algorithms for the solution of problems of gas dynamics on multi-core computers and GPU

    NASA Astrophysics Data System (ADS)

    Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.

    2013-10-01

    The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results for the interaction of shock waves with a low-density bubble and for gas flow under gravitational forces are presented. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of TVD schemes, and the low-level diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. To improve the accuracy of the calculations, it is necessary to choose a sufficiently fine grid (with a small cell size), which has the drawback of substantially increasing the computation time. Therefore, for the calculation of complex problems it is reasonable to use Adaptive Mesh Refinement (AMR): the grid is refined only in the areas of interest, where, e.g., shock waves are generated, or where complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, the execution of the application on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method. AMR can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models were implemented for calculations on graphics processors using CUDA technology [1].

  3. Cold Helium Pressurization for Liquid Oxygen / Liquid Methane Propulsion Systems: Fully-Integrated Initial Hot-Fire Test Results

    NASA Technical Reports Server (NTRS)

    Morehead, R. L.; Atwell, M. J.; Melcher, J. C.; Hurlbert, E. A.

    2016-01-01

    A prototype cold helium active pressurization system was incorporated into an existing liquid oxygen (LOX) / liquid methane (LCH4) prototype planetary lander and hot-fire tested to collect vehicle-level performance data. Results from this hot-fire test series were used to validate integrated models of the vehicle helium and propulsion systems and demonstrate system effectiveness for a throttling lander. Pressurization systems vary greatly in complexity and efficiency between vehicles, so a pressurization performance metric was also developed as a means to compare different active pressurization schemes. This implementation of an active repress system is an initial sizing draft. Refined implementations will be tested in the future, improving the general knowledge base for a cryogenic lander-based cold helium system.

  4. Estimation of distributed Fermat-point location for wireless sensor networking.

    PubMed

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location within the triangular area formed by the intersections of three neighboring beacon nodes. The Fermat point is the point that minimizes the total distance to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to minimize the error in estimating sensor node locations. DFPLE solves problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
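
    The Fermat point of a triangle (equivalently, the geometric median of its three vertices when all angles are below 120 degrees) can be computed with the Weiszfeld iteration; the sketch below is a generic illustration and is not taken from the DFPLE paper.

        import numpy as np

        def fermat_point(vertices, iters=100, tol=1e-9):
            """Geometric median of three beacon positions via Weiszfeld iteration.

            For a triangle with all angles below 120 degrees this coincides with
            the Fermat point, i.e. the point minimizing the summed distance to
            the three vertices.
            """
            p = vertices.mean(axis=0)                  # start at the centroid
            for _ in range(iters):
                d = np.linalg.norm(vertices - p, axis=1)
                if np.any(d < tol):                    # landed on a vertex
                    return p
                w = 1.0 / d
                p_new = (w[:, None] * vertices).sum(axis=0) / w.sum()
                if np.linalg.norm(p_new - p) < tol:
                    return p_new
                p = p_new
            return p

        # usage: three beacon intersections (illustrative coordinates)
        beacons = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
        print(fermat_point(beacons))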

  5. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1991-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator of the frequency and its derivatives: it yields higher rms estimation errors but has a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimates of the phase and frequency and in the process also reducing the number of cycle slips.

  6. Fast-kick-off monotonically convergent algorithm for searching optimal control fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel

    2011-09-15

    This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in the state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap of the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refining control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also is able to attain good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.

  7. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1990-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator of the frequency and its derivatives: it yields higher rms estimation errors but has a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimates of the phase and frequency and in the process also reducing the number of cycle slips.

  8. A back-fitting algorithm to improve real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan

    2018-07-01

    Real-time flood forecasting is important for decision-making with regards to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
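
    A toy sketch of the back-fitting idea used in scheme V: a simple linear "hydrological" model and an AR(1) error model are refitted alternately on synthetic data; both components are deliberately crude stand-ins for the models used in the study, and the variable names are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.random(200)                        # forcing (e.g. a rainfall index)
        true_err = np.zeros(200)
        for t in range(1, 200):                    # AR(1) structured error
            true_err[t] = 0.6 * true_err[t - 1] + 0.05 * rng.standard_normal()
        q_obs = 2.5 * x + true_err                 # "observed" flow

        a, rho = 0.0, 0.0                          # model and AR parameters
        for _ in range(20):                        # back-fitting: alternate the two fits
            # 1) refit the "hydrological" model on flow minus the AR-explained error
            resid_ar = np.concatenate([[0.0], rho * (q_obs - a * x)[:-1]])
            a = np.dot(x, q_obs - resid_ar) / np.dot(x, x)
            # 2) refit the AR(1) coefficient on the hydrological-model residuals
            e = q_obs - a * x
            rho = np.dot(e[1:], e[:-1]) / np.dot(e[:-1], e[:-1])

        print(round(a, 3), round(rho, 3))          # roughly recovers 2.5 and 0.6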

  9. The Importance of Temporal and Spatial Vegetation Structure Information in Biotope Mapping Schemes: A Case Study in Helsingborg, Sweden

    NASA Astrophysics Data System (ADS)

    Gao, Tian; Qiu, Ling; Hammer, Mårten; Gunnarsson, Allan

    2012-02-01

    Temporal and spatial vegetation structure has an impact on biodiversity qualities. Yet, current biotope mapping schemes incorporate these factors only to a limited extent. The purpose of this study is to evaluate the application of a modified biotope mapping scheme that includes temporal and spatial vegetation structure. A refined scheme was developed based on a biotope classification, and applied to a green structure system in Helsingborg city in southern Sweden. It includes four parameters of vegetation structure: continuity of forest cover, age of dominant trees, horizontal structure, and vertical structure. The major green structure sites were determined by interpretation of panchromatic aerial photographs assisted with a field survey. A set of biotope maps was constructed on the basis of each level of modified classification. An evaluation of the scheme included two aspects in particular: comparison of species richness between long-continuity and short-continuity forests based on identification of woodland continuity using ancient woodland indicators (AWI) species and related historical documents, and spatial distribution of animals in the green space in relation to vegetation structure. The results indicate that (1) regarding forest continuity: according to verification against historical documents, the richness of AWI species was higher in long-continuity forests; Simpson's diversity was significantly different between long- and short-continuity forests; and total species richness and Shannon's diversity were much higher in long-continuity forests, showing a very significant difference. (2) The spatial vegetation structure and age of stands influence the richness and abundance of the avian fauna and rabbits, and distance to the nearest tree and shrub was a strong determinant of presence for these animal groups. It is concluded that continuity of forest cover, age of dominant trees, horizontal and vertical structures of vegetation should now be included in urban biotope classifications.

  10. A geometry-adaptive IB-LBM for FSI problems at moderate and high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Tian, Fangbao; Xu, Lincheng; Young, John; Lai, Joseph C. S.

    2017-11-01

    An FSI framework combining the LBM and an improved IBM is introduced for FSI problems at moderate and high Reynolds numbers. In this framework, the fluid dynamics is obtained by the LBM. The FSI boundary conditions are handled by an improved IBM based on the feedback scheme where the feedback coefficient is mathematically derived and explicitly approximated. The Lagrangian force is divided into two parts: one is caused by the mismatching of the flow velocity and the boundary velocity at the previous time step, and the other is caused by the boundary acceleration. Such treatment significantly enhances the numerical stability. A geometry-adaptive refinement is applied to provide fine resolution around the immersed geometries. The overlapping grids between two adjacent refinements consist of two layers. The movement of fluid-structure interfaces only requires adding or removing grids at the boundaries of the refinement regions. Finally, the classic Smagorinsky large eddy simulation model is incorporated into the framework to model turbulent flows at relatively high Reynolds numbers. Several validation cases are conducted to verify the accuracy and fidelity of the present solver over a range of Reynolds numbers. Mr L. Xu acknowledges the support of the University International Postgraduate Award by University of New South Wales. Dr. F.-B. Tian is the recipient of an Australian Research Council Discovery Early Career Researcher Award (Project Number DE160101098).

  11. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  12. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  13. New high order schemes in BATS-R-US

    NASA Astrophysics Data System (ADS)

    Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.

    2013-12-01

    The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, combined with a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, recently, we have implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997) and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and it requires some tricks and effort to make the code work. When the high order scheme works, however, we find that in most cases it can obtain results similar to or better than those of the TVD scheme run on a twice finer grid. For three dimensional time dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.
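
    For contrast with the new high-order schemes, the sketch below shows the kind of second-order limited (TVD) reconstruction the abstract refers to, using the minmod limiter on a periodic 1-D array; it is a generic illustration, not BATS-R-US code.

        import numpy as np

        def minmod(a, b):
            """Minmod limiter used by second-order TVD reconstructions."""
            return np.where(a * b > 0.0,
                            np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def tvd_faces(u):
            """Second-order limited left states at interfaces i+1/2 (periodic array)."""
            du_minus = u - np.roll(u, 1)
            du_plus = np.roll(u, -1) - u
            slope = minmod(du_minus, du_plus)
            return u + 0.5 * slope

        u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
        print(tvd_faces(u))   # no new extrema are introduced at the jump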

  14. Multi-stakeholder perspectives of locally commissioned enhanced optometric services

    PubMed Central

    Baker, H; Harper, R A; Edgar, D F; Lawrenson, J G

    2016-01-01

    Objectives To explore views of all stakeholders (patients, optometrists, general practitioners (GPs), commissioners and ophthalmologists) regarding the operation of community-based enhanced optometric services. Design Qualitative study using mixed methods (patient satisfaction surveys, semi-structured telephone interviews and optometrist focus groups). Setting A minor eye conditions scheme (MECS) and glaucoma referral refinement scheme (GRRS) provided by accredited community optometrists. Participants 189 patients, 25 community optometrists, 4 glaucoma specialist hospital optometrists (GRRS), 5 ophthalmologists, 6 GPs (MECS), 4 commissioners. Results Overall, 99% (GRRS) and 100% (MECS) patients were satisfied with their optometrists’ examination. The vast majority rated the following as ‘very good’; examination duration, optometrists’ listening skills, explanations of tests and management, patient involvement in decision-making, treating the patient with care and concern. 99% of MECS patients would recommend the service. Manchester optometrists were enthusiastic about GRRS, feeling fortunate to practise in a ‘pro-optometry’ area. No major negatives were reported, although both schemes were limited to patients resident within certain postcode areas, and some inappropriate GP referrals occurred (MECS). Communication with hospitals was praised in GRRS but was variable, depending on hospital (MECS). Training for both schemes was valuable and appropriate but should be ongoing. MECS GPs were very supportive, reporting the scheme would reduce secondary care referral numbers, although some MECS patients were referred back to GPs for medication. Ophthalmologists (MECS and GRRS) expressed very positive views and widely acknowledged that these new care pathways would reduce unnecessary referrals and shorten patient waiting times. Commissioners felt both schemes met or exceeded expectations in terms of quality of care, allowing patients to be seen quicker and more efficiently. Conclusions Locally commissioned schemes can be a positive experience for all involved. With appropriate training, clear referral pathways and good communication, community optometrists can offer high-quality services that are highly acceptable to patients, health professionals and commissioners. PMID:27798000

  15. An adaptive interpolation scheme for molecular potential energy surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalewski, Markus, E-mail: mkowalew@uci.edu; Larsson, Elisabeth; Heryudono, Alfa

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

  16. System identification of the JPL micro-precision interferometer truss - Test-analysis reconciliation

    NASA Technical Reports Server (NTRS)

    Red-Horse, J. R.; Marek, E. L.; Levine-West, M.

    1993-01-01

    The JPL Micro-Precision Interferometer (MPI) is a testbed for studying the use of control-structure interaction technology in the design of space-based interferometers. A layered control architecture will be employed to regulate the interferometer optical system to tolerances in the nanometer range. An important aspect of designing and implementing the control schemes for such a system is the need for high fidelity, test-verified analytical structural models. This paper focuses on one aspect of the effort to produce such a model for the MPI structure, test-analysis model reconciliation. Pretest analysis, modal testing, and model refinement results are summarized for a series of tests at both the component and full system levels.

  17. Numerical analysis of flow about a total temperature sensor

    NASA Technical Reports Server (NTRS)

    Von Lavante, Ernst; Bruns, Russell L., Jr.; Sanetrik, Mark D.; Lam, Tim

    1989-01-01

    The unsteady flowfield about an airfoil-shaped inlet temperature sensor has been investigated using the thin-layer and full Navier-Stokes equations. A finite-volume formulation of the governing equations was used in conjunction with a Runge-Kutta time stepping scheme to analyze the flow about the sensor. Flow characteristics for this configuration were established at Mach numbers of 0.5 and 0.8 for different Reynolds numbers. The results were obtained for configurations of increasing complexity; important physical phenomena such as shock formation, boundary-layer separation, and unsteady wake formation were noted. Based on the computational results, recommendations for further study and refinement of the inlet temperature sensor were made.

  18. Noise Analysis of Simultaneous Quantum Key Distribution and Classical Communication Scheme Using a True Local Oscillator

    DOE PAGES

    Qi, Bing; Lim, Charles Ci Wen

    2018-05-07

    Recently, we proposed a simultaneous quantum and classical communication (SQCC) protocol where random numbers for quantum key distribution and bits for classical communication are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Such a scheme could be appealing in practice since a single coherent communication system can be used for multiple purposes. However, previous studies show that the SQCC protocol can tolerate only very small phase noise. This makes it incompatible with the coherent communication scheme using a true local oscillator (LO), which presents a relatively high phase noise due to the fact that the signal and the LO are generated from two independent lasers. We improve the phase noise tolerance of the SQCC scheme using a true LO by adopting a refined noise model where phase noises originating from different sources are treated differently: on the one hand, phase noise associated with the coherent receiver may be regarded as trusted noise since the detector can be calibrated locally and the photon statistics of the detected signals can be determined from the measurement results; on the other hand, phase noise due to the instability of fiber interferometers may be regarded as untrusted noise since its randomness (from the adversary’s point of view) is hard to justify. Simulation results show the tolerable phase noise in this refined noise model is significantly higher than that in the previous study, where all of the phase noises are assumed to be untrusted. Finally, we conduct an experiment to show that the required phase stability can be achieved in a coherent communication system using a true LO.

  19. Noise Analysis of Simultaneous Quantum Key Distribution and Classical Communication Scheme Using a True Local Oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Bing; Lim, Charles Ci Wen

    Recently, we proposed a simultaneous quantum and classical communication (SQCC) protocol where random numbers for quantum key distribution and bits for classical communication are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Such a scheme could be appealing in practice since a single coherent communication system can be used for multiple purposes. However, previous studies show that the SQCC protocol can tolerate only very small phase noise. This makes it incompatible with the coherent communication scheme using a true local oscillator (LO), which presents a relatively high phase noise due to the fact that the signal and the LO are generated from two independent lasers. We improve the phase noise tolerance of the SQCC scheme using a true LO by adopting a refined noise model where phase noises originating from different sources are treated differently: on the one hand, phase noise associated with the coherent receiver may be regarded as trusted noise since the detector can be calibrated locally and the photon statistics of the detected signals can be determined from the measurement results; on the other hand, phase noise due to the instability of fiber interferometers may be regarded as untrusted noise since its randomness (from the adversary’s point of view) is hard to justify. Simulation results show the tolerable phase noise in this refined noise model is significantly higher than that in the previous study, where all of the phase noises are assumed to be untrusted. In conclusion, we conduct an experiment to show that the required phase stability can be achieved in a coherent communication system using a true LO.

  20. Noise Analysis of Simultaneous Quantum Key Distribution and Classical Communication Scheme Using a True Local Oscillator

    NASA Astrophysics Data System (ADS)

    Qi, Bing; Lim, Charles Ci Wen

    2018-05-01

    Recently, we proposed a simultaneous quantum and classical communication (SQCC) protocol where random numbers for quantum key distribution and bits for classical communication are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Such a scheme could be appealing in practice since a single coherent communication system can be used for multiple purposes. However, previous studies show that the SQCC protocol can tolerate only very small phase noise. This makes it incompatible with the coherent communication scheme using a true local oscillator (LO), which presents a relatively high phase noise due to the fact that the signal and the LO are generated from two independent lasers. We improve the phase noise tolerance of the SQCC scheme using a true LO by adopting a refined noise model where phase noises originating from different sources are treated differently: on the one hand, phase noise associated with the coherent receiver may be regarded as trusted noise since the detector can be calibrated locally and the photon statistics of the detected signals can be determined from the measurement results; on the other hand, phase noise due to the instability of fiber interferometers may be regarded as untrusted noise since its randomness (from the adversary's point of view) is hard to justify. Simulation results show the tolerable phase noise in this refined noise model is significantly higher than that in the previous study, where all of the phase noises are assumed to be untrusted. We conduct an experiment to show that the required phase stability can be achieved in a coherent communication system using a true LO.
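
    The practical effect of the refined noise model can be illustrated numerically: only the untrusted share of the phase-noise variance is charged as excess noise against the key. The sketch below is a toy calculation, not the authors' security analysis; the small-noise formula and all numbers are assumptions.

        import math

        def untrusted_phase_excess_noise(V_A, var_total, var_trusted):
            """Toy illustration: charge only the untrusted part of the phase-noise
            variance (rad^2) as excess noise, using the common approximation
            xi_phase ~= 2 * V_A * (1 - exp(-var/2)) for modulation variance V_A."""
            var_untrusted = max(var_total - var_trusted, 0.0)
            return 2.0 * V_A * (1.0 - math.exp(-var_untrusted / 2.0))

        # hypothetical numbers in shot-noise units and rad^2
        print(untrusted_phase_excess_noise(V_A=4.0, var_total=1.0e-2, var_trusted=7.0e-3))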

  1. Research on segmentation based on multi-atlas in brain MR image

    NASA Astrophysics Data System (ADS)

    Qian, Yuejing

    2018-03-01

    Accurate segmentation of specific tissues in brain MR images can be effectively achieved with multi-atlas-based segmentation, and the accuracy depends mainly on the image registration accuracy and the fusion scheme. This paper proposes an automatic multi-atlas segmentation method for brain MR images. First, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for the refinement. Then, in the label fusion step, we propose a new algorithm to detect abnormal sparse patches and simultaneously discard the corresponding abnormal sparse coefficients; the label is estimated from the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with that of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM), and the majority voting method (MV). Our experimental results show the proposed method is efficient for brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
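
    A schematic version of patch-based label fusion with rejection of abnormal atlas patches is sketched below. The similarity weighting and the outlier test are simplified stand-ins for the paper's sparse-coefficient analysis; function names and thresholds are illustrative.

        import numpy as np

        def fuse_label(target_patch, atlas_patches, atlas_labels, lam=0.1, z_thresh=2.5):
            """Weight each atlas patch by its similarity to the target patch, drop
            atlases whose reconstruction error is an outlier (a stand-in for the
            abnormal sparse-patch detection), then take a weighted vote."""
            errs = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
            z = (errs - errs.mean()) / (errs.std() + 1e-12)
            keep = z < z_thresh
            w = np.exp(-errs[keep] / lam)
            w /= w.sum() + 1e-12
            votes = {}
            for wi, li in zip(w, np.asarray(atlas_labels)[keep]):
                votes[li] = votes.get(li, 0.0) + wi
            return max(votes, key=votes.get)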

  2. ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Fambri, F.; Dumbser, M.; Köppel, S.; Rezzolla, L.; Zanotti, O.

    2018-07-01

    We present a new class of high-order accurate numerical algorithms for solving the equations of general-relativistic ideal magnetohydrodynamics in curved space-times. In this paper, we assume the background space-time to be given and static, i.e. we make use of the Cowling approximation. The governing partial differential equations are solved via a new family of fully discrete and arbitrary high-order accurate path-conservative discontinuous Galerkin (DG) finite-element methods combined with adaptive mesh refinement and time accurate local time-stepping. In order to deal with shock waves and other discontinuities, the high-order DG schemes are supplemented with a novel a posteriori subcell finite-volume limiter, which makes the new algorithms as robust as classical second-order total-variation diminishing finite-volume methods at shocks and discontinuities, but also as accurate as unlimited high-order DG schemes in smooth regions of the flow. We show the advantages of this new approach by means of various classical two- and three-dimensional benchmark problems on fixed space-times. Finally, we present a performance and accuracy comparison between Runge-Kutta DG schemes and ADER high-order finite-volume schemes, showing the higher efficiency of DG schemes.
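
    The a posteriori subcell limiter follows the MOOD philosophy: accept the unlimited DG candidate only if it is physically admissible and satisfies a relaxed discrete maximum principle against the previous time level, otherwise recompute the cell with the robust subcell finite-volume scheme. A schematic admissibility check (illustrative thresholds, not the paper's exact criteria) might look like:

        import numpy as np

        def dg_candidate_rejected(u_candidate, u_old_neighborhood, eps0=1e-4):
            """Schematic troubled-cell detection: reject the DG candidate if it is
            non-finite, violates positivity (e.g. density/pressure), or breaks a
            relaxed discrete maximum principle w.r.t. the previous-step values in
            the cell and its neighbours."""
            if not np.all(np.isfinite(u_candidate)) or np.any(u_candidate <= 0.0):
                return True
            umin, umax = u_old_neighborhood.min(), u_old_neighborhood.max()
            delta = max(eps0, 1e-3 * (umax - umin))
            return bool(np.any(u_candidate < umin - delta) or
                        np.any(u_candidate > umax + delta))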

  3. ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Fambri, F.; Dumbser, M.; Köppel, S.; Rezzolla, L.; Zanotti, O.

    2018-03-01

    We present a new class of high-order accurate numerical algorithms for solving the equations of general-relativistic ideal magnetohydrodynamics in curved spacetimes. In this paper we assume the background spacetime to be given and static, i.e. we make use of the Cowling approximation. The governing partial differential equations are solved via a new family of fully-discrete and arbitrary high-order accurate path-conservative discontinuous Galerkin (DG) finite-element methods combined with adaptive mesh refinement and time accurate local timestepping. In order to deal with shock waves and other discontinuities, the high-order DG schemes are supplemented with a novel a-posteriori subcell finite-volume limiter, which makes the new algorithms as robust as classical second-order total-variation diminishing finite-volume methods at shocks and discontinuities, but also as accurate as unlimited high-order DG schemes in smooth regions of the flow. We show the advantages of this new approach by means of various classical two- and three-dimensional benchmark problems on fixed spacetimes. Finally, we present a performance and accuracy comparison between Runge-Kutta DG schemes and ADER high-order finite-volume schemes, showing the higher efficiency of DG schemes.

  4. Implementation of a cryo-electron tomography tilt-scheme optimized for high resolution subtomogram averaging.

    PubMed

    Hagen, Wim J H; Wan, William; Briggs, John A G

    2017-02-01

    Cryo-electron tomography (cryoET) allows 3D structural information to be obtained from cells and other biological samples in their close-to-native state. In combination with subtomogram averaging, detailed structures of repeating features can be resolved. CryoET data is collected as a series of images of the sample from different tilt angles; this is performed by physically rotating the sample in the microscope between each image. The angles at which the images are collected, and the order in which they are collected, together are called the tilt-scheme. Here we describe a "dose-symmetric tilt-scheme" that begins at low tilt and then alternates between increasingly positive and negative tilts. This tilt-scheme maximizes the amount of high-resolution information maintained in the tomogram for subsequent subtomogram averaging, and may also be advantageous for other applications. We describe implementation of the tilt-scheme in combination with further data-collection refinements including setting thresholds on acceptable drift and improving focus accuracy. Requirements for microscope set-up are introduced, and a macro is provided which automates the application of the tilt-scheme within SerialEM. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
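
    The dose-symmetric ordering can be generated programmatically. The sketch below produces a sequence that starts at zero tilt and alternates between increasingly positive and negative tilts in small groups; the group size and exact branch ordering are assumptions and vary between implementations of the scheme.

        def dose_symmetric_order(max_tilt=60, step=3, group=2):
            """Generate a dose-symmetric tilt order: 0 degrees first, then alternate
            between the positive and negative branches, `group` tilts at a time."""
            pos = list(range(step, max_tilt + 1, step))
            neg = [-a for a in pos]
            order, i, take_pos = [0], 0, True
            while i < len(pos):
                branch = pos if take_pos else neg
                order.extend(branch[i:i + group])
                if not take_pos:
                    i += group
                take_pos = not take_pos
            return order

        print(dose_symmetric_order(max_tilt=15, step=3, group=2))
        # [0, 3, 6, -3, -6, 9, 12, -9, -12, 15, -15]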

  5. Designing effective animations for computer science instruction

    NASA Astrophysics Data System (ADS)

    Grillmeyer, Oliver

    This study investigated the potential for animations of Scheme functions to help novice computer science students understand difficult programming concepts. These animations used an instructional framework inspired by theories of constructivism and knowledge integration. The framework had students make predictions, reflect, and specify examples to animate to promote autonomous learning and result in more integrated knowledge. The framework used animated pivotal cases to help integrate disconnected ideas and restructure students' incomplete ideas by illustrating weaknesses in their existing models. The animations scaffolded learners, making the thought processes of experts more visible by modeling complex and tacit information. The animation design was guided by prior research and a methodology of design and refinement. Analysis of pilot studies led to the development of four design concerns to aid animation designers: clearly illustrate the mapping between objects in animations with the actual objects they represent, show causal connections between elements, draw attention to the salient features of the modeled system, and create animations that reduce complexity. Refined animations based on these design concerns were compared to computer-based tools, text-based instruction, and simpler animations that do not embody the design concerns. Four studies comprised this dissertation work. Two sets of animated presentations of list creation functions were compared to control groups. No significant differences were found in support of animations. Three different animated models of traces of recursive functions ranging from concrete to abstract representations were compared. No differences in learning gains were found between the three models in test performance. Three models of animations of applicative operators were compared with students using the replacement modeler and the Scheme interpreter. Significant differences were found favoring animations that addressed causality and salience in their design. Lastly, two binary tree search algorithm animations designed to reduce complexity were compared with hand-tracing of calls. Students made fewer mistakes in predicting the tree traversal when guided by the animations. However, the posttest findings were inconsistent. In summary, animations designed based on the design concerns did not consistently add value to instruction in the form investigated in this research.

  6. Some results on numerical methods for hyperbolic conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Huanan.

    1989-01-01

    This dissertation contains some results on the numerical solutions of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions, otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method. In this way, he dramatically improves the resolutions of the contact discontinuities at very little additional costs. (3) He introduces a space-time mesh refinement method for time dependent problems.

  7. Cross-sectional mapping for refined beam elements with applications to shell-like structures

    NASA Astrophysics Data System (ADS)

    Pagani, A.; de Miguel, A. G.; Carrera, E.

    2017-06-01

    This paper discusses the use of higher-order mapping functions for enhancing the physical representation of refined beam theories. Based on the Carrera unified formulation (CUF), advanced one-dimensional models are formulated by expressing the displacement field as a generic expansion of the generalized unknowns. According to CUF, a novel physically/geometrically consistent model is devised by employing Legendre-like polynomial sets to approximate the generalized unknowns at the cross-sectional level, whereas a local mapping technique based on the blending functions method is used to describe the exact physical boundaries of the cross-section domain. Classical and innovative finite element methods, including hierarchical p-elements and locking-free integration schemes, are utilized to solve the governing equations of the unified beam theory. Several numerical applications accounting for small displacements/rotations and strains are discussed, including beam structures with cross-sectional curved edges, cylindrical shells, and thin-walled aeronautical wing structures with reinforcements. The results from the proposed methodology are widely assessed by comparisons with solutions from the literature and commercial finite element software tools. The attention is focussed on the high computational efficiency and the marked capabilities of the present beam model, which can deal with a broad spectrum of structural problems with unveiled accuracy in terms of geometrical representation of the domain boundaries.

  8. Hydrodynamic Modeling of Air Blast Propagation from the Humble Redwood Chemical High Explosive Detonations Using GEODYN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chipman, V D

    Two-dimensional axisymmetric hydrodynamic models were developed using GEODYN to simulate the propagation of air blasts resulting from a series of high explosive detonations conducted at Kirtland Air Force Base in August and September of 2007. Dubbed Humble Redwood I (HR-1), these near-surface chemical high explosive detonations consisted of seven shots of varying height or depth of burst. Each shot was simulated numerically using GEODYN. An adaptive mesh refinement scheme based on air pressure gradients was employed such that the mesh refinement tracked the advancing shock front where sharp discontinuities existed in the state variables, but allowed the mesh to sufficiently relax behind the shock front for runtime efficiency. Comparisons of overpressure, sound speed, and positive phase impulse from the GEODYN simulations were made to the recorded data taken from each HR-1 shot. Where the detonations occurred above ground or were shallowly buried (no deeper than 1 m), the GEODYN model was able to simulate the sound speeds, peak overpressures, and positive phase impulses to within approximately 1%, 23%, and 6%, respectively, of the actual recorded data, supporting the use of numerical simulation of the air blast as a forensic tool in determining the yield of an otherwise unknown explosion.
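
    A pressure-gradient refinement criterion of the kind described above can be sketched in a few lines; the normalization and the 5% threshold below are illustrative assumptions, not the values used in the study.

        import numpy as np

        def flag_for_refinement(p, dx, threshold=0.05):
            """Flag cells whose relative pressure change per cell exceeds `threshold`,
            a simple stand-in for a gradient-based AMR indicator."""
            gy, gx = np.gradient(p, dx)
            jump_per_cell = np.hypot(gx, gy) * dx / (np.abs(p) + 1e-30)
            return jump_per_cell > threshold

        # toy field with a sharp front: only cells near the front get flagged
        p = np.fromfunction(lambda i, j: 1.0e5 + 5.0e4 * (j > 32), (64, 64))
        print(flag_for_refinement(p, dx=0.1).sum())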

  9. Distinguishing the Effects of Local Point Sources from Those Caused by Upstream Nonpoint Source (NPS) Inputs: Refinement of a Watershed Development Index for New England

    EPA Science Inventory

    Using EMAP data from the NE Wadeable Stream Survey and state datasets (CT, ME), assessment tools were developed to predict diffuse NPS effects from watershed development and distinguish these from local impacts (point sources, contaminated sediments). Classification schemes were...

  10. Addressing Common Cloud-Radiation Errors from 4-hour to 4-week Model Prediction

    NASA Astrophysics Data System (ADS)

    Benjamin, S.; Sun, S.; Grell, G. A.; Green, B.; Olson, J.; Kenyon, J.; James, E.; Smirnova, T. G.; Brown, J. M.

    2017-12-01

    Cloud-radiation representation in models for subgrid-scale clouds is a known gap from subseasonal-to-seasonal models down to storm-scale models applied for forecast duration of only a few hours. NOAA/ESRL has been applying common physical parameterizations for scale-aware deep/shallow convection and boundary-layer mixing over this wide range of time and spatial scales, with some progress to be reported in this presentation. The Grell-Freitas scheme (2014, Atmos. Chem. Phys.) and MYNN boundary-layer EDMF scheme (Olson / Benjamin et al. 2016 Mon. Wea. Rev.) have been applied and tested extensively for the NOAA hourly updated 3-km High-Resolution Rapid Refresh (HRRR) and 13-km Rapid Refresh (RAP) model/assimilation systems over the United States and North America, targeting improvements to boundary-layer evolution and cloud-radiation representation in all seasons. This representation is critical for both warm-season severe convective storm forecasting and for winter-storm prediction of snow and mixed precipitation. At the same time the Grell-Freitas scheme has been applied also as an option for subseasonal forecasting toward improved US week 3-4 prediction with the FIM-HYCOM coupled model (Green et al 2017, MWR). Cloud/radiation evaluation using CERES satellite-based estimates has been applied both to 12-h RAP (13 km) forecasts and to Weeks 1-4 of 32-day FIM-HYCOM (60 km) forecasts. Initial results reveal that improved cloud representation is needed at both resolutions and are now guiding further refinement of the cloud representation, including the Grell-Freitas scheme and the updated MYNN-EDMF scheme (both now also in global testing as well as in the 3-km HRRR and 13-km RAP models).

  11. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Schwing, Alan Michael

    For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservatism of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are light-weight enough to not require significant computational time and yield significant reductions in grid size.

  12. [Review of dynamic global vegetation models (DGVMs)].

    PubMed

    Che, Ming-Liang; Chen, Bao-Zhang; Wang, Ying; Guo, Xiang-Yun

    2014-01-01

    The dynamic global vegetation model (DGVM) is an important and efficient tool for studying terrestrial carbon cycle processes and vegetation dynamics. This paper reviewed the development history of DGVMs, introduced their basic structure, and outlined several widely used DGVMs, including CLM-DGVM, LPJ, IBIS and SEIB. The shortcomings in how current DGVMs describe dynamic vegetation mechanisms were identified, including the plant functional type (PFT) scheme, vegetation competition, disturbance, and phenology. Future research directions for DGVMs were then pointed out, i.e., improving the PFT scheme, refining the vegetation dynamics mechanisms, and implementing a model inter-comparison project.

  13. Study designs appropriate for the workplace.

    PubMed

    Hogue, C J

    1986-01-01

    Carlo and Hearn have called for "refinement of old [epidemiologic] methods and an ongoing evaluation of where methods fit in the overall scheme as we address the multiple complexities of reproductive hazard assessment." This review is an attempt to bring together the current state-of-the-art methods for problem definition and hypothesis testing available to the occupational epidemiologist. For problem definition, meta analysis can be utilized to narrow the field of potential causal hypotheses. Passive active surveillance may further refine issues for analytic research. Within analytic epidemiology, several methods may be appropriate for the workplace setting. Those discussed here may be used to estimate the risk ratio in either a fixed or dynamic population.

  14. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mignone, A.; Tzeferacos, P.; Zanni, C.

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
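
    For reference, a generalized Lagrange multiplier (GLM) cleaning step of the kind described above augments the induction equation with a scalar field \psi; in the standard Dedner-type formulation (the exact variant used in PLUTO may differ in detail),

        \partial_t \mathbf{B} + \nabla \cdot \left( \mathbf{v}\,\mathbf{B} - \mathbf{B}\,\mathbf{v} \right) + \nabla \psi = 0, \qquad
        \partial_t \psi + c_h^2 \, \nabla \cdot \mathbf{B} = -\frac{c_h^2}{c_p^2}\, \psi,

    so that divergence errors are transported at the hyperbolic speed c_h and damped on the parabolic time scale c_p^2 / c_h^2.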

  15. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Technical Reports Server (NTRS)

    Kallinderis, Y.

    1995-01-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  16. Overcoming numerical shockwave anomalies using energy balanced numerical schemes. Application to the Shallow Water Equations with discontinuous topography

    NASA Astrophysics Data System (ADS)

    Navas-Montilla, A.; Murillo, J.

    2017-07-01

    When designing a numerical scheme for the resolution of conservation laws, the selection of a particular source term discretization (STD) may seem irrelevant whenever it ensures convergence with mesh refinement, but it has a decisive impact on the solution. In the framework of the Shallow Water Equations (SWE), well-balanced STDs based on quiescent equilibrium are unable to converge to physically based solutions, which can be constructed considering energy arguments. Energy based discretizations can be designed assuming dissipation or conservation, but in any case, the STD procedure required should not be merely based on ad hoc approximations. The STD proposed in this work is derived from the Generalized Hugoniot Locus obtained from the Generalized Rankine Hugoniot conditions and the Integral Curve across the contact wave associated with the bed step. In any case, the STD must allow energy-dissipative solutions: steady and unsteady hydraulic jumps, for which some numerical anomalies have been documented in the literature. These anomalies are the incorrect positioning of steady jumps and the presence of a spurious spike of discharge inside the cell containing the jump. The former issue can be addressed by proposing a modification of the energy-conservative STD that ensures a correct dissipation rate across the hydraulic jump, whereas the latter is of greater complexity and cannot be fixed by simply choosing a suitable STD, as there are more variables involved. The spike of discharge, also known as the slowly-moving shock anomaly, is a well-known problem in the scientific community; it is produced by a nonlinearity of the Hugoniot locus connecting the states on both sides of the jump. However, it seems that this issue is more a feature than a problem when considering steady solutions of the SWE containing hydraulic jumps. The presence of the spurious spike in the discharge has been taken for granted and has become a feature of the solution. Even though it does not disturb the rest of the solution in steady cases, when considering transient cases it produces a very undesirable shedding of spurious oscillations downstream that should be circumvented. Based on spike-reducing techniques (originally designed for homogeneous Euler equations) that propose the construction of interpolated fluxes in the untrustworthy regions, we design a novel Roe-type scheme for the SWE with discontinuous topography that reduces the presence of the aforementioned spurious spike. The resulting spike-reducing method in combination with the proposed STD ensures an accurate positioning of steady jumps and provides convergence with mesh refinement, which was not possible for previous methods that cannot avoid the spike.
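
    For reference, the one-dimensional shallow water equations with bed elevation z(x) discussed above read, in standard form,

        \partial_t h + \partial_x (h u) = 0, \qquad
        \partial_t (h u) + \partial_x \left( h u^2 + \tfrac{1}{2} g h^2 \right) = -\, g\, h\, \partial_x z,

    where the geometric source term -g h \partial_x z is the object of the source term discretization (STD); in the energy-based framework the head E = z + h + u^2 / (2 g) is preserved across the bed step for smooth steady flow and must decrease across a hydraulic jump.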

  17. Experience in using a numerical scheme with artificial viscosity at solving the Riemann problem for a multi-fluid model of multiphase flow

    NASA Astrophysics Data System (ADS)

    Bulovich, S. V.; Smirnov, E. M.

    2018-05-01

    The paper covers application of the artificial viscosity technique to numerical simulation of unsteady one-dimensional multiphase compressible flows on the basis of the multi-fluid approach. The system of the governing equations is written under the assumption of pressure equilibrium between the "fluids" (phases). No interfacial exchange is taken into account. A model for evaluation of the artificial viscosity coefficient that (i) assumes identity of this coefficient for all interpenetrating phases and (ii) uses the multiphase-mixture Wood equation for evaluation of a scale speed of sound has been suggested. Performance of the artificial viscosity technique has been evaluated via numerical solution of a model problem of pressure discontinuity breakdown in a three-fluid medium. It has been shown that a relatively simple numerical scheme, explicit and first-order, combined with the suggested artificial viscosity model, predicts a physically correct behavior of the moving shock and expansion waves, and a subsequent refinement of the computational grid results in a monotonic approaching to an asymptotic time-dependent solution, without non-physical oscillations.
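
    The Wood sound speed of a pressure-equilibrium mixture, used above to scale the artificial viscosity, is straightforward to evaluate; the sketch below also includes a von Neumann-Richtmyer-style viscosity coefficient purely for illustration (the paper's single-coefficient model is not reproduced here).

        import numpy as np

        def wood_sound_speed(alphas, rhos, cs):
            """Wood's equation: 1/(rho_m * c_w^2) = sum_k alpha_k / (rho_k * c_k^2),
            with rho_m = sum_k alpha_k * rho_k (volume fractions alpha_k sum to 1)."""
            alphas, rhos, cs = map(np.asarray, (alphas, rhos, cs))
            rho_m = np.sum(alphas * rhos)
            compressibility = np.sum(alphas / (rhos * cs ** 2))
            return float(np.sqrt(1.0 / (rho_m * compressibility)))

        def artificial_viscosity(rho_m, c_w, dx, du_dx, c_lin=0.5, c_quad=1.0):
            """Illustrative von Neumann-Richtmyer form, active only in compression."""
            if du_dx >= 0.0:
                return 0.0
            du = dx * du_dx
            return rho_m * (c_lin * c_w * abs(du) + c_quad * du * du)

        # classic check: 1% air by volume in water drops the mixture sound speed to ~120 m/s
        print(wood_sound_speed([0.01, 0.99], [1.2, 1000.0], [340.0, 1500.0]))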

  18. A scheme for racquet sports video analysis with the combination of audio-visual information

    NASA Astrophysics Data System (ADS)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

    As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols including impact (ball hit), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Then, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels including rally scenes and break scenes. Third, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  19. 3D Indoor Positioning of UAVs with Spread Spectrum Ultrasound and Time-of-Flight Cameras

    PubMed Central

    Aguilera, Teodoro

    2017-01-01

    This work proposes the use of a hybrid acoustic and optical indoor positioning system for the accurate 3D positioning of Unmanned Aerial Vehicles (UAVs). The acoustic module of this system is based on a Time-Code Division Multiple Access (T-CDMA) scheme, where the sequential emission of five spread spectrum ultrasonic codes is performed to compute the horizontal vehicle position following a 2D multilateration procedure. The optical module is based on a Time-Of-Flight (TOF) camera that provides an initial estimation for the vehicle height. A recursive algorithm programmed on an external computer is then proposed to refine the estimated position. Experimental results show that the proposed system can increase the accuracy of a solely acoustic system by 70–80% in terms of positioning mean square error. PMID:29301211
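
    The horizontal 2D multilateration step can be written as a linearized least-squares problem; the beacon layout and the noise-free ranges below are illustrative only.

        import numpy as np

        def multilaterate_2d(beacons, ranges):
            """Subtract the first range equation from the others to remove the
            quadratic terms, then solve the resulting linear system A [x, y]^T = b."""
            P = np.asarray(beacons, dtype=float)
            r = np.asarray(ranges, dtype=float)
            A = 2.0 * (P[1:] - P[0])
            b = (r[0] ** 2 - r[1:] ** 2
                 + np.sum(P[1:] ** 2, axis=1) - np.sum(P[0] ** 2))
            xy, *_ = np.linalg.lstsq(A, b, rcond=None)
            return xy

        beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]   # assumed emitter layout (m)
        true_pos = np.array([2.0, 3.0])
        ranges = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
        print(multilaterate_2d(beacons, ranges))                     # ~[2. 3.]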

  20. Effectiveness of vegetation-based biodiversity offset metrics as surrogates for ants.

    PubMed

    Hanford, Jayne K; Crowther, Mathew S; Hochuli, Dieter F

    2017-02-01

    Biodiversity offset schemes are globally popular policy tools for balancing the competing demands of conservation and development. Trading currencies for losses and gains in biodiversity value at development and credit sites are usually based on several vegetation attributes combined to yield a simple score (multimetric), but the score is rarely validated prior to implementation. Inaccurate biodiversity trading currencies are likely to accelerate global biodiversity loss through unrepresentative trades of losses and gains. We tested a model vegetation multimetric (i.e., vegetation structural and compositional attributes) typical of offset trading currencies to determine whether it represented measurable components of compositional and functional biodiversity. Study sites were located in remnant patches of a critically endangered ecological community in western Sydney, Australia, an area representative of global conflicts between conservation and expanding urban development. We sampled ant fauna composition with pitfall traps and enumerated removal by ants of native plant seeds from artificial seed containers (seed depots). Ants are an excellent model taxon because they are strongly associated with habitat complexity, respond rapidly to environmental change, and are functionally important at many trophic levels. The vegetation multimetric did not predict differences in ant community composition or seed removal, despite underlying assumptions that biodiversity trading currencies used in offset schemes represent all components of a site's biodiversity value. This suggests that vegetation multimetrics are inadequate surrogates for total biodiversity value. These findings highlight the urgent need to refine existing offsetting multimetrics to ensure they meet underlying assumptions of surrogacy. Despite the best intentions, offset schemes will never achieve their goal of no net loss of biodiversity values if trades are based on metrics unrepresentative of total biodiversity. © 2016 Society for Conservation Biology.

  1. Development of an Acoustic Localization Method for Cavitation Experiments in Reverberant Environments

    NASA Astrophysics Data System (ADS)

    Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian

    2011-11-01

    Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory scale experiments is essential for proper design that will minimize this undesired phenomenon. Cavitation onset is determined more accurately acoustically than visually. However, if other parts of the model begin to cavitate prior to the component of interest, the acoustic data are contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims at developing an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are being induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located with the use of acoustic data collected with hydrophones and analyzed using signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with the use of a high speed camera. Once refined, the localization scheme will be tested in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).

  2. APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS

    NASA Astrophysics Data System (ADS)

    Mehran, Babak; Nakamura, Hideki

    Evaluation of impacts of congestion improvement schemes on travel time reliability is very significant for road authorities since travel time reliability represents operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes based on travel time variation modeling as a function of demand, capacity, weather conditions and road accidents. For subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five minute interval by applying Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole year analysis is performed by comparing demand and available capacity for each scenario and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways and buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
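
    The core of the reliability evaluation loop can be sketched as follows; the demand/capacity distributions, the volume-delay function, and the buffer-time-index definition ((95th percentile - mean) / mean travel time) are placeholders for the refined models described in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def travel_time(demand, capacity, free_flow_tt=10.0, alpha=0.15, beta=4.0):
            # BPR-style volume-delay placeholder for the refined speed-flow relationship
            return free_flow_tt * (1.0 + alpha * (demand / capacity) ** beta)

        def buffer_time_index(n_intervals=105120):       # five-minute intervals in a year
            demand = rng.normal(1500.0, 400.0, n_intervals).clip(min=0.0)      # veh/h, assumed
            capacity = rng.normal(2000.0, 150.0, n_intervals).clip(min=500.0)  # veh/h, assumed
            tt = travel_time(demand, capacity)
            return (np.percentile(tt, 95) - tt.mean()) / tt.mean()

        print(buffer_time_index())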

  3. Image-adaptive and robust digital wavelet-domain watermarking for images

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Zhang, Liping

    2018-03-01

    We propose a new frequency-domain, wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier solution representation of the image and odd-even quantization for embedding/extracting the watermark. Because many complementary watermarks need to be hidden, the designed watermark image is image-adaptive. The meaningful and complementary watermark images were embedded into the original (host) image by odd-even quantization of selected coefficients; these were taken from the detail wavelet coefficients of the original image whose magnitudes exceed their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against best-known attacks such as noise addition, image compression, median filtering, and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
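
    Odd-even quantization of a coefficient is a form of quantization index modulation; a minimal embed/extract pair is sketched below (in practice the quantization step would be tied to the JND threshold of the coefficient, which is not modeled here).

        import numpy as np

        def embed_bit(coeff, bit, step):
            """Force the quantization index of the coefficient to be even for bit 0
            and odd for bit 1 (odd-even quantization)."""
            q = int(np.round(coeff / step))
            if q % 2 != bit:
                q += 1 if coeff >= q * step else -1
            return q * step

        def extract_bit(coeff, step):
            return int(np.round(coeff / step)) % 2

        marked = embed_bit(13.7, bit=1, step=2.0)
        print(marked, extract_bit(marked, step=2.0))   # 14.0 (odd index 7) -> bit 1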

  4. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.

  5. A template-based approach for parallel hexahedral two-refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.

  6. MEDICINAL CANNABIS LAW REFORM: LESSONS FROM CANADIAN LITIGATION.

    PubMed

    Freckelton, Ian

    2015-06-01

    This editorial reviews medicinal cannabis litigation in Canada's superior courts between 1998 and 2015. It reflects upon the outcomes of the decisions and the reasoning within them. It identifies the issues that have driven Canada's jurisprudence in relation to access to medicinal cannabis, particularly insofar as it has engaged patients' rights to liberty and security of the person. It argues that the sequence of medicinal schemes adopted and refined in Canada provides constructive guidance for countries such as Australia which are contemplating introduction of medicinal cannabis as a therapeutic option in compassionate circumstances for patients. In particular, it contends that Canada's experience suggests that schemes introduced in a gradualist way, enabling informed involvement by medical practitioners and pharmacists and providing safe and inexpensive access to forms of medicinal cannabis that are clearly distinguished from recreational use and unlikely to be diverted criminally, maximise the chances of acceptance by key stakeholders.

  7. Development of a novel coding scheme (SABICS) to record nurse-child interactive behaviours in a community dental preventive intervention.

    PubMed

    Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry

    2012-08-01

    To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video-recorded interactions and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme on specialised behavioural coding software. Reliability was calculated using Cohen's Kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded through administering the scheme on The Observer XT8.0 system. Two visualization results of interaction patterns demonstrated the scheme's capability of capturing complex interaction processes. Cohen's Kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours, such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027), predicted a child receiving the intervention. The SABICS is a unique system to record interactions between dental nurses and 3-5-year-old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings. Its development procedure may be helpful for other similar coding scheme development. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. Numerical viscosity and resolution of high-order weighted essentially nonoscillatory schemes for compressible flows with high Reynolds numbers.

    PubMed

    Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye

    2003-10-01

    A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.

  9. New French Coverage with Evidence Development for Innovative Medical Devices: Improvements and Unresolved Issues.

    PubMed

    Martelli, Nicolas; van den Brink, Hélène; Borget, Isabelle

    2016-01-01

    We describe here recent modifications to the French Coverage with Evidence Development (CED) scheme for innovative medical devices. CED can be defined as temporary coverage for a novel health product during collection of the additional evidence required to determine whether definitive coverage is possible. The principal refinements to the scheme include a more precise definition of what may be considered an innovative product, the possibility for device manufacturers to request CED either independently or in partnership with hospitals, and the establishment of processing deadlines for health authorities. In the long term, these modifications may increase the number of applications to the CED scheme, which could lead to unsustainable funding for future projects. It will also be necessary to ensure that the study conditions required by national health authorities are suitable for medical devices and that processing deadlines are met for the scheme to be fully operational. Overall, the modifications recently applied to the French CED scheme for innovative medical devices should increase the transparency of the process and therefore make it more appealing to medical device manufacturers. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  10. Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning.

    PubMed

    Jeong, Han-You; Nguyen, Hoa-Hung; Bhawiyuga, Adhitya

    2018-04-04

    Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning.
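
    The greedy minimum-weight matching between local and remote estimates can be sketched as below; the dissimilarity (planar distance plus a weighted time offset) and the gating threshold are simplified stand-ins for the paper's spatiotemporal metric.

        import numpy as np

        def greedy_min_weight_matching(local, remote, gate=5.0, w_t=1.0):
            """Each estimate is (x, y, t).  Enumerate admissible pairs, sort by
            dissimilarity, and greedily accept pairs whose endpoints are unused."""
            pairs = []
            for i, a in enumerate(local):
                for j, b in enumerate(remote):
                    d = np.hypot(a[0] - b[0], a[1] - b[1]) + w_t * abs(a[2] - b[2])
                    if d <= gate:
                        pairs.append((d, i, j))
            pairs.sort()
            used_i, used_j, matching = set(), set(), []
            for d, i, j in pairs:
                if i not in used_i and j not in used_j:
                    matching.append((i, j))
                    used_i.add(i)
                    used_j.add(j)
            return matching

    Consistent with the scaling quoted above, averaging the refined position over a matching of size m would reduce its uncertainty roughly in proportion to 1/sqrt(m).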

  11. Study of the Characteristics of Elementary Processes in a Chain Hydrogen Burning Reaction in Oxygen

    NASA Astrophysics Data System (ADS)

    Bychkov, M. E.; Petrushevich, Yu. V.; Starostin, A. N.

    2017-12-01

    The characteristics of possible chain explosive hydrogen burning reactions in an oxidizing medium are calculated on the potential energy surface. Specifically, the reactions H2 + O2 → H2O + O, H2 + O2 → HO2 + H, and H2 + O2 → OH + OH are considered. Special attention is devoted to the production of a pair of fast, highly reactive OH radicals. Because of the high activation threshold, this reaction is often excluded from the known kinetic scheme of hydrogen burning. However, a spread in estimates of kinetic characteristics and a disagreement between theoretical predictions and experimental results suggest that the kinetic scheme should be refined.

  12. A parallel finite-difference method for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.

  13. Accurate solutions for transonic viscous flow over finite wings

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.

    1986-01-01

    An explicit multistage Runge-Kutta type time-stepping scheme is used for solving the three-dimensional, compressible, thin-layer Navier-Stokes equations. A finite-volume formulation is employed to facilitate treatment of complex grid topologies encountered in three-dimensional calculations. Convergence to steady state is expedited through usage of acceleration techniques. Further numerical efficiency is achieved through vectorization of the computer code. The accuracy of the overall scheme is evaluated by comparing the computed solutions with the experimental data for a finite wing under different test conditions in the transonic regime. A grid refinement study is conducted to estimate the grid requirements for adequate resolution of salient features of such flows.

  14. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  15. Adaptive computational methods for aerothermal heating analysis

    NASA Technical Reports Server (NTRS)

    Price, John M.; Oden, J. Tinsley

    1988-01-01

    The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.

  16. From the History of Maths to the History of Greatness: Towards Worthwhile Cross-Curricular Study through the Refinement of a Scheme of Work

    ERIC Educational Resources Information Center

    Fletcher-Wood, Harry

    2016-01-01

    Readers of "Teaching History" will be familiar with the benefits and difficulties of cross-curricular planning, and the pages of this journal have often carried analysis of successful collaborations with the English department, or music, or geography. Harry Fletcher-Wood describes in this article a collaboration involving maths,…

  17. Wavelet-enabled progressive data Access and Storage Protocol (WASP)

    NASA Astrophysics Data System (ADS)

    Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.

    2015-12-01

    Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.
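
    The coarse-to-fine idea can be illustrated with an off-the-shelf wavelet library (PyWavelets here, purely for illustration; WASP's own transform, file layout, and API are not shown): reconstruct first from the approximation coefficients alone, then progressively re-add detail levels.

        import numpy as np
        import pywt  # PyWavelets, used only to illustrate the idea

        def progressive_reconstructions(field, wavelet="bior3.3", levels=3):
            """Decompose a 2-D field, then reconstruct it at progressively finer
            detail by re-adding detail coefficients one level at a time."""
            coeffs = pywt.wavedec2(field, wavelet, level=levels)
            out = []
            for keep in range(levels + 1):        # 0 = approximation only
                c = [coeffs[0]]
                for k, det in enumerate(coeffs[1:], start=1):
                    c.append(det if k <= keep
                             else tuple(np.zeros_like(d) for d in det))
                out.append(pywt.waverec2(c, wavelet))
            return out

        field = np.random.default_rng(0).random((128, 128))
        for level, rec in enumerate(progressive_reconstructions(field)):
            print(level, float(np.abs(field - rec).max()))   # error shrinks as detail is added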

  18. [Disease management programs from a health insurer's point of view].

    PubMed

    Szymkowiak, Christof; Walkenhorst, Karen; Straub, Christoph

    2003-06-01

    Disease Management Programmes represent a great challenge to the German statutory health insurance system. According to politicians, disease management programmes are an appropriate tool for increasing the level of care for chronically ill patients significantly, while at the same time they can slow down the cost explosion in health care. The statutory health insurers' point of view yields a more refined picture of the chances and risks involved. The chances are that a medical guideline-based, evidence-based, co-operative care of the chronically ill could be established. But also, there are the risks of misuse of disease management programmes and of misallocation of funds due to the ill-advised linkage with the so-called risk compensation scheme (RSA) balancing the sickness funds' structural deficits through redistribution. The nation-wide introduction of disease management programmes appears to be a gigantic experiment whose aim is to change the care of chronically ill patients and whose outcome is unpredictable.

  19. Lattice Boltzmann and Navier-Stokes Cartesian CFD Approaches for Airframe Noise Predictions

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Kocheemoolayil, Joseph G.; Kiris, Cetin C.

    2017-01-01

    Lattice Boltzmann (LB) and compressible Navier-Stokes (NS) equations based computational fluid dynamics (CFD) approaches are compared for simulating airframe noise. Both LB and NS CFD approaches are implemented within the Launch Ascent and Vehicle Aerodynamics (LAVA) framework. Both schemes utilize the same underlying Cartesian structured mesh paradigm with provision for local adaptive grid refinement and sub-cycling in time. In this study we choose a prototypical massively separated, wake-dominated flow ideally suited for Cartesian-grid-based approaches: the partially-dressed, cavity-closed nose landing gear (PDCC-NLG) noise problem from AIAA's Benchmark problems for Airframe Noise Computations (BANC) series of workshops. The relative accuracy and computational efficiency of the two approaches are systematically compared. Detailed comments are made on the potential held by LB to significantly reduce time-to-solution for a desired level of accuracy within the context of modeling airframe noise from first principles.

  20. Automating the expert consensus paradigm for robust lung tissue classification

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single elusive disease-differentiating metric; this undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of probability density function (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns: 5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts. The aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.
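
    The following sketch loosely illustrates the cluster-ensemble aggregation idea described above, not the published lung-CT workflow: several base partitions built with different dissimilarity metrics are fused through a co-association matrix. All array names, sizes, and parameter values are hypothetical.

    ```python
    # Illustrative cluster-ensemble aggregation via a co-association matrix.
    # Generic sketch only; not the published lung-CT workflow.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(0)
    features = rng.random((60, 16))        # 60 hypothetical VOI feature vectors
    n_clusters = 5                         # number of tissue patterns sought

    # Several base partitions, each using a different dissimilarity metric
    # (standing in for the collection of pdf-based similarity metrics).
    metrics = ["euclidean", "cityblock", "cosine", "correlation"]
    partitions = []
    for m in metrics:
        d = pdist(features, metric=m)
        labels = fcluster(linkage(d, method="average"), t=n_clusters, criterion="maxclust")
        partitions.append(labels)

    # Co-association matrix: fraction of partitions in which two samples co-cluster.
    n = features.shape[0]
    coassoc = np.zeros((n, n))
    for labels in partitions:
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(partitions)

    # Final "super cluster" partition from the consensus (1 - co-association) distances.
    consensus = fcluster(linkage(squareform(1.0 - coassoc, checks=False), method="average"),
                         t=n_clusters, criterion="maxclust")
    print(np.bincount(consensus)[1:])      # sizes of the aggregated clusters
    ```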

  1. Goal-based h-adaptivity of the 1-D diamond difference discrete ordinate method

    NASA Astrophysics Data System (ADS)

    Jeffers, R. S.; Kópházi, J.; Eaton, M. D.; Févotte, F.; Hülsemann, F.; Ragusa, J.

    2017-04-01

    The quantity of interest (QoI) associated with a solution of a partial differential equation (PDE) is not, in general, the solution itself, but a functional of the solution. Dual weighted residual (DWR) error estimators are one way of providing an estimate of the error in the QoI resulting from the discretisation of the PDE. This paper aims to provide an estimate of the error in the QoI due to the spatial discretisation, where the discretisation scheme being used is the diamond difference (DD) method in space and the discrete ordinate (SN) method in angle. The QoIs are reaction rates in detectors for 1-D fixed-source neutron transport problems and the value of the eigenvalue (Keff) for eigenvalue (Keff criticality) problems, respectively. Local values of the DWR over individual cells are used as error indicators for goal-based mesh refinement, which aims to give an optimal mesh for a given QoI.
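
    As a generic, hedged illustration of goal-based h-refinement driven by per-cell error indicators (not the paper's DD/SN implementation), the fragment below bisects the 1-D cells whose indicator lies within a chosen fraction of the maximum; the indicator values and refinement fraction are invented.

    ```python
    # Generic goal-based h-refinement sketch driven by per-cell error indicators.
    # The indicator array and refinement fraction below are hypothetical.
    import numpy as np

    def refine_mesh(edges, indicator, fraction=0.3):
        """Split every cell whose |indicator| lies within `fraction` of the
        maximum indicator; return the new cell-edge coordinates."""
        indicator = np.abs(indicator)
        flag = indicator >= (1.0 - fraction) * indicator.max()
        new_edges = [edges[0]]
        for i, split in enumerate(flag):
            if split:                            # insert a midpoint: one cell -> two
                new_edges.append(0.5 * (edges[i] + edges[i + 1]))
            new_edges.append(edges[i + 1])
        return np.array(new_edges)

    edges = np.linspace(0.0, 10.0, 11)           # 10 uniform 1-D cells
    eta = np.exp(-((0.5 * (edges[:-1] + edges[1:]) - 7.0) ** 2))  # mock DWR values
    print(refine_mesh(edges, eta))               # cells near x = 7 are bisected
    ```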

  2. An improved principal component analysis based region matching method for fringe direction estimation

    NASA Astrophysics Data System (ADS)

    He, A.; Quan, C.

    2018-04-01

    The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and the algorithm for conversion of orientation to direction in mask areas is computationally heavy and non-optimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme and a fast, optimized orientation-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used in the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used to refine the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.
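
    The sketch below shows only the basic PCA-on-gradients ingredient of such orientation estimators, not the authors' improved region-matching algorithm: the dominant direction of intensity variation in a local window is taken from the principal eigenvector of the gradient scatter matrix. The window size and the synthetic test pattern are assumptions.

    ```python
    # Estimate local fringe orientation via PCA of intensity gradients in a window.
    # Generic illustration only; window size and test pattern are arbitrary choices.
    import numpy as np

    def local_orientation(img, y, x, half=7):
        """Return the orientation (radians, modulo pi) of the dominant intensity
        variation in a (2*half+1)^2 window, i.e. the principal gradient direction."""
        win = img[y - half:y + half + 1, x - half:x + half + 1]
        gy, gx = np.gradient(win.astype(float))
        g = np.stack([gx.ravel(), gy.ravel()], axis=1)
        cov = g.T @ g                              # 2x2 scatter matrix of gradients
        vals, vecs = np.linalg.eigh(cov)
        principal = vecs[:, np.argmax(vals)]       # direction of strongest variation
        return np.arctan2(principal[1], principal[0]) % np.pi

    # Synthetic vertical fringes: the variation is along x, so the angle is ~0.
    yy, xx = np.mgrid[0:64, 0:64]
    fringes = np.cos(2 * np.pi * xx / 8.0)
    print(local_orientation(fringes, 32, 32))
    ```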

  3. A Godunov-like point-centered essentially Lagrangian hydrodynamic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Nathaniel R.; Waltz, Jacob I.; Burton, Donald E.

    We present an essentially Lagrangian hydrodynamic scheme suitable for modeling complex compressible flows on tetrahedron meshes. The scheme reduces to a purely Lagrangian approach when the flow is linear or if the mesh size is equal to zero; as a result, we use the term essentially Lagrangian for the proposed approach. The motivation for developing a hydrodynamic method for tetrahedron meshes is that tetrahedron meshes have some advantages over other mesh topologies. Notable advantages include reduced complexity in generating conformal meshes, reduced complexity in mesh reconnection, and preserving tetrahedron cells with automatic mesh refinement. A challenge, however, is that tetrahedron meshes do not correctly deform with a lower order (i.e. piecewise constant) staggered-grid hydrodynamic scheme (SGH) or with a cell-centered hydrodynamic (CCH) scheme. The SGH and CCH approaches calculate the strain via the tetrahedron, which can cause artificial stiffness on large deformation problems. To resolve the stiffness problem, we adopt the point-centered hydrodynamic approach (PCH) and calculate the evolution of the flow via an integration path around the node. The PCH approach stores the conserved variables (mass, momentum, and total energy) at the node. The evolution equations for momentum and total energy are discretized using an edge-based finite element (FE) approach with linear basis functions. A multidirectional Riemann-like problem is introduced at the center of the tetrahedron to account for discontinuities in the flow such as a shock. Conservation is enforced at each tetrahedron center. The multidimensional Riemann-like problem used here is based on Lagrangian CCH work [8, 19, 37, 38, 44] and recent Lagrangian SGH work [33-35, 39, 45]. In addition, an approximate 1D Riemann problem is solved on each face of the nodal control volume to advect mass, momentum, and total energy. The 1D Riemann problem produces fluxes [18] that remove a volume error in the PCH discretization. A 2-stage Runge–Kutta method is used to evolve the solution in time. The details of the new hydrodynamic scheme are discussed; likewise, results from numerical test problems are presented.
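
    Schemes of this kind rely on approximate Riemann solvers at control-volume faces. As a generic illustration only (not the PCH discretization), the following is a standard one-dimensional HLL flux for the Euler equations with simple Davis wave-speed estimates; the variable names and test states are conventional assumptions.

    ```python
    # Standard 1-D HLL approximate Riemann flux for the Euler equations.
    # Generic textbook construction, shown only to illustrate the kind of face flux
    # a Godunov-like scheme uses to advect mass, momentum, and total energy.
    import numpy as np

    GAMMA = 1.4

    def euler_flux(rho, u, p):
        E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
        return np.array([rho * u, rho * u * u + p, u * (E + p)])

    def hll_flux(left, right):
        """left/right = (rho, u, p) primitive states; returns the HLL numerical flux."""
        rl, ul, pl = left
        rr, ur, pr = right
        cl, cr = np.sqrt(GAMMA * pl / rl), np.sqrt(GAMMA * pr / rr)
        sl = min(ul - cl, ur - cr)              # simple Davis wave-speed estimates
        sr = max(ul + cl, ur + cr)
        Fl, Fr = euler_flux(rl, ul, pl), euler_flux(rr, ur, pr)
        Ul = np.array([rl, rl * ul, pl / (GAMMA - 1.0) + 0.5 * rl * ul * ul])
        Ur = np.array([rr, rr * ur, pr / (GAMMA - 1.0) + 0.5 * rr * ur * ur])
        if sl >= 0.0:
            return Fl
        if sr <= 0.0:
            return Fr
        return (sr * Fl - sl * Fr + sl * sr * (Ur - Ul)) / (sr - sl)

    # Sod shock-tube face states as a quick sanity check.
    print(hll_flux((1.0, 0.0, 1.0), (0.125, 0.0, 0.1)))
    ```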

  4. A Godunov-like point-centered essentially Lagrangian hydrodynamic approach

    DOE PAGES

    Morgan, Nathaniel R.; Waltz, Jacob I.; Burton, Donald E.; ...

    2014-10-28

    We present an essentially Lagrangian hydrodynamic scheme suitable for modeling complex compressible flows on tetrahedron meshes. The scheme reduces to a purely Lagrangian approach when the flow is linear or if the mesh size is equal to zero; as a result, we use the term essentially Lagrangian for the proposed approach. The motivation for developing a hydrodynamic method for tetrahedron meshes is that tetrahedron meshes have some advantages over other mesh topologies. Notable advantages include reduced complexity in generating conformal meshes, reduced complexity in mesh reconnection, and preserving tetrahedron cells with automatic mesh refinement. A challenge, however, is that tetrahedron meshes do not correctly deform with a lower order (i.e. piecewise constant) staggered-grid hydrodynamic scheme (SGH) or with a cell-centered hydrodynamic (CCH) scheme. The SGH and CCH approaches calculate the strain via the tetrahedron, which can cause artificial stiffness on large deformation problems. To resolve the stiffness problem, we adopt the point-centered hydrodynamic approach (PCH) and calculate the evolution of the flow via an integration path around the node. The PCH approach stores the conserved variables (mass, momentum, and total energy) at the node. The evolution equations for momentum and total energy are discretized using an edge-based finite element (FE) approach with linear basis functions. A multidirectional Riemann-like problem is introduced at the center of the tetrahedron to account for discontinuities in the flow such as a shock. Conservation is enforced at each tetrahedron center. The multidimensional Riemann-like problem used here is based on Lagrangian CCH work [8, 19, 37, 38, 44] and recent Lagrangian SGH work [33-35, 39, 45]. In addition, an approximate 1D Riemann problem is solved on each face of the nodal control volume to advect mass, momentum, and total energy. The 1D Riemann problem produces fluxes [18] that remove a volume error in the PCH discretization. A 2-stage Runge–Kutta method is used to evolve the solution in time. The details of the new hydrodynamic scheme are discussed; likewise, results from numerical test problems are presented.

  5. Adaptive mesh fluid simulations on GPU

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Abel, Tom; Kaehler, Ralf

    2010-10-01

    We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
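
    The following plain-NumPy sketch illustrates two of the building blocks named above, piecewise-linear (minmod-limited) reconstruction and a second-order TVD Runge-Kutta step, applied to 1-D linear advection; it is not the CUDA implementation, and the grid size, CFL number, and test profile are arbitrary assumptions.

    ```python
    # Plain-NumPy sketch of minmod-limited piecewise-linear reconstruction and a
    # second-order TVD Runge-Kutta step, applied to 1-D linear advection.
    # Illustrative only; the paper's solver is a CUDA implementation.
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def rhs(u, dx, a=1.0):
        """Semi-discrete RHS: upwind flux of the MUSCL-reconstructed face states."""
        slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
        u_right_face = u + 0.5 * slope             # reconstructed value at the right face
        flux = a * u_right_face                    # upwind flux for a > 0
        return -(flux - np.roll(flux, 1)) / dx

    def tvd_rk2_step(u, dt, dx):
        u1 = u + dt * rhs(u, dx)
        return 0.5 * (u + u1 + dt * rhs(u1, dx))

    n, dx = 200, 1.0 / 200
    x = (np.arange(n) + 0.5) * dx
    u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # square pulse, periodic domain
    dt = 0.4 * dx
    for _ in range(int(0.25 / dt)):
        u = tvd_rk2_step(u, dt, dx)
    print(u.min(), u.max())                         # remains bounded in [0, 1]
    ```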

  6. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aims at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical-problem dependent. To minimize the tuning of parameters and physical-problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these sensors are scheme-independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.
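
    In the spirit of (but much simpler than) the wavelet sensors described above, the sketch below flags non-smooth regions from interpolation residuals, i.e. detail coefficients, so that extra numerical dissipation could be confined to the flagged cells; the threshold and test profile are assumptions.

    ```python
    # A bare-bones multiresolution smoothness sensor: interpolation residuals
    # (detail coefficients) are large only where the data is non-smooth, so they
    # can gate where extra numerical dissipation is applied.  Only in the spirit
    # of the wavelet sensors described above, not the ACM/wavelet scheme itself.
    import numpy as np

    def detail_coefficients(u):
        """Residual of predicting each point from the average of its neighbours."""
        return u - 0.5 * (np.roll(u, 1) + np.roll(u, -1))

    def dissipation_flag(u, rel_threshold=0.1):
        d = np.abs(detail_coefficients(u))
        return d > rel_threshold * d.max()

    x = np.linspace(0.0, 1.0, 400, endpoint=False)
    u = np.sin(2 * np.pi * x)
    u[(x > 0.4) & (x < 0.6)] += 1.0        # two jumps: the only non-smooth features
    flag = dissipation_flag(u)
    print(x[flag])                         # flagged points cluster at the two jumps
    ```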

  7. Properties of Base Stocks Obtained from Used Engine Oils by Acid/Clay Re-refining (Proprietes des Stocks de Base Obtenus par Regeneration des Huiles a Moteur Usees par le Procede de Traitement a l’Acide et a la Terre),

    DTIC Science & Technology

    1980-09-01

    Report front matter and table-of-contents fragments only: National Research Council Canada, Mechanical Engineering Report MP-75, NRC No. 18719. Listed contents include physical test data for base stocks obtained from used engine oils by the acid/clay re-refining process.

  8. Improved predictability of droughts over southern Africa using the standardized precipitation evapotranspiration index and ENSO

    NASA Astrophysics Data System (ADS)

    Manatsa, Desmond; Mushore, Terrence; Lenouo, Andre

    2017-01-01

    The provision of timely and reliable climate information on which to base management decisions remains a critical component in drought planning for southern Africa. In this observational study, we have not only proposed a forecasting scheme which caters for timeliness and reliability but also improved the relevance of the climate information by using a novel drought index called the standardised precipitation evapotranspiration index (SPEI), instead of the traditional precipitation-only based index, the standardised precipitation index (SPI). The SPEI, which includes temperature and other climatic factors in its construction, has a more robust connection to ENSO than the SPI. Consequently, the developed ENSO-SPEI prediction scheme can provide quantitative information about the spatial extent and severity of predicted drought conditions in a way that reflects more closely the level of risk in the global-warming context of the subregion. However, it is established that the significant regional ENSO impact is restricted to the period December-March, implying a need to revisit the traditional ENSO-based forecast scheme, which essentially divides the rainfall season into the two periods October to December and January to March. Although the prediction of ENSO events has improved with the refinement of numerical models, this work has demonstrated that the prediction of drought impacts related to ENSO is also a reality based only on observations. A large temporal lag is observed between the development of ENSO phenomena (typically in May of the previous year) and the identification of regional SPEI-defined drought conditions. It has been shown that using the Southern Africa Regional Climate Outlook Forum's (SARCOF) traditional 3-month averaged Nino 3.4 SST index (June to August) as a predictor does not have an added advantage over using only the May SST index values. In this regard, the extended lead time and improved skill demonstrated in this study could immensely benefit regional decision makers.

  9. Structure refinement of membrane proteins via molecular dynamics simulations.

    PubMed

    Dutagaci, Bercem; Heo, Lim; Feig, Michael

    2018-07-01

    A refinement protocol based on physics-based techniques established for water-soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and in aqueous solvent, but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.

  10. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Performance analysis was begun on the Ada implementations. The goal is to supply the system designer with tools that will allow a rational decision to be made about whether a particular implementation can support a given application early in the design cycle. Primary activities were: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; preparation and presentation of a paper at the 1987 Washington DC Ada Symposium; development of a refined approach to recovery that is presently being applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.

  11. Self-learning fuzzy controllers based on temporal back propagation

    NASA Technical Reports Server (NTRS)

    Jang, Jyh-Shing R.

    1992-01-01

    This paper presents a generalized control strategy that enhances fuzzy controllers with self-learning capability for achieving prescribed control objectives in a near-optimal manner. This methodology, termed temporal back propagation, is model-insensitive in the sense that it can deal with plants that can be represented in a piecewise-differentiable format, such as difference equations, neural networks, GMDH structures, and fuzzy models. Regardless of the numbers of inputs and outputs of the plants under consideration, the proposed approach can either refine the fuzzy if-then rules obtained from human experts or automatically derive the fuzzy if-then rules if human experts are not available. The inverted pendulum system is employed as a test-bed to demonstrate the effectiveness of the proposed control scheme and the robustness of the acquired fuzzy controller.

  12. Dictionary Indexing of Electron Channeling Patterns.

    PubMed

    Singh, Saransh; De Graef, Marc

    2017-02-01

    The dictionary-based approach to the indexing of diffraction patterns is applied to electron channeling patterns (ECPs). The main ingredients of the dictionary method are introduced, including the generalized forward projector (GFP), the relevant detector model, and a scheme to uniformly sample orientation space using the "cubochoric" representation. The GFP is used to compute an ECP "master" pattern. Derivative-free optimization algorithms, including the Nelder-Mead simplex and the bound optimization by quadratic approximation, are used to determine the correct detector parameters and to refine the orientation obtained from the dictionary approach. The indexing method is applied to poly-silicon and shows excellent agreement with the calibrated values. Finally, it is shown that the method results in a mean disorientation error of 1.0° with 0.5° SD for a range of detector parameters.
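
    The toy fragment below mirrors only the two-step structure described above, a coarse dictionary lookup by normalized dot products followed by derivative-free Nelder-Mead refinement through SciPy; the one-parameter "pattern" model stands in for the ECP forward projector and is entirely hypothetical.

    ```python
    # Toy illustration of dictionary indexing followed by Nelder-Mead refinement.
    # The 1-parameter pattern model is a stand-in for the ECP forward projector.
    import numpy as np
    from scipy.optimize import minimize

    def simulate_pattern(angle, n=64):
        """Hypothetical forward model: a pattern depending on one orientation angle."""
        x = np.linspace(0.0, 2.0 * np.pi, n)
        return np.cos(3.0 * x + angle)

    # Build a coarse dictionary of precomputed, normalized patterns.
    dict_angles = np.linspace(0.0, np.pi, 32, endpoint=False)
    dictionary = np.array([simulate_pattern(a) for a in dict_angles])
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

    # "Experimental" pattern at an angle not present in the dictionary.
    true_angle = 0.6181
    experiment = simulate_pattern(true_angle)
    experiment /= np.linalg.norm(experiment)

    # Step 1: dictionary indexing by maximum normalized dot product.
    best = int(np.argmax(dictionary @ experiment))
    coarse_angle = dict_angles[best]

    # Step 2: derivative-free refinement of the coarse orientation.
    def misfit(params):
        p = simulate_pattern(params[0])
        return -np.dot(p / np.linalg.norm(p), experiment)

    result = minimize(misfit, x0=[coarse_angle], method="Nelder-Mead")
    print(coarse_angle, result.x[0], true_angle)
    ```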

  13. Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, Andrew C.

    1991-01-01

    The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation, and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds application in image database browsing, teleconferencing, medical imaging, and other applications. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color-mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color-mapped PT scheme is developed to evaluate its performance. Results of simulation using several test images are presented.
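
    A minimal sketch of the general progressive-transmission idea for index (color-mapped) images, not the coder developed in the thesis: a heavily subsampled version of the index array is transmitted first, and successively denser passes refine it. The array sizes and pass schedule are invented.

    ```python
    # Minimal sketch of progressive transmission for a colour-mapped (index) image:
    # a coarse, subsampled version of the index array is "sent" first and refined
    # in later passes.  Purely illustrative; not the coder developed in the thesis.
    import numpy as np

    rng = np.random.default_rng(1)
    indices = rng.integers(0, 16, size=(64, 64))      # 2-D array of colormap indices

    def progressive_passes(img, steps=(8, 4, 2, 1)):
        """Yield successively finer approximations: each pass transmits the pixels on
        an ever denser subsampling grid and repeats them into the untransmitted gaps."""
        for s in steps:
            coarse = img[::s, ::s]
            approx = np.repeat(np.repeat(coarse, s, axis=0), s, axis=1)
            yield s, approx[: img.shape[0], : img.shape[1]]

    for step, approx in progressive_passes(indices):
        exact = np.mean(approx == indices)
        print(f"subsampling {step:>2}: {exact:5.1%} of index values already correct")
    ```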

  14. Modelling ESCOMPTE episodes with the CTM MOCAGE. Part 2 : sensitivity studies.

    NASA Astrophysics Data System (ADS)

    Dufour, A.; Amodei, M.; Brocheton, F.; Michou, M.; Peuch, V.-H.

    2003-04-01

    The multi-scale CTM MOCAGE has been applied to study pollution episodes documented during the ESCOMPTE field campaign in June-July 2001 in south-eastern France (http://medias.obs-mip.fr/escompte). Several sensitivity studies have been performed on the basis of the 2nd IOP, covering 6 continuous days. The main objective of the present work is to investigate the question of chemical boundary conditions, both in the vertical and in the horizontal, for regional air quality simulations of several days. This issue, which has often tended to be oversimplified (use of a fixed continental climatology), is attracting increasing interest, particularly with the prospect of space-borne tropospheric chemistry data assimilation in global models. In addition, we have examined how resolution refinements impact the quality of the model outputs, at the surface and at altitude, against the observational database of dynamics and chemistry: the resolution of the model, by way of the four nested domains (from 2° to 0.01°), but also the resolution of the emission inventories (from 1° to 0.01°). Lastly, the impact of refinements in the representation of chemistry has been assessed by using either detailed chemical schemes, such as RAM or SAPRC, or schemes used in global modelling, which account for only a limited number of volatile hydrocarbons.

  15. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive-mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid performs better as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. In order to do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.

  16. Thermodynamical effects and high resolution methods for compressible fluid flows

    NASA Astrophysics Data System (ADS)

    Li, Jiequan; Wang, Yue

    2017-08-01

    One of the fundamental differences of compressible fluid flows from incompressible fluid flows is the involvement of thermodynamics. This difference should be manifested in the design of numerical schemes. Unfortunately, the role of entropy, expressing irreversibility, is often neglected even though the entropy inequality, as a conceptual derivative, is verified for some first order schemes. In this paper, we refine the GRP solver to illustrate how the thermodynamical variation is integrated into the design of high resolution methods for compressible fluid flows and demonstrate numerically the importance of thermodynamic effects in the resolution of strong waves. As a by-product, we show that the GRP solver works for generic equations of state, and is independent of technical arguments.

  17. Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme

    NASA Astrophysics Data System (ADS)

    McMahon, Sean

    The purpose of this thesis was to develop a code that could be used to develop a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory scheme on an adaptive mesh with the smallest length scales being equal to 2-3 flamelet lengths. The code development in linking Chemkin for chemical kinetics to the adaptive mesh refinement flow solver was completed. The detonation evolved in a way that qualitatively matched the experimental observations; however, the simulation was unable to progress past the formation of the triple point.

  18. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.

  19. Multiresolution strategies for the numerical solution of optimal control problems

    NASA Astrophysics Data System (ADS)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems but less work has been done in the field of making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using a few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points in the grid compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage has been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable, nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease, and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method to balance several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform grid implementations are demonstrated with the help of several nontrivial examples. Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed. For such problems, high accuracy is desirable only in the immediate future, yet the ultimate mission objectives should be accommodated as well. An intelligent trajectory generation for such situations is thus enabled by introducing the idea of multigrid temporal resolution to solve the associated trajectory optimization problem on a non-uniform grid across time that is adapted to: (i) immediate future, and (ii) potential discontinuities in the state and control variables.
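
    The fragment below is a toy version of the multiresolution data-compression idea underlying such mesh-refinement strategies, not the thesis' algorithm: a grid point is retained only where linear interpolation from the next coarser level misses it by more than a tolerance, so points concentrate near the irregularity. The test profile, tolerance, and level count are arbitrary.

    ```python
    # Toy multiresolution "data compression" of a 1-D profile with a sharp switch:
    # a point survives only where linear interpolation from the next coarser level
    # misses it by more than a tolerance, yielding a nonuniform grid concentrated
    # near the irregularity.  Generic sketch only.
    import numpy as np

    def compress(x, u, tol=1e-3, levels=6):
        keep = np.zeros(x.size, dtype=bool)
        stride = 2 ** levels
        keep[::stride] = True                           # the coarsest grid is always kept
        while stride >= 2:
            half = stride // 2
            for i in range(0, x.size - stride, stride):
                mid = i + half
                interp = 0.5 * (u[i] + u[i + stride])   # prediction from the coarser level
                if abs(u[mid] - interp) > tol:          # large detail -> keep the midpoint
                    keep[mid] = True
            stride = half
        return keep

    n = 2 ** 8 + 1
    x = np.linspace(0.0, 1.0, n)
    u = np.tanh(80.0 * (x - 0.55))                      # smooth except near x = 0.55
    keep = compress(x, u)
    kept_x = x[keep]                                    # retained points cluster near 0.55
    print(f"kept {keep.sum()} of {n} points; smallest retained spacing = {np.diff(kept_x).min():.4f}")
    ```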

  20. SMART empirical approaches for predicting field performance of PV modules from results of reliability tests

    NASA Astrophysics Data System (ADS)

    Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata

    2016-09-01

    Gaining an understanding of degradation mechanisms and characterizing them are critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely, Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data becomes available. While the demonstration of the method in this work is for thin-film flexible PV modules, the framework and methodology can be adapted to other PV products.
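
    Mappings from accelerated-test hours to field exposure of this kind usually hinge on an acceleration factor model. As a hedged, generic illustration (not the SMART model or its parameters), the sketch below evaluates the commonly used Peck temperature-humidity form for damp-heat stress with placeholder activation energy, humidity exponent, and use/test conditions.

    ```python
    # Generic Peck-type temperature/humidity acceleration factor, often used to relate
    # damp-heat test hours to field exposure.  Activation energy, humidity exponent,
    # and the use/test conditions below are placeholder assumptions, not SMART values.
    import math

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K

    def peck_acceleration_factor(t_test_c, rh_test, t_use_c, rh_use, ea_ev=0.7, n=2.0):
        t_test = t_test_c + 273.15
        t_use = t_use_c + 273.15
        arrhenius = math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_test))
        humidity = (rh_test / rh_use) ** n
        return arrhenius * humidity

    # Example: 85C/85%RH damp heat versus a mild 25C/50%RH use environment.
    af = peck_acceleration_factor(85.0, 85.0, 25.0, 50.0)
    print(f"acceleration factor ~ {af:.0f}; 1000 test hours ~ {1000 * af / 8760:.0f} field years")
    ```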

  1. CREST v2.1 Refined by a Distributed Linear Reservoir Routing Scheme

    NASA Astrophysics Data System (ADS)

    Shen, X.; Hong, Y.; Zhang, K.; Hao, Z.; Wang, D.

    2014-12-01

    Hydrologic modeling is important in water resources management and in flood disaster warning and management. The routing scheme is among the most important components of a hydrologic model. In this study, we replace the lumped LRR (linear reservoir routing) scheme used in previous versions of the distributed hydrological model CREST (coupled routing and excess storage) with a newly proposed distributed LRR method, which is theoretically more suitable for distributed hydrological models. Consequently, we have effectively solved the problems of: 1) low values of channel flow in daily simulation, 2) discontinuous flow values along the river network during flood events and 3) irrational model parameters. The CREST model equipped with both routing schemes has been tested in the Gan River basin. The distributed LRR scheme has been confirmed to outperform the lumped counterpart by two comparisons, hydrograph validation and visual inspection of the continuity of stream flow along the river: 1) CREST v2.1 (version 2.1), with the implementation of the distributed LRR, achieved an excellent result of [NSCE (Nash coefficient), CC (correlation coefficient), bias] = [0.897, 0.947, -1.57%], while the original CREST v2.0 produced only a negative NSCE, a CC close to zero and a large bias. 2) CREST v2.1 produced a more naturally smooth river flow pattern along the river network, while v2.0 simulated bumpy and discontinuous discharge along the mainstream. Moreover, we further observe that by using the distributed LRR method, 1) all model parameters fell within their reasonable ranges after automatic optimization; and 2) CREST forced by satellite-based precipitation and PET products produces a reasonably good result, i.e., (NSCE, CC, bias) = (0.756, 0.871, -0.669%) in the case study, although there is still room for improvement regarding their low spatial resolution and underestimation of heavy rainfall events in the satellite products.
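
    The sketch below shows only the linear-reservoir idea behind LRR routing, outflow proportional to storage, cascaded cell to cell along a 1-D flow path; it is a generic illustration, not CREST v2.1, and all storage constants and forcing values are invented.

    ```python
    # Minimal cell-to-cell linear reservoir routing along a 1-D flow path:
    # each cell's outflow is storage / k and becomes the inflow of the next
    # downstream cell.  Generic illustration of distributed LRR, not CREST v2.1.
    import numpy as np

    def route(runoff, k_hours, dt=1.0):
        """runoff: array (n_steps, n_cells) of local runoff; k_hours: per-cell
        storage constants.  Returns the hydrograph at the most downstream cell."""
        n_steps, n_cells = runoff.shape
        storage = np.zeros(n_cells)
        outlet = np.zeros(n_steps)
        for t in range(n_steps):
            upstream_inflow = 0.0
            for c in range(n_cells):                 # cells ordered upstream -> downstream
                storage[c] += (runoff[t, c] + upstream_inflow) * dt
                outflow = storage[c] / k_hours[c]
                storage[c] -= outflow * dt
                upstream_inflow = outflow
            outlet[t] = upstream_inflow
        return outlet

    n_steps, n_cells = 72, 5
    runoff = np.zeros((n_steps, n_cells))
    runoff[5:10, :] = 2.0                            # a 5-hour rainfall-runoff pulse
    print(route(runoff, k_hours=np.full(n_cells, 6.0)).round(2))
    ```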

  2. Discontinuous Galerkin method for multicomponent chemically reacting flows and combustion

    NASA Astrophysics Data System (ADS)

    Lv, Yu; Ihme, Matthias

    2014-08-01

    This paper presents the development of a discontinuous Galerkin (DG) method for application to chemically reacting flows in subsonic and supersonic regimes under the consideration of variable thermo-viscous-diffusive transport properties, detailed and stiff reaction chemistry, and shock capturing. A hybrid-flux formulation is developed for treatment of the convective fluxes, combining a conservative Riemann-solver and an extended double-flux scheme. A computationally efficient splitting scheme is proposed, in which advection and diffusion operators are solved in the weak form, and the chemically stiff substep is advanced in the strong form using a time-implicit scheme. The discretization of the viscous-diffusive transport terms follows the second form of Bassi and Rebay, and the WENO-based limiter due to Zhong and Shu is extended to multicomponent systems. Boundary conditions are developed for subsonic and supersonic flow conditions, and the algorithm is coupled to thermochemical libraries to account for detailed reaction chemistry and complex transport. The resulting DG method is applied to a series of test cases of increasing physico-chemical complexity. Beginning with one- and two-dimensional multispecies advection and shock-fluid interaction problems, computational efficiency, convergence, and conservation properties are demonstrated. This study is followed by considering a series of detonation and supersonic combustion problems to investigate the convergence-rate and the shock-capturing capability in the presence of one- and multistep reaction chemistry. The DG algorithm is then applied to diffusion-controlled deflagration problems. By examining convergence properties for polynomial order and spatial resolution, and comparing these with second-order finite-volume solutions, it is shown that optimal convergence is achieved and that polynomial refinement provides advantages in better resolving the localized flame structure and complex flow-field features associated with multidimensional and hydrodynamic/thermo-diffusive instabilities in deflagration and detonation systems. Comparisons with standard third- and fifth-order WENO schemes are presented to illustrate the benefit of the DG scheme for application to detonation and multispecies flow/shock-interaction problems.
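
    The following scalar toy problem illustrates the operator-splitting structure described above (transport half-step, stiff chemistry substep advanced implicitly, transport half-step) using SciPy's implicit integrator; it is not the DG implementation, and the reaction rate and grid parameters are arbitrary assumptions.

    ```python
    # Strang-splitting sketch for a scalar reacting quantity: half-step of transport,
    # full implicit step of stiff chemistry, half-step of transport.  Toy problem only;
    # the paper's method is a DG discretization with detailed chemistry.
    import numpy as np
    from scipy.integrate import solve_ivp

    def transport_half_step(u, dt, dx, a=1.0):
        """First-order upwind advection over dt/2 on a periodic 1-D grid."""
        return u - a * (dt / 2.0) / dx * (u - np.roll(u, 1))

    def chemistry_step(u, dt, rate=200.0):
        """Stiff single-species 'ignition' source, integrated implicitly (BDF) per cell."""
        def rhs(_, y):
            return rate * y * (1.0 - y)
        sol = solve_ivp(rhs, (0.0, dt), u, method="BDF", rtol=1e-6, atol=1e-9)
        return sol.y[:, -1]

    n, dx, dt = 100, 1.0 / 100, 2.0e-3
    x = (np.arange(n) + 0.5) * dx
    u = 0.01 * np.exp(-((x - 0.3) / 0.05) ** 2)      # small initial perturbation
    for _ in range(100):
        u = transport_half_step(u, dt, dx)
        u = chemistry_step(u, dt)
        u = transport_half_step(u, dt, dx)
    print(u.max())                                    # perturbation ignites toward u ~ 1
    ```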

  3. An Instrumented Glove to Assess Manual Dexterity in Simulation-Based Neurosurgical Education

    PubMed Central

    Lemos, Juan Diego; Hernandez, Alher Mauricio; Soto-Romero, Georges

    2017-01-01

    The traditional neurosurgical apprenticeship scheme includes the assessment of trainees' manual skills carried out by experienced surgeons. However, the introduction of surgical simulation technology presents a new paradigm where residents can refine surgical techniques on a simulator before putting them into practice in real patients. Unfortunately, in this new scheme, an experienced surgeon will not always be available to evaluate trainees' performance. For this reason, it is necessary to develop automatic mechanisms to estimate metrics for assessing manual dexterity in a quantitative way. Authors have proposed some hardware-software approaches to evaluate manual dexterity on surgical simulators. This paper presents IGlove, a wearable device that uses inertial sensors embedded on an elastic glove to capture hand movements. Metrics to assess manual dexterity are estimated from sensor signals using data processing and information analysis algorithms. It has been designed to be used with a neurosurgical simulator called Daubara NS Trainer, but can be easily adapted to other benchtop- and manikin-based medical simulators. The system was tested with a sample of 14 volunteers who performed a test that was designed to simultaneously evaluate their fine motor skills and the IGlove's functionalities. Metrics obtained by each of the participants are presented as results in this work; it is also shown how these metrics are used to automatically evaluate the level of manual dexterity of each volunteer. PMID:28468268

  4. Microtopographic characterization of ice-wedge polygon landscape in Barrow, Alaska: a digital map of troughs, rims, centers derived from high resolution (0.25 m) LiDAR data

    DOE Data Explorer

    Gangodagamage, Chandana; Wullschleger, Stan

    2014-07-03

    The dataset represents a microtopographic characterization of the ice-wedge polygon landscape in Barrow, Alaska. Three microtopographic features are delineated using a 0.25 m high-resolution digital elevation dataset derived from LiDAR. The troughs, rims, and centers are the three categories in this classification scheme. The polygon troughs are the surface expression of the ice wedges and lie at lower elevations than the polygon interior. The elevated shoulders of the polygon interior immediately adjacent to the polygon troughs are the polygon rims for low-center polygons. In the case of high-center polygons, these features are the topographic highs. In this classification scheme, both topographic highs and rims are considered polygon rims. The next version of the dataset will include a more refined classification scheme, including separate classes for rims and topographic highs. The interior part of the polygon just adjacent to the polygon rims is the polygon center.

  5. Microstructure Refinement and Mechanical Properties of 304 Stainless Steel by Repetitive Thermomechanical Processing

    NASA Astrophysics Data System (ADS)

    Al-Fadhalah, Khaled; Aleem, Muhammad

    2018-04-01

    Repetitive thermomechanical processing (TMP) was applied for evaluating the effect of strain-induced α'-martensite transformation and reversion annealing on microstructure refinement and mechanical properties of 304 austenitic stainless steel. The first TMP scheme consisted of four cycles of tensile deformation to strain of 0.4, while the second TMP scheme applied two cycles of tensile straining to 0.6. For both schemes, tensile tests were conducted at 173 K (- 100 °C) followed by 5-minute annealing at 1073 K (800 °C). The volume fraction of α'-martensite in deformed samples increased with increasing cycles, reaching a maximum of 98 vol pct. Examination of annealed microstructure by electron backscattered diffraction indicated that increasing strain and/or number of cycles resulted in stronger reversion to austenite with finer grain size of 1 μm. Yet, increasing strain reduced the formation of Σ3 boundaries. The annealing textures generally show reversion of α'-martensite texture components to the austenite texture of brass and copper orientations. The increase in strain and/or number of cycles resulted in stronger intensity of copper orientation, accompanied by the formation of recrystallization texture components of Goss, cube, and rotated cube. The reduction in grain size with increasing cycles caused an increase in yield strength. It also resulted in an increase in strain hardening rate during deformation due to the increase in the formation of α'-martensite. The increase in strain hardening rate occurred in two consecutive stages, marked as stages II and III. The strain hardening in stage II is due to the formation of α'-martensite from either austenite or ɛ-martensite, while the stage-III strain hardening is attributed to the necessity to break the α'-martensite-banded structure for forming block-type martensite at high strains.

  6. Galaxy Tagging: photometric redshift refinement and group richness enhancement

    NASA Astrophysics Data System (ADS)

    Kafle, P. R.; Robotham, A. S. G.; Driver, S. P.; Deeley, S.; Norberg, P.; Drinkwater, M. J.; Davies, L. J.

    2018-06-01

    We present a new scheme, galtag, for refining the photometric redshift measurements of faint galaxies by probabilistically tagging them to observed galaxy groups constructed from a brighter, magnitude-limited spectroscopy survey. First, this method is tested on the DESI light-cone data constructed from the GALFORM galaxy formation model to test its validity. We then apply it to the photometric observations of galaxies in the Kilo-Degree Imaging Survey (KiDS) over a 1 deg2 region centred at 15h. This region contains Galaxy and Mass Assembly (GAMA) deep spectroscopic observations (i-band<22) and an accompanying group catalogue to r-band<19.8. We demonstrate that even with some trade-off in sample size, an order of magnitude improvement in the accuracy of photometric redshifts is achievable when using galtag. This approach provides both refined photometric redshift measurements and group richness enhancement. In combination these products will hugely improve the scientific potential of both photometric and spectroscopic datasets. The galtag software will be made publicly available at https://github.com/pkaf/galtag.git.

  7. Predicting Adverse Drug Effects from Literature- and Database-Mined Assertions.

    PubMed

    La, Mary K; Sedykh, Alexander; Fourches, Denis; Muratov, Eugene; Tropsha, Alexander

    2018-06-06

    Given that adverse drug effects (ADEs) have led to post-market patient harm and subsequent drug withdrawal, failure of candidate agents in the drug development process, and other negative outcomes, it is essential to attempt to forecast ADEs and other relevant drug-target-effect relationships as early as possible. Current pharmacologic data sources, providing multiple complementary perspectives on the drug-target-effect paradigm, can be integrated to facilitate the inference of relationships between these entities. This study aims to identify both existing and unknown relationships between chemicals (C), protein targets (T), and ADEs (E) based on evidence in the literature. Cheminformatics and data mining approaches were employed to integrate and analyze publicly available clinical pharmacology data and literature assertions interrelating drugs, targets, and ADEs. Based on these assertions, a C-T-E relationship knowledge base was developed. Known pairwise relationships between chemicals, targets, and ADEs were collected from several pharmacological and biomedical data sources. These relationships were curated and integrated according to Swanson's paradigm to form C-T-E triangles. Missing C-E edges were then inferred as C-E relationships. Unreported associations between drugs, targets, and ADEs were inferred, and inferences were prioritized as testable hypotheses. Several C-E inferences, including testosterone → myocardial infarction, were identified using inferences based on the literature sources published prior to confirmatory case reports. Timestamping approaches confirmed the predictive ability of this inference strategy on a larger scale. The presented workflow, based on free-access databases and an association-based inference scheme, provided novel C-E relationships that have been validated post hoc in case reports. With refinement of prioritization schemes for the generated C-E inferences, this workflow may provide an effective computational method for the early detection of potential drug candidate ADEs that can be followed by targeted experimental investigations.
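
    The fragment below sketches the Swanson-style closure underlying such inference: given chemical-target and target-effect assertions, candidate chemical-effect pairs that are not already reported are inferred through shared targets and ranked by their support. All example entities are hypothetical.

    ```python
    # Swanson-style A-B-C closure over drug (C), target (T), and adverse-effect (E)
    # assertions: infer candidate C-E pairs through shared targets and rank them by
    # the number of supporting targets.  All example entities are hypothetical.
    from collections import defaultdict

    chem_to_target = {               # C -> T assertions (hypothetical)
        "drugA": {"T1", "T2"},
        "drugB": {"T2"},
    }
    target_to_effect = {             # T -> E assertions (hypothetical)
        "T1": {"arrhythmia"},
        "T2": {"arrhythmia", "nausea"},
    }
    known_chem_effect = {("drugB", "nausea")}   # already-reported C-E pairs

    inferred = defaultdict(set)
    for chem, targets in chem_to_target.items():
        for t in targets:
            for effect in target_to_effect.get(t, ()):
                if (chem, effect) not in known_chem_effect:
                    inferred[(chem, effect)].add(t)

    # Prioritise inferences supported by more intermediate targets.
    for (chem, effect), support in sorted(inferred.items(), key=lambda kv: -len(kv[1])):
        print(f"{chem} -> {effect}  (supported by targets: {sorted(support)})")
    ```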

  8. Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning

    PubMed Central

    Bhawiyuga, Adhitya

    2018-01-01

    Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning. PMID:29617341
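
    As a small illustration of the greedy matching step described above (not the ST-LRSF implementation), the fragment below matches locally sensed and remotely reported positions by repeatedly taking the smallest remaining dissimilarity, with plain Euclidean distance standing in for the spatiotemporal metric and an invented gating threshold.

    ```python
    # Greedy minimum-weight matching between locally sensed and remotely reported
    # vehicle states, using Euclidean distance as a stand-in for the paper's
    # spatiotemporal dissimilarity metric.  Illustrative only.
    import numpy as np

    def greedy_match(local, remote, max_dissimilarity=3.0):
        """Return index pairs (i, j) matching local[i] to remote[j], picking the
        globally smallest remaining dissimilarity at each step."""
        cost = np.linalg.norm(local[:, None, :] - remote[None, :, :], axis=2)
        pairs = []
        used_l, used_r = set(), set()
        for flat in np.argsort(cost, axis=None):          # cheapest candidate first
            i, j = np.unravel_index(flat, cost.shape)
            if i in used_l or j in used_r or cost[i, j] > max_dissimilarity:
                continue
            pairs.append((int(i), int(j)))
            used_l.add(i)
            used_r.add(j)
        return pairs

    local = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, 1.0]])     # on-board estimates
    remote = np.array([[10.5, 4.2], [0.3, -0.4]])                 # V2X-reported estimates
    print(greedy_match(local, remote))                            # [(0, 1), (1, 0)]
    ```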

  9. PaFlexPepDock: parallel ab-initio docking of peptides onto their receptors with full flexibility based on Rosetta.

    PubMed

    Li, Haiou; Lu, Liyao; Chen, Rong; Quan, Lijun; Xia, Xiaoyan; Lü, Qiang

    2014-01-01

    Structural information related to protein-peptide complexes can be very useful for novel drug discovery and design. The computational docking of proteins and peptides can supplement the structural information available on protein-peptide interactions explored experimentally. The protein-peptide docking in this paper can be described as three processes that occur in parallel: ab-initio peptide folding, peptide docking with its receptor, and refinement of some flexible areas of the receptor as the peptide approaches. Several existing methods have been used to sample the degrees of freedom in the three processes, which are usually triggered in an organized sequential scheme. In this paper, we propose a parallel approach that combines all three processes during the docking of a folding peptide with a flexible receptor. This approach mimics the actual protein-peptide docking process in a parallel way, and is expected to deliver better performance than sequential approaches. We used 22 unbound protein-peptide docking examples to evaluate our method. Our analysis of the results showed that the explicit refinement of the flexible areas of the receptor facilitated more accurate modeling of the interfaces of the complexes, while combining all of the moves in parallel helped construct energy funnels for the predictions.

  10. Clocks to Computers: A Machine-Based “Big Picture” of the History of Modern Science.

    PubMed

    van Lunteren, Frans

    2016-12-01

    Over the last few decades there have been several calls for a “big picture” of the history of science. There is a general need for a concise overview of the rise of modern science, with a clear structure allowing for a rough division into periods. This essay proposes such a scheme, one that is both elementary and comprehensive. It focuses on four machines, which can be seen to have mediated between science and society during successive periods of time: the clock, the balance, the steam engine, and the computer. Following an extended developmental phase, each of these machines came to play a highly visible role in Western societies, both socially and economically. Each of these machines, moreover, was used as a powerful resource for the understanding of both inorganic and organic nature. More specifically, their metaphorical use helped to construe and refine some key concepts that would play a prominent role in such understanding. In each case the key concept would at some point be considered to represent the ultimate building block of reality. Finally, in a refined form, each of these machines would eventually make its entry in scientific research, thereby strengthening the ties between these machines and nature.

  11. Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    Grid convergence of several high order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high order methods, referred to as the ACM and wavelet filter schemes, are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered do not possess known analytical solutions or experimental data. Fine grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine scale structures and are stiff in the sense that even though the unsteadiness of the flows is rapidly developing, extreme grid refinement and time step restrictions are needed to resolve all the flow scales as well as the chemical reaction scales.

  12. Recent Progress on the Parallel Implementation of Moving-Body Overset Grid Schemes

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew; Allen, Edwin (Technical Monitor)

    1998-01-01

    Viscous calculations about geometrically complex bodies in which there is relative motion between component parts are one of the most computationally demanding problems facing CFD researchers today. This presentation documents results from the first two years of a CHSSI-funded effort within the U.S. Army AFDD to develop scalable dynamic overset grid methods for unsteady viscous calculations with moving-body problems. The first part of the presentation will focus on results from OVERFLOW-D1, a parallelized moving-body overset grid scheme that employs traditional Chimera methodology. The two processes that dominate the cost of such problems are the flow solution on each component and the intergrid connectivity solution. Parallel implementations of the OVERFLOW flow solver and DCF3D connectivity software are coupled with a proposed two-part static-dynamic load balancing scheme and tested on the IBM SP and Cray T3E multi-processors. The second part of the presentation will cover some recent results from OVERFLOW-D2, a new flow solver that employs Cartesian grids with various levels of refinement, facilitating solution adaption. A study of the parallel performance of the scheme on large distributed-memory multiprocessor computer architectures will be reported.

  13. Increased error-related brain activity distinguishes generalized anxiety disorder with and without comorbid major depressive disorder.

    PubMed

    Weinberg, Anna; Klein, Daniel N; Hajcak, Greg

    2012-11-01

    Generalized anxiety disorder (GAD) and major depressive disorder (MDD) are so frequently comorbid that some have suggested that the 2 should be collapsed into a single overarching "distress" disorder. Yet there is also increasing evidence that the 2 categories are not redundant. Neurobehavioral markers that differentiate GAD and MDD would be helpful in ongoing efforts to refine classification schemes based on neurobiological measures. The error-related negativity (ERN) may be one such marker. The ERN is an event-related potential component presenting as a negative deflection approximately 50 ms following an erroneous response and reflects activity of the anterior cingulate cortex. There is evidence for an enhanced ERN in individuals with GAD, but the literature in MDD is mixed. The present study measured the ERN in 26 GAD, 23 comorbid GAD and MDD, and 36 control participants, all of whom were female and medication-free. Consistent with previous research, the GAD group was characterized by a larger ERN and an increased difference between error and correct trials than controls. No such enhancement was evident in the comorbid group, suggesting comorbid depression may moderate the relationship between the ERN and anxiety. The present study further suggests that the ERN is a potentially useful neurobiological marker for future studies that consider the pathophysiology of multiple disorders in order to construct or refine neurobiologically based diagnostic phenotypes. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  14. Stereo matching and view interpolation based on image domain triangulation.

    PubMed

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach to the stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedure to fill holes.
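
    The piecewise-linear disparity map described above can be rasterised by barycentric interpolation of the per-vertex disparities inside each triangle. The sketch below (hypothetical variable names, not the authors' code) shows that step for a single triangle on a small pixel grid.

    ```python
    import numpy as np

    def rasterize_triangle_disparity(v, d, width, height):
        """Fill a dense disparity patch from one triangle.

        v : (3, 2) array of vertex pixel coordinates (x, y)
        d : (3,) array of disparities at the three vertices
        Returns a (height, width) array with NaN outside the triangle.
        """
        disp = np.full((height, width), np.nan)
        xs, ys = np.meshgrid(np.arange(width), np.arange(height))

        # Barycentric coordinates of every pixel with respect to the triangle
        (x0, y0), (x1, y1), (x2, y2) = v
        det = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
        w0 = ((y1 - y2) * (xs - x2) + (x2 - x1) * (ys - y2)) / det
        w1 = ((y2 - y0) * (xs - x2) + (x0 - x2) * (ys - y2)) / det
        w2 = 1.0 - w0 - w1

        inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
        disp[inside] = (w0 * d[0] + w1 * d[1] + w2 * d[2])[inside]
        return disp

    # Toy triangle with disparities 4, 6 and 5 at its vertices
    patch = rasterize_triangle_disparity(np.array([[2, 2], [30, 5], [10, 25]]),
                                         np.array([4.0, 6.0, 5.0]), 32, 32)
    ```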

  15. Automated detection and quantification of residual brain tumor using an interactive computer-aided detection scheme

    NASA Astrophysics Data System (ADS)

    Gaffney, Kevin P.; Aghaei, Faranak; Battiste, James; Zheng, Bin

    2017-03-01

    Detection of residual brain tumor is important to evaluate the efficacy of brain cancer surgery, determine the optimal strategy of further radiation therapy if needed, and assess the ultimate prognosis of the patients. Brain MR is a commonly used imaging modality for this task. In order to distinguish between residual tumor and surgery-induced scar tissue, two sets of MRI scans are conducted pre- and post-gadolinium contrast injection. The residual tumors are only enhanced in the post-contrast injection images. However, subjectively reading and quantifying this type of brain MR image is difficult when it comes to detecting real residual tumor regions and measuring the total volume of the residual tumor. In order to help solve this clinical difficulty, we developed and tested a new interactive computer-aided detection scheme, which consists of three consecutive image processing steps, namely (1) segmentation of the intracranial region, (2) image registration and subtraction, and (3) tumor segmentation and refinement. The scheme also includes a specially designed and implemented graphical user interface (GUI) platform. When using this scheme, two sets of pre- and post-contrast injection images are first automatically processed to detect and quantify residual tumor volume. Then, a user can visually examine the segmentation results and conveniently guide the scheme to correct any detection or segmentation errors if needed. The scheme has been repeatedly tested using five cases. Due to the observed high performance and robustness of the testing results, the scheme is currently ready for conducting clinical studies and helping clinicians investigate the association between this quantitative image marker and the outcome of patients.
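
    Assuming the pre- and post-contrast images have already been registered, the core of the subtraction step reduces to differencing the two images inside the intracranial mask and segmenting the enhancing residual. The sketch below illustrates that idea on 2-D arrays; the threshold and minimum-size values are illustrative assumptions, not the scheme's tuned parameters.

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_enhancing_regions(pre, post, brain_mask, threshold=0.2, min_voxels=20):
        """Very simplified residual-tumor detection by image subtraction.

        pre, post  : registered pre-/post-contrast slices (same shape, float)
        brain_mask : boolean mask of the intracranial region
        Returns a labelled array of candidate enhancing regions and their sizes.
        """
        diff = (post - pre) * brain_mask          # enhancement only inside the brain
        candidates = diff > threshold             # simple global threshold
        labels, n = ndimage.label(candidates)     # connected-component analysis
        sizes = ndimage.sum(candidates, labels, range(1, n + 1))

        # Discard tiny components that are likely noise rather than residual tumor
        for lab, size in enumerate(sizes, start=1):
            if size < min_voxels:
                labels[labels == lab] = 0
        return labels, sizes

    # Toy example: a bright 'residual' blob appears only in the post-contrast image
    pre = np.zeros((64, 64)); post = pre.copy(); post[20:28, 30:38] = 1.0
    labels, sizes = detect_enhancing_regions(pre, post, np.ones_like(pre, bool))
    ```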

  16. There is No Free Lunch: Tradeoffs in the Utility of Learned Knowledge

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; McKusick, Kathleen B.

    1992-01-01

    With the recent introduction of learning in integrated systems, there is a need to measure the utility of learned knowledge for these more complex systems. A difficulty arises when there are multiple, possibly conflicting, utility metrics to be measured. In this paper, we present schemes which trade off conflicting utility metrics in order to achieve some global performance objectives. In particular, we present a case study of a multi-strategy machine learning system, mutual theory refinement, which refines world models for an integrated reactive system, the Entropy Reduction Engine. We provide experimental results on the utility of learned knowledge in two conflicting metrics - improved accuracy and degraded efficiency. We then demonstrate two ways to trade off these metrics. In each, some learned knowledge is either approximated or dynamically 'forgotten' so as to improve efficiency while degrading accuracy only slightly.

  17. A cell-vertex multigrid method for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Radespiel, R.

    1989-01-01

    A cell-vertex scheme for the Navier-Stokes equations, which is based on central difference approximations and Runge-Kutta time stepping, is described. Using local time stepping, implicit residual smoothing, a multigrid method, and carefully controlled artificial dissipative terms, very good convergence rates are obtained for a wide range of two- and three-dimensional flows over airfoils and wings. The accuracy of the code is examined by grid refinement studies and comparison with experimental data. For an accurate prediction of turbulent flows with strong separations, a modified version of the nonequilibrium turbulence model of Johnson and King is introduced, which is well suited for an implementation into three-dimensional Navier-Stokes codes. It is shown that the solutions for three-dimensional flows with strong separations can be dramatically improved, when a nonequilibrium model of turbulence is used.

  18. Elliptic generation of composite three-dimensional grids about realistic aircraft

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.

    1986-01-01

    An elliptic method for generating composite grids about realistic aircraft is presented. A body-conforming grid is first generated about the entire aircraft by the solution of Poisson's differential equation. This grid has relatively coarse spacing, and it covers the entire physical domain. At boundary surfaces, cell size is controlled and cell skewness is nearly eliminated by inhomogeneous terms, which are found automatically by the program. Certain regions of the grid in which high gradients are expected, and which map into rectangular solids in the computational domain, are then designated for zonal refinement. Spacing in the zonal grids is reduced by adding points with a simple, algebraic scheme. Details of the grid generation method are presented along with results of the present application, a wing-body configuration based on the F-16 fighter aircraft.
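
    A much-reduced flavour of the elliptic generation step can be conveyed by relaxing interior grid-point coordinates toward the average of their neighbours (a homogeneous Laplace smoother). The full method solves Poisson's equation with inhomogeneous control terms that cluster points and remove skewness, which the sketch below deliberately omits.

    ```python
    import numpy as np

    def laplace_smooth_grid(x, y, iterations=200):
        """Relax interior nodes of a structured 2-D grid toward Laplace equilibrium.

        x, y : (nj, ni) arrays of node coordinates; boundary rows/columns stay fixed.
        Only the homogeneous part of an elliptic generator is shown; the forcing
        terms that control spacing and skewness at boundaries are not included.
        """
        for _ in range(iterations):
            x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                                    x[1:-1, 2:] + x[1:-1, :-2])
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                    y[1:-1, 2:] + y[1:-1, :-2])
        return x, y

    # Example: smooth an initially skewed algebraic grid
    ni, nj = 21, 11
    xi, eta = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj))
    x0 = xi + 0.1 * eta**2          # deliberately distorted interior
    y0 = eta
    x_s, y_s = laplace_smooth_grid(x0.copy(), y0.copy())
    ```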

  19. Deformable complex network for refining low-resolution X-ray structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu

    2015-10-27

    A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.

  20. Three dimensional unstructured multigrid for the Euler equations

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1991-01-01

    The three dimensional Euler equations are solved on unstructured tetrahedral meshes using a multigrid strategy. The driving algorithm consists of an explicit vertex-based finite element scheme, which employs an edge-based data structure to assemble the residuals. The multigrid approach employs a sequence of independently generated coarse and fine meshes to accelerate the convergence to steady-state of the fine grid solution. Variables, residuals and corrections are passed back and forth between the various grids of the sequence using linear interpolation. The addresses and weights for interpolation are determined in a preprocessing stage using an efficient graph traversal algorithm. The preprocessing operation is shown to require a negligible fraction of the CPU time required by the overall solution procedure, while gains in overall solution efficiency greater than an order of magnitude are demonstrated on meshes containing up to 350,000 vertices. Solutions using globally regenerated fine meshes as well as adaptively refined meshes are given.
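
    The intergrid transfer described above, linear interpolation driven by precomputed addresses and weights, can be written very compactly. The sketch below shows a hypothetical coarse-to-fine prolongation for vertex data, where each fine vertex stores the indices of the enclosing coarse tetrahedron's vertices and the corresponding barycentric weights; the connectivity in the example is made up purely for illustration.

    ```python
    import numpy as np

    def prolongate(coarse_values, addresses, weights):
        """Coarse-to-fine transfer using precomputed interpolation data.

        coarse_values : (n_coarse,) nodal values on the coarse mesh
        addresses     : (n_fine, 4) indices of the coarse tetrahedron vertices
                        enclosing each fine vertex (found once, in preprocessing)
        weights       : (n_fine, 4) barycentric weights, each row summing to one
        """
        return np.sum(coarse_values[addresses] * weights, axis=1)

    # Tiny illustration with made-up connectivity for three fine vertices
    coarse = np.array([1.0, 2.0, 4.0, 8.0, 3.0])
    addr = np.array([[0, 1, 2, 3], [1, 2, 3, 4], [0, 2, 3, 4]])
    w = np.array([[0.25, 0.25, 0.25, 0.25],
                  [0.5, 0.2, 0.2, 0.1],
                  [0.1, 0.3, 0.3, 0.3]])
    print(prolongate(coarse, addr, w))
    ```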

  1. Monitoring tumor motion by real time 2D/3D registration during radiotherapy.

    PubMed

    Gendrin, Christelle; Furtado, Hugo; Weber, Christoph; Bloch, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Bergmann, Helmar; Stock, Markus; Fichtinger, Gabor; Georg, Dietmar; Birkfellner, Wolfgang

    2012-02-01

    In this paper, we investigate the possibility to use X-ray based real time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. The 2D/3D registration scheme is implemented using general purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). The phantom motion is measured with an rms error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates to the breathing cycle is shown. Videos show a good match between X-ray and digitally reconstructed radiographs (DRR) displacement. Mean registration time is 0.5 s. We have demonstrated that real-time organ motion monitoring using image based markerless registration is feasible. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. A quantum perturbative pair distribution for determining interatomic potentials from extended x-ray absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Piazza, F.

    2002-11-01

    In this paper we develop a technique for determining interatomic potentials in materials in the quantum regime from single-shell extended x-ray absorption spectroscopy (EXAFS) spectra. We introduce a pair distribution function, based on ordinary quantum time-independent perturbation theory. In the proposed scheme, the model potential parameters enter the distribution through a fourth-order Taylor expansion of the potential, and are directly refined in the fit of the model signal to the experimental spectrum. We discuss in general the validity of our theoretical framework, namely the quantum regime and perturbative treatment, and work out a simple tool for monitoring the sensitivity of our theory in determining lattice anharmonicities based on the statistical F-test. As an example, we apply our formalism to an EXAFS spectrum at the Ag K edge of AgI at T = 77 K. We determine the Ag-I potential parameters and find good agreement with previous studies.

  3. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method, respectively, to overcome this problem. We first briefly introduce the RSFD theory, based on which we respectively derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based RSFD scheme and the LS-based RSFD scheme with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that these schemes can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers, the SA-based RSFD scheme performs better, while at large wavenumbers, the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based RSFD scheme and the LS-based RSFD scheme can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
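
    To make the least-squares idea concrete, the sketch below fits staggered-grid first-derivative coefficients by minimizing the misfit between the numerical and exact wavenumber responses over a target band. The operator half-length and band limit are illustrative assumptions rather than the values used in the paper.

    ```python
    import numpy as np

    def ls_staggered_fd_coefficients(half_length=4, h=1.0, kmax_frac=0.7, n_samples=200):
        """Least-squares first-derivative coefficients for a staggered grid.

        The operator is  du/dx ~ (1/h) * sum_m c_m [u(x+(m-1/2)h) - u(x-(m-1/2)h)],
        whose wavenumber response is (2/h) * sum_m c_m sin((m-1/2) k h).
        The coefficients are fitted so that this response matches k over
        0 <= k <= kmax_frac * pi / h, instead of only matching Taylor terms at k = 0.
        """
        k = np.linspace(1e-6, kmax_frac * np.pi / h, n_samples)
        m = np.arange(1, half_length + 1)
        A = 2.0 * np.sin(np.outer(k, m - 0.5) * h) / h   # response matrix
        c, *_ = np.linalg.lstsq(A, k, rcond=None)
        return c

    c_ls = ls_staggered_fd_coefficients()
    # The first two conventional Taylor-series values are roughly 1.1963 and -0.0798;
    # the LS fit trades exactness at k=0 for accuracy over the whole band.
    print("LS coefficients:", c_ls)
    ```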

  4. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder while, at least in theory, incurring no loss of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC is based on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods. PMID:23750314

  5. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder while, at least in theory, incurring no loss of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC is based on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  6. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder while, at least in theory, incurring no loss of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC is based on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  7. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The plan of this work is to first and primarily focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and to then briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  8. Odor Recognition vs. Classification in Artificial Olfaction

    NASA Astrophysics Data System (ADS)

    Raman, Baranidharan; Hertz, Joshua; Benkstein, Kurt; Semancik, Steve

    2011-09-01

    Most studies in chemical sensing have focused on the problem of precise identification of chemical species that were exposed during the training phase (the recognition problem). However, generalization of training to predict the chemical composition of untrained gases based on their similarity with analytes in the training set (the classification problem) has received very limited attention. These two analytical tasks pose conflicting constraints on the system. While correct recognition requires detection of molecular features that are unique to an analyte, generalization to untrained chemicals requires detection of features that are common across a desired class of analytes. A simple solution that addresses both issues simultaneously can be obtained from biological olfaction, where the odor class and identity information are decoupled and extracted individually over time. Mimicking this approach, we proposed a hierarchical scheme that allowed initial discrimination between broad chemical classes (e.g. contains oxygen) followed by finer refinements using additional data into sub-classes (e.g. ketones vs. alcohols) and, eventually, specific compositions (e.g. ethanol vs. methanol) [1]. We validated this approach using an array of temperature-controlled chemiresistors. We demonstrated that a small set of training analytes is sufficient to allow generalization to novel chemicals and that the scheme provides robust categorization despite aging. Here, we provide further characterization of this approach.

  9. Memetic computing through bio-inspired heuristics integration with sequential quadratic programming for nonlinear systems arising in different physical models.

    PubMed

    Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela

    2016-01-01

    In this study, bio-inspired computing is exploited for solving systems of nonlinear equations using variants of genetic algorithms (GAs) as a global search method hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for the systems of nonlinear equations in the mean-square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the viable SQP algorithm. Twelve versions of the memetic approach GA-SQP are designed by taking a different set of reproduction routines in the optimization process. The performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion and chemical equilibrium. Comparative studies of the proposed results in terms of accuracy, convergence and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. The accuracy and convergence of the memetic computing GA-SQP approach are found to be better in each case of the simulation study, and the effectiveness of the scheme is further established through statistical results based on different performance indices for accuracy and complexity.
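
    A minimal sketch of the memetic idea, global evolutionary search whose best candidate is then polished by sequential quadratic programming, is given below for a toy two-equation nonlinear system. It uses a deliberately simple GA and SciPy's SLSQP routine, and is not one of the twelve GA-SQP variants evaluated in the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def residuals(p):
        """Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1."""
        x, y = p
        return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

    def fitness(p):
        # Mean-square error of the system residuals, as in the paper's formulation
        return np.mean(residuals(p)**2)

    def ga_sqp(pop_size=60, generations=100, bounds=(-3.0, 3.0), seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(pop_size, 2))
        for _ in range(generations):
            scores = np.array([fitness(p) for p in pop])
            order = np.argsort(scores)
            parents = pop[order[:pop_size // 2]]                 # truncation selection
            a = parents[rng.integers(len(parents), size=pop_size)]
            b = parents[rng.integers(len(parents), size=pop_size)]
            children = 0.5 * (a + b)                             # arithmetic crossover
            children += rng.normal(0.0, 0.1, children.shape)     # Gaussian mutation
            pop = np.clip(children, lo, hi)
            pop[0] = parents[0]                                  # elitism
        best = pop[np.argmin([fitness(p) for p in pop])]
        # Local refinement of the GA result with sequential quadratic programming
        result = minimize(fitness, best, method="SLSQP")
        return result.x, result.fun

    solution, err = ga_sqp()
    print(solution, err)
    ```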

  10. LIBRA-WA: a web application for ligand binding site detection and protein function recognition.

    PubMed

    Toti, Daniele; Viet Hung, Le; Tortosa, Valentina; Brandi, Valentina; Polticelli, Fabio

    2018-03-01

    Recently, LIBRA, a tool for active/ligand binding site prediction, was described. LIBRA's effectiveness was comparable to similar state-of-the-art tools; however, its scoring scheme, output presentation, dependence on local resources and overall convenience were amenable to improvements. To solve these issues, LIBRA-WA, a web application based on an improved LIBRA engine, has been developed, featuring a novel scoring scheme consistently improving LIBRA's performance, and a refined algorithm that can identify binding sites hosted at the interface between different subunits. LIBRA-WA also sports additional functionalities like ligand clustering and a completely redesigned interface for an easier analysis of the output. Extensive tests on 373 apoprotein structures indicate that LIBRA-WA is able to identify the biologically relevant ligand/ligand binding site in 357 cases (∼96%), with the correct prediction ranking first in 349 cases (∼98% of the latter, ∼94% of the total). The earlier stand-alone tool has also been updated and dubbed LIBRA+, by integrating LIBRA-WA's improved engine for cross-compatibility purposes. LIBRA-WA and LIBRA+ are available at: http://www.computationalbiology.it/software.html.

  11. A taxonomy for mechanical ventilation: 10 fundamental maxims.

    PubMed

    Chatburn, Robert L; El-Khatib, Mohamad; Mireles-Cabodevila, Eduardo

    2014-11-01

    The American Association for Respiratory Care has declared a benchmark for competency in mechanical ventilation that includes the ability to "apply to practice all ventilation modes currently available on all invasive and noninvasive mechanical ventilators." This level of competency presupposes the ability to identify, classify, compare, and contrast all modes of ventilation. Unfortunately, current educational paradigms do not supply the tools to achieve such goals. To fill this gap, we expand and refine a previously described taxonomy for classifying modes of ventilation and explain how it can be understood in terms of 10 fundamental constructs of ventilator technology: (1) defining a breath, (2) defining an assisted breath, (3) specifying the means of assisting breaths based on control variables specified by the equation of motion, (4) classifying breaths in terms of how inspiration is started and stopped, (5) identifying ventilator-initiated versus patient-initiated start and stop events, (6) defining spontaneous and mandatory breaths, (7) defining breath sequences, (8) combining control variables and breath sequences into ventilatory patterns, (9) describing targeting schemes, and (10) constructing a formal taxonomy for modes of ventilation composed of control variable, breath sequence, and targeting schemes. Having established the theoretical basis of the taxonomy, we demonstrate a step-by-step procedure to classify any mode on any mechanical ventilator. Copyright © 2014 by Daedalus Enterprises.

  12. Structure factor of liquid alkali metals using a classical-plasma reference system

    NASA Astrophysics Data System (ADS)

    Pastore, G.; Tosi, M. P.

    1984-06-01

    This paper presents calculations of the liquid structure factor of the alkali metals near freezing, starting from the classical plasma of bare ions as reference liquid. The indirect ion-ion interaction arising from electronic screening is treated by an optimized random phase approximation (ORPA), imposing physical requirements as in the original ORPA scheme developed by Weeks, Chandler and Andersen for liquids with strongly repulsive core potentials. A comparison of the results with computer simulation data for a model of liquid rubidium shows that the present approach overcomes the well-known difficulties met in applying to these metals the standard ORPA based on a reference liquid of neutral hard spheres. The optimization scheme is also shown to be equivalent to a reduction of the range of the indirect interaction in momentum space, as proposed empirically in an earlier work. Comparison with experiment for the other alkalis shows that a good overall representation of the data can be obtained for sodium, potassium and cesium, but not for lithium, when one uses a very simple form of the electron-ion potential adjusted to the liquid compressibility. The small-angle scattering region is finally examined more carefully in the light of recent data of Waseda, with a view to possible refinements of the pseudopotential model.

  13. Modeling of Turbulent Natural Convection in Enclosed Tall Cavities

    NASA Astrophysics Data System (ADS)

    Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.

    2017-12-01

    It was shown in our previous work (J. Appl. Mech. Tech. Phys 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two-and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 1014. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1: 4, 1: 10, and 1: 28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1: 28.6), DNS results (aspect ratio of 1: 4), and an empirical formula (aspect ratio of 1: 10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with the other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for an aspect ratio of up to 1: 10. For higher aspect ratios, the number of grid cells required for achieving prescribed accuracy grows significantly.

  14. 78 FR 49508 - Tesoro Refining & Marketing Company LLC; Supplemental Notice That Initial Market-Based Rate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... Refining & Marketing Company LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... proceeding of Tesoro Refining & Marketing Company LLC's application for market-based rate authority, with an... of protests and interventions in lieu of paper, using the FERC Online links at http://www.ferc.gov...

  15. 78 FR 16262 - Tesoro Refining & Marketing Company LLC; Supplemental Notice That Initial Market-Based Rate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-14

    ... Refining & Marketing Company LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... proceeding, of Tesoro Refining & Marketing Company LLC's application for market- based rate authority, with... submission of protests and interventions in lieu of paper, using the FERC Online links at http://www.ferc.gov...

  16. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, what an unsupervised segmentation algorithm can segment is only regions, but not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements, which are discussed in detail.

  17. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
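
    The default time integrator mentioned above, the 3-stage/3rd-order strong-stability-preserving Runge-Kutta scheme, has the standard convex-combination form sketched below for a generic semi-discrete operator L(u). The periodic central-difference advection operator in the example is only an illustrative stand-in for the FR spatial discretization.

    ```python
    import numpy as np

    def ssp_rk3_step(u, dt, L):
        """One step of the 3-stage, 3rd-order strong-stability-preserving RK scheme."""
        u1 = u + dt * L(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

    # Illustrative right-hand side: periodic linear advection with central differences
    n, dx, a = 128, 1.0 / 128, 1.0
    L = lambda u: -a * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2.0 * np.pi * x)
    for _ in range(100):
        u = ssp_rk3_step(u, dt=0.4 * dx, L=L)
    ```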

  18. Customizing G Protein-coupled receptor models for structure-based virtual screening.

    PubMed

    de Graaf, Chris; Rognan, Didier

    2009-01-01

    This review will focus on the construction, refinement, and validation of G Protein-coupled receptor models for the purpose of structure-based virtual screening. Practical tips and tricks derived from concrete modeling and virtual screening exercises to overcome the problems and pitfalls associated with the different steps of the receptor modeling workflow will be presented. These examples will not only include rhodopsin-like (class A), but also secretin-like (class B), and glutamate-like (class C) receptors. In addition, the review will present a careful comparative analysis of current crystal structures and their implication on homology modeling. The following themes will be discussed: i) the use of experimental anchors in guiding the modeling procedure; ii) amino acid sequence alignments; iii) ligand binding mode accommodation and binding cavity expansion; iv) proline-induced kinks in transmembrane helices; v) binding mode prediction and virtual screening by receptor-ligand interaction fingerprint scoring; vi) extracellular loop modeling; vii) virtual filtering schemes. Finally, an overview of several successful structure-based screening studies shows that receptor models, despite structural inaccuracies, can be efficiently used to find novel ligands.

  19. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers.

    PubMed

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-12-09

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time.

  20. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers

    PubMed Central

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-01-01

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time. PMID:27941705
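
    Once the 3D positions of the laser spots on the target surface are available, the sphere's geometric parameters can be estimated from them. A common linear least-squares sphere fit is sketched below as an illustration of that final step; it is a generic fit, and the authors' optimized scheme may differ.

    ```python
    import numpy as np

    def fit_sphere(points):
        """Linear least-squares sphere fit.

        Uses |p|^2 = 2 c.p + (r^2 - |c|^2), which is linear in the centre c and in
        t = r^2 - |c|^2, so a handful of laser-spot positions is enough.
        points : (n, 3) array of 3-D spot coordinates on the sphere surface.
        """
        A = np.hstack([2.0 * points, np.ones((len(points), 1))])
        b = np.sum(points**2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre, t = sol[:3], sol[3]
        radius = np.sqrt(t + centre @ centre)
        return centre, radius

    # Synthetic check: noisy points on a sphere of radius 0.5 centred at (1, 2, 3)
    rng = np.random.default_rng(1)
    d = rng.normal(size=(200, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
    pts = np.array([1.0, 2.0, 3.0]) + 0.5 * d + rng.normal(0, 1e-3, (200, 3))
    print(fit_sphere(pts))
    ```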

  1. A multistage motion vector processing method for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong Q

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid choosing an identical unreliable vector. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
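
    The constrained vector median filter mentioned above selects, from a candidate neighbourhood, the motion vector whose summed distance to the other candidates is smallest while excluding vectors flagged as unreliable. A plain (unconstrained) vector median is sketched below, with the reliability constraint of the paper left out for brevity.

    ```python
    import numpy as np

    def vector_median(candidates):
        """Return the candidate motion vector minimizing the total L2 distance
        to all other candidates (the classical vector median).

        candidates : (n, 2) array of (dx, dy) motion vectors from the neighbourhood.
        """
        diffs = candidates[:, None, :] - candidates[None, :, :]
        total_dist = np.linalg.norm(diffs, axis=2).sum(axis=1)
        return candidates[np.argmin(total_dist)]

    # A single outlier vector is rejected in favour of the consistent neighbours
    mvs = np.array([[2, 1], [2, 2], [3, 1], [15, -9], [2, 1]], dtype=float)
    print(vector_median(mvs))    # -> [2. 1.]
    ```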

  2. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  3. An operational integrated short-term warning solution for solar radiation storms: introducing the Forecasting Solar Particle Events and Flares (FORSPEF) system

    NASA Astrophysics Data System (ADS)

    Anastasiadis, Anastasios; Sandberg, Ingmar; Papaioannou, Athanasios; Georgoulis, Manolis; Tziotziou, Kostas; Jiggens, Piers; Hilgers, Alain

    2015-04-01

    We present a novel integrated prediction system, for both solar flares and solar energetic particle (SEP) events, which is in place to provide short-term warnings for hazardous solar radiation storms. The FORSPEF system provides forecasting of solar eruptive events, such as solar flares, with a projection to coronal mass ejections (CMEs) (occurrence and velocity) and the likelihood of occurrence of a SEP event. It also provides nowcasting of SEP events based on actual solar flare and CME near real-time alerts, as well as SEP characteristics (peak flux, fluence, rise time, duration) per parent solar event. The prediction of solar flares relies on a morphological method which is based on the sophisticated derivation of the effective connected magnetic field strength (Beff) of potentially flaring active-region (AR) magnetic configurations, and it utilizes analysis of a large number of AR magnetograms. For the prediction of SEP events a new reductive statistical method has been implemented based on a newly constructed database of solar flares, CMEs and SEP events that covers a large time span from 1984-2013. The method is based on flare location (longitude), flare size (maximum soft X-ray intensity), and the occurrence (or not) of a CME. Warnings are issued for all > C1.0 soft X-ray flares. The warning time in the forecasting scheme extends to 24 hours with a refresh rate of 3 hours, while the respective warning time for the nowcasting scheme depends on the availability of the near real-time data and falls between 15 and 20 minutes. We discuss the modules of the FORSPEF system, their interconnection and the operational set-up. The dual approach in the development of FORSPEF (i.e., forecasting and nowcasting schemes) permits the refinement of predictions upon the availability of new data that characterize changes on the Sun and in the interplanetary space, while the combined usage of solar flare and SEP forecasting methods upgrades FORSPEF to an integrated forecasting solution. This work has been funded through the "FORSPEF: FORecasting Solar Particle Events and Flares", ESA Contract No. 4000109641/13/NL/AK

  4. Effect of biogenic fermentation impurities on lactic acid hydrogenation to propylene glycol.

    PubMed

    Zhang, Zhigang; Jackson, James E; Miller, Dennis J

    2008-09-01

    The effect of residual impurities from glucose fermentation to lactic acid (LA) on subsequent ruthenium-catalyzed hydrogenation of LA to propylene glycol (PG) is examined. Whereas refined LA feed exhibits stable conversion to PG over carbon-supported ruthenium catalyst in a trickle bed reactor, partially refined LA from fermentation shows a steep decline in PG production over short (<40 h) reaction times followed by a further slow decay in performance. Addition of model impurities to refined LA has varying effects: organic acids, sugars, or inorganic salts have little effect on conversion; alanine, a model amino acid, results in a strong but reversible decline in conversion via competitive adsorption between alanine and LA on the Ru surface. The sulfur-containing amino acids cysteine and methionine irreversibly poison the catalyst for LA conversion. Addition of 0.1 wt% albumin as a model protein leads to slow decline in rate, consistent with pore plugging or combined pore plugging and poisoning of the Ru surface. This study points to the need for integrated design and operation of biological processes and chemical processes in the biorefinery in order to make efficient conversion schemes viable.

  5. MODIS Solar Calibration Simulation Assisted Refinement

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Xiaoxiong, Xiong; Guenther, Bruce; Barnes, William; Moyer, David; Salomonson, Vincent V.

    2004-01-01

    A detailed optical radiometric model has been created of the MODIS instrument's solar calibration process. This model takes into account the orientation and distance of the spacecraft with respect to the sun, the correlated motions of the scan mirror and the sun, all of the optical elements, the detector locations on the visible and near-IR focal planes, the solar diffuser and the attenuation screen with all of its hundreds of pinholes. An efficient computational scheme takes into account all of these factors and has produced results which reproduce the observed time-dependent intensity variations on the two focal planes with considerable fidelity. This agreement between predictions and observations has given insight into the causes of some small time-dependent variations and how to incorporate them into the overall calibration scheme. The radiometric model is described, and modeled and actual measurements are presented and compared.

  6. Variable High Order Multiblock Overlapping Grid Methods for Mixed Steady and Unsteady Multiscale Viscous Flows

    NASA Technical Reports Server (NTRS)

    Sjogreen, Bjoern; Yee, H. C.

    2007-01-01

    Flows containing steady or nearly steady strong shocks in parts of the flow field, and unsteady turbulence with shocklets on other parts of the flow field, are difficult to capture accurately and efficiently employing the same numerical scheme even under the multiblock grid or adaptive grid refinement framework. On one hand, sixth-order or higher shock-capturing methods are appropriate for unsteady turbulence with shocklets. On the other hand, lower order shock-capturing methods are more effective for strong steady shocks in terms of convergence. In order to minimize the shortcomings of low order and high order shock-capturing schemes for the subject flows, a multi-block overlapping grid with different orders of accuracy on different blocks is proposed. Test cases to illustrate the performance of the new solver are included.

  7. Leap-dynamics: efficient sampling of conformational space of proteins and peptides in solution.

    PubMed

    Kleinjung, J; Bayley, P; Fraternali, F

    2000-03-31

    A molecular simulation scheme, called Leap-dynamics, that provides efficient sampling of protein conformational space in solution is presented. The scheme is a combined approach using a fast sampling method, imposing conformational 'leaps' to force the system over energy barriers, and molecular dynamics (MD) for refinement. The presence of solvent is approximated by a potential of mean force depending on the solvent accessible surface area. The method has been successfully applied to N-acetyl-L-alanine-N-methylamide (alanine dipeptide), sampling experimentally observed conformations inaccessible to MD alone under the chosen conditions. The method predicts correctly the increased partial flexibility of the mutant Y35G compared to native bovine pancreatic trypsin inhibitor. In particular, the improvement over MD consists of the detection of conformational flexibility that corresponds closely to slow motions identified by nuclear magnetic resonance techniques.

  8. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.
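
    For readers unfamiliar with the basis functions involved, the Cox-de Boor recursion for evaluating one-dimensional B-splines on a knot vector is sketched below; the tensor-product construction used in the report then simply multiplies such one-dimensional functions in x and y. The knot vector in the example is an arbitrary illustrative choice.

    ```python
    import numpy as np

    def bspline_basis(i, p, t, x):
        """Cox-de Boor recursion: value of the i-th B-spline of degree p at x,
        on the non-decreasing knot vector t."""
        if p == 0:
            return 1.0 if t[i] <= x < t[i + 1] else 0.0
        left = 0.0
        if t[i + p] != t[i]:
            left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
        right = 0.0
        if t[i + p + 1] != t[i + 1]:
            right = (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * bspline_basis(i + 1, p - 1, t, x)
        return left + right

    # Cubic B-splines on an open knot vector; inside the span they sum to one
    knots = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
    x = 1.7
    vals = [bspline_basis(i, 3, knots, x) for i in range(len(knots) - 4)]
    print(sum(vals))   # ~1.0 (partition of unity)
    ```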

  9. One-loop corrections to light cone wave functions: The dipole picture DIS cross section

    NASA Astrophysics Data System (ADS)

    Hänninen, H.; Lappi, T.; Paatelainen, R.

    2018-06-01

    We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with d-dimensional tensors arising from loop integrals, in a way that can be fully automatized. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading order accuracy. Our results, obtained using the four dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.

  10. Viscous Corrections of the Time Incremental Minimization Scheme and Visco-Energetic Solutions to Rate-Independent Evolution Problems

    NASA Astrophysics Data System (ADS)

    Minotti, Luca; Savaré, Giuseppe

    2018-02-01

    We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.

  11. Adaptive CFD schemes for aerospace propulsion

    NASA Astrophysics Data System (ADS)

    Ferrero, A.; Larocca, F.

    2017-05-01

    The flow fields which can be observed inside several components of aerospace propulsion systems are characterised by the presence of very localised phenomena (boundary layers, shock waves,...) which can deeply influence the performance of the system. In order to accurately evaluate these effects by means of Computational Fluid Dynamics (CFD) simulations, it is necessary to locally refine the computational mesh. In this way the degrees of freedom related to the discretisation are focused in the most interesting regions and the computational cost of the simulation remains acceptable. In the present work, a discontinuous Galerkin (DG) discretisation is used to numerically solve the equations which describe the flow field. The local nature of the DG reconstruction makes it possible to efficiently exploit several adaptive schemes in which the size of the elements (h-adaptivity) and the order of reconstruction (p-adaptivity) are locally changed. After a review of the main adaptation criteria, some examples related to compressible flows in turbomachinery are presented. A hybrid hp-adaptive algorithm is also proposed and compared with a standard h-adaptive scheme in terms of computational efficiency.

  12. Pricing and simulation for real estate index options: Radial basis point interpolation

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Zou, Dong; Wang, Jiayue

    2018-06-01

    This study employs the meshfree radial basis point interpolation (RBPI) method for pricing real estate derivatives contingent on a real estate index. This method combines radial and polynomial basis functions, which guarantees the Kronecker property of the interpolation scheme and effectively improves accuracy. An exponential change of variables, a mesh refinement algorithm and the Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.
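
    Richardson extrapolation, one of the acceleration devices listed above, combines solutions computed at two mesh sizes to cancel the leading error term. The minimal numerical illustration below uses a simple second-order finite-difference derivative rather than the option-pricing solver itself.

    ```python
    import numpy as np

    def richardson(f_h, f_h2, order):
        """Combine approximations at mesh size h and h/2 assuming error ~ C*h^order."""
        return (2**order * f_h2 - f_h) / (2**order - 1)

    # Illustration: second-order central difference of sin'(1.0) = cos(1.0)
    def dsin(h):
        return (np.sin(1.0 + h) - np.sin(1.0 - h)) / (2.0 * h)

    h = 0.1
    approx = richardson(dsin(h), dsin(h / 2.0), order=2)
    # The extrapolated value is far closer to cos(1.0) than either raw difference
    print(abs(dsin(h) - np.cos(1.0)), abs(approx - np.cos(1.0)))
    ```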

  13. A critical analysis of some popular methods for the discretisation of the gradient operator in finite volume methods

    NASA Astrophysics Data System (ADS)

    Syrakos, Alexandros; Varchanis, Stylianos; Dimakopoulos, Yannis; Goulas, Apostolos; Tsamopoulos, John

    2017-12-01

    Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the gradient operator has received less attention despite its fundamental importance with regards to the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or Green-Gauss) scheme and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively. This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes are then used within a FVM to solve a simple diffusion equation on unstructured grids generated by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement. On the other hand, use of the LS gradient leads to second-order accurate results, as does the use of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the common DT gradient consistent at almost no extra cost. The numerical tests are performed using both an in-house code and the popular public domain partial differential equation solver OpenFOAM.
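
    The two gradient schemes contrasted above can each be written in a few lines. The sketch below evaluates both for a single cell with made-up 2-D geometry: the divergence-theorem (Green-Gauss) gradient uses simple arithmetic face averaging, as in the common variant discussed in the paper, and the least-squares gradient is fitted from centre-to-neighbour differences.

    ```python
    import numpy as np

    def green_gauss_gradient(phi_c, phi_nbrs, face_vectors, volume):
        """Divergence-theorem (Green-Gauss) gradient with simple face averaging.

        phi_nbrs     : field values at the neighbour cell centres, one per face
        face_vectors : outward face area vectors S_f of the cell
        """
        phi_f = 0.5 * (phi_c + phi_nbrs)              # arithmetic face average
        return (phi_f[:, None] * face_vectors).sum(axis=0) / volume

    def least_squares_gradient(phi_c, phi_nbrs, centre, nbr_centres):
        """Unweighted least-squares gradient from centre-to-neighbour differences."""
        d = nbr_centres - centre                      # displacement vectors
        grad, *_ = np.linalg.lstsq(d, phi_nbrs - phi_c, rcond=None)
        return grad

    # Unit square cell centred at the origin; the exact field is phi = 2x + 3y
    phi = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]
    centre = np.array([0.0, 0.0])
    nbr_centres = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    face_vectors = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])

    print(green_gauss_gradient(phi(centre), phi(nbr_centres), face_vectors, volume=1.0))
    print(least_squares_gradient(phi(centre), phi(nbr_centres), centre, nbr_centres))
    # Both print approximately [2. 3.] on this symmetric arrangement
    ```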

  14. Adaptive mesh refinement versus subgrid friction interpolation in simulations of Antarctic ice dynamics

    DOE PAGES

    Cornford, S. L.; Martin, D. F.; Lee, V.; ...

    2016-05-13

    At least in conventional hydrostatic ice-sheet models, the numerical error associated with grounding line dynamics can be reduced by modifications to the discretization scheme. These involve altering the integration formulae for the basal traction and/or driving stress close to the grounding line and exhibit lower – if still first-order – error in the MISMIP3d experiments. MISMIP3d may not represent the variety of real ice streams, in that it lacks strong lateral stresses, and imposes a large basal traction at the grounding line. We study resolution sensitivity in the context of extreme forcing simulations of the entire Antarctic ice sheet, using the BISICLES adaptive mesh ice-sheet model with two schemes: the original treatment, and a scheme which modifies the discretization of the basal traction. The second scheme does indeed improve accuracy – by around a factor of two – for a given mesh spacing, but $\lesssim 1$ km resolution is still necessary. For example, in coarser resolution simulations Thwaites Glacier retreats so slowly that other ice streams divert its trunk. In contrast, with $\lesssim 1$ km meshes, the same glacier retreats far more quickly and triggers the final phase of West Antarctic collapse a century before any such diversion can take place.

  15. A consistent and conservative scheme for MHD flows with complex boundaries on an unstructured Cartesian adaptive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Ni, Ming-Jiu, E-mail: mjni@ucas.ac.cn

    2014-01-01

    The numerical simulation of Magnetohydrodynamics (MHD) flows with complex boundaries has been a topic of great interest in the development of a fusion reactor blanket because of the difficulty of accurately simulating the Hartmann layers and side layers along arbitrary geometries. An adaptive version of a consistent and conservative scheme has been developed for simulating MHD flows. Besides, the present study forms the first attempt to apply the cut-cell approach to irregular wall-bounded MHD flows, which is more flexible and conveniently implemented under the adaptive mesh refinement (AMR) technique. It employs a Volume-of-Fluid (VOF) approach to represent the fluid–conducting wall interface, which makes it possible to solve fluid–solid coupled magnetic problems, with emphasis on how the electric field solver is implemented when the conductivity is discontinuous in a cut-cell. For the irregular cut-cells, the conservative interpolation technique is applied to calculate the Lorentz force at the cell center. On the other hand, it is shown how the consistent and conservative scheme is implemented on fine/coarse mesh boundaries when using the AMR technique. The applied numerical schemes are then validated by five test simulations; excellent agreement was obtained for all the cases considered, showing good consistency and conservation properties.

  16. Student Teachers’ Proof Schemes on Proof Tasks Involving Inequality: Deductive or Inductive?

    NASA Astrophysics Data System (ADS)

    Rosyidi, A. H.; Kohar, A. W.

    2018-01-01

    Exploring student teachers’ proof ability is crucial, as it is important for improving the quality of their learning process and helps their future students learn how to construct a proof. Hence, this study aims at exploring the proof schemes of student teachers at the beginning of their studies. Data were collected from 130 proofs produced by 65 Indonesian student teachers on two proof tasks involving algebraic inequality. For the analysis, the proofs were classified into the refined proof scheme levels proposed by Lee (2016), ranging from inductive proofs, which only provide irrelevant inferences, to deductive proofs, which address formal representation. Findings present several examples of each of Lee’s levels in the student teachers’ proofs, spanning irrelevant inferences, novice use of examples or logical reasoning, strategic use of examples for reasoning, deductive inferences with major and minor logical coherence, and deductive proof with informal and formal representation. Besides, it was also found that more than half of the students’ proofs were coded as inductive schemes, which do not meet the requirements of the proof tasks examined in this study. This study suggests that teacher educators in teacher colleges reform the curriculum regarding proof learning so that it accommodates the improvement of student teachers’ proving ability from inductive to deductive proof as well as from informal to formal proof.

  17. A fault diagnosis scheme for planetary gearboxes using adaptive multi-scale morphology filter and modified hierarchical permutation entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang

    2018-05-01

    The fault diagnosis of planetary gearboxes is crucial to reduce the maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is firstly adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, Laplacian score (LS) approach is employed to refine the fault features. In the end, the obtained features are fed into the binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
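
    For orientation, plain (single-scale) permutation entropy can be computed as below; hierarchical and multi-scale variants such as the MHPE of the paper apply an ordinal-pattern entropy of this kind to decomposed or coarse-grained versions of the signal. This is a generic sketch, not the authors' code:

      import numpy as np
      from math import factorial
      from collections import Counter

      def permutation_entropy(x, order=3, delay=1, normalize=True):
          """Single-scale permutation entropy: count the relative frequencies of
          ordinal patterns of length `order` and take their Shannon entropy."""
          x = np.asarray(x)
          n = len(x) - (order - 1) * delay
          patterns = Counter(
              tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)
          )
          p = np.array(list(patterns.values()), dtype=float) / n
          h = -np.sum(p * np.log2(p))
          return h / np.log2(factorial(order)) if normalize else h

      rng = np.random.default_rng(0)
      print(permutation_entropy(rng.normal(size=2000)))   # white noise: close to 1
      print(permutation_entropy(np.arange(2000)))         # monotone ramp: 0.0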

  18. Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species

    NASA Astrophysics Data System (ADS)

    Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar

    2018-02-01

    The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.
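
    For reference, a commonly quoted form of such an n-species cross-diffusion system (the Shigesada-Kawasaki-Teramoto structure generalised to n species; the exact coefficients and reaction terms of the paper may differ from this sketch) is

      \partial_t u_i = \Delta\Big( a_{i0} u_i + \sum_{j=1}^{n} a_{ij} u_i u_j \Big)
                       + u_i \Big( b_{i0} - \sum_{j=1}^{n} b_{ij} u_j \Big),
      \qquad i = 1, \dots, n,

    posed in a bounded domain with homogeneous Neumann conditions \nabla u_i \cdot \nu = 0 on the boundary. In this notation the detailed balance condition mentioned above corresponds to the existence of positive weights \pi_i with \pi_i a_{ij} = \pi_j a_{ji} for all i, j.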

  19. Advances in land modeling of KIAPS based on the Noah Land Surface Model

    NASA Astrophysics Data System (ADS)

    Koo, Myung-Seo; Baek, Sunghye; Seol, Kyung-Hee; Cho, Kyoungmi

    2017-08-01

    As of 2013, the Noah Land Surface Model (LSM) version 2.7.1 was implemented in a new global model being developed at the Korea Institute of Atmospheric Prediction Systems (KIAPS). This land surface scheme is further refined in two respects, by adding new physical processes and by updating surface input parameters. Thus, the treatment of glacier land, sea ice, and snow cover is addressed more realistically. Inconsistencies in the amount of absorbed solar flux at ground level between the land surface and radiative processes are rectified. In addition, new parameters are made available by using 1-km land cover data, which had usually not been possible at a global scale. Land surface albedo/emissivity climatology is newly created using Moderate-Resolution Imaging Spectroradiometer (MODIS) satellite-based data and adjusted parameterization. These updates have been applied to the KIAPS-developed model and generally provide a positive impact on near-surface weather forecasting.

  20. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
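
    As background, the elementary move underlying any temperature replica exchange scheme is the Metropolis swap test between two replicas; the WTE-H-REMC protocol adds Hamiltonian and well-tempered-ensemble dimensions on top of this basic rule. A minimal generic sketch (not Rosetta code):

      import math, random

      def swap_accept(E_i, E_j, T_i, T_j, kB=1.0):
          """Metropolis criterion for exchanging configurations between replicas
          at temperatures T_i and T_j with potential energies E_i and E_j."""
          delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_j - E_i)
          return delta <= 0 or random.random() < math.exp(-delta)

      # a cold, high-energy replica readily swaps with a hot, low-energy one
      print(swap_accept(E_i=-120.0, E_j=-150.0, T_i=1.0, T_j=2.0))   # True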

  1. Experimental and Theoretical Study of Propeller Spinner/Shank Interference. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Cornell, C. C.

    1986-01-01

    A fundamental experimental and theoretical investigation into the aerodynamic interference associated with propeller spinner and shank regions was conducted. The research program involved a theoretical assessment of solutions previously proposed, followed by a systematic experimental study to supplement the existing data base. As a result, a refined computational procedure was established for prediction of interference effects in terms of interference drag and resolved into propeller thrust and torque components. These quantities were examined with attention to engineering parameters such as two spinner fineness ratios, three blade shank forms, and two/three/four/six/eight blades. Consideration of the physics of the phenomena aided in the logical deduction of two individual interference quantities (cascade effects and spinner/shank juncture interference). These interference effects were semi-empirically modeled using existing theories and placed into a form compatible with an existing propeller performance scheme, which provided the basis for examples of application.

  2. A fault diagnosis scheme for planetary gearboxes using modified multi-scale symbolic dynamic entropy and mRMR feature selection

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu

    2017-07-01

    Health condition identification of planetary gearboxes is crucial to reduce the downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearbox. MMSDE is proposed to quantify the regularity of time series, which can assess the dynamical characteristics over a range of scales. MMSDE has obvious advantages in the detection of dynamical changes and computation efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained new features are fed into the least square support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.

  3. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1988-01-01

    The use and implementation of Ada were investigated in distributed environments in which reliability is the primary concern. In particular, the focus was on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with the processors on which they execute, and that failures may occur in the software and underlying hardware. A secondary interest is in the performance of Ada systems and how that performance can be gauged reliably. Primary activities included: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; development of a refined approach to recovery that was applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.

  4. A biopsychosocial vignette for case conceptualization in dementia (VIG-Dem): development and pilot study.

    PubMed

    Spector, Aimee; Hebditch, Molly; Stoner, Charlotte R; Gibbor, Luke

    2016-09-01

    The ability to identify biological, social, and psychological issues for people with dementia is an important skill for healthcare professionals. Therefore, valid and reliable measures are needed to assess this ability. This study involves the development of a vignette-style measure to capture the extent to which health professionals use "Biopsychosocial" thinking in dementia care (VIG-Dem), based on the framework of the model developed by Spector and Orrell (2010). The development process consisted of Phase 1: Developing and refining the vignettes; Phase 2: Field testing (N = 9); and Phase 3: A pilot study to assess reliability and validity (N = 131). The VIG-Dem, consisting of two vignettes with open-ended questions and a standardized scoring scheme, was developed. Evidence of good inter-rater reliability, convergent validity, and test-retest reliability was established. The VIG-Dem has good psychometric properties and may provide a useful tool in dementia care research and practice.

  5. Iris recognition using possibilistic fuzzy matching on local features.

    PubMed

    Tsai, Chung-Chih; Lin, Heng-Yi; Taur, Jinshiuh; Tao, Chin-Wang

    2012-02-01

    In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which can provide a robust and effective matching scheme for two sets of iris feature points. In addition, the nonlinear normalization model is adopted to provide a more accurate position before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries to smooth curves. For feature extraction, the Gabor filters are adopted to detect the local feature points from the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. After that, the proposed matching algorithm is used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.

  6. Recent assimilation developments of FOAM the Met Office ocean forecast system

    NASA Astrophysics Data System (ADS)

    Lea, Daniel; Martin, Matthew; Waters, Jennifer; Mirouze, Isabelle; While, James; King, Robert

    2015-04-01

    FOAM is the Met Office's operational ocean forecasting system. This system comprises a range of models, from a 1/4 degree resolution global model to 1/12 degree resolution regional models and shelf-seas models at 7 km resolution. The system is made up of the ocean model NEMO (Nucleus for European Modelling of the Ocean), the Los Alamos sea ice model CICE and the NEMOVAR assimilation run in 3D-VAR FGAT mode. Work is ongoing both to transition to a higher resolution global ocean model at 1/12 degrees and to run FOAM in coupled models. The FOAM system generally performs well. One area of concern, however, is the performance in the tropics, where spurious oscillations and excessive vertical velocity gradients are found after assimilation. NEMOVAR includes a balance operator which in the extra-tropics uses geostrophic balance to produce velocity increments which balance the density increments applied. In the tropics, however, the main balance is between the pressure gradients produced by the density gradient and the applied wind stress. A scheme is presented which aims to maintain this balance when increments are applied. Another issue in FOAM is that there are sometimes persistent temperature and salinity errors which are not effectively corrected by the assimilation. The standard NEMOVAR has a single correlation length scale based on the local Rossby radius. This means that observations in the extra-tropics have influence on the model only on short length-scales. In order to maximise the information extracted from the observations and to correct large-scale model biases, a multiple correlation length-scale scheme has been developed. This includes a larger length scale which spreads observation information further. Various refinements of the scheme are also explored, including reducing the longer length-scale component at the edge of the sea ice and in areas with high potential vorticity gradients. A related scheme which varies the correlation length scale in the shelf seas is also described.

  7. Aerosol Complexity and Implications for Predictability and Short-Term Forecasting

    NASA Technical Reports Server (NTRS)

    Colarco, Peter

    2016-01-01

    There are clear NWP and climate impacts from including aerosol radiative and cloud interactions. Changes in dynamics and cloud fields affect the aerosol lifecycle, plume height, long-range transport, overall forcing of the climate system, etc. Inclusion of aerosols in NWP systems benefits surface field biases (e.g., T2m, U10m). Including aerosol effects has an impact on analysis increments and can have statistically significant impacts on, e.g., tropical cyclogenesis. The above points apply especially to aerosol radiative interactions, but aerosol-cloud interaction is a bigger signal on the global system. Many of these impacts are realized even in models with relatively simple (bulk) aerosol schemes (approximately 10-20 tracers). Simple schemes, though, imply a simple representation of aerosol absorption and, importantly for aerosol-cloud interaction, of the particle-size distribution. Even so, more complex schemes exhibit a lot of diversity between different models, with issues such as size selection both for emitted particles and for modes. There are prospects for complex sectional schemes to tune modal (and even bulk) schemes toward a better selection of size representation. This remains a ripe topic for further research: systematic documentation of the benefits of no versus climatological versus interactive (direct and then direct+indirect) aerosols; documentation of the aerosol impact on analysis increments and its inclusion in the NWP data assimilation operator; and further refinement of baseline assumptions in model design (e.g., absorption, particle size distribution). Model resolution and the interplay of other physical processes with aerosols (e.g., moist physics and chemistry, which are clearly important) are not addressed here.

  8. Low-energy effective Hamiltonians for correlated electron systems beyond density functional theory

    NASA Astrophysics Data System (ADS)

    Hirayama, Motoaki; Miyake, Takashi; Imada, Masatoshi; Biermann, Silke

    2017-08-01

    We propose a refined scheme of deriving an effective low-energy Hamiltonian for materials with strong electronic Coulomb correlations beyond density functional theory (DFT). By tracing out the electronic states away from the target degrees of freedom in a controlled way by a perturbative scheme, we construct an effective Hamiltonian for a restricted low-energy target space incorporating the effects of high-energy degrees of freedom in an effective manner. The resulting effective Hamiltonian can afterwards be solved by accurate many-body solvers. We improve this "multiscale ab initio scheme for correlated electrons" (MACE) primarily in two directions by elaborating and combining two frameworks developed by Hirayama et al. [M. Hirayama, T. Miyake, and M. Imada, Phys. Rev. B 87, 195144 (2013), 10.1103/PhysRevB.87.195144] and Casula et al. [M. Casula, P. Werner, L. Vaugier, F. Aryasetiawan, T. Miyake, A. J. Millis, and S. Biermann, Phys. Rev. Lett. 109, 126408 (2012), 10.1103/PhysRevLett.109.126408]: (1) Double counting of electronic correlations between the DFT and the low-energy solver is avoided by using the constrained GW scheme; and (2) the frequency dependent interactions emerging from the partial trace summation are successfully separated into a nonlocal part that is treated following ideas by Hirayama et al. and a local part treated nonperturbatively in the spirit of Casula et al. and are incorporated into the renormalization of the low-energy dispersion. The scheme is favorably tested on the example of SrVO3.

  9. Fully automated MR liver volumetry using watershed segmentation coupled with active contouring.

    PubMed

    Huynh, Hieu Trung; Le-Trong, Ngoc; Bao, Pham The; Oto, Aytek; Suzuki, Kenji

    2017-02-01

    Our purpose is to develop a fully automated scheme for liver volume measurement in abdominal MR images, without requiring any user input or interaction. The proposed scheme is fully automatic for liver volumetry from 3D abdominal MR images, and it consists of three main stages: preprocessing, rough liver shape generation, and liver extraction. The preprocessing stage reduced noise and enhanced the liver boundaries in 3D abdominal MR images. The rough liver shape was revealed fully automatically by using the watershed segmentation, thresholding transform, morphological operations, and statistical properties of the liver. An active contour model was applied to refine the rough liver shape to precisely obtain the liver boundaries. The liver volumes calculated by the proposed scheme were compared to the "gold standard" references which were estimated by an expert abdominal radiologist. The liver volumes computed using our developed scheme agreed excellently (intra-class correlation coefficient of 0.94) with the "gold standard" manual volumes from the radiologist in an evaluation with 27 cases from multiple medical centers. The running time was 8.4 min per case on average. We developed a fully automated liver volumetry scheme for MR, which does not require any interaction by users. It was evaluated with cases from multiple medical centers. The liver volumetry performance of our developed system was comparable to that of the gold standard manual volumetry, and it saved the 24.7 min per case of radiologists' time otherwise spent on manual liver volumetry.

  10. Algebraic multigrid preconditioning within parallel finite-element solvers for 3-D electromagnetic modelling problems in geophysics

    NASA Astrophysics Data System (ADS)

    Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María

    2014-06-01

    We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers and the wave front algorithm to create groups, which are used for coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed a series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results have shown that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, in cases in which other preconditioners succeed in converging to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code, by up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improvement in convergence regardless of how big local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the parallel context.
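
    As a rough illustration of how an AMG preconditioner is typically combined with a stabilized biconjugate gradient solver, the sketch below uses the pyamg and scipy packages on a model Poisson matrix standing in for the FE system matrix. The paper's own preconditioner uses wave-front coarsening and Jacobi/SSOR/Gauss-Seidel smoothers rather than this default smoothed-aggregation setup, so this is a generic stand-in, not the authors' scheme:

      import numpy as np
      from scipy.sparse.linalg import bicgstab
      from pyamg import smoothed_aggregation_solver   # assumed available
      from pyamg.gallery import poisson

      # model problem: a 2-D Poisson matrix standing in for the FE system matrix
      A = poisson((200, 200), format='csr')
      b = np.random.default_rng(1).normal(size=A.shape[0])

      # smoothed-aggregation AMG hierarchy wrapped as a V-cycle preconditioner
      ml = smoothed_aggregation_solver(A)
      M = ml.aspreconditioner(cycle='V')

      # preconditioned BiCGStab; info == 0 signals convergence
      x, info = bicgstab(A, b, M=M, maxiter=200)
      print("converged" if info == 0 else f"info = {info}")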

  11. A fault diagnosis scheme for rolling bearing based on local mean decomposition and improved multiscale fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu

    2016-01-01

    This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of time series over a range of scales based on fuzzy entropy. Besides, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identifications. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different fault categories and severities of rolling bearings.

  12. Simulation of axisymmetric jets with a finite element Navier-Stokes solver and a multilevel VOF approach

    NASA Astrophysics Data System (ADS)

    Cervone, A.; Manservisi, S.; Scardovelli, R.

    2010-09-01

    A multilevel VOF approach has been coupled to an accurate finite element Navier-Stokes solver in axisymmetric geometry for the simulation of incompressible liquid jets with high density ratios. The representation of the color function over a fine grid has been introduced to reduce the discontinuity of the interface at the cell boundary. In the refined grid the automatic breakup and coalescence occur at a spatial scale much smaller than the coarse grid spacing. To reduce memory requirements, we have implemented on the fine grid a compact storage scheme which memorizes the color function data only in the mixed cells. The capillary force is computed by using the Laplace-Beltrami operator and a volumetric approach for the two principal curvatures. Several simulations of axisymmetric jets have been performed to show the accuracy and robustness of the proposed scheme.

  13. Counterrotating prop-fan simulations which feature a relative-motion multiblock grid decomposition enabling arbitrary time-steps

    NASA Technical Reports Server (NTRS)

    Janus, J. Mark; Whitfield, David L.

    1990-01-01

    Improvements to a computer algorithm developed for the time-accurate flow analysis of rotating machines are presented. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.

  14. Adaptive Packet Combining Scheme in Three State Channel Model

    NASA Astrophysics Data System (ADS)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

    Two popular packet-combining-based error correction techniques are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits; the PC scheme has better throughput than the APC scheme but suffers from a higher packet error rate. The wireless channel state changes continuously. Because of this random and time-varying nature of the wireless channel, individual application of the SR ARQ, PC or APC scheme cannot give the desired levels of throughput. Better throughput can be achieved if the appropriate transmission scheme is chosen based on the condition of the channel. Following this approach, an adaptive packet combining scheme is proposed to achieve better throughput. The proposed scheme adapts to the channel condition, carrying out transmission using the PC, APC or SR ARQ scheme as appropriate. Experimentally, it was observed that the error correction capability and throughput of the proposed scheme were significantly better than those of the SR ARQ, PC and APC schemes individually.
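
    A toy selector in the spirit of the adaptive scheme described above might switch between SR ARQ, PC and APC according to an estimate of the current channel state; the bit-error-rate criterion and thresholds below are purely illustrative assumptions, not taken from the paper:

      def choose_scheme(bit_error_rate, low=1e-4, high=1e-2):
          """Toy channel-adaptive selector: plain SR ARQ on a good channel,
          packet combining (PC) in a moderately noisy state, aggressive packet
          combining (APC) on a bad channel.  Thresholds are illustrative only."""
          if bit_error_rate < low:
              return "SR ARQ"
          if bit_error_rate < high:
              return "PC"
          return "APC"

      for ber in (1e-5, 1e-3, 5e-2):
          print(ber, "->", choose_scheme(ber))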

  15. Acute, subchronic, and developmental toxicological properties of lubricating oil base stocks.

    PubMed

    Dalbey, Walden E; McKee, Richard H; Goyak, Katy Olsavsky; Biles, Robert W; Murray, Jay; White, Russell

    2014-01-01

    Lubricating oil base stocks (LOBs) are substances used in the manufacture of finished lubricants and greases. They are produced from residue remaining after atmospheric distillation of crude oil that is subsequently fractionated by vacuum distillation and additional refining steps. Initial LOB streams that have been produced by vacuum distillation but not further refined may contain polycyclic aromatic compounds (PACs) and may present carcinogenic hazards. In modern refineries, LOBs are further refined by multistep processes including solvent extraction and/or hydrogen treatment to reduce the levels of PACs and other undesirable constituents. Thus, mildly (insufficiently) refined LOBs are potentially more hazardous than more severely (sufficiently) refined LOBs. This article discusses the evaluation of LOBs using statistical models based on content of PACs; these models indicate that insufficiently refined LOBs (potentially carcinogenic LOBs) can also produce systemic and developmental effects with repeated dermal exposure. Experimental data were also obtained in ten 13-week dermal studies in rats, eight 4-week dermal studies in rabbits, and seven dermal developmental toxicity studies with sufficiently refined LOBs (noncarcinogenic and commonly marketed) in which no observed adverse effect levels for systemic toxicity and developmental toxicity were 1000 to 2000 mg/kg/d with dermal exposures, typically the highest dose tested. Results in both oral and inhalation developmental toxicity studies were similar. This absence of toxicologically relevant findings was consistent with lower PAC content of sufficiently refined LOBs. Based on data on reproductive organs with repeated dosing and parameters in developmental toxicity studies, sufficiently refined LOBs are likely to have little, if any, effect on reproductive parameters.

  16. Simulations of cloudy hyperspectral infrared radiances using the HT-FRTC, a fast PC-based multipurpose radiative transfer code

    NASA Astrophysics Data System (ADS)

    Havemann, S.; Aumann, H. H.; Desouza-Machado, S. G.

    2017-12-01

    The HT-FRTC uses principal components which cover the spectrum at a very high spectral resolution, allowing very fast line-by-line-like, hyperspectral and broadband simulations for satellite-based, airborne and ground-based sensors. Using data from IASI and from the Airborne Research Interferometer Evaluation System (ARIES) on board the FAAM BAE 146 aircraft, variational retrievals in principal component space with HT-FRTC as forward model have demonstrated that valuable information on temperature and humidity profiles and on cirrus cloud properties can be obtained simultaneously. The NASA/JPL/UMBC cloudy RTM inter-comparison project has been working on a global dataset consisting of 7377 AIRS spectra. Initial simulations with HT-FRTC for this dataset have been promising. A next step taken here is to investigate how sensitive the results are with respect to different assumptions in the cloud modelling. One aspect of this is to study how assumptions about the microphysical and related optical properties of liquid/ice clouds impact the statistics of the agreement between model and observations. The other aspect concerns the cloud overlap scheme. Different schemes have been tested (maximum, random, maximum-random). As the computational cost increases linearly with the number of cloud columns, it will be investigated whether there is an optimal number of columns beyond which there is little additional benefit to be gained. During daytime the high wave number channels of AIRS are affected by solar radiation. With full scattering calculations using a monochromatic version of the Edwards-Slingo radiation code, the HT-FRTC can model solar radiation reasonably well, but full scattering calculations are relatively expensive. Pure Chou scaling, on the other hand, cannot properly describe scattering of solar radiation by clouds and requires additional refinements.

  17. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  18. Toward an optimisation technique for dynamically monitored environment

    NASA Astrophysics Data System (ADS)

    Shurrab, Orabi M.

    2016-10-01

    The data fusion community has introduced multiple procedures for situational assessment to facilitate timely responses to emerging situations. More directly, the process refinement level of the Joint Directors of Laboratories (JDL) model is a meta-process that assesses and improves the data fusion task during real-time operation; in other words, it is an optimisation technique to verify the overall data fusion performance and enhance it toward the top goals of the decision-making resources. This paper discusses the theoretical concept of prioritisation, where the analyst team is required to keep up to date with a dynamically changing environment spanning domains such as air, sea, land, space and cyberspace. Furthermore, it demonstrates an illustrative example of how various tracking activities are ranked simultaneously into a predetermined order. Specifically, it presents a modelling scheme for a case-study-based scenario in which the real-time system reports different classes of prioritised events, followed by a performance metric for evaluating the prioritisation process in the situational awareness (SWA) domain. The proposed performance metric has been designed and evaluated using an analytical approach. The modelling scheme represents the situational awareness system outputs mathematically, in the form of a list of activities. Such methods allowed the evaluation process to conduct a rigorous analysis of the prioritisation process, despite any constraints related to a domain-specific configuration. After three levels of assessment over three separate scenarios, the Prioritisation Capability Score (PCS) provided an appropriate scoring scheme for different ranking instances. Indeed, from the data fusion perspective, the proposed metric has assessed real-time system performance adequately, and it is capable of conducting a verification process to direct the operator's attention to any issue concerning the prioritisation capability of the situational awareness domain.

  19. Control of parallel manipulators using force feedback

    NASA Technical Reports Server (NTRS)

    Nanua, Prabjot

    1994-01-01

    Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', uses the position and rate information only for feedback. The second scheme, the 'force based scheme', also feeds back the force information. The force control scheme is shown to improve the response over the rate control one. It is a simple constant-gain control scheme better suited to parallel mechanisms. The force control scheme can be easily modified for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.

  20. Electron Beam Melting and Refining of Metals: Computational Modeling and Optimization

    PubMed Central

    Vutova, Katia; Donchev, Veliko

    2013-01-01

    Computational modeling offers an opportunity for a better understanding and investigation of thermal transfer mechanisms. It can be used for the optimization of the electron beam melting process and for obtaining new materials with improved characteristics that have many applications in the power industry, medicine, instrument engineering, electronics, etc. A time-dependent 3D axisymmetric heat model for the simulation of thermal transfer in metal ingots solidified in a water-cooled crucible during electron beam melting and refining (EBMR) is developed. The model predicts the change in the temperature field in the casting ingot during the interaction of the beam with the material. A modified Pismen-Rekford numerical scheme is developed to discretize the analytical model. The equation systems describing the thermal processes and the main characteristics of the developed numerical method are presented. In order to optimize the technological regimes, different criteria for better refinement and for obtaining dendrite crystal structures are proposed. Analytical problems of mathematical optimization are formulated, discretized and heuristically solved by cluster methods. Using simulation results that are important for practice, suggestions can be made for optimizing the EBMR technology. The proposed tool is important and useful for studying, controlling and optimizing EBMR process parameters and for improving the quality of the newly produced materials. PMID:28788351

  1. Solving the transport equation with quadratic finite elements: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, J.M.

    1997-12-31

    At the 4th Joint Conference on Computational Mathematics, the author presented a paper introducing a new quadratic finite element scheme (QFEM) for solving the transport equation. In the ensuing year the author has obtained considerable experience in the application of this method, including solution of eigenvalue problems, transmission problems, and solution of the adjoint form of the equation as well as the usual forward solution. He will present detailed results, and will also discuss other refinements of his transport codes, particularly for 3-dimensional problems on rectilinear and non-rectilinear grids.

  2. An alternative regionalization scheme for defining nutrient criteria for rivers and streams

    USGS Publications Warehouse

    Robertson, Dale M.; Saad, David A.; Wieben, Ann M.

    2001-01-01

    The environmental nutrient zone approach can be applied to specific states or nutrient ecoregions and used to develop criteria as a function of stream type. This approach can also be applied on the basis of environmental characteristics of the watershed alone rather than the general environmental characteristics from the region in which the site is located. The environmental nutrient zone approach will enable states to refine the basic nutrient criteria established by the USEPA by developing attainable criteria given the environmental characteristics where the streams are located.

  3. Hydrometallurgical methods of recovery of scandium from the wastes of various technologies

    NASA Astrophysics Data System (ADS)

    Molchanova, T. V.; Akimova, I. D.; Smirnov, K. M.; Krylova, O. K.; Zharova, E. V.

    2017-03-01

    The recovery of scandium from the wastes of uranium, titanium, iron-vanadium, and alumina production is studied. The applied acid schemes for transferring scandium into solution, followed by ion-exchange recovery and extraction concentration of scandium, ensure the precipitation of crude scandium oxides containing up to 5% Sc2O3. Scandium oxides of 99.96-99.99% purity are obtained after additional refining of these crude oxides by an extraction technology using a mixture of 15% multiradical phosphine oxide or Cyanex-925 + 15% tributyl phosphate in kerosene.

  4. An adaptive method for a model of two-phase reactive flow on overlapping grids

    NASA Astrophysics Data System (ADS)

    Schwendeman, D. W.

    2008-11-01

    A two-phase model of heterogeneous explosives is handled computationally by a new numerical approach that is a modification of the standard Godunov scheme. The approach generates well-resolved and accurate solutions using adaptive mesh refinement on overlapping grids, and treats rationally the nozzling terms that render the otherwise hyperbolic model incapable of a conservative representation. The evolution and structure of detonation waves for a variety of one and two-dimensional configurations will be discussed with a focus given to problems of detonation diffraction and failure.

  5. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
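
    To make the downward delivery of the parent head solution concrete, the sketch below interpolates parent heads linearly in time to the child time level and then maps them onto child boundary nodes. The inverse-distance spatial weighting and all names are illustrative assumptions, not the authors' scheme:

      import numpy as np

      def child_boundary_heads(t, parent_times, parent_heads, child_xy, parent_xy):
          """Interpolate parent-model heads (one time series per parent node) to
          the child time t, then map them onto child boundary nodes with
          inverse-distance weighting in space."""
          h_t = np.array([np.interp(t, parent_times, h) for h in parent_heads])
          out = np.empty(len(child_xy))
          for k, p in enumerate(np.asarray(child_xy, dtype=float)):
              d = np.linalg.norm(np.asarray(parent_xy, dtype=float) - p, axis=1)
              w = 1.0 / np.maximum(d, 1e-12) ** 2
              out[k] = np.sum(w * h_t) / np.sum(w)
          return out

      # two parent nodes, two coarse time levels, one child boundary node
      print(child_boundary_heads(0.5, [0.0, 1.0],
                                 [[10.0, 9.0], [12.0, 11.5]],
                                 child_xy=[[0.5, 0.0]],
                                 parent_xy=[[0.0, 0.0], [1.0, 0.0]]))   # ~[10.625]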

  6. Refinement of current monitoring methodology for electroosmotic flow assessment under low ionic strength conditions

    PubMed Central

    Saucedo-Espinosa, Mario A.; Lapizco-Encinas, Blanca H.

    2016-01-01

    Current monitoring is a well-established technique for the characterization of electroosmotic (EO) flow in microfluidic devices. This method relies on monitoring the time response of the electric current when a test buffer solution is displaced by an auxiliary solution using EO flow. In this scheme, each solution has a different ionic concentration (and electric conductivity). The difference in the ionic concentration of the two solutions defines the dynamic time response of the electric current and, hence, the current signal to be measured: larger concentration differences result in larger measurable signals. A small concentration difference is needed, however, to avoid dispersion at the interface between the two solutions, which can result in undesired pressure-driven flow that conflicts with the EO flow. Additional challenges arise as the conductivity of the test solution decreases, leading to a reduced electric current signal that may be masked by noise during the measuring process, making for a difficult estimation of an accurate EO mobility. This contribution presents a new scheme for current monitoring that employs multiple channels arranged in parallel, producing an increase in the signal-to-noise ratio of the electric current to be measured and increasing the estimation accuracy. The use of this parallel approach is particularly useful in the estimation of the EO mobility in systems where low conductivity mediums are required, such as insulator based dielectrophoresis devices. PMID:27375813

  7. Axial postbuckling analysis of multilayer functionally graded composite nanoplates reinforced with GPLs based on nonlocal strain gradient theory

    NASA Astrophysics Data System (ADS)

    Sahmani, S.; Aghdam, M. M.

    2017-11-01

    In this paper, a new size-dependent inhomogeneous plate model is constructed to analyze the nonlinear buckling and postbuckling characteristics of multilayer functionally graded composite nanoplates reinforced with graphene platelet (GPL) nanofillers under axial compressive load. For this purpose, the nonlocal strain gradient theory of elasticity is implemented into a refined hyperbolic shear deformation plate theory. The mechanical properties of multilayer graphene platelet-reinforced composite (GPLRC) nanoplates are evaluated based upon the Halpin-Tsai micromechanical scheme. The weight fraction of randomly dispersed GPLs either remains constant in each individual layer, which results in the U-GPLRC nanoplate, or changes layerwise in accordance with three different functionally graded patterns, which yield the X-GPLRC, O-GPLRC and A-GPLRC nanoplates. Via a two-step perturbation technique, explicit analytical expressions for nonlocal strain gradient stability paths are established for layerwise functionally graded GPLRC nanoplates. It is demonstrated that both the nonlocal and strain gradient size dependencies are more significant for multilayer GPLRC nanoplates filled with GPL nanofillers with higher length-to-thickness and width-to-thickness ratios.
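
    For orientation, the Halpin-Tsai estimate referenced above is commonly written, for the effective modulus of a layer with filler volume fraction V_f, as

      \frac{E_c}{E_m} = \frac{1 + \xi \, \eta \, V_f}{1 - \eta \, V_f},
      \qquad
      \eta = \frac{E_f / E_m - 1}{E_f / E_m + \xi},

    where E_f and E_m are the filler and matrix moduli and \xi is a geometry parameter, often taken proportional to the platelet aspect ratio (for example 2l/t or 2w/t for the GPL length and width directions). The exact geometry factors used in the paper above may differ from these common choices.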

  8. Accurate registration of temporal CT images for pulmonary nodules detection

    NASA Astrophysics Data System (ADS)

    Yan, Jichao; Jiang, Luan; Li, Qiang

    2017-02-01

    Interpretation of temporal CT images could help the radiologists to detect some subtle interval changes in the sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. Firstly, affine transformation was applied in the segmented lung region to obtain global coarse registration images. Secondly, B-splines based free-form deformation (FFD) was used to refine the coarse registration images. Thirdly, Demons algorithm was performed to align the feature points extracted from the registered images in the second step and the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. The preliminary results showed that approximately 96.7% of cases achieved accurate registration based on subjective observation. The subtraction images of the reference images and the rigid and non-rigid registered images could effectively remove the normal structures (i.e. blood vessels) and retain the abnormalities (i.e. pulmonary nodules). This would be useful for the screening of lung cancer in our future study.

  9. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly-available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground-motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  10. An efficient and accurate molecular alignment and docking technique using ab initio quality scoring

    PubMed Central

    Füsti-Molnár, László; Merz, Kenneth M.

    2008-01-01

    An accurate and efficient molecular alignment technique is presented based on first principle electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied for accelerating optimizations in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results of the literature. A new way of refinement of existing shape based alignments is also proposed by using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable for overlap, Coulomb, kinetic energy, etc., quantum similarity measures and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561

  11. Finite element mesh refinement criteria for stress analysis

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1990-01-01

    This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.
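
    A minimal sketch of a fixed-fraction marking step for h-version refinement, driven by an abstract per-element error indicator (for example one based on strain energy or stress at critical points, as discussed above); the marking fraction and indicator values are illustrative choices, not from the paper:

      import numpy as np

      def mark_for_h_refinement(element_error, fraction=0.2):
          """Fixed-fraction marking: flag the `fraction` of elements with the
          largest error indicators for subdivision (h-refinement)."""
          err = np.asarray(element_error, dtype=float)
          n_mark = max(1, int(np.ceil(fraction * err.size)))
          return np.argsort(err)[::-1][:n_mark]

      print(mark_for_h_refinement([0.01, 0.4, 0.02, 0.15, 0.03]))   # -> [1]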

  12. A WENO-solver combined with adaptive momentum discretization for the Wigner transport equation and its application to resonant tunneling diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorda, Antonius, E-mail: dorda@tugraz.at; Schürrer, Ferdinand, E-mail: ferdinand.schuerrer@tugraz.at

    2015-03-01

    We present a novel numerical scheme for the deterministic solution of the Wigner transport equation, especially suited to deal with situations in which strong quantum effects are present. The unique feature of the algorithm is the expansion of the Wigner function in local basis functions, similar to finite element or finite volume methods. This procedure yields a discretization of the pseudo-differential operator that conserves the particle density on arbitrarily chosen grids. The high flexibility in refining the grid spacing together with the weighted essentially non-oscillatory (WENO) scheme for the advection term allows for an accurate and well-resolved simulation of the phase space dynamics. A resonant tunneling diode is considered as test case and a detailed convergence study is given by comparing the results to a non-equilibrium Green's functions calculation. The impact of the considered domain size and of the grid spacing is analyzed. The obtained convergence of the results towards a quasi-exact agreement of the steady state Wigner and Green's functions computations demonstrates the accuracy of the scheme, as well as the high flexibility to adjust to different physical situations.

  13. Synthetic turbulence

    NASA Astrophysics Data System (ADS)

    Juneja, A.; Lathrop, D. P.; Sreenivasan, K. R.; Stolovitzky, G.

    1994-06-01

    A family of schemes is outlined for constructing stochastic fields that are close to turbulence. The fields generated from the more sophisticated versions of these schemes differ little in terms of one-point and two-point statistics from velocity fluctuations in high-Reynolds-number turbulence; we shall designate such fields as synthetic turbulence. All schemes, implemented here in one dimension, consist of the following three ingredients, but differ in various details. First, a simple multiplicative procedure is utilized for generating an intermittent signal which has the same properties as those of the turbulent energy dissipation rate ɛ. Second, the properties of the intermittent signal averaged over an interval of size r are related to those of longitudinal velocity increments Δu(r), evaluated over the same distance r, through a stochastic variable V introduced in the spirit of Kolmogorov's refined similarity hypothesis. The third and final step, which partially resembles a well-known procedure for constructing fractional Brownian motion, consists of suitably combining velocity increments to construct an artificial velocity signal. Various properties of the synthetic turbulence are obtained both analytically and numerically, and found to be in good agreement with measurements made in the atmospheric surface layer. A brief review of some previous models is provided.
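
    The first ingredient described above (a multiplicative procedure producing an intermittent, dissipation-like signal) can be illustrated with a simple binomial cascade. The specific multiplier distribution below is an arbitrary illustrative choice, not the one used by the authors:

      import numpy as np

      def multiplicative_cascade(levels=14, seed=0):
          """Binomial multiplicative cascade: repeatedly split each interval in
          two and multiply the halves by random positive weights whose average
          equals one, producing an intermittent, dissipation-like signal."""
          rng = np.random.default_rng(seed)
          eps = np.ones(1)
          for _ in range(levels):
              w = rng.uniform(0.3, 1.7, size=eps.size)           # left multipliers
              eps = np.column_stack([eps * w, eps * (2.0 - w)]).ravel()
          return eps

      signal = multiplicative_cascade()
      print(signal.size, signal.mean(), signal.max() / signal.mean())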

  14. Using adaptive grid in modeling rocket nozzle flow

    NASA Technical Reports Server (NTRS)

    Chow, Alan S.; Jin, Kang-Ren

    1992-01-01

    The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and the convergence of the solution of the system of equations will depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one of the schemes which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite-difference domain of the flow field, called the grid, neither has to be very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it in the computer algorithm itself.
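
    One common way to realize such a self-adapting grid in one dimension is equidistribution of a gradient-based monitor function; the sketch below illustrates that general principle and is not the scheme used in the paper.

```python
import numpy as np

def adapt_grid(x, u, n_new=None):
    """Redistribute grid points so that each cell carries roughly the same
    amount of the monitor function w = sqrt(1 + (du/dx)^2) (arc length)."""
    n_new = len(x) if n_new is None else n_new
    w = np.sqrt(1.0 + np.gradient(u, x) ** 2)          # monitor function
    cell_w = 0.5 * (w[1:] + w[:-1]) * np.diff(x)       # per-cell weight
    cum = np.concatenate(([0.0], np.cumsum(cell_w)))   # cumulative weight
    targets = np.linspace(0.0, cum[-1], n_new)         # equal-weight targets
    return np.interp(targets, cum, x)                  # new, adapted grid

x = np.linspace(0.0, 1.0, 101)
u = np.tanh(50 * (x - 0.5))                            # sharp internal layer
x_adapted = adapt_grid(x, u)                           # points cluster near x = 0.5
```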

  15. A WENO-solver combined with adaptive momentum discretization for the Wigner transport equation and its application to resonant tunneling diodes

    PubMed Central

    Dorda, Antonius; Schürrer, Ferdinand

    2015-01-01

    We present a novel numerical scheme for the deterministic solution of the Wigner transport equation, especially suited to deal with situations in which strong quantum effects are present. The unique feature of the algorithm is the expansion of the Wigner function in local basis functions, similar to finite element or finite volume methods. This procedure yields a discretization of the pseudo-differential operator that conserves the particle density on arbitrarily chosen grids. The high flexibility in refining the grid spacing together with the weighted essentially non-oscillatory (WENO) scheme for the advection term allows for an accurate and well-resolved simulation of the phase space dynamics. A resonant tunneling diode is considered as test case and a detailed convergence study is given by comparing the results to a non-equilibrium Green's functions calculation. The impact of the considered domain size and of the grid spacing is analyzed. The obtained convergence of the results towards a quasi-exact agreement of the steady state Wigner and Green's functions computations demonstrates the accuracy of the scheme, as well as the high flexibility to adjust to different physical situations. PMID:25892748

  16. A WENO-solver combined with adaptive momentum discretization for the Wigner transport equation and its application to resonant tunneling diodes.

    PubMed

    Dorda, Antonius; Schürrer, Ferdinand

    2015-03-01

    We present a novel numerical scheme for the deterministic solution of the Wigner transport equation, especially suited to deal with situations in which strong quantum effects are present. The unique feature of the algorithm is the expansion of the Wigner function in local basis functions, similar to finite element or finite volume methods. This procedure yields a discretization of the pseudo-differential operator that conserves the particle density on arbitrarily chosen grids. The high flexibility in refining the grid spacing together with the weighted essentially non-oscillatory (WENO) scheme for the advection term allows for an accurate and well-resolved simulation of the phase space dynamics. A resonant tunneling diode is considered as test case and a detailed convergence study is given by comparing the results to a non-equilibrium Green's functions calculation. The impact of the considered domain size and of the grid spacing is analyzed. The obtained convergence of the results towards a quasi-exact agreement of the steady state Wigner and Green's functions computations demonstrates the accuracy of the scheme, as well as the high flexibility to adjust to different physical situations.

  17. Refining of metallurgical-grade silicon

    NASA Technical Reports Server (NTRS)

    Dietl, J.

    1986-01-01

    A basic requirement of large scale solar cell fabrication is to provide low cost base material. Unconventional refining of metallurical grade silicon represents one of the most promising ways of silicon meltstock processing. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step by slagging, or, alternatively, solvent extraction by aluminum. After grinding and leaching, high purity qualtiy is gained as an advanced stage of refinement. To reach solar grade quality a final pyrometallurgical step is needed: liquid-gas extraction.

  18. A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars

    2016-11-01

    We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives a slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Such differences have been observed in adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence-cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques, and, when coupled to a moving mesh, can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
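
    The defining property of CT, that a field built as the discrete curl of a vector potential is divergence-free to round-off, can be checked in a few lines on a 2D staggered Cartesian grid; this is a generic illustration of the principle, not the moving-mesh AREPO implementation.

```python
import numpy as np

# 2D staggered-grid check: B = curl(A_z) built from corner values of A_z has
# zero cell-centered divergence to round-off, because the differences telescope.
nx, ny, dx, dy = 64, 64, 1.0 / 64, 1.0 / 64
x = np.linspace(0.0, 1.0, nx + 1)
y = np.linspace(0.0, 1.0, ny + 1)
Az = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * y)[None, :]  # at corners

Bx = (Az[:, 1:] - Az[:, :-1]) / dy        # x-faces:  Bx =  dAz/dy
By = -(Az[1:, :] - Az[:-1, :]) / dx       # y-faces:  By = -dAz/dx

div = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
print(np.abs(div).max())                  # zero up to floating-point round-off
```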

  19. Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.

    PubMed

    Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca

    2017-08-01

    Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than .60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.

  20. KoBaMIN: a knowledge-based minimization web server for protein structure refinement.

    PubMed

    Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav

    2012-07-01

    The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin.

  1. Scaling behavior of ground-state energy cluster expansion for linear polyenes

    NASA Astrophysics Data System (ADS)

    Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.

    Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.

  2. Ghost-gluon vertex in the presence of the Gribov horizon

    NASA Astrophysics Data System (ADS)

    Mintz, B. W.; Palhares, L. F.; Sorella, S. P.; Pereira, A. D.

    2018-02-01

    We consider Yang-Mills theories quantized in the Landau gauge in the presence of the Gribov horizon via the refined Gribov-Zwanziger (RGZ) framework. As the restriction of the gauge path integral to the Gribov region is taken into account, the resulting gauge field propagators display a nontrivial infrared behavior, being very close to the ones observed in lattice gauge field theory simulations. In this work, we explore a higher correlation function in the refined Gribov-Zwanziger theory: the ghost-gluon interaction vertex, at one-loop level. We show explicit compatibility with kinematical constraints, as required by the Ward identities of the theory, and obtain analytical expressions in the limit of vanishing gluon momentum. We find that the RGZ results are nontrivial in the infrared regime, being compatible with lattice Yang-Mills simulations in both SU(2) and SU(3), as well as with solutions from Schwinger-Dyson equations in different truncation schemes, Functional Renormalization Group analysis, and the renormalization group-improved Curci-Ferrari model.

  3. Fast High Resolution Volume Carving for 3D Plant Shoot Reconstruction

    PubMed Central

    Scharr, Hanno; Briese, Christoph; Embgenbroich, Patrick; Fischbach, Andreas; Fiorani, Fabio; Müller-Linow, Mark

    2017-01-01

    Volume carving is a well established method for visual hull reconstruction and has been successfully applied in plant phenotyping, especially for 3d reconstruction of small plants and seeds. When imaging larger plants at still relatively high spatial resolution (≤1 mm), well known implementations become slow or have prohibitively large memory needs. Here we present and evaluate a computationally efficient algorithm for volume carving, allowing e.g., 3D reconstruction of plant shoots. It combines a well-known multi-grid representation called “Octree” with an efficient image region integration scheme called “Integral image.” Speedup with respect to less efficient octree implementations is about 2 orders of magnitude, due to the introduced refinement strategy “Mark and refine.” Speedup is about a factor 1.6 compared to a highly optimized GPU implementation using equidistant voxel grids, even without using any parallelization. We demonstrate the application of this method for trait derivation of banana and maize plants. PMID:29033961
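
    The "Integral image" ingredient can be sketched briefly: once the summed-area table is built, the number of silhouette pixels inside any axis-aligned box (for instance, the projection of an octree cell) costs four lookups. The mask and box below are illustrative, not the authors' code.

```python
import numpy as np

def integral_image(mask):
    """Summed-area table with a zero-padded first row and column."""
    ii = np.zeros((mask.shape[0] + 1, mask.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(mask, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of mask[r0:r1, c0:c1] in O(1) using four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

mask = (np.random.rand(480, 640) > 0.5).astype(np.int64)   # binary silhouette
ii = integral_image(mask)
assert box_sum(ii, 100, 200, 150, 260) == mask[100:150, 200:260].sum()
```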

  4. A thermodynamic description for water, hydrogen fluoride and hydrogen dissolutions in cryolite-base molten salts.

    PubMed

    Wang, Kun; Chartrand, Patrice

    2018-06-15

    This paper presents a quantitative thermodynamic description for water, hydrogen fluoride and hydrogen dissolutions in cryolite-base molten salts, which is of technological importance to the Hall-Héroult electrolytic aluminum extraction cell. The Modified Quasichemical Model in the Quadruplet Approximation (MQMQA), as used to treat a large variety of molten salt systems, was adopted to thermodynamically describe the present liquid phase; all solid solutions were modeled using the Compound Energy Formalism (CEF); the gas phase was thermodynamically treated as an ideal mixture of all possible species. The model parameters were mainly obtained by critical evaluations and optimizations of thermodynamic and phase equilibrium data available from relevant experimental measurements and theoretical predictions (first-principles calculations and empirical estimations) for the lower-order subsystems. These optimized model parameters were thereafter merged within the Kohler/Toop interpolation scheme, facilitating the prediction of gas solubility (H2O, HF and H2) in multicomponent cryolite-base molten salts using the FactSage thermochemical software. Several interesting diagrams were finally obtained in order to provide useful information for the industrial partners dedicated to Hall-Héroult electrolytic aluminum production or other molten-salt technologies (the purification process and electroslag refining).

  5. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulous, Dimitra; Glannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  6. Slowness based CCP stacking technique in suppressing crustal multiples

    NASA Astrophysics Data System (ADS)

    Guan, Z.; Niu, F.

    2016-12-01

    Common-conversion-point (CCP) stacking of receiver functions is a widely used technique to image velocity discontinuities in the mantle, such as the lithosphere-asthenosphere boundary (LAB) in the upper mantle, the 410-km and the 660-km discontinuities in the mantle transition zone. In a layered medium, a teleseismic record can be considered as the summation of the direct arrival and a series of conversions and reflections at boundaries below the station. Receiver functions are an attempt to approximate a Green's function associated with structure beneath the receiver by deconvolving one component of a teleseismic signal from another to remove source signals from seismograms. The CCP technique assumes that receiver functions are composed solely of P to S conversions at velocity boundaries, whose depths can be mapped out through their arrival times. The multiple reflections at shallow boundaries with large velocity contrasts, such as the base of unconsolidated sediments and the Moho, can pose significant challenges to the accuracy of CCP imaging. In principle, the P to S conversions and multiples originating from deep and shallow boundaries arrive at a seismic station with incident angles that are, respectively, smaller and larger than that of the direct P wave. Therefore the corresponding slowness can be used to isolate the conversions from multiples, allowing for minimizing multiple-induced artifacts. We developed a refined CCP stacking method that uses relative slowness as a weighting factor to suppress the multiples. We performed extensive numerical tests with synthetic data to seek the best weighting scheme and to verify the robustness of the images. We applied the refined technique to the NECESSArray data, and found that the complicated low velocity structures in the depth range of 200-400 km shown in the CCP images of previous studies are mostly artifacts resulting from crustal multiples.

  7. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
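
    A minimal sketch of the elitism-based immigrants idea follows (the memory-based variant would instead mutate the best individual retrieved from memory); the population encoding, mutation rate, and replacement ratio are illustrative choices, not the paper's settings.

```python
import numpy as np

def elitism_based_immigrants(pop, fitness, ratio=0.2, pm=0.01, rng=None):
    """Create immigrants by mutating the current elite and replace the worst
    individuals with them (elitism-based immigrants scheme)."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[scores.argmax()]
    n_imm = max(1, int(ratio * len(pop)))
    flips = rng.random((n_imm, pop.shape[1])) < pm        # bitwise mutation mask
    immigrants = np.logical_xor(elite, flips).astype(pop.dtype)
    worst = scores.argsort()[:n_imm]                      # indices of worst individuals
    pop[worst] = immigrants
    return pop

pop = np.random.randint(0, 2, size=(50, 30))              # binary population
pop = elitism_based_immigrants(pop, fitness=lambda ind: ind.sum())
```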

  8. Comparison of the co-gasification of sewage sludge and food wastes and cost-benefit analysis of gasification- and incineration-based waste treatment schemes.

    PubMed

    You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa

    2016-10-01

    The compositions of food wastes and their co-gasification producer gas were compared with the existing data of sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification based on residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. It was found that the gasification-based schemes are financially superior to the incineration-based scheme based on the data of net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures to improve the economics of the schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
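
    The Monte Carlo core of such a cost-benefit analysis can be sketched as below; the cash-flow distributions, horizon, and discount rate are placeholders, not the study's inputs.

```python
import numpy as np

def npv(cash_flows, rate):
    """Net present value of a series of yearly cash flows (year 0 first)."""
    years = np.arange(len(cash_flows))
    return np.sum(cash_flows / (1.0 + rate) ** years)

rng = np.random.default_rng(0)
n_sims, horizon, rate = 10_000, 20, 0.05
npvs = np.empty(n_sims)
for i in range(n_sims):
    capex = rng.normal(-5.0e6, 0.5e6)                     # year-0 investment (negative)
    annual = rng.normal(0.6e6, 0.1e6, size=horizon)       # uncertain yearly benefit minus cost
    npvs[i] = npv(np.concatenate(([capex], annual)), rate)

print(f"mean NPV = {npvs.mean():,.0f}, P(NPV > 0) = {(npvs > 0).mean():.2f}")
```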

  9. [Succession caused by beaver (Castor fiber L.) life activity: II. A refined Markov model].

    PubMed

    Logofet; Evstigneev, O I; Aleinikov, A A; Morozova, A O

    2015-01-01

    The refined Markov model of cyclic zoogenic successions caused by beaver (Castor fiber L.) life activity represents a discrete chain of the following six states: flooded forest, swamped forest, pond, grassy swamp, shrubby swamp, and wet forest, which correspond to certain stages of succession. Those stages are defined, and a conceptual scheme of probable transitions between them for one time step is constructed from the knowledge of beaver behaviour in small river floodplains of "Bryanskii Les" Reserve. We calibrated the corresponding matrix of transition probabilities according to the optimization principle: minimizing differences between the model outcome and reality; the model generates a distribution of relative areas corresponding to the stages of succession, which is compared with those obtained from case studies in the Reserve during 2002-2006. The time step is chosen to equal 2 years, and the first-step data in the sum of differences are given various weights, w (between 0 and 1). The value of w = 0.2 is selected due to its optimality and for some additional reasons. By the formulae of finite homogeneous Markov chain theory, we obtained the main results of the calibrated model, namely, a steady-state distribution of stage areas, indexes of cyclicity, and the mean durations (M(j)) of succession stages. The calibration results give an objective, quantitative character to the expert knowledge of the course of succession and admit a proper interpretation. The 2010 data, which are not involved in the calibration procedure, enabled assessing the quality of prediction by the homogeneous model in the short term (starting from the 2006 situation): the error of model area distribution relative to the distribution observed in 2010 falls into the range of 9-17%, the best prognosis being given by the least optimal matrices (rejected values of w). This indicates a formally heterogeneous nature of succession processes in time. Thus, the refined version of the homogeneous Markov chain has not eliminated all the contradictions between the model results and expert knowledge, which suggests further model development towards a "logically inhomogeneous" version and/or a refusal to postulate the Markov property in the conceptual scheme of succession.
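
    Given a calibrated transition matrix, the quantities quoted above (steady-state distribution of stage areas and mean stage durations) follow from standard finite Markov chain formulas; in the sketch below the 6-state matrix is a random placeholder, not the calibrated one from the paper.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi with pi P = pi for a row-stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def mean_durations(P, step_years=2):
    """Expected residence time in each state: step / (1 - p_jj)."""
    return step_years / (1.0 - np.diag(P))

rng = np.random.default_rng(1)
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)     # placeholder 6-state row-stochastic matrix
print(steady_state(P), mean_durations(P))
```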

  10. Solidification Based Grain Refinement in Steels

    DTIC Science & Technology

    2009-07-24

    pearlite (See Figure 1). No evidence of the as-cast austenite dendrite structure was observed. The gating system for this sample resides at the thermal...possible nucleating compounds. 3) Extend grain refinement theory and solidification knowledge through experimental data. 4) Determine structure ...refine the structure of a casting through heat treatment. The energy required for grain refining via thermomechanical processes or heat treatment

  11. Unstructured Cartesian refinement with sharp interface immersed boundary method for 3D unsteady incompressible flows

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Chawdhary, Saurabh; Sotiropoulos, Fotis

    2016-11-01

    A novel numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations on locally refined fully unstructured Cartesian grids in domains with arbitrarily complex immersed boundaries. Owing to the utilization of the fractional step method on an unstructured Cartesian hybrid staggered/non-staggered grid layout, flux mismatch and pressure discontinuity issues are avoided and the divergence free constraint is inherently satisfied to machine zero. Auxiliary/hanging nodes are used to facilitate the discretization of the governing equations. The second-order accuracy of the solver is ensured by using multi-dimension Lagrange interpolation operators and appropriate differencing schemes at the interface of regions with different levels of refinement. The sharp interface immersed boundary method is augmented with local near-boundary refinement to handle arbitrarily complex boundaries. The discrete momentum equation is solved with the matrix free Newton-Krylov method and the Krylov-subspace method is employed to solve the Poisson equation. The second-order accuracy of the proposed method on unstructured Cartesian grids is demonstrated by solving the Poisson equation with a known analytical solution. A number of three-dimensional laminar flow simulations of increasing complexity illustrate the ability of the method to handle flows across a range of Reynolds numbers and flow regimes. Laminar steady and unsteady flows past a sphere and the oblique vortex shedding from a circular cylinder mounted between two end walls demonstrate the accuracy, the efficiency and the smooth transition of scales and coherent structures across refinement levels. Large-eddy simulation (LES) past a miniature wind turbine rotor, parameterized using the actuator line approach, indicates the ability of the fully unstructured solver to simulate complex turbulent flows. Finally, a geometry resolving LES of turbulent flow past a complete hydrokinetic turbine illustrates the potential of the method to simulate turbulent flows past geometrically complex bodies on locally refined meshes. In all the cases, the results are found to be in very good agreement with published data and savings in computational resources are achieved.

  12. Computational assessment of a proposed technique for global warming mitigation via albedo-enhancement of marine stratocumulus clouds

    NASA Astrophysics Data System (ADS)

    Bower, Keith; Choularton, Tom; Latham, John; Sahraei, Jalil; Salter, Stephen

    2006-11-01

    A simplified version of the model of marine stratocumulus clouds developed by Bower, Jones and Choularton [Bower, K.N., Jones, A., and Choularton, T.W., 1999. A modeling study of aerosol processing by stratocumulus clouds and its impact on GCM parameterisations of cloud and aerosol. Atmospheric Research, Vol. 50, Nos. 3-4, The Great Dun Fell Experiment, 1995-special issue, 317-344.] was used to examine the sensitivity of the albedo-enhancement global warming mitigation scheme proposed by Latham [Latham, J., 1990. Control of global warming? Nature 347, 339-340; Latham, J., 2002. Amelioration of global warming by controlled enhancement of the albedo and longevity of low-level maritime clouds. Atmos. Sci. Letters (doi:10.1006/Asle.2002.0048).] to the cloud and environmental aerosol characteristics, as well as those of the seawater aerosol of salt-mass m_s and number concentration ΔN, which, under the scheme, are advertently introduced into the clouds. Values of albedo-change ΔA and droplet number concentration N_d were calculated for a wide range of values of m_s, ΔN, updraught speed W, cloud thickness ΔZ and cloud-base temperature T_B: for three measured aerosol spectra, corresponding to ambient air of negligible, moderate and high levels of pollution. Our choices of parameter value ranges were determined by the extent of their applicability to the mitigation scheme, whose current formulation is still somewhat preliminary, thus rendering unwarranted in this study the utilisation of refinements incorporated into other stratocumulus models. In agreement with earlier studies: (1) ΔA was found to be very sensitive to ΔN and (within certain constraints) insensitive to changes in m_s, W, ΔZ and T_B; (2) ΔA was greatest for clouds formed in pure air and least for highly polluted air. In many situations considered to be within the ambit of the mitigation scheme, the calculated ΔA values exceeded those estimated by earlier workers as being necessary to produce a cooling sufficient to compensate, globally, for the warming resulting from a doubling of the atmospheric carbon dioxide concentration. Our calculations provide quantitative support for the physical viability of the mitigation scheme and offer new insights into its technological requirements.

  13. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M>or=2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second scheme is based on iterative polarization cancellation. Both schemes use the LDPC codes as channel codes. The proposed PMD compensations schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes those schemes outperform polarization-time coding based OFDM by 1 dB at BER of 10(-9), and provide two times higher spectral efficiency. The proposed schemes perform comparable and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  14. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scannapieco, Anthony J.

    2016-06-21

    The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. The additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally sparse.
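
    A stripped-down version of the "refine where the physics is modally dense" idea is a gradient-based flagging pass over the current zones; the indicator and thresholds below are illustrative and stand in for the mesh-potential rules of the described algorithm.

```python
import numpy as np

def flag_zones(x, u, refine_tol=0.5, coarsen_tol=0.05):
    """Return +1 (refine), 0 (keep) or -1 (coarsen) per zone, based on a
    normalized gradient indicator."""
    grad = np.abs(np.diff(u) / np.diff(x))
    indicator = grad / (grad.max() + 1e-30)
    flags = np.zeros(indicator.size, dtype=int)
    flags[indicator > refine_tol] = 1
    flags[indicator < coarsen_tol] = -1
    return flags

x = np.linspace(0.0, 1.0, 65)
u = np.tanh(40.0 * (x - 0.6))                  # steep front near x = 0.6
print(flag_zones(x, u))                        # zones at the front are flagged +1
```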

  15. Report on Pairing-based Cryptography.

    PubMed

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.

  16. Report on Pairing-based Cryptography

    PubMed Central

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST’s position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed. PMID:26958435

  17. Coupling limit equilibrium analyses and real-time monitoring to refine a landslide surveillance system in Calabria (southern Italy)

    NASA Astrophysics Data System (ADS)

    Iovine, G. G. R.; Lollino, P.; Gariano, S. L.; Terranova, O. G.

    2010-11-01

    On 28 January 2009, a large debris slide was triggered by prolonged rainfalls in the southern suburbs of San Benedetto Ullano (Northern Calabria). The slope movement affected fractured and weathered migmatitic gneiss and biotitic schist, and included a pre-existing landslide. A detailed geomorphologic field survey, carried out during the whole phase of mobilization, made it possible to recognize the evolution of the phenomenon. A set of datum points was located along the borders of the landslide and frequent manual measurements of surface displacements were performed. Since 11 February, a basic real-time monitoring system of meteoric parameters and of surface displacements, measured by means of high-precision extensometers, was also implemented. Based on the data gained through the monitoring system, and on field surveying, a basic support system for emergency management could be defined from the first phases of activation of the phenomenon. The evolution of the landslide was monitored during the following months: as a consequence, evidence of retrogressive distribution could be recognized, with initial activation in the middle sector of the slope, where new temporary springs were observed. During early May, the activity reduced to displacements of a few millimetres per month and the geo-hydrological crisis seemed to be concluded. Afterwards, the geological scheme of the slope was refined based on the data collected through a set of explorative boreholes, equipped with inclinometers and piezometers: according to the stratigraphic and inclinometric data, the depth of the mobilized body was found to vary between 15 and 35 m along a longitudinal section. A parametric limit equilibrium analysis was carried out to explore the stability conditions of the slope affected by the landslide as well as to quantify the role of the water table in destabilizing the slope. The interpretation of the process based on field observations was confirmed by the limit equilibrium analysis: the first activation of the landslide was, in fact, to be expected in the middle portion of the slope, provided that the groundwater levels approximate the ground surface in the same sector. On 1 February 2010, another remarkable phase of landslide mobilization began, following a new period of exceptional and prolonged rainfalls. On 11 February, an abrupt stage of slope acceleration was observed, after further extraordinary rainfalls. The slope movement essentially replicated the phases of mobilization observed on 28 January 2009, thus confirming the results of the limit equilibrium analysis. Based on the outcomes of the parametric analysis, the support system for emergency management could then be tentatively refined on a more physical basis.
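
    As a hedged illustration of how a rising water table destabilizes a slope in limit-equilibrium terms, the classical infinite-slope factor of safety can be evaluated for several groundwater levels; the strength and geometry values below are placeholders, not the San Benedetto Ullano parameters, and the paper's parametric analysis is more general than this one-line model.

```python
import numpy as np

def infinite_slope_fs(c, phi_deg, beta_deg, z, m, gamma=20.0, gamma_w=9.81):
    """Infinite-slope factor of safety for a slide plane at depth z (m) with a
    water table at height m*z above the plane (0 <= m <= 1). Units: kPa, kN/m3."""
    phi, beta = np.radians(phi_deg), np.radians(beta_deg)
    tau = gamma * z * np.sin(beta) * np.cos(beta)                  # driving shear stress
    sigma_eff = (gamma * z - m * z * gamma_w) * np.cos(beta) ** 2  # effective normal stress
    return (c + sigma_eff * np.tan(phi)) / tau

for m in (0.0, 0.5, 1.0):   # dry, half-saturated, fully saturated slope
    print(m, infinite_slope_fs(c=10.0, phi_deg=30.0, beta_deg=25.0, z=20.0, m=m))
```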

  18. Intensity non-uniformity correction in MRI: existing methods and their validation.

    PubMed

    Belaroussi, Boubakeur; Milles, Julien; Carme, Sabin; Zhu, Yue Min; Benoit-Cattin, Hugues

    2006-04-01

    Magnetic resonance imaging is a popular and powerful non-invasive imaging technique. Automated analysis has become mandatory to efficiently cope with the large amount of data generated using this modality. However, several artifacts, such as intensity non-uniformity, can degrade the quality of acquired data. Intensity non-uniformity consists in anatomically irrelevant intensity variation throughout data. It can be induced by the choice of the radio-frequency coil, the acquisition pulse sequence and by the nature and geometry of the sample itself. Numerous methods have been proposed to correct this artifact. In this paper, we propose an overview of existing methods. We first sort them according to their location in the acquisition/processing pipeline. Sorting is then refined based on the assumptions those methods rely on. Next, we present the validation protocols used to evaluate these different correction schemes both from a qualitative and a quantitative point of view. Finally, availability and usability of the presented methods is discussed.

  19. Parallel Adaptive High-Order CFD Simulations Characterizing Cavity Acoustics for the Complete SOFIA Aircraft

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak

    2014-01-01

    This paper presents one-of-a-kind MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting of the telescope. A temporally fourth-order Runge-Kutta and spatially fifth-order WENO-5Z scheme was used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32,000 cores and 4 billion cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses irregularities caused by the highly complex geometry. Limits to scaling beyond 32K cores are identified, and targeted code optimizations are discussed.

  20. An interactive tool for gamut masking

    NASA Astrophysics Data System (ADS)

    Song, Ying; Lau, Cheryl; Süsstrunk, Sabine

    2014-02-01

    Artists often want to change the colors of an image to achieve a particular aesthetic goal. For example, they might limit colors to a warm or cool color scheme to create an image with a certain mood or feeling. Gamut masking is a technique that artists use to limit the set of colors they can paint with. They draw a mask over a color wheel and only use the hues within the mask. However, creating the color palette from the mask and applying the colors to the image requires skill. We propose an interactive tool for gamut masking that allows amateur artists to create an image with a desired mood or feeling. Our system extracts a 3D color gamut from the 2D user-drawn mask and maps the image to this gamut. The user can draw a different gamut mask or locally refine the image colors. Our voxel grid gamut representation allows us to represent gamuts of any shape, and our cluster-based image representation allows the user to change colors locally.

  1. Feature selection with harmony search.

    PubMed

    Diao, Ren; Shen, Qiang

    2012-12-01

    Many search strategies have been exploited for the task of feature selection (FS), in an effort to identify more compact and better quality subsets. Such work typically involves the use of greedy hill climbing (HC), or nature-inspired heuristics, in order to discover the optimal solution without going through exhaustive search. In this paper, a novel FS approach based on harmony search (HS) is presented. It is a general approach that can be used in conjunction with many subset evaluation techniques. The simplicity of HS is exploited to reduce the overall complexity of the search process. The proposed approach is able to escape from local solutions and identify multiple solutions owing to the stochastic nature of HS. Additional parameter control schemes are introduced to reduce the effort and impact of parameter configuration. These can be further combined with the iterative refinement strategy, tailored to enforce the discovery of quality subsets. The resulting approach is compared with those that rely on HC, genetic algorithms, and particle swarm optimization, accompanied by in-depth studies of the suggested improvements.
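
    A compact sketch of harmony search applied to feature subset selection is given below; the memory size, HMCR/PAR values, and the toy evaluation function are illustrative assumptions, not the configuration studied in the paper.

```python
import numpy as np

def harmony_search_fs(n_features, evaluate, hms=20, hmcr=0.9, par=0.3,
                      iters=500, rng=None):
    """Binary harmony search: each harmony is a feature-inclusion mask."""
    rng = np.random.default_rng() if rng is None else rng
    memory = rng.integers(0, 2, size=(hms, n_features))
    scores = np.array([evaluate(h) for h in memory], dtype=float)
    for _ in range(iters):
        new = np.empty(n_features, dtype=int)
        for j in range(n_features):
            if rng.random() < hmcr:
                new[j] = memory[rng.integers(hms), j]    # harmony memory consideration
                if rng.random() < par:
                    new[j] ^= 1                          # pitch adjustment: flip the bit
            else:
                new[j] = rng.integers(2)                 # random consideration
        s = evaluate(new)
        worst = scores.argmin()
        if s > scores[worst]:                            # replace the worst harmony
            memory[worst], scores[worst] = new, s
    return memory[scores.argmax()]

# toy objective: reward selecting the 'relevant' features, penalize subset size
relevant = np.zeros(30, dtype=int)
relevant[:5] = 1
best = harmony_search_fs(30, lambda m: (m & relevant).sum() - 0.1 * m.sum())
```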

  2. A refined characterization of the alluvial geology of yucca flat and its effect on bulk hydraulic conductivity

    USGS Publications Warehouse

    Phelps, G.A.; Halford, K.J.

    2011-01-01

    In Yucca Flat, on the Nevada National Security Site in southern Nevada, the migration of radionuclides from tests located in the alluvial deposits into the Paleozoic carbonate aquifer involves passage through a thick, heterogeneous section of late Tertiary and Quaternary alluvial sediments. An understanding of the lateral and vertical changes in the material properties of the alluvial sediments will aid in the further development of the hydrogeologic framework and the delineation of hydrostratigraphic units and hydraulic properties required for simulating groundwater flow in the Yucca Flat area. Previously published geologic models for the alluvial sediments within Yucca Flat are based on extensive examination and categorization of drill-hole data, combined with a simple, data-driven interpolation scheme. The U.S. Geological Survey, in collaboration with Stanford University, is researching improvements to the modeling of the alluvial section, incorporating prior knowledge of geologic structure into the interpolation method and estimating the uncertainty of the modeled hydrogeologic units.

  3. Protein secondary structure determination by constrained single-particle cryo-electron tomography.

    PubMed

    Bartesaghi, Alberto; Lecumberry, Federico; Sapiro, Guillermo; Subramaniam, Sriram

    2012-12-05

    Cryo-electron microscopy (cryo-EM) is a powerful technique for 3D structure determination of protein complexes by averaging information from individual molecular images. The resolutions that can be achieved with single-particle cryo-EM are frequently limited by inaccuracies in assigning molecular orientations based solely on 2D projection images. Tomographic data collection schemes, however, provide powerful constraints that can be used to more accurately determine molecular orientations necessary for 3D reconstruction. Here, we propose "constrained single-particle tomography" as a general strategy for 3D structure determination in cryo-EM. A key component of our approach is the effective use of images recorded in tilt series to extract high-resolution information and correct for the contrast transfer function. By incorporating geometric constraints into the refinement to improve orientational accuracy of images, we reduce model bias and overrefinement artifacts and demonstrate that protein structures can be determined at resolutions of ∼8 Å starting from low-dose tomographic tilt series. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Incremental triangulation by way of edge swapping and local optimization

    NASA Technical Reports Server (NTRS)

    Wiltberger, N. Lyn

    1994-01-01

    This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criteria. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for the solution adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user either to specify the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criteria value. The code allows the user to: triangulate a cloud of prespecified points (sites), triangulate a set of prespecified interior points constrained by prespecified boundary curve(s), Steiner triangulate the interior/exterior of prespecified boundary curve(s), refine existing triangulations based on solution error measures, and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
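
    For the MaxMin (Delaunay) criterion, the local edge-swapping decision reduces to the classical in-circle test on the two triangles sharing an edge; the sketch below illustrates that test with an arbitrary point layout and is not taken from the HARLEY code.

```python
import numpy as np

def in_circle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle (a, b, c),
    with (a, b, c) given in counter-clockwise order."""
    m = np.array([[a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
                  [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
                  [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2]])
    return np.linalg.det(m) > 0.0

def should_swap(a, b, c, d):
    """For triangles (a, b, c) and (a, b, d) sharing edge ab: swap to edge cd
    if d violates the Delaunay (MaxMin) criterion for triangle (a, b, c)."""
    return in_circle(a, b, c, d)

a, b, c, d = (0, 0), (2, 0), (1, 0.3), (1, -0.3)   # thin pair of triangles
print(should_swap(a, b, c, d))                     # True: swapping improves the minimum angle
```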

  5. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of "upwind" and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), "entropy" variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  6. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
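
    A one-variable sketch of the GLA idea follows: the refined response is approximated by the crude-model response times a linearly varying scaling factor whose value and slope are matched at the current design point. The two "models" below are purely illustrative stand-ins, not the paper's beam example.

```python
import numpy as np

def gla(f_refined, f_crude, df_refined, df_crude, x0):
    """Return a global-local approximation g(x) ~ beta(x) * f_crude(x), where the
    linear scaling factor beta(x) matches value and derivative at x0."""
    beta0 = f_refined(x0) / f_crude(x0)
    # derivative of the exact ratio f_refined / f_crude at x0
    dbeta = (df_refined(x0) * f_crude(x0) - f_refined(x0) * df_crude(x0)) / f_crude(x0) ** 2
    return lambda x: (beta0 + dbeta * (x - x0)) * f_crude(x)

# illustrative 'crude' and 'refined' models of a response vs. a design variable t
f_crude = lambda t: 1.0 / t**3
f_refined = lambda t: 1.0 / t**3 + 0.3 / t
df_crude = lambda t: -3.0 / t**4
df_refined = lambda t: -3.0 / t**4 - 0.3 / t**2

approx = gla(f_refined, f_crude, df_refined, df_crude, x0=1.0)
print(approx(1.5), f_refined(1.5))   # the GLA tracks the refined model away from x0
```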

  7. PREFMD: a web server for protein structure refinement via molecular dynamics simulations.

    PubMed

    Heo, Lim; Feig, Michael

    2018-03-15

    Refinement of protein structure models is a long-standing problem in structural bioinformatics. Molecular dynamics-based methods have emerged as an avenue to achieve consistent refinement. The PREFMD web server implements an optimized protocol based on the method successfully tested in CASP11. Validation with recent CASP refinement targets shows consistent and more significant improvement in global structure accuracy over other state-of-the-art servers. PREFMD is freely available as a web server at http://feiglab.org/prefmd. Scripts for running PREFMD as a stand-alone package are available at https://github.com/feiglab/prefmd.git. feig@msu.edu. Supplementary data are available at Bioinformatics online.

  8. Refining the detection of the zero crossing for the three-gluon vertex in symmetric and asymmetric momentum subtraction schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucaud, Ph.; De Soto, F.; Rodriguez-Quintero, J.

    This article reports on the detailed study of the three-gluon vertex in four-dimensional $SU(3)$ Yang-Mills theory employing lattice simulations with large physical volumes and high statistics. A meticulous scrutiny of the so-called symmetric and asymmetric kinematical configurations is performed and it is shown that the associated form-factor changes sign at a given range of momenta. Here, the lattice results are compared to the model independent predictions of Schwinger-Dyson equations and a very good agreement among the two is found.

  9. Refining the detection of the zero crossing for the three-gluon vertex in symmetric and asymmetric momentum subtraction schemes

    DOE PAGES

    Boucaud, Ph.; De Soto, F.; Rodriguez-Quintero, J.; ...

    2017-06-14

    This article reports on the detailed study of the three-gluon vertex in four-dimensional $SU(3)$ Yang-Mills theory employing lattice simulations with large physical volumes and high statistics. A meticulous scrutiny of the so-called symmetric and asymmetric kinematical configurations is performed and it is shown that the associated form-factor changes sign at a given range of momenta. Here, the lattice results are compared to the model independent predictions of Schwinger-Dyson equations and a very good agreement among the two is found.

  10. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

    A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme in order to solve the weakness of the authentication scheme of Wei et al., in which off-line password guessing attacks cannot be resisted. This investigation indicates that the improved scheme of Zhu has flaws: the authentication scheme cannot execute correctly and is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on the scheme of Zhu is proposed. The enhanced scheme not only avoids the weakness in the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.

  11. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    PubMed

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric authentication schemes applied in multi-server environments have become more crucial and more widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme which is based on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has stronger security properties and lower computation cost. It is therefore more appropriate for practical applications in remote distributed networks.

  12. ID-based encryption scheme with revocation

    NASA Astrophysics Data System (ADS)

    Othman, Hafizul Azrie; Ismail, Eddie Shahril

    2017-04-01

    In 2015, Meshram proposed an efficient ID-based encryption scheme based on the difficulty of solving discrete logarithm and integer-factoring problems. The scheme was pairing-free and claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.

  13. A secure biometrics-based authentication scheme for telecare medicine information systems.

    PubMed

    Yan, Xiaopeng; Li, Weiheng; Li, Ping; Wang, Jiantao; Hao, Xinhong; Gong, Peng

    2013-10-01

    The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites, and therefore offers great convenience. To safeguard patients' privacy, authentication schemes for the TMIS have attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed that it could withstand various attacks. However, in this paper, we point out that Tan's scheme is vulnerable to denial-of-service attacks. To enhance security, we also propose an improved scheme based on Tan's work. Security and performance analysis shows that our scheme not only overcomes the weakness in Tan's scheme but also has better performance.

  14. Gait development on Minitaur, a direct drive quadrupedal robot

    NASA Astrophysics Data System (ADS)

    Blackman, Daniel J.; Nicholson, John V.; Ordonez, Camilo; Miller, Bruce D.; Clark, Jonathan E.

    2016-05-01

    This paper describes the development of a dynamic, quadrupedal robot designed for rapid traversal and interaction in human environments. We explore improvements to both physical and control methods to a legged robot (Minitaur) in order to improve the speed and stability of its gaits and increase the range of obstacles that it can overcome, with an eye toward negotiating man-made terrains such as stairs. These modifications include an analysis of physical compliance, an investigation of foot and leg design, and the implementation of ground and obstacle contact sensing for inclusion in the control schemes. Structural and mechanical improvements were made to reduce undesired compliance for more consistent agreement with dynamic models, which necessitated refinement of foot design for greater durability. Contact sensing was implemented into the control scheme for identifying obstacles and deviations in surface level for negotiation of varying terrain. Overall the incorporation of these features greatly enhances the mobility of the dynamic quadrupedal robot and helps to establish a basis for overcoming obstacles.
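
    As an illustration of the kind of contact sensing such a controller can use (this is a generic sketch, not Minitaur's actual implementation; thresholds and signal names are hypothetical), touchdown can be inferred when commanded leg motion stalls while motor torque rises:

      # Illustrative contact heuristic for a direct-drive leg: high torque plus
      # near-zero velocity over a short window is treated as ground/obstacle contact.
      def detect_contact(torque_history, velocity_history,
                         torque_thresh=2.0, velocity_thresh=0.05, window=5):
          if len(torque_history) < window or len(velocity_history) < window:
              return False
          return (all(abs(t) > torque_thresh for t in torque_history[-window:]) and
                  all(abs(v) < velocity_thresh for v in velocity_history[-window:]))

      print(detect_contact([2.5] * 10, [0.01] * 10))   # stalled leg under load -> True
      print(detect_contact([0.3] * 10, [1.2] * 10))    # free-swinging leg -> False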

  15. Analytic energy gradient of projected Hartree-Fock within projection after variation

    NASA Astrophysics Data System (ADS)

    Uejima, Motoyuki; Ten-no, Seiichiro

    2017-03-01

    We develop a geometrical optimization technique for the projection-after-variation (PAV) scheme of the recently refined projected Hartree-Fock (PHF) as a fast alternative to the variation-after-projection (VAP) approach for optimizing the structures of molecules/clusters in symmetry-adapted electronic states at the mean-field computational cost. PHF handles the nondynamic correlation effects by restoring the symmetry of a broken-symmetry single reference wavefunction and moreover enables a black-box treatment of orbital selections. Using HF orbitals instead of PHF orbitals, our approach saves the computational cost for the orbital optimization, avoiding the convergence problem that sometimes emerges in the VAP scheme. We show that PAV-PHF provides geometries comparable to those of the complete active space self-consistent field and VAP-PHF for the tested systems, namely, CH2, O3, and the [Cu2O2 ] 2 + core, where nondynamic correlation is abundant. The proposed approach is useful for large systems mainly dominated by nondynamic correlation to find stable structures in many symmetry-adapted states.

  16. Ideal GLM-MHD: About the entropy consistent nine-wave magnetic field divergence diminishing ideal magnetohydrodynamics equations

    NASA Astrophysics Data System (ADS)

    Derigs, Dominik; Winters, Andrew R.; Gassner, Gregor J.; Walch, Stefanie; Bohm, Marvin

    2018-07-01

    The paper presents two contributions in the context of the numerical simulation of magnetized fluid dynamics. First, we show how to extend the ideal magnetohydrodynamics (MHD) equations with an inbuilt magnetic field divergence cleaning mechanism in such a way that the resulting model is consistent with the second law of thermodynamics. As a byproduct of these derivations, we show that not all of the commonly used divergence cleaning extensions of the ideal MHD equations are thermodynamically consistent. Secondly, we present a numerical scheme obtained by constructing a specific finite volume discretization that is consistent with the discrete thermodynamic entropy. It includes a mechanism to control the discrete divergence error of the magnetic field by construction and is Galilean invariant. We implement the new high-order MHD solver in the adaptive mesh refinement code FLASH where we compare the divergence cleaning efficiency to the constrained transport solver available in FLASH (unsplit staggered mesh scheme).
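
    For orientation, the sketch below illustrates the basic idea behind hyperbolic/parabolic (GLM-type) divergence cleaning: an auxiliary scalar field advects divergence errors away and damps them. It is a minimal, generic example and does not reproduce the entropy-consistent nine-wave formulation of the paper; all parameter values are illustrative.

      import numpy as np

      def glm_clean_step(Bx, By, psi, dx, dt, ch=1.0, tau=0.1):
          """One operator-split cleaning step on a 2-D periodic grid."""
          divB = ((np.roll(Bx, -1, axis=0) - np.roll(Bx, 1, axis=0)) +
                  (np.roll(By, -1, axis=1) - np.roll(By, 1, axis=1))) / (2.0 * dx)
          psi = psi - dt * ch**2 * divB                 # d(psi)/dt = -ch^2 div(B)
          Bx = Bx - dt * (np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2.0 * dx)
          By = By - dt * (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2.0 * dx)
          psi = psi * np.exp(-dt / tau)                 # parabolic damping of the cleaning field
          return Bx, By, psi

      # A deliberately non-solenoidal field: repeated cleaning steps spread and damp the error.
      n, dx, dt = 64, 1.0 / 64, 2e-3
      x, y = np.meshgrid(np.linspace(0, 1, n, endpoint=False),
                         np.linspace(0, 1, n, endpoint=False), indexing="ij")
      Bx = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.01)
      By, psi = np.zeros_like(Bx), np.zeros_like(Bx)
      for _ in range(200):
          Bx, By, psi = glm_clean_step(Bx, By, psi, dx, dt)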

  17. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
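
    The bookkeeping behind local time stepping can be illustrated with a short sketch (generic, not the LTS-Newmark update of the paper): each element is assigned a power-of-two level from its local CFL limit, and fine elements are sub-cycled within one coarse step.

      import math

      def lts_levels(element_sizes, wave_speed, cfl=0.5):
          dt_local = [cfl * h / wave_speed for h in element_sizes]   # local CFL limits
          dt_global = max(dt_local)                                  # coarsest admissible step
          levels = [max(0, math.ceil(math.log2(dt_global / dt))) for dt in dt_local]
          return dt_global, levels                                   # element advances with dt_global / 2**level

      def run_one_coarse_step(levels, dt_global, advance):
          """Sub-cycle: an element at level p takes 2**p sub-steps of size dt_global / 2**p."""
          max_level = max(levels)
          for sub in range(2 ** max_level):
              for elem, p in enumerate(levels):
                  if sub % (2 ** (max_level - p)) == 0:              # element is due for an update
                      advance(elem, dt_global / 2 ** p)

      sizes = [1.0, 1.0, 0.3, 0.05, 1.0]                             # one locally refined region
      dt_g, lv = lts_levels(sizes, wave_speed=1.0)
      run_one_coarse_step(lv, dt_g, lambda e, dt: print(f"element {e}: dt = {dt:.3g}"))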

  18. Photorefraction of eyes: history and future prospects.

    PubMed

    Howland, Howard C

    2009-06-01

    A brief history of photorefraction, i.e., the refraction of eyes by photography or computer image capture, is given. The method of photorefraction originated from an optical scheme for secret communication across the Berlin wall. This scheme used a lens whose focus about infinity was modulated by a movable reflecting surface. From this device, it was recognized that the vertebrate eye was such a reflector and that its double-pass pointspread could be used to compute its degree of defocus. Subsequently, a second, totally independent invention, more accurately termed "photoretinoscopy," used an eccentric light source and obtained retinoscopic-like images of the reflex in the pupil of the subject's eyes. Photoretinoscopy has become the preferred method of photorefraction and has been instantiated in a wide variety of devices used in vision screening and research. This has been greatly helped by the parallel development of computer and digital camera technology. It seems likely that photorefractive methods will continue to be refined and may eventually become ubiquitous in clinical practice.

  19. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods: error estimation techniques approximate the local numerical error, and the mesh is automatically refined or unrefined so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
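
    The marking step at the heart of such h-adaptive methods can be sketched in a few lines (illustrative only; the thresholds and the error indicator itself are placeholders, not those of the code described above):

      def mark_elements(error_indicator, refine_frac=0.5, coarsen_frac=0.05):
          """Split elements whose indicator is large; flag very quiet ones for coarsening."""
          eta_max = max(error_indicator)
          refine = [e for e, eta in enumerate(error_indicator) if eta > refine_frac * eta_max]
          coarsen = [e for e, eta in enumerate(error_indicator) if eta < coarsen_frac * eta_max]
          return refine, coarsen

      # A sharp feature (elements 3-4) attracts refinement; smooth regions may be coarsened.
      etas = [0.01, 0.02, 0.05, 0.9, 0.7, 0.03, 0.001]
      print(mark_elements(etas))    # ([3, 4], [0, 1, 5, 6])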

  20. Direct numerical simulation of the sea flows around blunt bodies

    NASA Astrophysics Data System (ADS)

    Matyushin, Pavel V.; Gushchin, Valentin A.

    2015-11-01

    The aim of the present paper is to demonstrate the opportunities offered by mathematical modeling of separated sea-water flows around blunt bodies on the basis of the Navier-Stokes equations (NSE) in the Boussinesq approximation. The 3D density-stratified incompressible viscous fluid flows around a sphere have been investigated by means of direct numerical simulation (DNS) on supercomputers and visualization of the 3D vortex structures in the wake. For solving the NSE, the Splitting on physical factors Method for Incompressible Fluid flows (SMIF) with a hybrid explicit finite difference scheme (second-order accurate in space, with minimal scheme viscosity and dispersion, monotone, and capable of working over a wide range of Reynolds (Re) and internal Froude (Fr) numbers) has been developed and successfully applied. The different transitions in sphere wakes with increasing Re (10 < Re < 500) and decreasing Fr (0.005 < Fr < 100) have been investigated in detail. Thus the classification of the viscous fluid flow regimes around a sphere has been refined.

  1. Linking payment to health outcomes: a taxonomy and examination of performance-based reimbursement schemes between healthcare payers and manufacturers.

    PubMed

    Carlson, Josh J; Sullivan, Sean D; Garrison, Louis P; Neumann, Peter J; Veenstra, David L

    2010-08-01

    To identify, categorize and examine performance-based health outcomes reimbursement schemes for medical technology. We performed a review of performance-based health outcomes reimbursement schemes over the past 10 years (7/98-10/09) using publicly available databases, web and grey literature searches, and input from healthcare reimbursement experts. We developed a taxonomy of scheme types by inductively organizing the schemes identified according to the timing, execution, and health outcomes measured in the schemes. Our search yielded 34 coverage with evidence development schemes, 10 conditional treatment continuation schemes, and 14 performance-linked reimbursement schemes. The majority of schemes are in Europe and Australia, with an increasing number in Canada and the U.S. These schemes have the potential to alter the reimbursement and pricing landscape for medical technology, but significant challenges, including high transaction costs and insufficient information systems, may limit their long-term impact. Future studies regarding experiences and outcomes of implemented schemes are necessary. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  2. How Certain are We of the Uncertainties in Recent Ozone Profile Trend Assessments of Merged Limb/Occultation Records? Challenges and Possible Ways Forward

    NASA Technical Reports Server (NTRS)

    Hubert, Daan; Lambert, Jean-Christopher; Verhoelst, Tijl; Granville, Jose; Keppens, Arno; Baray, Jean-Luc; Cortesi, Ugo; Degenstein, D. A.; Froidevaux, Lucien; Godin-Beekmann, Sophie; ...

    2015-01-01

    Most recent assessments of long-term changes in the vertical distribution of ozone (by e.g. WMO and SI2N) rely on data sets that integrate observations by multiple instruments. Several merged satellite ozone profile records have been developed over the past few years; each considers a particular set of instruments and adopts a particular merging strategy. Their intercomparison by Tummon et al. revealed that the current merging schemes are not sufficiently refined to correct for all major differences between the limb/occultation records. This shortcoming introduces uncertainties that need to be known to obtain a sound interpretation of the different satellite-based trend studies. In practice however, producing realistic uncertainty estimates is an intricate task which depends on a sufficiently detailed understanding of the characteristics of each contributing data record and on the subsequent interplay and propagation of these through the merging scheme. Our presentation discusses these challenges in the context of limb/occultation ozone profile records, but they are equally relevant for other instruments and atmospheric measurements. We start by showing how the NDACC and GAW-affiliated ground-based networks of ozonesonde and lidar instruments allowed us to characterize fourteen limb/occultation ozone profile records, together providing a global view over the last three decades. Our prime focus will be on techniques to estimate long-term drift since our results suggest this is the main driver of the major trend differences between the merged data sets. The single-instrument drift estimates are then used for a tentative estimate of the systematic uncertainty in the profile trends from merged data records. We conclude by reflecting on possible further steps needed to improve the merging algorithms and to obtain a better characterization of the uncertainties involved.

  3. A new class of accurate, mesh-free hydrodynamic simulation methods

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO:1 this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.
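
    The matrix gradient estimator that such kernel-based schemes rely on can be illustrated with a small sketch (a generic least-squares form, not GIZMO's implementation): for a particle i, grad f is approximated by E^{-1} * sum_j w_ij (f_j - f_i)(x_j - x_i), with E = sum_j w_ij (x_j - x_i)(x_j - x_i)^T, which is exact for linear fields.

      import numpy as np

      def ls_gradient(xi, fi, neighbor_x, neighbor_f, h):
          dx = neighbor_x - xi                                  # (N, dim) offsets to neighbours
          w = np.exp(-(np.linalg.norm(dx, axis=1) / h) ** 2)    # simple Gaussian kernel weight
          E = (w[:, None, None] * dx[:, :, None] * dx[:, None, :]).sum(axis=0)
          rhs = (w[:, None] * (neighbor_f - fi)[:, None] * dx).sum(axis=0)
          return np.linalg.solve(E, rhs)

      # Recover the gradient of f(x, y) = 2x + 3y from scattered neighbours: ~[2, 3]
      rng = np.random.default_rng(0)
      pts = rng.uniform(-1, 1, size=(30, 2))
      print(ls_gradient(np.zeros(2), 0.0, pts, 2 * pts[:, 0] + 3 * pts[:, 1], h=1.0))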

  4. Computer simulation of refining process of a high consistency disc refiner based on CFD

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Jianwei; Wang, Jiahui

    2017-08-01

    In order to reduce refining energy consumption, ANSYS CFX was used to simulate the refining process of a high consistency disc refiner. First, the pulp in the disc refiner was assumed to be a uniform Newtonian fluid in a turbulent state, modeled with the k-ɛ turbulence model; the 3-D model of the disc refiner was then meshed and its boundary conditions were set; the flow was then simulated and analyzed; finally, the viscosity of the pulp was measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, and thus to calculate the refining power, and that streamlines and velocity vectors can also be observed. CFD simulation can optimize the parameters of the bar and groove, which is of great significance for reducing experimental cost and cycle time.

  5. Empirical Analysis and Refinement of Expert System Knowledge Bases

    DTIC Science & Technology

    1988-08-31

    refinement. Both a simulated case generation program, and a random rule basher were developed to enhance rule refinement experimentation. Substantial...the second fiscal year 88 objective was fully met. Rule Refinement System Simulated Rule Basher Case Generator Stored Cases Expert System Knowledge...generated until the rule is satisfied. Cases may be randomly generated for a given rule or hypothesis. Rule Basher Given that one has a correct

  6. Understanding security failures of two authentication and key agreement schemes for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra

    2015-03-01

    Smart card based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to provide secure and efficient solutions for TMIS. Most of the existing authentication schemes for TMIS either have higher computation overhead or are vulnerable to attacks. To reduce the computational overhead and enhance security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes are more efficient than conventional public key cryptography based schemes, and they are important because they present efficient solutions for TMIS. We analyze the security of both Lee's scheme and Xu et al.'s scheme. Unfortunately, we find that both schemes are vulnerable to denial of service attacks. To understand the security failures of these cryptographic schemes, which is the key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.

  7. Robust and efficient biometrics based password authentication scheme for telecare medicine information systems using extended chaotic maps.

    PubMed

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian

    2015-06-01

    Telecare Medicine Information Systems (TMISs) provide an efficient communication platform through which patients can access health-care delivery services via the internet or mobile networks. Authentication becomes essential when a remote patient logs in to the telecare server. Recently, many extended chaotic maps based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart-card-based authentication scheme for TMISs using extended chaotic maps, building on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has some weaknesses, such as violation of session key security, vulnerability to user impersonation attacks, and lack of local verification. To overcome these flaws, we propose a chaotic maps and smart cards based password authentication scheme that applies a biometrics technique and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with previous authentication schemes, the proposed scheme is more secure and efficient and hence more practical for telemedical environments.

  8. Security patterns and a weighting scheme for mobile agents

    NASA Astrophysics Data System (ADS)

    Walker, Jessie J.

    The notion of mobility has always been a prime factor in human endeavor and achievement. This need to migrate has been distilled into software entities, which act as their users' representatives in distant environments. Software agents are developed to act on behalf of a user. Mobile agents were born from the understanding that it is often much more useful to move the code (program) to where the resources are located, instead of connecting remotely. Within the mobile agent research community, security has traditionally been the defining issue facing the community and preventing the paradigm from gaining wide acceptance. There are still numerous difficult problems being addressed with very few practical solutions, such as the malicious host and malicious agent problems; these remain some of the most active areas of research within the mobile agent community. The major principles, facets, fundamental concepts, techniques and architectures of the field are well understood within the community. This is evident in the many mobile agent systems developed in the last decade that share common core components such as agent management, communication facilities, and mobility services. In other words, new mobile agent systems and frameworks provide few new insights into agent system architecture, mobility services, agent coordination, or communication that could be useful to the agent research community, although in many instances they do validate, refine, and demonstrate the reuse of previously proposed mobile agent research elements. Since mobile agent research over the last decade has been defined by security and related issues, our research into security patterns falls within this narrow arena. The research presented in this thesis examines the issue of mobile agent security from the standpoint of security patterns documented across the universe of mobile agent systems. In addition, we explore how these documented security patterns can be quantitatively compared based on a unique weighting scheme. The scheme is formalized into a theory that can be used to improve the development of secure mobile agents and agent-based systems.
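
    The quantitative comparison the thesis describes can be pictured with a toy weighted-sum sketch (the criteria, weights and scores below are entirely hypothetical and are not taken from the thesis):

      # Each security pattern receives per-criterion scores; a weighted sum ranks them.
      criteria_weights = {"confidentiality": 0.4, "integrity": 0.3,
                          "availability": 0.2, "deployment_cost": 0.1}
      patterns = {
          "sandboxed-host":   {"confidentiality": 4, "integrity": 5, "availability": 3, "deployment_cost": 2},
          "signed-itinerary": {"confidentiality": 3, "integrity": 4, "availability": 4, "deployment_cost": 4},
      }

      def weighted_score(scores, weights):
          return sum(weights[c] * scores[c] for c in weights)

      for name, scores in sorted(patterns.items(),
                                 key=lambda kv: weighted_score(kv[1], criteria_weights),
                                 reverse=True):
          print(f"{name}: {weighted_score(scores, criteria_weights):.2f}")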

  9. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.

    PubMed

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung

    2017-01-01

    A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not exhibit the freshness property and the validity of messages so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key such that the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence of games models. Moreover, the enhanced scheme ensures the freshness of communicating messages by appending timestamps, and thereby avoids the weaknesses in previous schemes.
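
    The algebraic property such extended chaotic map schemes build on is the semigroup property of Chebyshev polynomials over a prime field, T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)) (mod p), which yields a Diffie-Hellman-like exchange. The sketch below shows only this primitive, not the full authentication protocol of the paper; the parameter sizes are illustrative.

      def cheb(n, x, p):
          """T_n(x) mod p via square-and-multiply on the companion matrix of
          the recurrence T_{k+1} = 2x*T_k - T_{k-1}."""
          def mul(A, B):
              return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p, (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p],
                      [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p, (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p]]
          result, base, k = [[1, 0], [0, 1]], [[2 * x % p, p - 1], [1, 0]], n
          while k:
              if k & 1:
                  result = mul(result, base)
              base, k = mul(base, base), k >> 1
          return (result[1][0] * x + result[1][1]) % p    # [T_{n+1}, T_n] = M^n [T_1, T_0]

      p, x = 2**61 - 1, 123456789        # public parameters (illustrative sizes only)
      a, b = 977, 1597                   # the two parties' private values
      A, B = cheb(a, x, p), cheb(b, x, p)
      assert cheb(a, B, p) == cheb(b, A, p)   # both sides derive the same shared secret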

  10. Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps

    PubMed Central

    Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han

    2017-01-01

    A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not exhibit the freshness property and the validity of messages so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key such that the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence of games models. Moreover, the enhanced scheme ensures the freshness of communicating messages by appending timestamps, and thereby avoids the weaknesses in previous schemes. PMID:28759615

  11. Optimization of illumination schemes in a head-mounted display integrated with eye tracking capabilities

    NASA Astrophysics Data System (ADS)

    Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.

    2005-08-01

    Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.

  12. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
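
    A minimal firefly-algorithm sketch on a toy objective illustrates the attraction-plus-randomization update the method uses (the paper applies it to data parameterization for spline fitting; the objective and parameter values here are placeholders):

      import numpy as np

      def firefly_minimize(f, dim, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-2, 2, size=(n, dim))               # firefly positions
          for _ in range(iters):
              brightness = -np.array([f(xi) for xi in x])     # brighter = lower objective value
              for i in range(n):
                  for j in range(n):
                      if brightness[j] > brightness[i]:       # move toward brighter fireflies
                          beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                          x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
              alpha *= 0.97                                   # gradually reduce the random walk
          best = min(range(n), key=lambda i: f(x[i]))
          return x[best], f(x[best])

      # The optimum of this toy quadratic is near (1, -0.5).
      sol, val = firefly_minimize(lambda v: (v[0] - 1) ** 2 + (v[1] + 0.5) ** 2, dim=2)
      print(sol, val)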

  13. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  14. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    PubMed Central

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric authentication schemes for multi-server environments are becoming more crucial and more widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, built on our cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desired security requirements. The presented scheme provides a variety of significant functionalities, some of which, such as user revocation or re-registration and biometric information protection, are not considered in most existing authentication schemes. Compared with several related schemes, our scheme has stronger security properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks. PMID:26866606

  15. Inhalation TTC values: A new integrative grouping approach considering structural, toxicological and mechanistic features.

    PubMed

    Tluczkiewicz, I; Kühne, R; Ebert, R-U; Batke, M; Schüürmann, G; Mangelsdorf, I; Escher, S E

    2016-07-01

    The present publication describes an integrative grouping concept to derive threshold values for inhalation exposure. The classification scheme starts with differences in toxicological potency and develops criteria to group compounds into two potency classes, namely toxic (T-group) or low toxic (L-group). The TTC concept for inhalation exposure is based on the TTC RepDose data set, consisting of 296 organic compounds with 608 repeated-dose inhalation studies. Initially, 21 structural features (SFs) were identified as being characteristic for compounds of either high or low NOEC values (Schüürmann et al., 2016). In subsequent analyses these SF groups were further refined by taking into account structural homogeneity, type of toxicological effect observed, differences in absorption, metabolism and mechanism of action (MoA), to better define their structural and toxicological boundaries. Differentiation of a local or systemic mode of action did not improve the classification scheme. Finally, 28 groups were discriminated: 19 T-groups and 9 L-groups. Clearly distinct thresholds were derived for the T- and L-toxicity groups, being 2 × 10⁻⁵ ppm (2 μg/person/day) and 0.05 ppm (4260 μg/person/day), respectively. The derived thresholds and the classification are compared to the initial, mainly structure-driven grouping (Schüürmann et al., 2016) and to the Cramer classification. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. A Dynamic Compressive Gammachirp Auditory Filterbank

    PubMed Central

    Irino, Toshio; Patterson, Roy D.

    2008-01-01

    It is now common to use knowledge about human auditory processing in the development of audio signal processors. Until recently, however, such systems were limited by their linearity. The auditory filter system is known to be level-dependent as evidenced by psychophysical data on masking, compression, and two-tone suppression. However, there were no analysis/synthesis schemes with nonlinear filterbanks. This paper describes such a scheme based on the compressive gammachirp (cGC) auditory filter. It was developed to extend the gammatone filter concept to accommodate the changes in psychophysical filter shape that are observed to occur with changes in stimulus level in simultaneous, tone-in-noise masking. In models of simultaneous noise masking, the temporal dynamics of the filtering can be ignored. Analysis/synthesis systems, however, are intended for use with speech sounds where the glottal cycle can be long with respect to auditory time constants, and so they require specification of the temporal dynamics of the auditory filter. In this paper, we describe a fast-acting level control circuit for the cGC filter and show how psychophysical data involving two-tone suppression and compression can be used to estimate the parameter values for this dynamic version of the cGC filter (referred to as the “dcGC” filter). One important advantage of analysis/synthesis systems with a dcGC filterbank is that they can inherit previously refined signal processing algorithms developed with conventional short-time Fourier transforms (STFTs) and linear filterbanks. PMID:19330044
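
    For readers unfamiliar with the filter family, the sketch below generates a static gammachirp impulse response of the general form t^(n-1) exp(-2*pi*b*ERB(fr)*t) cos(2*pi*fr*t + c*ln(t) + phi). It illustrates the filter shape only; the level-dependent dcGC filterbank described above involves an additional dynamic level-control path, and the parameter values here are illustrative.

      import numpy as np

      def erb(f_hz):
          """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
          return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

      def gammachirp(fr=2000.0, fs=44100, dur=0.025, n=4, b=1.019, c=-2.0, phi=0.0):
          t = np.arange(1, int(dur * fs)) / fs          # start one sample in to avoid ln(0)
          env = t ** (n - 1) * np.exp(-2 * np.pi * b * erb(fr) * t)
          ir = env * np.cos(2 * np.pi * fr * t + c * np.log(t) + phi)
          return ir / np.max(np.abs(ir))                # peak-normalized impulse response

      ir = gammachirp()
      print(len(ir), float(ir.max()))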

  17. i3Drefine software for protein 3D structure refinement and its assessment in CASP10.

    PubMed

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section in the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine is the only fully-automated server participating in CASP10 that exhibits consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.

  18. Incorporation of Three-dimensional Radiative Transfer into a Very High Resolution Simulation of Horizontally Inhomogeneous Clouds

    NASA Astrophysics Data System (ADS)

    Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.

    2016-12-01

    A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in a very high resolution simulation of cloud evolution (with a spatial grid on the order of 10 m), especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds appears to cause local heating and cooling in the atmosphere at fine spatial scales. It is, however, usually difficult to estimate these 3D effects, because 3D radiative transfer often requires large computational resources compared to a plane-parallel approximation. This study incorporates a solution scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because such a scheme is advantageous for calculations over a sequence of time steps (i.e., the scene at one time step differs little from that at the previous step). This scheme is also appropriate for radiation with strong absorption, such as in the infrared regions. For efficient computation, the scheme utilizes several techniques, e.g., the multigrid method for iterative solution and a correlated-k distribution method refined for efficient approximation of the wavelength integration. As a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation shows that the horizontal divergences and convergences of infrared radiation flux are not negligible, especially at the boundaries of clouds and within optically thin clouds, and that radiative cooling at the lateral boundaries of clouds may reduce infrared radiative heating in clouds. In future work, these 3D effects on radiative heating and cooling will be included in atmospheric numerical models.

  19. An efficient and provable secure revocable identity-based encryption scheme.

    PubMed

    Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia

    2014-01-01

    Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
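
    The complete subtree method mentioned above can be illustrated on its own (a textbook sketch; the actual RIBE construction additionally wraps each covering node in the underlying encryption scheme, which is not reproduced here): users are leaves of a binary tree, and a key update targets the minimal set of subtree roots whose leaves contain no revoked user.

      def cs_cover(depth, revoked):
          """Node ids (heap numbering, root = 1) covering all non-revoked leaves."""
          n_leaves = 1 << depth
          revoked_leaves = {n_leaves + u for u in revoked}          # user index -> leaf id

          def subtree_has_revoked(node, level):
              first = node << (depth - level)                       # leaves under `node` form a range
              span = 1 << (depth - level)
              return any(first <= leaf < first + span for leaf in revoked_leaves)

          cover = []

          def walk(node, level):
              if not subtree_has_revoked(node, level):
                  cover.append(node)              # whole subtree non-revoked: one key suffices
              elif level < depth:
                  walk(2 * node, level + 1)       # descend around the revoked leaves
                  walk(2 * node + 1, level + 1)

          walk(1, 0)
          return cover

      # 8 users at the leaves; revoking users 2 and 5 leaves a small cover of subtree roots.
      print(cs_cover(depth=3, revoked={2, 5}))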

  20. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage

    PubMed Central

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, and keyword-based searchable encryption schemes focus on the problem of quickly finding the files that a user is interested in within cloud storage. Designing a searchable and attribute-based encryption scheme is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes of users are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, a keyword search function is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven to be secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model. PMID:27898703

  1. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage.

    PubMed

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, and keyword-based searchable encryption schemes focus on the problem of quickly finding the files that a user is interested in within cloud storage. Designing a searchable and attribute-based encryption scheme is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes of users are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, a keyword search function is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven to be secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model.

  2. Evaluating the effect of alternative carbon allocation schemes in a land surface model (CLM4.5) on carbon fluxes, pools, and turnover in temperate forests

    NASA Astrophysics Data System (ADS)

    Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; MacBean, Natasha; Alexander, M. Ross; Dye, Alex; Bishop, Daniel A.; Trouet, Valerie; Babst, Flurin; Hessl, Amy E.; Pederson, Neil; Blanken, Peter D.; Bohrer, Gil; Gough, Christopher M.; Litvak, Marcy E.; Novick, Kimberly A.; Phillips, Richard P.; Wood, Jeffrey D.; Moore, David J. P.

    2017-09-01

    How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allocates C to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.-iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C-LAI relationship in the model did not match the observed leaf C-LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic Cstem / Cleaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP for deciduous forests by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.
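
    To make the contrast concrete, the sketch below compares a fixed stem:leaf allocation ratio with one that rises smoothly with annual NPP. The functional form and all parameter values are schematic placeholders, not the actual CLM4.5 or D-Litton parameterizations.

      import math

      def stem_to_leaf_ratio_dynamic(npp_annual, lo=0.2, hi=2.7, k=0.004, npp_ref=300.0):
          """Allocation ratio increases with annual NPP (g C m-2 yr-1); values are illustrative."""
          return lo + (hi - lo) / (1.0 + math.exp(-k * (npp_annual - npp_ref)))

      def stem_to_leaf_ratio_fixed(forest_type):
          """One ratio per forest type, in the spirit of the F-Evergreen / F-Deciduous variants."""
          return {"evergreen": 1.8, "deciduous": 1.2}[forest_type]   # hypothetical values

      for npp in (100, 300, 800):
          print(npp, round(stem_to_leaf_ratio_dynamic(npp), 2))
      print(stem_to_leaf_ratio_fixed("deciduous"))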

  3. Evaluating the effect of alternative carbon allocation schemes in a land surface model (CLM4.5) on carbon fluxes, pools, and turnover in temperate forests

    DOE PAGES

    Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; ...

    2017-09-22

    How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allocates C to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.–iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C–LAI relationship in the model did not match the observed leaf C–LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic Cstem/Cleaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP for deciduous forests by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.

  4. Evaluating the effect of alternative carbon allocation schemes in a land surface model (CLM4.5) on carbon fluxes, pools, and turnover in temperate forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.

    How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allocates C to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.–iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C–LAI relationship in the model did not match the observed leaf C–LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic Cstem/Cleaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP for deciduous forests by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.

  5. A provably-secure ECC-based authentication scheme for wireless sensor networks.

    PubMed

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-11-06

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.
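
    As background, the ECC primitive such schemes build on is an elliptic-curve Diffie-Hellman exchange followed by key derivation. The sketch below shows only this primitive using the Python `cryptography` package; it is not the paper's SUA-WSN protocol, which additionally involves smart cards, passwords and gateway-mediated authentication.

      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF
      from cryptography.hazmat.primitives import hashes

      user_priv = ec.generate_private_key(ec.SECP256R1())     # e.g. held on the user's smart card
      gw_priv = ec.generate_private_key(ec.SECP256R1())       # e.g. held by the gateway node

      # Each side combines its own private key with the other's public key.
      shared_user = user_priv.exchange(ec.ECDH(), gw_priv.public_key())
      shared_gw = gw_priv.exchange(ec.ECDH(), user_priv.public_key())
      assert shared_user == shared_gw

      # Derive a session key from the shared secret.
      session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                         salt=None, info=b"sua-wsn-session").derive(shared_user)
      print(session_key.hex())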

  6. A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks

    PubMed Central

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes. PMID:25384009

  7. A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.

    PubMed

    Do, Nhu Tri; An, Beongku

    2015-02-13

    In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and a weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: its structure reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in a wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
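
    The two-stage idea can be pictured with a toy simulation (illustrative only; the statistic below is a plain energy detector rather than the paper's closed-form LRT, and the SNRs, thresholds and fusion weights are placeholders): sensors within a cluster pool their soft statistics into one cluster decision, and the fusion center takes a weighted vote over the clusters.

      import numpy as np

      rng = np.random.default_rng(1)

      def sense(snr_db, n_samples=1000, signal_present=True):
          """Received-energy statistic of one sensor under H1 (signal) or H0 (noise only)."""
          samples = rng.normal(0, 1, n_samples)
          if signal_present:
              samples += 10 ** (snr_db / 20) * rng.normal(0, 1, n_samples)
          return np.mean(samples ** 2)

      def cluster_decision(snrs_db, signal_present, threshold=1.08):
          # Soft combination: average the sensors' statistics inside the cluster.
          stat = np.mean([sense(s, signal_present=signal_present) for s in snrs_db])
          return int(stat > threshold)

      def fusion_center(cluster_bits, weights):
          # Hard combination: weighted vote over the one-bit cluster decisions.
          return int(np.dot(cluster_bits, weights) > 0.5 * np.sum(weights))

      clusters = [[-8, -10, -9], [-7, -12], [-11, -9, -10]]   # per-sensor SNRs in dB
      weights = [0.4, 0.35, 0.25]                             # e.g. weighted by cluster reliability
      bits = [cluster_decision(c, signal_present=True) for c in clusters]
      print(bits, "->", fusion_center(bits, weights))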

  8. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vasta, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID while the adaptation was performed using the Computational Fluid Dynamic (CFD) code FUN3D with a feature based adaptation software tool. GridTool was developed by ViGYAN, Inc. while the last three software suites were developed by NASA Langley Research Center. The feature based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  9. Refinement of protein termini in template-based modeling using conformational space annealing.

    PubMed

    Park, Hahnbeom; Ko, Junsu; Joo, Keehyoung; Lee, Julian; Seok, Chaok; Lee, Jooyoung

    2011-09-01

    The rapid increase in the number of experimentally determined protein structures in recent years enables us to obtain more reliable protein tertiary structure models than ever by template-based modeling. However, refinement of template-based models beyond the limit available from the best templates is still needed for understanding protein function in atomic detail. In this work, we develop a new method for protein terminus modeling that can be applied to the refinement of models with unreliable terminus structures. The energy function for terminus modeling consists of both physics-based and knowledge-based potential terms with carefully optimized relative weights. Effective sampling of both the framework and terminus is performed using the conformational space annealing technique. This method has been tested on a set of termini derived from a nonredundant structure database and two sets of termini from the CASP8 targets. The performance of the terminus modeling method is significantly improved over our previous method that does not employ terminus refinement. It is also comparable or superior to the best server methods tested in CASP8. The success of the current approach suggests that a similar strategy may be applied to other types of refinement problems such as loop modeling or secondary structure rearrangement. Copyright © 2011 Wiley-Liss, Inc.

  10. Center for Macromolecular Crystallography, University of Alabama in Birmingham

    NASA Technical Reports Server (NTRS)

    Navia, Manuel A.

    1991-01-01

    Porcine pancreatic elastase (PPE) crystals grown under microgravity conditions on mission STS-26 of the Space Shuttle Discovery were shown to diffract to considerably higher resolution than the best PPE crystals grown by us on the ground. We have now independently refined both the microgravity and ground-based data. Preliminary results of these refinements are summarized. These results show nearly a doubling of experimental diffraction data for this structure, exceeding 1.3 A resolution. Improved phase information derived from the refined structure of PPE based on this microgravity data has allowed us to interpret previously-uninterpretable electron density obtained from ground-based crystals of a complex of PPE with a chemically-reactive inhibitor. Intermediate stages in the enzyme-inhibitor reaction mechanism in the crystal can now be directly observed. Further refinement of PPE structures is in progress.

  11. A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems

    PubMed Central

    Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok

    2018-01-01

    Passive tracking aims to track targets without assistant devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) Tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF Tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology and then geometric theorems are applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target's trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes, and then propose a novel passive tracking scheme by exploiting the Intercept Theorem (IT). To create an IT-based CP estimation scheme available in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze properties of three GT-based tracking algorithms and the performance of these schemes is evaluated experimentally under various trajectories, node densities, and noisy topologies. Analysis of the experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under a rather low density of wireless nodes. Moreover, the proposed IT scheme can provide generally finer tracking accuracy under even lower node density and noisier topologies, in comparison to other schemes. PMID:29562621

  12. General relativistic hydrodynamics with Adaptive-Mesh Refinement (AMR) and modeling of accretion disks

    NASA Astrophysics Data System (ADS)

    Donmez, Orhan

    We present a general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive mesh refinement (AMR) and to model an accretion disk around a black hole. To do this, the GRH equations are written in conservative form to exploit their hyperbolic character. The GRH equations are solved numerically with high-resolution shock-capturing (HRSC) schemes, which are specifically designed to solve nonlinear hyperbolic systems of conservation laws and which depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D and 3D. Second, we solve the GRH equations and use general relativistic test problems to compare the numerical solutions with analytic ones. To do so, we couple the flux part of the general relativistic hydrodynamic equations with the source part using Strang splitting; this coupling gives solutions that are second-order accurate in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply the code to accretion disk problems around the black hole, using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected. We also examine the star-disk interaction near a massive black hole and find that, when stars are ground down or a hole is punched in the accretion disk, shock waves are created that destroy the accretion disk.
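
    The full GRH solver is far beyond a short example, but its basic building blocks, namely MUSCL slope-limited reconstruction of left/right interface states, an upwind numerical flux, and Strang splitting of a source term, can be sketched for a 1D scalar conservation law. Forward-Euler time stepping and a simple linear advection-decay model are used here for brevity; this is not the Marquina flux or the relativistic system.

```python
import numpy as np

def minmod(a, b):
    return np.where(a*b > 0, np.sign(a)*np.minimum(abs(a), abs(b)), 0.0)

def advect_step(u, dt, dx, c=1.0):
    """One finite-volume step for u_t + c*u_x = 0 with MUSCL (minmod)
    reconstruction of left/right interface states and upwind fluxes."""
    up = np.roll(u, -1)
    um = np.roll(u, 1)
    slope = minmod(u - um, up - u)
    uL = u + 0.5*slope                       # right face, from the left cell
    uR = np.roll(u - 0.5*slope, -1)          # right face, from the right cell
    flux = c*uL if c > 0 else c*uR           # upwind flux at i+1/2
    return u - dt/dx*(flux - np.roll(flux, 1))

def source_step(u, dt, k=-0.5):
    return u*np.exp(k*dt)                    # exact update of u_t = k*u

# Strang splitting: half source step, full flux step, half source step.
nx, L = 200, 1.0
dx = L/nx
x = (np.arange(nx) + 0.5)*dx
u = np.exp(-200*(x - 0.3)**2)                # smooth pulse initial data
dt = 0.4*dx                                  # CFL number 0.4
for _ in range(100):
    u = source_step(u, 0.5*dt)
    u = advect_step(u, dt, dx)
    u = source_step(u, 0.5*dt)
```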

  13. An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity

    PubMed Central

    Kumari, Saru

    2013-01-01

    The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore, the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability. PMID:24350272

  14. An improved biometrics-based remote user authentication scheme with user anonymity.

    PubMed

    Khan, Muhammad Khurram; Kumari, Saru

    2013-01-01

    The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore, the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability.

  15. Provably secure identity-based identification and signature schemes from code assumptions

    PubMed Central

    Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives believed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly deep research on coding theory, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and the Or-proof security-enhancement technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure. PMID:28809940

  16. Provably secure identity-based identification and signature schemes from code assumptions.

    PubMed

    Song, Bo; Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives believed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly deep research on coding theory, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and the Or-proof security-enhancement technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure.

  17. A secure and efficient chaotic map-based authenticated key agreement scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav

    2014-10-01

    Advancement in network technology provides new ways to utilize telecare medicine information systems (TMIS) for patient care. However, TMIS usually faces various attacks because its services are provided over public networks. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement using chaos theory, and it enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that it is vulnerable to a denial of service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. We then propose a new chaotic map-based anonymous user authentication scheme for TMIS to overcome the weaknesses of Jiang et al.'s scheme, while retaining the original merits of their scheme. We also show that our scheme is secure against various known attacks, including the attacks found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.
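
    The session-key agreement in chaotic map-based schemes of this kind typically rests on the semigroup property of Chebyshev polynomials evaluated over a prime field. The sketch below demonstrates that property with toy parameters; a real protocol uses a large prime and large private exponents, together with the authentication steps omitted here.

```python
# Chebyshev polynomials over Z_p, a building block of chaotic-map-based
# key agreement: T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x) (mod p), with the
# semigroup property T_r(T_s(x)) = T_{rs}(x) = T_s(T_r(x)).
# Toy parameters only; a real scheme uses a large prime p and large exponents.

p = 10007          # small prime modulus (hypothetical toy value)
x = 1234           # public seed value

def cheb(n, x, p):
    t0, t1 = 1, x % p
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, (2*x*t1 - t0) % p
    return t1

r, s = 57, 91                     # private exponents of the two parties
A = cheb(r, x, p)                 # sent by the user
B = cheb(s, x, p)                 # sent by the server
assert cheb(s, A, p) == cheb(r, B, p) == cheb(r*s, x, p)   # shared secret
```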

  18. Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai

    2018-03-01

    Deck assembly is a critical quality control point in the final satellite assembly process, and cable extrusion and structural collision problems in the assembly process directly affect the development quality and progress of the satellite. Aimed at the problems existing in the deck assembly process, an assembly scheme for the satellite deck based on the robot flexibility control principle is proposed in this paper. The scheme is introduced first; secondly, the key technologies of end-force perception and flexible docking control in the scheme are studied; then, the implementation process of the assembly scheme for the satellite deck is described in detail; finally, an actual application case of the assembly scheme is given. The results show that, compared with the traditional assembly scheme, the assembly scheme for the satellite deck based on the robot flexibility control principle has obvious advantages in work efficiency, reliability, universality, etc.

  19. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.

    PubMed

    Wang, Shangping; Ye, Jian; Zhang, Yaling

    2018-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is very suitable for cloud data storage because of its fine-grained access control. A keyword-based searchable encryption scheme enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute updates: when a user's attribute needs to be updated, only that user's secret key components related to the attribute need to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen-plaintext attacks in the generic bilinear group model, and it is also proven semantically secure against chosen-keyword attacks under the bilinear Diffie-Hellman (BDH) assumption.

  20. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage

    PubMed Central

    Wang, Shangping; Zhang, Yaling

    2018-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is very suitable for cloud data storage because of its fine-grained access control. A keyword-based searchable encryption scheme enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute updates: when a user's attribute needs to be updated, only that user's secret key components related to the attribute need to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen-plaintext attacks in the generic bilinear group model, and it is also proven semantically secure against chosen-keyword attacks under the bilinear Diffie-Hellman (BDH) assumption. PMID:29795577

  1. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool that can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD; it has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. Simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show an application of our proposed scheme. PMID:27136550

  2. Electronic damping of anharmonic adsorbate vibrations at metallic surfaces

    NASA Astrophysics Data System (ADS)

    Tremblay, Jean Christophe; Monturet, Serge; Saalfrank, Peter

    2010-03-01

    The nonadiabatic coupling of an adsorbate close to a metallic surface leads to electronic damping of adsorbate vibrations and line broadening in vibrational spectroscopy. Here, a perturbative treatment of the electronic contribution to the lifetime broadening serves as a building block for a new approach, in which anharmonic vibrational transition rates are calculated from a position-dependent coupling function. Different models for the coupling function will be tested, all related to embedding theory. The first two are models based on a scattering approach with (i) a jellium-type and (ii) a density functional theory based embedding density, respectively. In a third variant a further refined model is used for the embedding density, and a semiempirical approach is taken in which a scaling factor is chosen to match harmonic, single-site, first-principles transition rates, obtained from periodic density functional theory. For the example of hydrogen atoms on (adsorption) and below (subsurface absorption) a Pd(111) surface, lifetimes of and transition rates between vibrational levels are computed. The transition rates emerging from different models serve as input for the selective subsurface adsorption of hydrogen in palladium starting from an adsorption site, by using sequences of infrared laser pulses in a laser distillation scheme.

  3. An Efficient Metadynamics-Based Protocol To Model the Binding Affinity and the Transition State Ensemble of G-Protein-Coupled Receptor Ligands.

    PubMed

    Saleh, Noureldin; Ibrahim, Passainte; Saladino, Giorgio; Gervasio, Francesco Luigi; Clark, Timothy

    2017-05-22

    A generally applicable metadynamics scheme for predicting the free energy profile of ligand binding to G-protein-coupled receptors (GPCRs) is described. A common and effective collective variable (CV) has been defined using the ideally placed and highly conserved Trp6.48 as a reference point for ligand-GPCR distance measurement and the common orientation of GPCRs in the cell membrane. Using this single CV together with well-tempered multiple-walker metadynamics with a funnel-like boundary allows an efficient exploration of the entire ligand binding path from the extracellular medium to the orthosteric binding site, including vestibule and intermediate sites. The protocol can be used with X-ray structures or high-quality homology models (based on a high-quality template and after thorough refinement) for the receptor and is universally applicable to agonists, antagonists, and partial and reverse agonists. The root-mean-square error (RMSE) in predicted binding free energies for 12 diverse ligands in five receptors (a total of 23 data points) is surprisingly small (less than 1 kcal/mol). The RMSEs for simulations that use receptor X-ray structures and homology models are very similar.
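
    A minimal sketch of the kind of collective variable described above, the ligand position relative to the conserved Trp6.48 residue projected on the membrane normal, together with a funnel-like boundary on the lateral displacement, is given below. The coordinates, funnel dimensions, and function names are hypothetical, and in practice the CV and bias are evaluated inside an enhanced-sampling engine rather than in standalone code.

```python
import numpy as np

# Sketch of the single collective variable: the ligand centre of mass relative
# to the conserved Trp6.48 C-alpha, projected on the membrane normal (taken
# here as the z axis), plus a simple funnel-shaped boundary on the lateral
# displacement. All coordinates and parameters are hypothetical placeholders.

def binding_cv(ligand_xyz, trp648_ca, membrane_normal=np.array([0., 0., 1.])):
    com = ligand_xyz.mean(axis=0)                     # ligand centre of mass
    d = com - trp648_ca
    z = float(np.dot(d, membrane_normal))             # CV: signed depth
    r = float(np.linalg.norm(d - z*membrane_normal))  # lateral offset
    return z, r

def inside_funnel(z, r, z_cone=15.0, r_cyl=2.0, slope=0.3):
    """Funnel-like restraint: a cone around the binding path that narrows to
    a cylinder of radius r_cyl beyond z_cone (angstroms), limiting the volume
    explored in the unbound region so the binding free energy converges."""
    r_max = r_cyl + max(0.0, slope*(z_cone - z))
    return r <= r_max

ligand = np.array([[1.2, 0.8, 12.0], [2.1, 0.3, 13.5], [0.7, 1.9, 12.8]])
trp_ca = np.array([0.0, 0.0, 0.0])
z, r = binding_cv(ligand, trp_ca)
print(f"CV = {z:.2f} A along the normal, lateral offset {r:.2f} A,",
      "inside funnel" if inside_funnel(z, r) else "outside funnel")
```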

  4. New multigrid approach for three-dimensional unstructured, adaptive grids

    NASA Technical Reports Server (NTRS)

    Parthasarathy, Vijayan; Kallinderis, Y.

    1994-01-01

    A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. The spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in a reduction in the number of iterations by a factor of 5.
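
    The coarse-grid correction idea can be illustrated with a structured 1D analogue. The sketch below runs a two-grid cycle (pre-smoothing, restriction of the residual, an approximate coarse solve, prolongation of the correction, post-smoothing) for a 1D Poisson problem; the paper's method instead builds its coarse levels from the parent cells of adapted tetrahedra, which this toy does not reproduce.

```python
import numpy as np

# Minimal two-grid correction cycle for the 1D Poisson problem u'' = f with
# homogeneous Dirichlet boundaries, illustrating how fine-grid corrections
# are propagated through a coarser level.

def jacobi(u, f, h, sweeps=3, w=2/3):
    for _ in range(sweeps):
        u[1:-1] = (1-w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] - h*h*f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2*u[1:-1] + u[2:])/(h*h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h)                    # pre-smoothing on the fine grid
    r = residual(u, f, h)
    rc = r[::2].copy()                     # restrict residual (injection)
    ec = np.zeros_like(rc)
    ec = jacobi(ec, rc, 2*h, sweeps=50)    # approximate coarse-grid solve
    e = np.zeros_like(u)
    e[::2] = ec                            # prolongate the correction
    e[1::2] = 0.5*(ec[:-1] + ec[1:])
    u += e
    return jacobi(u, f, h)                 # post-smoothing

n = 129
h = 1.0/(n - 1)
x = np.linspace(0, 1, n)
f = np.sin(np.pi*x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
```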

  5. Controlling the energy of defects and interfaces in the amplitude expansion of the phase-field crystal model

    NASA Astrophysics Data System (ADS)

    Salvalaglio, Marco; Backofen, Rainer; Voigt, Axel; Elder, Ken R.

    2017-08-01

    One of the major difficulties in employing phase-field crystal (PFC) modeling and the associated amplitude (APFC) formulation is tuning model parameters to match experimental quantities. In this work, we address the problem of tuning the defect core and interface energies in the APFC formulation. We show that the addition of a single term to the free-energy functional can be used to increase the solid-liquid interface and defect energies in a well-controlled fashion, without any major change to other features. The influence of the newly added term is explored in two-dimensional triangular and honeycomb structures as well as bcc and fcc lattices in three dimensions. In addition, a finite-element method (FEM) is developed for the model that incorporates a mesh refinement scheme. The combination of the FEM and mesh refinement to simulate amplitude expansion with a new energy term provides a method of controlling microscopic features such as defect and interface energies while simultaneously delivering a coarse-grained examination of the system.

  6. Mathematical modeling of polymer flooding using the unstructured Voronoi grid

    NASA Astrophysics Data System (ADS)

    Kireev, T. F.; Bulgakova, G. T.; Khatmullin, I. F.

    2017-12-01

    Effective recovery of unconventional oil reserves necessitates the development of enhanced oil recovery techniques such as polymer flooding. This study investigated a model of polymer flooding with the effects of adsorption and water salinity. The model takes into account six components that include elements of the classic black oil model: polymer, salt, water, dead oil, dry gas and dissolved gas. The solution of the problem is obtained by the finite volume method on an unstructured Voronoi grid using a fully implicit scheme and Newton's method. To compare several different grid configurations, numerical simulation of polymer flooding is performed. The oil rates obtained on a hexagonal locally refined Voronoi grid are shown to be more accurate than those obtained on a rectangular grid with the same number of cells; this effect is caused by the higher solution accuracy near the wells due to the local grid refinement. Minimization of the grid orientation effect by the hexagonal pattern is also demonstrated. However, in the inter-well regions with large Voronoi cells, the flood front tends to flatten and the water breakthrough moment is smoothed.
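
    The grid construction itself can be sketched as follows: a staggered (hexagonal-pattern) set of seed points, with extra seeds added on a finer pattern around a hypothetical well, is passed to a Voronoi generator. The dimensions, well location, and refinement radius are invented, and none of the flow, adsorption, or salinity physics is reproduced here.

```python
import numpy as np
from scipy.spatial import Voronoi

# Sketch of an unstructured Voronoi grid with a hexagonal background pattern
# and local refinement around a hypothetical well. Cell geometry only.

def hex_seeds(nx, ny, dx):
    pts = []
    for j in range(ny):
        for i in range(nx):
            x = i*dx + (0.5*dx if j % 2 else 0.0)   # staggered rows -> hexagonal cells
            pts.append((x, j*dx*np.sqrt(3)/2))
    return np.array(pts)

def refine_near(points, center, radius, factor=2):
    """Add extra seed points on a finer hexagonal pattern inside a disc
    around the well, so the Voronoi cells shrink locally."""
    fine = hex_seeds(int(4*radius*factor), int(4*radius*factor), 1.0/factor)
    fine = fine - fine.mean(axis=0) + np.asarray(center)
    keep = np.linalg.norm(fine - center, axis=1) < radius
    return np.vstack([points, fine[keep]])

seeds = hex_seeds(20, 20, 1.0)
seeds = refine_near(seeds, center=(5.0, 5.0), radius=2.0)
seeds = np.unique(np.round(seeds, 6), axis=0)   # guard against coincident seeds
vor = Voronoi(seeds)
print(len(vor.point_region), "Voronoi cells generated")
```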

  7. PF-WFS Shell Inspection Update December 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, Anthony Eugene; Ledoux, Reina Rebecca; Gonzales, Antonio R.

    Since the last project update in FY16:Q2, PF-WFS personnel have advanced their understanding of shell inspection on coordinate measuring machines (CMMs) and refined the PF-WFS process to the point that it was decided to convert shell inspection from the Sheffield #1 gage to Lietz CMMs. As part of introspection on the quality of this process, many sets of data have been reviewed and analyzed. This analysis included Sheffield-to-CMM comparisons, CMM inspection repeatability, fixturing differences, quality check development, and probing approach changes. This update report touches on the improvements that have built the confidence in this process to mainstream it for inspecting shells. In addition to the CMM programming advancements, the continued refinement of inputs and outputs for the CMM program has created an archiving scheme, input spline files, an output metafile, and an inspection report package. This project will continue to mature. Part designs may require program modifications to accommodate "new to this process" part designs. Technology limitations tied to security and performance may require changes to computer configurations to support an automated process.

  8. Numerical Study of Richtmyer-Meshkov Instability with Re-Shock

    NASA Astrophysics Data System (ADS)

    Wong, Man Long; Livescu, Daniel; Lele, Sanjiva

    2017-11-01

    The interaction of a Mach 1.45 shock wave with a perturbed planar interface between two gases with an Atwood number of 0.68 is studied through 2D and 3D shock-capturing adaptive mesh refinement (AMR) simulations with physical diffusive and viscous terms. The simulations have initial conditions similar to those in the actual experiment conducted by Poggi et al. [1998]. The development of the flow and the evolution of mixing due to the interactions with the first shock and the re-shock are studied together with the sensitivity of various global parameters to the properties of the initial perturbation. Grid resolutions needed for fully resolved 2D and 3D simulations are also evaluated. Simulations are conducted with an in-house AMR solver, HAMeRS, built on the SAMRAI library. The code utilizes the high-order localized dissipation weighted compact nonlinear scheme [Wong and Lele, 2017] for shock-capturing and different sensors including the wavelet sensor [Wong and Lele, 2016] to identify regions for grid refinement. The first and third authors acknowledge the project sponsor, LANL.

  9. Reported Theory Use by Digital Interventions for Hazardous and Harmful Alcohol Consumption, and Association With Effectiveness: Meta-Regression

    PubMed Central

    Crane, David; Brown, Jamie; Kaner, Eileen; Beyer, Fiona; Muirhead, Colin; Hickman, Matthew; Redmore, James; de Vocht, Frank; Beard, Emma; Michie, Susan

    2018-01-01

    Background Applying theory to the design and evaluation of interventions is likely to increase effectiveness and improve the evidence base from which future interventions are developed, though few interventions report this. Objective The aim of this paper was to assess how digital interventions to reduce hazardous and harmful alcohol consumption report the use of theory in their development and evaluation, and whether reporting of theory use is associated with intervention effectiveness. Methods Randomized controlled trials were extracted from a Cochrane review on digital interventions for reducing hazardous and harmful alcohol consumption. Reporting of theory use within these digital interventions was investigated using the theory coding scheme (TCS). Reported theory use was analyzed by frequency counts and descriptive statistics. Associations were analyzed with meta-regression models. Results Of 41 trials involving 42 comparisons, half did not mention theory (50% [21/42]), and only 38% (16/42) used theory to select or develop the intervention techniques. Significant heterogeneity existed between studies in the effect of interventions on alcohol reduction (I2=77.6%, P<.001). No significant associations were detected between reporting of theory use and intervention effectiveness in unadjusted models, though the meta-regression was underpowered to detect modest associations. Conclusions Digital interventions offer a unique opportunity to refine and develop new dynamic, temporally sensitive theories, yet none of the studies reported refining or developing theory. Clearer selection, application, and reporting of theory use is needed to accurately assess how useful theory is in this field and to advance the field of behavior change theories. PMID:29490895
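
    The statistical core of such an analysis, a meta-regression of effect sizes on a study-level moderator, can be sketched as inverse-variance weighted least squares. The effect sizes and coding below are made up for illustration; a full analysis would use random-effects models as in the review.

```python
import numpy as np

# Sketch of a fixed-effect meta-regression by inverse-variance weighted least
# squares: does a binary moderator (here, "reported use of theory", coded 0/1)
# explain variation in study effect sizes? The numbers are invented; the
# review above analysed 42 comparisons with more elaborate models.

effect = np.array([-0.30, -0.10, -0.25, 0.05, -0.40, -0.15])   # study effect sizes
se     = np.array([ 0.10,  0.12,  0.15, 0.20,  0.08,  0.18])   # standard errors
theory = np.array([ 1,     0,     1,    0,     1,     0    ])  # moderator

w = 1.0/se**2                                   # inverse-variance weights
X = np.column_stack([np.ones_like(effect), theory])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect)
cov = np.linalg.inv(X.T @ W @ X)                # covariance of the estimates
se_beta = np.sqrt(np.diag(cov))
z = beta[1]/se_beta[1]
print(f"moderator coefficient = {beta[1]:.3f} (z = {z:.2f})")
```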

  10. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133

  11. Semi-Automatic Segmentation Software for Quantitative Clinical Brain Glioblastoma Evaluation

    PubMed Central

    Zhu, Y; Young, G; Xue, Z; Huang, R; You, H; Setayesh, K; Hatabu, H; Cao, F; Wong, S.T.

    2012-01-01

    Rationale and Objectives Quantitative measurement provides essential information about disease progression and treatment response in patients with Glioblastoma multiforme (GBM). The goal of this paper is to present and validate a software pipeline for semi-automatic GBM segmentation, called AFINITI (Assisted Follow-up in NeuroImaging of Therapeutic Intervention), using clinical data from GBM patients. Materials and Methods Our software adopts the current state-of-the-art tumor segmentation algorithms and combines them into one clinically usable pipeline. Both the advantages of the traditional voxel-based and the deformable shape-based segmentation are embedded into the software pipeline. The former provides an automatic tumor segmentation scheme based on T1- and T2-weighted MR brain data, and the latter refines the segmentation results with minimal manual input. Results Twenty six clinical MR brain images of GBM patients were processed and compared with manual results. The results can be visualized using the embedded graphic user interface (GUI). Conclusion Validation results using clinical GBM data showed high correlation between the AFINITI results and manual annotation. Compared to the voxel-wise segmentation, AFINITI yielded more accurate results in segmenting the enhanced GBM from multimodality MRI data. The proposed pipeline could be used as additional information to interpret MR brain images in neuroradiology. PMID:22591720

  12. A high-resolution Godunov method for compressible multi-material flow on overlapping grids

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Schwendeman, D. W.; Kapila, A. K.; Henshaw, W. D.

    2007-04-01

    A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform-pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on the Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.

  13. Examining the integrity of measurement of cognitive abilities in the prediction of achievement: Comparisons and contrasts across variables from higher-order and bifactor models.

    PubMed

    Benson, Nicholas F; Kranzler, John H; Floyd, Randy G

    2016-10-01

    Prior research examining cognitive ability and academic achievement relations have been based on different theoretical models, have employed both latent variables as well as observed variables, and have used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores corresponded with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  14. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    PubMed

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
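
    A toy sketch of the constraint-driven selection step is given below: a functional constraint flags mutually exclusive candidate facts, and the conflicts the extractor is least certain about are queued for the crowd first. The facts, scores, and ranking rule are invented and far simpler than the paper's rank- and graph-based algorithms.

```python
# Toy sketch of constraint-driven candidate selection for crowdsourcing:
# a functional constraint ("an entity has exactly one birthplace") flags
# mutually exclusive candidate facts, and the most uncertain conflicts are
# sent to the crowd first. Facts and confidences are invented.

candidates = [  # (subject, relation, object, extractor confidence)
    ("A. Turing", "born_in", "London",     0.81),
    ("A. Turing", "born_in", "Manchester", 0.74),
    ("A. Einstein", "born_in", "Ulm",      0.97),
]

def conflicts(facts, functional_relation="born_in"):
    """Group facts that violate the functional constraint (same subject and
    functional relation, but different objects)."""
    groups = {}
    for f in facts:
        if f[1] == functional_relation:
            groups.setdefault(f[0], []).append(f)
    return [g for g in groups.values() if len(g) > 1]

def crowd_priority(group):
    # A conflict is most worth asking about when the top candidates are
    # close in confidence (the machine is least certain which one is true).
    scores = sorted((f[3] for f in group), reverse=True)
    return -(scores[0] - scores[1])

queue = sorted(conflicts(candidates), key=crowd_priority, reverse=True)
for group in queue:
    print("ask crowd about:", [f[:3] for f in group])
```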

  15. The development of a framework for high-quality, sustainable and accessible rural private therapy under the Australian National Disability Insurance Scheme.

    PubMed

    Dew, Angela; Barton, Rebecca; Ragen, Jo; Bulkeley, Kim; Iljadica, Alexandra; Chedid, Rebecca; Brentnall, Jennie; Bundy, Anita; Lincoln, Michelle; Gallego, Gisselle; Veitch, Craig

    2016-12-01

    The Australian National Disability Insurance Scheme (NDIS) will provide people with individual funding with which to purchase services such as therapy from private providers. This study developed a framework to support rural private therapists to meet the anticipated increase in demand. The study consisted of three stages utilizing focus groups, interviews and an online expert panel. Participants included private therapists delivering services in rural New South Wales (n = 28), disability service users (n = 9) and key representatives from a range of relevant consumer and service organizations (n = 16). We conducted a thematic analysis of focus groups and interview data and developed a draft framework which was subsequently refined based on feedback from stakeholders. The framework highlights the need for a 'rural-proofed' policy context in which service users, therapists and communities engage collaboratively in a therapy pathway. This collaborative engagement is supported by enablers, including networks, resources and processes which are influenced by the drivers of time, cost, opportunity and motivation. The framework identifies factors that will facilitate delivery of high-quality, sustainable, individualized private therapy services for people with a disability in rural Australia under the NDIS and emphasizes the need to reconceptualize the nature of private therapy service delivery. Implications for Rehabilitation Rural private therapists need upskilling to work with individuals with disability who have individual funding such as that provided by the Australian National Disability Insurance Scheme. Therapists working in rural communities need to consider alternative ways of delivering therapy to individuals with disability beyond the traditional one-on-one therapy models. Rural private therapists need support to work collaboratively with individuals with disability and the local community. Rural private therapists should harness locally available and broader networks, resources and processes to meet the needs and goals of individuals with disability.

  16. Conformal and covariant Z4 formulation of the Einstein equations: Strongly hyperbolic first-order reduction and solution with discontinuous Galerkin schemes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Guercilena, Federico; Köppel, Sven; Rezzolla, Luciano; Zanotti, Olindo

    2018-04-01

    We present a strongly hyperbolic first-order formulation of the Einstein equations based on the conformal and covariant Z4 system (CCZ4) with constraint-violation damping, which we refer to as FO-CCZ4. As CCZ4, this formulation combines the advantages of a conformal and traceless formulation, with the suppression of constraint violations given by the damping terms, but being first order in time and space, it is particularly suited for a discontinuous Galerkin (DG) implementation. The strongly hyperbolic first-order formulation has been obtained by making careful use of first and second-order ordering constraints. A proof of strong hyperbolicity is given for a selected choice of standard gauges via an analytical computation of the entire eigenstructure of the FO-CCZ4 system. The resulting governing partial differential equations system is written in nonconservative form and requires the evolution of 58 unknowns. A key feature of our formulation is that the first-order CCZ4 system decouples into a set of pure ordinary differential equations and a reduced hyperbolic system of partial differential equations that contains only linearly degenerate fields. We implement FO-CCZ4 in a high-order path-conservative arbitrary-high-order-method-using-derivatives (ADER)-DG scheme with adaptive mesh refinement and local time-stepping, supplemented with a third-order ADER-WENO subcell finite-volume limiter in order to deal with singularities arising with black holes. We validate the correctness of the formulation through a series of standard tests in vacuum, performed in one, two and three spatial dimensions, and also present preliminary results on the evolution of binary black-hole systems. To the best of our knowledge, these are the first successful three-dimensional simulations of moving punctures carried out with high-order DG schemes using a first-order formulation of the Einstein equations.

  17. An improved biometrics-based authentication scheme for telecare medical information systems.

    PubMed

    Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping

    2015-03-01

    The telecare medical information system (TMIS) offers healthcare delivery services, and patients can acquire their desired medical services conveniently through public networks. The protection of patients' privacy and data confidentiality is significant. Very recently, Mishra et al. proposed a biometrics-based authentication scheme for telecare medical information systems. Their scheme can protect user privacy and is believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and show that it is insecure against known session key attacks and impersonation attacks. We therefore present a modified biometrics-based authentication scheme for TMIS to eliminate the aforementioned faults. In addition, we demonstrate the completeness of the proposed scheme through BAN logic. Compared to related schemes, our protocol provides stronger security and is more practical.

  18. Using patient self-reports to study heterogeneity of treatment effects in major depressive disorder

    PubMed Central

    Kessler, R.C.; van Loo, H.M.; Wardenaar, K.J.; Bossarte, R.M.; Brenner, L.A.; Ebert, D.D; de Jonge, P.; Nierenberg, A.A.; Rosellini, A.J.; Sampson, N.A.; Schoevers, R.A.; Wilcox, M.A.; Zaslavsky, A.M.

    2016-01-01

    Aims Clinicians need guidance to address the heterogeneity of treatment responses of patients with major depressive disorder (MDD). While prediction schemes based on symptom clustering and biomarkers have so far not yielded results of sufficient strength to inform clinical decision-making, prediction schemes based on big data predictive analytic models might be more practically useful. Methods We review evidence suggesting that prediction equations based on symptoms and other easily-assessed clinical features found in previous research to predict MDD treatment outcomes might provide a foundation for developing predictive analytic clinical decision support models that could help clinicians select optimal (personalized) MDD treatments. These methods could also be useful in targeting patient subsamples for more expensive biomarker assessments. Results Approximately two dozen baseline variables obtained from medical records or patient reports have been found repeatedly in MDD treatment trials to predict overall treatment outcomes (i.e., intervention versus control) or differential treatment outcomes (i.e., intervention A versus intervention B). Similar evidence has been found in observational studies of MDD persistence-severity. However, no treatment studies have yet attempted to develop treatment outcome equations using the full set of these predictors. Promising preliminary empirical results coupled with recent developments in statistical methodology suggest that models could be developed to provide useful clinical decision support in personalized treatment selection. These tools could also provide a strong foundation to increase statistical power in focused studies of biomarkers and MDD heterogeneity of treatment response in subsequent controlled trials. Conclusions Coordinated efforts are needed to develop a protocol for systematically collecting information about established predictors of heterogeneity of MDD treatment response in large observational treatment studies, applying and refining these models in subsequent pragmatic trials, carrying out pooled secondary analyses to extract the maximum amount of information from these coordinated studies, and using this information to focus future discovery efforts in the segment of the patient population in which continued uncertainty about treatment response exists. PMID:26810628

  19. Using patient self-reports to study heterogeneity of treatment effects in major depressive disorder.

    PubMed

    Kessler, R C; van Loo, H M; Wardenaar, K J; Bossarte, R M; Brenner, L A; Ebert, D D; de Jonge, P; Nierenberg, A A; Rosellini, A J; Sampson, N A; Schoevers, R A; Wilcox, M A; Zaslavsky, A M

    2017-02-01

    Clinicians need guidance to address the heterogeneity of treatment responses of patients with major depressive disorder (MDD). While prediction schemes based on symptom clustering and biomarkers have so far not yielded results of sufficient strength to inform clinical decision-making, prediction schemes based on big data predictive analytic models might be more practically useful. We review evidence suggesting that prediction equations based on symptoms and other easily-assessed clinical features found in previous research to predict MDD treatment outcomes might provide a foundation for developing predictive analytic clinical decision support models that could help clinicians select optimal (personalised) MDD treatments. These methods could also be useful in targeting patient subsamples for more expensive biomarker assessments. Approximately two dozen baseline variables obtained from medical records or patient reports have been found repeatedly in MDD treatment trials to predict overall treatment outcomes (i.e., intervention v. control) or differential treatment outcomes (i.e., intervention A v. intervention B). Similar evidence has been found in observational studies of MDD persistence-severity. However, no treatment studies have yet attempted to develop treatment outcome equations using the full set of these predictors. Promising preliminary empirical results coupled with recent developments in statistical methodology suggest that models could be developed to provide useful clinical decision support in personalised treatment selection. These tools could also provide a strong foundation to increase statistical power in focused studies of biomarkers and MDD heterogeneity of treatment response in subsequent controlled trials. Coordinated efforts are needed to develop a protocol for systematically collecting information about established predictors of heterogeneity of MDD treatment response in large observational treatment studies, applying and refining these models in subsequent pragmatic trials, carrying out pooled secondary analyses to extract the maximum amount of information from these coordinated studies, and using this information to focus future discovery efforts in the segment of the patient population in which continued uncertainty about treatment response exists.

  20. Refinement of the β-Sn Grains in Ni-Doped Sn-3.0Ag-0.5Cu Solder Joints with Cu-Based and Ni-Based Substrates

    NASA Astrophysics Data System (ADS)

    Chou, Tzu-Ting; Chen, Wei-Yu; Fleshman, Collin Jordon; Duh, Jenq-Gong

    2018-03-01

    A fine-grain structure with random orientations in lead-free solder joints was successfully obtained in this study. Sn-Ag-Cu solder alloys doped with minor Ni were reflowed with Ni-based or Cu-based substrates to fabricate joints containing different Ni contents. Adding 0.1 wt.% Ni into the solder effectively promoted the formation of fine Sn grains, and reflowing with Ni-based substrates further enhanced the β-Sn grain refinement. The crystallographic characteristics and the microstructures were analyzed to identify the solidification mechanism of the different types of microstructure in the joints. The phase precipitation order in the joint was altered as the solder composition was modified by elemental doping and by changing the substrate, which significantly affected the efficiency of grain refinement and the final grain structure. The formation of fine β-Sn grains in the Ni-doped joint with a Ni-based substrate is attributable to heterogeneous nucleation by Ni, whereas the Ni in the joint with a Cu-based substrate is consumed in forming an intermetallic compound, which retards the grain-refining effect.

  1. Scheduler for monitoring objects orbiting earth using satellite-based telescopes

    DOEpatents

    Olivier, Scot S; Pertica, Alexander J; Riot, Vincent J; De Vries, Willem H; Bauman, Brian J; Nikolaev, Sergei; Henderson, John R; Phillion, Donald W

    2015-04-28

    An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.
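
    The core scheduling question, when a target object is near enough to an observing satellite to be imaged, can be sketched with idealized circular Keplerian orbits. The orbital elements, imaging range, and time step below are hypothetical, and a real scheduler would use full ephemerides together with pointing and lighting constraints.

```python
import numpy as np

# Sketch: given approximate orbital models for an observing satellite and a
# target object, find time windows in which the target is within imaging
# range. Both orbits are idealised as circular Keplerian orbits.

MU = 398600.4418            # km^3/s^2, Earth's gravitational parameter

def circular_orbit(t, a_km, inc_deg, raan_deg, phase_deg):
    n = np.sqrt(MU/a_km**3)                       # mean motion [rad/s]
    th = n*t + np.radians(phase_deg)              # argument of latitude
    i, O = np.radians(inc_deg), np.radians(raan_deg)
    xp, yp = a_km*np.cos(th), a_km*np.sin(th)     # position in the orbit plane
    x = xp*np.cos(O) - yp*np.cos(i)*np.sin(O)     # rotate by inclination, RAAN
    y = xp*np.sin(O) + yp*np.cos(i)*np.cos(O)
    z = yp*np.sin(i)
    return np.stack([x, y, z], axis=-1)

t = np.arange(0, 86400, 30.0)                     # one day, 30 s steps
observer = circular_orbit(t, 7078.0, 98.0, 40.0,  0.0)   # hypothetical imager
target   = circular_orbit(t, 7178.0, 97.5, 42.0, 10.0)   # hypothetical debris
sep = np.linalg.norm(observer - target, axis=1)

in_range = sep < 500.0                            # assumed imaging range [km]
starts = np.flatnonzero(in_range[1:] & ~in_range[:-1]) + 1
print(f"{starts.size} candidate observation windows in 24 h; "
      f"closest approach {sep.min():.0f} km")
```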

  2. Monitoring objects orbiting earth using satellite-based telescopes

    DOEpatents

    Olivier, Scot S.; Pertica, Alexander J.; Riot, Vincent J.; De Vries, Willem H.; Bauman, Brian J.; Nikolaev, Sergei; Henderson, John R.; Phillion, Donald W.

    2015-06-30

    An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.

  3. Parallelised photoacoustic signal acquisition using a Fabry-Perot sensor and a camera-based interrogation scheme

    NASA Astrophysics Data System (ADS)

    Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.

    2018-02-01

    Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can require long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison is made between the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.
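
    The SNR comparison can be illustrated with synthetic data: one noisy photodiode trace versus a camera frame in which many pixels record the same transient in parallel and are averaged. The pulse shape, noise levels, and pixel count below are invented and serve only to show how such a comparison might be computed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic comparison: a single photodiode interrogating one point of the
# Fabry-Perot sensor versus a camera frame in which many pixels record the
# same acoustic transient in parallel. All numbers are illustrative.

t = np.linspace(0, 4e-6, 400)                      # 4 microsecond record
pa_signal = np.exp(-((t - 2e-6)/2e-7)**2)          # idealised PA transient

def snr(trace, signal):
    noise = trace - signal
    return 20*np.log10(np.abs(signal).max() / noise.std())

# (1) single photodiode, raster-scanned point: one noisy trace
photodiode = pa_signal + rng.normal(0, 0.2, t.size)

# (2) camera-based parallel readout: many pixels see the same transient;
# averaging neighbouring pixels trades spatial sampling for SNR
pixels = pa_signal + rng.normal(0, 0.5, (16, t.size))   # higher per-pixel noise
camera_avg = pixels.mean(axis=0)

print(f"photodiode SNR: {snr(photodiode, pa_signal):.1f} dB")
print(f"camera (16-pixel average) SNR: {snr(camera_avg, pa_signal):.1f} dB")
```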

  4. Surface reconstruction and deformation monitoring of stratospheric airship based on laser scanning technology

    NASA Astrophysics Data System (ADS)

    Guo, Kai; Xie, Yongjie; Ye, Hu; Zhang, Song; Li, Yunfei

    2018-04-01

    Due to the uncertainty of a stratospheric airship's shape and the safety problems this uncertainty causes, surface reconstruction and surface deformation monitoring of the airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. A comparison was then conducted between our subdivision scheme and the original √3-subdivision scheme. The results show that our subdivision scheme reduces the shrinkage of the surface and the number of narrow triangles. In addition, our subdivision scheme preserves sharp features. Thus, surface reconstruction and surface deformation monitoring of the airship can be conducted precisely with our subdivision scheme.
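
    The interpolation ingredient of the modified subdivision scheme, Shepard (inverse-distance-weighted) averaging, is sketched below for placing a new face vertex from surrounding vertex positions. The triangle coordinates and the weighting exponent are illustrative choices, not the paper's exact rule.

```python
import numpy as np

# Minimal sketch of Shepard (inverse-distance-weighted) interpolation, used
# here to place a new face point from the vertices of one triangle: closer
# vertices receive larger weights. The exponent and coordinates are invented.

def shepard(points, values, query, power=2.0, eps=1e-12):
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                       # query coincides with a vertex
        return values[np.argmin(d)]
    w = 1.0/d**power
    return (w[:, None]*values).sum(axis=0)/w.sum()

# Vertices of one mesh triangle with measured surface positions (hypothetical).
tri_uv  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])       # parameter domain
tri_xyz = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.3], [0.0, 1.0, 0.2]])

centroid_uv = tri_uv.mean(axis=0)
new_vertex = shepard(tri_uv, tri_xyz, centroid_uv)
print("inserted face vertex:", np.round(new_vertex, 4))
```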

  5. Galaxy clusters in simulations of the local Universe: a matter of constraints

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Tempel, Elmo

    2018-06-01

    To study the full formation and evolution history of galaxy clusters and their population, high-resolution simulations of the latter are flourishing. However, comparing observed clusters to the simulated ones on a one-to-one basis to refine the models and theories down to the details is non-trivial. The large variety of clusters limits the comparisons between observed and numerical clusters. Simulations resembling the local Universe down to the cluster scales permit pushing this limit. Simulated and observed clusters can be matched on a one-to-one basis for direct comparisons provided that clusters are well reproduced besides being in the proper large-scale environment. Comparing random and local Universe-like simulations obtained with differently grouped observational catalogues of peculiar velocities, this paper shows that the grouping scheme used to remove non-linear motions in the catalogues that constrain the simulations affects the quality of the numerical clusters. With a less aggressive grouping scheme - galaxies still falling on to clusters are preserved - combined with a bias minimization scheme, the mass of the dark matter haloes, simulacra for five local clusters - Virgo, Centaurus, Coma, Hydra, and Perseus - is increased by 39 per cent, closing the gap with observational mass estimates. Simulacra are found on average in 89 per cent of the simulations, an increase of 5 per cent with respect to the previous grouping scheme. The only exception is Perseus. Since the Perseus-Pisces region is not well covered by the peculiar velocity catalogue used here, its latest release lets us foresee a better simulacrum for Perseus in the near future.

  6. Generalized Rainich conditions, generalized stress-energy conditions, and the Hawking-Ellis classification

    NASA Astrophysics Data System (ADS)

    Martín–Moruno, Prado; Visser, Matt

    2017-11-01

    The (generalized) Rainich conditions are algebraic conditions which are polynomial in the (mixed-component) stress-energy tensor. As such they are logically distinct from the usual classical energy conditions (NEC, WEC, SEC, DEC), and logically distinct from the usual Hawking-Ellis (Segré-Plebański) classification of stress-energy tensors (type I, type II, type III, type IV). There will of course be significant inter-connections between these classification schemes, which we explore in the current article. Overall, we shall argue that it is best to view the (generalized) Rainich conditions as a refinement of the classical energy conditions and the usual Hawking-Ellis classification.

  7. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
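    The general idea, approximating the truncation terms from discrete residuals on a reconstructed (higher-order) solution and using them to drive refinement, can be sketched in one dimension. The sketch below compares the solver's second-order derivative operator with a fourth-order reconstruction as a surrogate truncation error; the test function, reconstruction stencil, and threshold are illustrative assumptions, not the Finite Point Method formulation.

    ```python
    import numpy as np

    def truncation_error_indicator(u, dx):
        """Surrogate discretization-error indicator on a uniform 1D grid.

        Compares a second-order first-derivative operator with a fourth-order
        reconstruction on a wider stencil; their difference approximates the
        leading truncation term of the lower-order operator.
        """
        d2 = np.gradient(u, dx)                                          # 2nd-order derivative
        d4 = np.zeros_like(u)
        d4[2:-2] = (-u[4:] + 8*u[3:-1] - 8*u[1:-3] + u[:-4]) / (12*dx)   # 4th-order stencil
        d4[:2], d4[-2:] = d2[:2], d2[-2:]                                # fall back near boundaries
        return np.abs(d4 - d2)

    x = np.linspace(0.0, 1.0, 201)
    u = np.tanh((x - 0.5) / 0.02)                     # solution with a sharp layer
    tau = truncation_error_indicator(u, x[1] - x[0])
    refine = tau > 0.1 * tau.max()                    # mark points for adaptive refinement
    print(f"{refine.sum()} of {x.size} points flagged near the layer")
    ```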

  8. Handbook for the estimation of microwave propagation effects: Link calculations for earth-space paths (path loss and noise estimation)

    NASA Technical Reports Server (NTRS)

    Crane, R. K.; Blood, D. W.

    1979-01-01

    A single model for a standard of comparison for other models when dealing with rain attenuation problems in system design and experimentation is proposed. Refinements to the Global Rain Production Model are incorporated. Path loss and noise estimation procedures as the basic input to systems design for earth-to-space microwave links operating at frequencies from 1 to 300 GHz are provided. Topics covered include gaseous absorption, attenuation by rain, ionospheric and tropospheric scintillation, low elevation angle effects, radome attenuation, diversity schemes, link calculation, and receiver noise emission by atmospheric gases, rain, and antenna contributions.

  9. Mapping ocean tides with satellites - A computer simulation

    NASA Technical Reports Server (NTRS)

    Won, I. J.; Kuo, J. T.; Jachens, R. C.

    1978-01-01

    As a preliminary study for the future worldwide direct mapping of the open-ocean tide with satellites equipped with precision altimeters, we conducted a simulation study using sets of artificially generated altimeter data constructed from a realistic geoid and four pairs of major tides in the northeastern Pacific Ocean. Recovery of the original geoid and eight tidal maps is accomplished by a space-time, least-squares harmonic analysis scheme. The resultant maps appear fairly satisfactory even when random noise of up to ±100 cm is added to altimeter data of sufficient space-time density. The method also produces a refined geoid that is rigorously corrected for the dynamic tides.
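    A minimal single-station sketch of least-squares harmonic tidal analysis follows (the paper's scheme is a full space-time fit, which is not reproduced here). The constituent frequencies are standard M2 and K1 values; the time series, amplitudes, and noise level are synthetic.

    ```python
    import numpy as np

    # Angular frequencies (rad/hour) of two major tidal constituents (standard periods).
    OMEGA = {"M2": 2*np.pi / 12.4206012, "K1": 2*np.pi / 23.9344721}

    def fit_tides(t, h):
        """Least-squares harmonic analysis of a sea-surface-height time series.

        Fits h(t) ~ mean + sum_k [a_k cos(w_k t) + b_k sin(w_k t)] and returns
        the amplitude and phase (degrees) of each constituent.
        """
        cols = [np.ones_like(t)]
        for w in OMEGA.values():
            cols += [np.cos(w*t), np.sin(w*t)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, h, rcond=None)
        out = {"mean": coef[0]}
        for i, name in enumerate(OMEGA):
            a, b = coef[1 + 2*i], coef[2 + 2*i]
            out[name] = (np.hypot(a, b), np.degrees(np.arctan2(b, a)))
        return out

    # Synthetic altimeter-like record: two tides plus +/-100 cm noise, hourly for 60 days.
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 24*60, 1.0)
    h = (80*np.cos(OMEGA["M2"]*t - 0.4) + 30*np.cos(OMEGA["K1"]*t + 1.1)
         + rng.uniform(-100, 100, t.size))
    print(fit_tides(t, h))   # recovers ~80 cm and ~30 cm amplitudes despite the noise
    ```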

  10. Defining Clonal Color in Fluorescent Multi-Clonal Tracking

    PubMed Central

    Wu, Juwell W.; Turcotte, Raphaël; Alt, Clemens; Runnels, Judith M.; Tsao, Hensin; Lin, Charles P.

    2016-01-01

    Clonal heterogeneity and selection underpin many biological processes including development and tumor progression. Combinatorial fluorescent protein expression in germline cells has proven its utility for tracking the formation and regeneration of different organ systems. Such cell populations encoded by combinatorial fluorescent proteins are also attractive tools for understanding clonal expansion and clonal competition in cancer. However, the assignment of clonal identity requires an analytical framework in which clonal markings can be parameterized and validated. Here we present a systematic and quantitative method for RGB analysis of fluorescent melanoma cancer clones. We then demonstrate refined clonal trackability of melanoma cells using this scheme. PMID:27073117

  11. Refining of plant oils to chemicals by olefin metathesis.

    PubMed

    Chikkali, Samir; Mecking, Stefan

    2012-06-11

    Plant oils are attractive substrates for the chemical industry. Their scope for the production of chemicals can be expanded by sophisticated catalytic conversions. Olefin metathesis is an example, which also illustrates generic issues of "biorefining" to chemicals. Utilization on a large scale requires high catalyst activities, which influences the choice of the metathesis reaction. The mixture of different fatty acids composing a technical-grade plant oil substrate gives rise to a range of products. This decisively determines possible process schemes, and potentially provides novel chemicals and intermediates not employed to date. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks.

    PubMed

    Zhu, Hongfei; Tan, Yu-An; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang

    2018-05-22

    With the development of wireless sensor networks, IoT devices are crucial for the Smart City; these devices change people's lives through systems such as e-payment and e-voting. However, in these two systems, state-of-the-art authentication protocols based on traditional number theory cannot defeat a quantum computer attack. In order to protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on the number theory research unit (NTRU) lattice; this scheme mainly uses rejection sampling instead of constructing a trapdoor. Meanwhile, the scheme does not depend on complex public key infrastructure and can resist quantum computer attacks. We then design an e-payment protocol using the proposed scheme. Furthermore, we prove our scheme is secure in the random oracle model and satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms other traditional identity-based blind signature schemes in signing and verification speed, and outperforms other lattice-based blind signatures in signing speed, verification speed, and signing secret key size.
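    The abstract's key ingredient, rejection sampling instead of a trapdoor, can be illustrated in one dimension: a candidate value that depends on a secret shift is resampled until its distribution is a zero-centred Gaussian, so the published value leaks nothing about the secret. The toy sketch below is purely conceptual and bears no relation to the actual NTRU-lattice construction or its parameters.

    ```python
    import math, random

    def sample_hiding_secret(c, sigma=100.0, tail=12.0):
        """Rejection sampling (Lyubashevsky-style), toy 1-D version: output
        z = y + c whose distribution is the zero-centred Gaussian N(0, sigma),
        statistically independent of the secret shift c."""
        # Conservative bound on the density ratio target(z)/proposal(z).
        M = math.exp((2*tail*sigma*abs(c) + c*c) / (2*sigma**2))
        while True:
            y = random.gauss(0.0, sigma)                      # masking value
            z = y + c                                         # candidate (depends on the secret)
            ratio = math.exp((c*c - 2*z*c) / (2*sigma**2))    # target(z) / proposal(z)
            if random.random() < ratio / M:                   # accept w.p. target / (M * proposal)
                return z

    # The accepted z's look like N(0, sigma) regardless of the secret c.
    zs = [sample_hiding_secret(c=37.0) for _ in range(5000)]
    print(sum(zs) / len(zs))   # close to 0, not to 37
    ```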

  13. An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks

    PubMed Central

    Zhu, Hongfei; Tan, Yu-an; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang

    2018-01-01

    With the development of wireless sensor networks, IoT devices are crucial for the Smart City; these devices change people’s lives such as e-payment and e-voting systems. However, in these two systems, the state-of-art authentication protocols based on traditional number theory cannot defeat a quantum computer attack. In order to protect user privacy and guarantee trustworthy of big data, we propose a new identity-based blind signature scheme based on number theorem research unit lattice, this scheme mainly uses a rejection sampling theorem instead of constructing a trapdoor. Meanwhile, this scheme does not depend on complex public key infrastructure and can resist quantum computer attack. Then we design an e-payment protocol using the proposed scheme. Furthermore, we prove our scheme is secure in the random oracle, and satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms the other traditional existing identity-based blind signature schemes in signing speed and verification speed, outperforms the other lattice-based blind signature in signing speed, verification speed, and signing secret key size. PMID:29789475

  14. Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2004-01-01

    The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives and a fourth-order Runge-Kutta method are denoted.
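    A one-dimensional sketch of the adaptive filter idea described above: a flow detector (here a simple ACM-style gradient switch) multiplies a dissipative second-difference term, so smoothing is applied only near sharp features and smooth regions stay free of numerical dissipation. This is an illustration of the concept only, not the authors' sixth-order MHD scheme or its wavelet detector.

    ```python
    import numpy as np

    def acm_detector(u, eps=1e-12):
        """ACM-style switch in [0, 1]: near 1 at shocks/shears, near 0 in smooth regions."""
        theta = np.zeros_like(u)
        du_p = np.abs(u[2:] - u[1:-1])
        du_m = np.abs(u[1:-1] - u[:-2])
        theta[1:-1] = np.abs(du_p - du_m) / (du_p + du_m + eps)
        return theta

    def filter_step(u, strength=0.5):
        """Adaptive filter: detector times a second-difference (dissipative) term."""
        diss = np.zeros_like(u)
        diss[1:-1] = u[2:] - 2*u[1:-1] + u[:-2]
        return u + strength * acm_detector(u) * diss

    x = np.linspace(0.0, 1.0, 101)
    u = np.where(x < 0.5, 1.0, 0.0) + 0.02*np.sin(8*np.pi*x)   # jump plus smooth waves
    u_filtered = filter_step(u)
    print(np.abs(u_filtered - u).max())   # the change is concentrated at the jump
    ```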

  15. 26 CFR 1.954-8 - Foreign base company oil related income.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... refined oil in country A. In addition, assume that country A is a net exporter of crude oil. As in Example...) Special rules for applying the extraction exception of paragraph (a)(1)(i) of this section—(1) Refining income described in section 907(c)(2)(A). With regard to a controlled foreign corporation's refining...

  16. 26 CFR 1.954-8 - Foreign base company oil related income.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... refined oil in country A. In addition, assume that country A is a net exporter of crude oil. As in Example...) Special rules for applying the extraction exception of paragraph (a)(1)(i) of this section—(1) Refining income described in section 907(c)(2)(A). With regard to a controlled foreign corporation's refining...

  17. 26 CFR 1.954-8 - Foreign base company oil related income.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... refined oil in country A. In addition, assume that country A is a net exporter of crude oil. As in Example...) Special rules for applying the extraction exception of paragraph (a)(1)(i) of this section—(1) Refining income described in section 907(c)(2)(A). With regard to a controlled foreign corporation's refining...

  18. Correcting pervasive errors in RNA crystallography through enumerative structure prediction.

    PubMed

    Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju

    2013-01-01

    Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.

  19. On macromolecular refinement at subatomic resolution with interatomic scatterers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2007-11-09

    A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small- and macro-molecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  20. On macromolecular refinement at subatomic resolution with interatomic scatterers

    PubMed Central

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre

    2007-01-01

    A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035

  1. On macromolecular refinement at subatomic resolution with interatomic scatterers.

    PubMed

    Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre

    2007-11-01

    A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  2. Error function attack of chaos synchronization based encryption schemes.

    PubMed

    Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu

    2004-03-01

    Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged from quality factor. Copyright 2004 American Institute of Physics.

  3. Security analysis and enhancements of an effective biometric-based remote user authentication scheme using smart cards.

    PubMed

    An, Younghwa

    2012-01-01

    Recently, many biometrics-based user authentication schemes using smart cards have been proposed to address security weaknesses in user authentication systems. In 2011, Das proposed an efficient biometric-based remote user authentication scheme using smart cards that can provide strong authentication and mutual authentication. In this paper, we analyze the security of Das's authentication scheme and show that it is still insecure against various attacks. We also propose an enhanced scheme that removes these security problems of Das's authentication scheme, even if the secret information stored in the smart card is revealed to an attacker. The security analysis shows that the enhanced scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack, and provides mutual authentication between the user and the server.

  4. Security Analysis and Enhancements of an Effective Biometric-Based Remote User Authentication Scheme Using Smart Cards

    PubMed Central

    An, Younghwa

    2012-01-01

    Recently, many biometrics-based user authentication schemes using smart cards have been proposed to address security weaknesses in user authentication systems. In 2011, Das proposed an efficient biometric-based remote user authentication scheme using smart cards that can provide strong authentication and mutual authentication. In this paper, we analyze the security of Das's authentication scheme and show that it is still insecure against various attacks. We also propose an enhanced scheme that removes these security problems of Das's authentication scheme, even if the secret information stored in the smart card is revealed to an attacker. The security analysis shows that the enhanced scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack, and provides mutual authentication between the user and the server. PMID:22899887

  5. i3Drefine Software for Protein 3D Structure Refinement and Its Assessment in CASP10

    PubMed Central

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement still remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with powerful all-atom composite physics- and knowledge-based force fields and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 that exhibited consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/. PMID:23894517

  6. An efficient chaotic maps-based authentication and key agreement scheme using smartcards for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu

    2013-12-01

    A smartcard-based authentication and key agreement scheme for telecare medicine information systems enables patients, doctors, nurses and health visitors to use smartcards for secure login to medical information systems. Authorized users can then efficiently access remote services provided by the medicine information systems through public networks. Guo and Chang recently improved the efficiency of a smartcard authentication and key agreement scheme by using chaotic maps. Later, Hao et al. reported that the scheme developed by Guo and Chang had two weaknesses: an inability to provide anonymity and inefficient double secrets. Therefore, Hao et al. proposed an authentication scheme for telecare medicine information systems that solved these weaknesses and improved performance. However, a limitation of both schemes is their violation of the contributory property of key agreement. This investigation discusses these weaknesses and proposes a new smartcard-based authentication and key agreement scheme that uses chaotic maps for telecare medicine information systems. Compared to conventional schemes, the proposed scheme has fewer weaknesses and provides better security and efficiency.

  7. Lightweight ECC based RFID authentication integrated with an ID verifier transfer protocol.

    PubMed

    He, Debiao; Kumar, Neeraj; Chilamkurti, Naveen; Lee, Jong-Hyouk

    2014-10-01

    The radio frequency identification (RFID) technology has been widely adopted and is being deployed as a dominant identification technology in the health care domain, for applications such as medical information authentication, patient tracking, and blood transfusion medicine. With increasingly stringent security and privacy requirements for RFID-based authentication schemes, elliptic curve cryptography (ECC) based RFID authentication schemes have been proposed to meet these requirements. However, many recently published ECC-based RFID authentication schemes have serious security weaknesses. In this paper, we propose a new ECC-based RFID authentication scheme integrated with an ID-verifier transfer protocol that overcomes the weaknesses of the existing schemes. A comprehensive security analysis has been conducted to show the strong security properties provided by the proposed authentication scheme. Moreover, the performance of the proposed authentication scheme is analyzed in terms of computational cost, communication cost, and storage requirements.
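    A minimal challenge-response sketch, using ECDSA from the Python cryptography package, shows how an ECC-based tag/server authentication exchange can work in principle. It is not the protocol proposed by He et al. and omits the ID-verifier transfer, key agreement, and privacy mechanisms; the identifiers are illustrative.

    ```python
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    # Provisioning (illustrative): the tag holds an ECC private key; the back-end
    # server stores the matching public key against the tag's identifier.
    tag_private_key = ec.generate_private_key(ec.SECP256R1())
    server_registry = {"tag-001": tag_private_key.public_key()}

    # 1. Server -> tag: a fresh random challenge (prevents replay).
    challenge = os.urandom(32)

    # 2. Tag -> server: a signature over the challenge with the tag's private key.
    response = tag_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # 3. Server verifies the response against the registered public key.
    try:
        server_registry["tag-001"].verify(response, challenge, ec.ECDSA(hashes.SHA256()))
        print("tag authenticated")
    except InvalidSignature:
        print("authentication failed")
    ```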

  8. Searchable attribute-based encryption scheme with attribute revocation in cloud storage.

    PubMed

    Wang, Shangping; Zhao, Duqiao; Zhang, Yaling

    2017-01-01

    Attribute-based encryption (ABE) is a good way to achieve flexible and secure access control over data; attribute revocation is an extension of attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of both has an important application in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation for cloud storage. The keyword search in our scheme is attribute-based with access control: when the search succeeds, the cloud server returns the corresponding ciphertext to the user, who can then decrypt it. Besides, our scheme supports multiple-keyword search, which makes the scheme more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.

  9. Simple scheme to implement decoy-state reference-frame-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Zhu, Jianrong; Wang, Qin

    2018-06-01

    We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in Z, X, and Y bases, decoy states are prepared in X and Y bases, and vacuum states are set to no bases. Different from the original decoy-state RFI-QKD scheme whose decoy states are prepared in Z, X and Y bases, in our scheme decoy states are only prepared in X and Y bases, which avoids the redundancy of decoy states in Z basis, saves the random number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite pulses in a short time. Numerical simulations show that, considering the finite size effect with reasonable number of pulses in practical scenarios, our simple decoy-state RFI-QKD scheme exhibits at least comparable or even better performance than that of the original decoy-state RFI-QKD scheme. Especially, in terms of the resistance to the relative rotation of reference frames, our proposed scheme behaves much better than the original scheme, which has great potential to be adopted in current QKD systems.

  10. Work, Train, Win: Work-Based Learning Design and Management for Productivity Gains. OECD Education Working Papers, No. 135

    ERIC Educational Resources Information Center

    Kis, Viktoria

    2016-01-01

    Realising the potential of work-based learning schemes as a driver of productivity requires careful design and support. The length of work-based learning schemes should be adapted to the profile of productivity gains. A scheme that is too long for a given skill set might be unattractive for learners and waste public resources, but a scheme that is…

  11. Cryptanalysis and improvement of Yan et al.'s biometric-based authentication scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Mukhopadhyay, Sourav; Chaturvedi, Ankita; Kumari, Saru; Khan, Muhammad Khurram

    2014-06-01

    Remote user authentication is desirable for a Telecare Medicine Information System (TMIS) for the safety, security and integrity of data transmitted over the public channel. In 2013, Tan presented a biometric-based remote user authentication scheme and claimed that his scheme is secure. Recently, Yan et al. demonstrated some drawbacks in Tan's scheme and proposed an improved scheme to erase those drawbacks. We analyze Yan et al.'s scheme and identify that it is vulnerable to an off-line password guessing attack and does not protect anonymity. Moreover, in their scheme the login and password change phases cannot efficiently detect incorrect input, and this inefficiency in the password change phase can cause a denial-of-service attack. Further, we design an improved scheme for TMIS with the aim of eliminating the drawbacks of Yan et al.'s scheme.

  12. An Improvement of Robust and Efficient Biometrics Based Password Authentication Scheme for Telecare Medicine Information Systems Using Extended Chaotic Maps.

    PubMed

    Moon, Jongho; Choi, Younsung; Kim, Jiye; Won, Dongho

    2016-03-01

    Recently, numerous extended chaotic map-based password authentication schemes that employ smart card technology were proposed for Telecare Medical Information Systems (TMISs). In 2015, Lu et al. used Li et al.'s scheme as a basis to propose a password authentication scheme for TMISs that is based on biometrics and smart card technology and employs extended chaotic maps. Lu et al. demonstrated that Li et al.'s scheme has some weaknesses, such as a violation of session-key security, vulnerability to the user impersonation attack, and a lack of local verification. In this paper, however, we show that Lu et al.'s scheme is still insecure with respect to issues such as a violation of session-key security, and that it is vulnerable to both the outsider attack and the impersonation attack. To overcome these drawbacks, we retain the useful properties of Lu et al.'s scheme and propose a new password authentication scheme that is based on smart card technology and uses chaotic maps. We then show that our proposed scheme is more secure and efficient and satisfies the desired security properties.

  13. A privacy preserving secure and efficient authentication scheme for telecare medical information systems.

    PubMed

    Mishra, Raghavendra; Barnwal, Amit Kumar

    2015-05-01

    The Telecare Medicine Information System (TMIS) provides effective healthcare delivery services by employing information and communication technologies. Privacy and security are always matters of great concern in TMIS. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security. Later, it was proved insecure against various active and passive attacks. To erase the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not provide pre-smart-card authentication, which leads to inefficient login and password change phases. To present an authentication scheme with pre-smart-card authentication, we propose an improved anonymous smart card based authentication scheme for TMIS. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, it presents efficient login and password change phases, where incorrect input can be quickly detected and a user can freely change his password without server assistance. We also demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overhead with relevant schemes.
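    The pre-smart-card authentication mentioned above can be illustrated with a local check: the card stores a salted verifier derived from the password, so a mistyped password is rejected before any message is sent to the server. The construction below is a generic illustration, not the authors' protocol.

    ```python
    import hashlib, hmac, os

    def issue_card(password: str) -> dict:
        """Personalise a (simulated) smart card: store a salt and a password verifier."""
        salt = os.urandom(16)
        verifier = hashlib.sha256(salt + password.encode()).digest()
        return {"salt": salt, "verifier": verifier}

    def local_login_check(card: dict, password: str) -> bool:
        """Pre-smart-card authentication: detect an incorrect password locally,
        before starting the remote login or password-change protocol."""
        candidate = hashlib.sha256(card["salt"] + password.encode()).digest()
        return hmac.compare_digest(candidate, card["verifier"])

    card = issue_card("correct horse battery staple")
    print(local_login_check(card, "wrong password"))               # False -> abort locally
    print(local_login_check(card, "correct horse battery staple")) # True  -> proceed to server
    ```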

  14. Combining Techniques to Refine Item to Skills Q-Matrices with a Partition Tree

    ERIC Educational Resources Information Center

    Desmarais, Michel C.; Xu, Peng; Beheshti, Behzad

    2015-01-01

    The problem of mapping items to skills is gaining interest with the emergence of recent techniques that can use data for both defining this mapping, and for refining mappings given by experts. We investigate the problem of refining mapping from an expert by combining the output of different techniques. The combination is based on a partition tree…

  15. Color encryption scheme based on adapted quantum logistic map

    NASA Astrophysics Data System (ADS)

    Zaghloul, Alaa; Zhang, Tiejun; Amin, Mohamed; Abd El-Latif, Ahmed A.

    2014-04-01

    This paper presents a new color image encryption scheme based on a quantum chaotic system. In this scheme, encryption is accomplished by generating an intermediate chaotic key stream with the help of a quantum logistic map. Each pixel is then encrypted using the cipher value of the previous pixel and the adapted quantum logistic map. The results show that the proposed scheme provides adequate security for the confidentiality of color images.
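    A minimal sketch using the classical (non-quantum) logistic map shows the two ingredients the abstract describes: a chaotic key stream and chaining each pixel's cipher value with the previous one. The quantum logistic map of the paper has additional coupled state variables and is not reproduced here; the key parameters are illustrative.

    ```python
    import numpy as np

    def logistic_keystream(n, x0=0.3456789, r=3.99):
        """Chaotic byte stream from the classical logistic map x <- r*x*(1-x)."""
        ks = np.empty(n, dtype=np.uint8)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)
            ks[i] = int(x * 256) % 256
        return ks

    def encrypt(image, x0=0.3456789, r=3.99):
        """Each cipher byte depends on the key stream and on the previous cipher byte."""
        flat = image.astype(np.uint8).ravel()
        ks = logistic_keystream(flat.size, x0, r)
        out = np.empty_like(flat)
        prev = 0
        for i in range(flat.size):
            c = int(flat[i]) ^ int(ks[i]) ^ prev
            out[i] = c
            prev = c
        return out.reshape(image.shape)

    def decrypt(cipher, x0=0.3456789, r=3.99):
        flat = cipher.ravel()
        ks = logistic_keystream(flat.size, x0, r)
        out = np.empty_like(flat)
        prev = 0
        for i in range(flat.size):
            out[i] = int(flat[i]) ^ int(ks[i]) ^ prev
            prev = int(flat[i])
        return out.reshape(cipher.shape)

    # Round-trip check on a toy RGB image.
    img = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
    assert np.array_equal(decrypt(encrypt(img)), img)
    print("round-trip OK")
    ```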

  16. A Hash Based Remote User Authentication and Authenticated Key Agreement Scheme for the Integrated EPR Information System.

    PubMed

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng

    2015-11-01

    To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we found that Das's authentication scheme is still vulnerable to modification and user duplication attacks. Thereafter we propose a secure and efficient authentication scheme for the integrated EPR information system based on lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show our new scheme is well-suited to adoption in remote medical healthcare services.
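    A minimal sketch of the style of message construction such lightweight schemes rely on (hash and XOR only): the user proves knowledge of a shared secret without transmitting it, and a fresh nonce keeps each login distinct. This is a generic illustration, not the scheme proposed in the paper.

    ```python
    import hashlib, hmac, os

    def H(*parts: bytes) -> bytes:
        """Lightweight one-way hash used by both sides."""
        return hashlib.sha256(b"".join(parts)).digest()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    shared_secret = os.urandom(32)          # established at registration (illustrative)
    user_id = b"patient-42"

    # --- User side: build a login request using only hash and XOR operations.
    nonce = os.urandom(32)
    masked_nonce = xor(nonce, H(shared_secret, user_id))   # hides the nonce in transit
    proof = H(shared_secret, nonce, user_id)                # binds secret, nonce and identity
    request = (user_id, masked_nonce, proof)

    # --- Server side: recover the nonce and verify the proof with the stored secret.
    uid, masked, prf = request
    recovered_nonce = xor(masked, H(shared_secret, uid))
    ok = hmac.compare_digest(prf, H(shared_secret, recovered_nonce, uid))
    print("login accepted" if ok else "login rejected")
    ```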

  17. A Practical and Secure Coercion-Resistant Scheme for Internet Voting

    NASA Astrophysics Data System (ADS)

    Araújo, Roberto; Foulle, Sébastien; Traoré, Jacques

    Juels, Catalano, and Jakobsson (JCJ) proposed at WPES 2005 the first voting scheme that considers real-world threats and that is more realistic for Internet elections. Their scheme, though, has a quadratic work factor and thereby is not efficient for large scale elections. Based on the work of JCJ, Smith proposed an efficient scheme that has a linear work factor. In this paper we first show that Smith's scheme is insecure. Then we present a new coercion-resistant election scheme with a linear work factor that overcomes the flaw of Smith's proposal. Our solution is based on the group signature scheme of Camenisch and Lysyanskaya (Crypto 2004).

  18. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...
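    The storage metric quoted in the abstract can be computed directly; the classification cut-off below is an arbitrary illustrative threshold, not the value used in the study.

    ```python
    def watershed_storage(wetland_km2: float, lake_km2: float, watershed_km2: float) -> float:
        """Watershed storage = (wetland area + lake area) / watershed area."""
        return (wetland_km2 + lake_km2) / watershed_km2

    def classify(storage: float, threshold: float = 0.10) -> str:
        """Simple threshold-based class; the cut-off here is illustrative only."""
        return "high-storage" if storage >= threshold else "low-storage"

    s = watershed_storage(wetland_km2=12.0, lake_km2=8.0, watershed_km2=150.0)
    print(f"storage = {s:.3f} -> {classify(s)}")
    ```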

  19. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1)Lake Superior tributaries and 2) watersheds of riverine coastal wetlands ...

  20. Clinical review: Reunification of acid–base physiology

    PubMed Central

    Kellum, John A

    2005-01-01

    Recent advances in acid–base physiology and in the epidemiology of acid–base disorders have refined our understanding of the basic control mechanisms that determine blood pH in health and disease. These refinements have also brought parity between the newer, quantitative and older, descriptive approaches to acid–base physiology. This review explores how the new and older approaches to acid–base physiology can be reconciled and combined to result in a powerful bedside tool. A case based tutorial is also provided. PMID:16277739
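    One concrete point of contact between the descriptive and quantitative approaches is the Henderson-Hasselbalch relation for the bicarbonate buffer, pH = 6.1 + log10([HCO3-] / (0.03 × PCO2)). The sketch below evaluates it for textbook normal values; the numbers are not data from the review.

    ```python
    import math

    def henderson_hasselbalch(hco3_mmol_per_l: float, pco2_mmHg: float) -> float:
        """Blood pH from the bicarbonate buffer (descriptive acid-base approach)."""
        return 6.1 + math.log10(hco3_mmol_per_l / (0.03 * pco2_mmHg))

    # Normal bicarbonate (24 mmol/L) and PCO2 (40 mmHg) give a pH of about 7.40.
    print(round(henderson_hasselbalch(hco3_mmol_per_l=24.0, pco2_mmHg=40.0), 2))
    ```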
