In Memoriam: Amar J.S. Klar, Ph.D. | Center for Cancer Research
In Memoriam: Amar J.S. Klar, Ph.D. The Center for Cancer Research mourns the recent death of colleague and friend Amar J.S. Klar, Ph.D. Dr. Klar was a much-liked and respected member of the NCI community as part of the Gene Regulation and Chromosome Biology Laboratory since 1988.
NASA Astrophysics Data System (ADS)
Nunes Amaral, Luis A.
2002-03-01
We study the statistical properties of a variety of diverse real-world networks including the neural network of C. elegans, food webs for seven distinct environments, transportation and technological networks, and a number of distinct social networks [1-5]. We present evidence of the occurrence of three classes of small-world networks [2]: (a) scale-free networks, characterized by a vertex connectivity distribution that decays as a power law; (b) broad-scale networks, characterized by a connectivity distribution that has a power-law regime followed by a sharp cut-off; (c) single-scale networks, characterized by a connectivity distribution with a fast decaying tail. Moreover, we note for the classes of broad-scale and single-scale networks that there are constraints limiting the addition of new links. Our results suggest that the nature of such constraints may be the controlling factor for the emergence of different classes of networks. [See http://polymer.bu.edu/amaral/Networks.html for details and http://polymer.bu.edu/amaral/Professional.html for access to PDF files of articles.] 1. M. Barthélémy, L. A. N. Amaral, Phys. Rev. Lett. 82, 3180-3183 (1999). 2. L. A. N. Amaral, A. Scala, M. Barthélémy, H. E. Stanley, Proc. Nat. Acad. Sci. USA 97, 11149-11152 (2000). 3. F. Liljeros, C. R. Edling, L. A. N. Amaral, H. E. Stanley, and Y. Åberg, Nature 411, 907-908 (2001). 4. J. Camacho, R. Guimerà, L.A.N. Amaral, Phys. Rev. E RC (to appear). 5. S. Mossa, M. Barthélémy, H.E. Stanley, L.A.N. Amaral (submitted).
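The constraint mechanism invoked above is easy to demonstrate in simulation. The sketch below (ours, not the authors' code) grows a network by preferential attachment, optionally capping the degree a node may reach; the cap, a crude stand-in for aging or capacity constraints on adding new links, truncates the power-law tail of the degree distribution. All function names and parameters are our own.

```python
import numpy as np

def grow_network(n_nodes, m=2, max_degree=None, seed=0):
    """Grow a network by preferential attachment. Each new node attaches
    to m existing nodes chosen with probability proportional to degree.
    If max_degree is set, saturated nodes stop accepting links -- a toy
    version of the constraints discussed in the abstract above."""
    rng = np.random.default_rng(seed)
    degree = np.zeros(n_nodes, dtype=int)
    degree[: m + 1] = m                      # seed clique of m+1 nodes
    for new in range(m + 1, n_nodes):
        weights = degree[:new].astype(float)
        if max_degree is not None:
            weights[degree[:new] >= max_degree] = 0.0
        if weights.sum() == 0.0:             # all saturated: attach uniformly
            weights[:] = 1.0
        targets = rng.choice(new, size=m, replace=False,
                             p=weights / weights.sum())
        degree[targets] += 1
        degree[new] += m
    return degree

free = grow_network(5000)                    # scale-free: fat tail
capped = grow_network(5000, max_degree=30)   # broad/single-scale: cut-off
print("max degree, unconstrained:", free.max())
print("max degree, capped:       ", capped.max())
```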
A Novel AMARS Technique for Baseline Wander Removal Applied to Photoplethysmogram.
Timimi, Ammar A K; Ali, M A Mohd; Chellappan, K
2017-06-01
A new digital filter, AMARS (aligning minima of alternating random signal), has been derived using trigonometry to regulate signal pulsations in line. The pulses appear randomly in continuous signals that contain a frequency band lower than the signal's mean rate. Frequency-selective filters are conventionally employed to reject frequencies undesired by specific applications. However, these conventional filters only reduce the effects of the rejected range, producing a signal superimposed by some baseline wander (BW). In this work, filters of different ranges and techniques were independently configured to preprocess a photoplethysmogram, an optical biosignal of blood volume dynamics, producing wave shapes with several BWs. The AMARS application effectively removed the encountered BWs to assemble similarly aligned trends. The removal implementation was found repeatable in both ear and finger photoplethysmograms, emphasizing the importance of BW removal in biosignal processing for retaining the structural, functional and physiological properties of the signal. We also believe that AMARS may be relevant to other biological and continuous signals modulated by similar types of baseline volatility.
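The abstract gives no equations for AMARS, so the sketch below only illustrates the general minima-alignment idea on a synthetic pulse signal: locate the pulse troughs, interpolate a baseline through them, and subtract it so all troughs line up. The function name, parameters, and test signal are ours, not the authors'.

```python
import numpy as np
from scipy.signal import find_peaks

def remove_baseline_by_minima(signal, fs):
    """Estimate baseline wander from the pulse minima and subtract it,
    so that all pulse troughs align near zero."""
    # troughs = peaks of the negated signal; at most one per 0.4 s
    troughs, _ = find_peaks(-signal, distance=int(0.4 * fs))
    if len(troughs) < 2:
        return signal - signal.min()
    baseline = np.interp(np.arange(len(signal)), troughs, signal[troughs])
    return signal - baseline

# synthetic PPG: 1.2 Hz pulses riding on slow 0.2 Hz baseline wander
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
ppg = np.abs(np.sin(np.pi * 1.2 * t)) ** 3 + 0.8 * np.sin(2 * np.pi * 0.2 * t)
flat = remove_baseline_by_minima(ppg, fs)
print("trough spread before: %.3f, after: %.3f"
      % (np.ptp(ppg[find_peaks(-ppg)[0]]), np.ptp(flat[find_peaks(-flat)[0]])))
```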
Implications for Child Bilingual Acquisition, Optionality and Transfer
ERIC Educational Resources Information Center
Serratrice, Ludovica
2014-01-01
Amaral & Roeper's Multiple Grammars (MG) proposal offers an appealingly simple way of thinking about the linguistic representations of bilingual speakers. This article presents a commentary on the MG language acquisition theory proposed by Luiz Amaral and Tom Roeper in this issue, focusing on the theory's implications for child…
Commentary to "Multiple Grammars and Second Language Representation," by Luiz Amaral and Tom Roeper
ERIC Educational Resources Information Center
Pérez-Leroux, Ana T.
2014-01-01
In this commentary, the author defends the Multiple Grammars (MG) theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue. Topics discussed include second language acquisition, the concept of developmental optionality, and the idea that structural decisions involve the lexical dimension. The author states that A&R's…
Omnivorous Representation Might Lead to Indigestion: Commentary on Amaral and Roeper
ERIC Educational Resources Information Center
Slabakova, Roumyana
2014-01-01
This article offers commentary that the Multiple Grammar (MG) language acquisition theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue lacks elaboration of the psychological mechanisms at work in second language acquisition. Topics discussed include optionality in a speaker's grammar and the rules of verb position in…
Unconventional protein sources: apricot seed kernels.
Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M
1981-09-01
Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to assess them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels owing to low food consumption caused by the kernels' bitterness; there was no loss in weight in that case. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels; the Net Protein Ratios of the latter two were nearly equal.
Piezoelectric Resonance Defined High Performance Sensors and Modulators
2016-05-30
Lopez-Ribot, Amar S. Bhalla, Melissa Montes, Ruyan Guo, "Properties of Silver and Copper Nanoparticle-Containing Aqueous Suspensions and Evaluation of Their In Vitro Activity against Candida albicans"; Amar S. Bhalla, Ruyan Guo, "Properties of Silver and Copper Nanoparticle-Containing Aqueous Solutions and Their Anti-Biofilm Effects" (2015), symposium presentation.
Developing and Modifying Behavioral Coding Schemes in Pediatric Psychology: A Practical Guide
McMurtry, C. Meghan; Chambers, Christine T.; Bakeman, Roger
2015-01-01
Objectives To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. Methods This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. Results A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Conclusions Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. PMID:25416837
ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallberg, J.; Stagg, A.
2000-10-01
A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
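A serial toy version of the edge-bisection step (ours; the report's implementation is MPI-parallel and also supports coarsening) shows the role of the hash table: a dict keyed by the sorted edge guarantees each shared midpoint is created exactly once.

```python
def refine_marked(vertices, triangles, marked):
    """Refine marked triangles by bisecting all three edges (1-to-4 split).
    A dict keyed by the sorted edge plays the hash-table role: each edge
    midpoint is created once, even when the edge is shared. Note that a
    production scheme must also split neighbors to remove hanging nodes."""
    midpoint = {}  # edge (i, j), i < j  ->  new vertex index

    def mid(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint:
            (x1, y1), (x2, y2) = vertices[key[0]], vertices[key[1]]
            vertices.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
            midpoint[key] = len(vertices) - 1
        return midpoint[key]

    out = []
    for t, (a, b, c) in enumerate(triangles):
        if t in marked:
            ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
            out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        else:
            out.append((a, b, c))
    return vertices, out

verts = [(0, 0), (1, 0), (0, 1), (1, 1)]
tris = [(0, 1, 2), (1, 3, 2)]
verts, tris = refine_marked(verts, tris, marked={0})
print(len(verts), "vertices,", len(tris), "triangles")  # 7 vertices, 5 triangles
```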
Aerodynamic design optimization via reduced Hessian SQP with solution refining
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
An all-at-once reduced Hessian Successive Quadratic Programming (SQP) scheme has been shown to be efficient for solving aerodynamic design optimization problems with a moderate number of design variables. This paper extends this scheme to allow solution refining. In particular, we introduce a reduced Hessian refining technique that is critical for making a smooth transition of the Hessian information from coarse grids to fine grids. Test results on a nozzle design using quasi-one-dimensional Euler equations show that through solution refining the efficiency and the robustness of the all-at-once reduced Hessian SQP scheme are significantly improved.
Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.
Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger
2015-01-01
To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible.
The Portsmouth-based glaucoma refinement scheme: a role for virtual clinics in the future?
Trikha, S; Macgregor, C; Jeffery, M; Kirwan, J
2012-10-01
Glaucoma referrals continue to impose a significant burden on Hospital Eye Services (HES), with a large proportion of these being false positives. To evaluate the Portsmouth glaucoma scheme, utilising virtual clinics, digital technology, and community optometrists to streamline glaucoma referrals. The stages of the patient trail were mapped and, at each step of the process, 100 consecutive patient decisions were identified. The diagnostic outcomes of 50 consecutive patients referred from the refinement scheme to the HES were identified. A total of 76% of 'glaucoma' referrals were suitable for the refinement scheme. Overall, 94% of disc images were gradeable in the virtual clinic. In all, 11% of patients 'attending' the virtual clinic were accepted into HES, with 89% being discharged for community follow-up. Of referrals accepted into HES, the positive predictive value (glaucoma/ocular hypertension/suspect) was 0.78 vs 0.37 in the predating 'unrefined' scheme (95% CI 0.65-0.87). The scheme has released 1400 clinic slots/year for HES, and has produced a £244 200/year cost saving for Portsmouth Hospitals' Trust. The refinement scheme is streamlining referrals and increasing the positive predictive rate in the diagnosis of glaucoma, glaucoma suspect or ocular hypertension. This consultant-led practice-based commissioning scheme, if adopted widely, is likely to incur a significant cost saving while maintaining high quality of care within the NHS.
NASA Astrophysics Data System (ADS)
Pimpinelli, Alberto; Einstein, T. L.; González, Diego Luis; Sathiyanarayanan, Rajesh; Hamouda, Ajmi Bh.
2011-03-01
Earlier we showed [PRL 99, 226102 (2007)] that the capture-zone distribution (CZD) in growth could be well described by P(s) = a s^β exp(-b s^2), where s is the CZ area divided by its average value. Painstaking simulations by Amar's [PRE 79, 011602 (2009)] and Evans's [PRL 104, 149601 (2010)] groups showed inadequacies in our mean-field Fokker-Planck argument relating β to the critical nucleus size. We refine our derivation to retrieve their β ~ i + 2 [PRL 104, 149602 (2010)]. We discuss applications of this formula and methodology to experiments on Ge/Si(001) and on various organics on SiO2, as well as to kinetic Monte Carlo studies of homoepitaxial growth on Cu(100) with codeposited impurities of different sorts. In contrast to theory, there can be significant changes to β with coverage. Some experiments also show temperature dependence. Supported by NSF-MRSEC at UMD, Grant DMR 05-20471.
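For readers who want to use the formula: requiring P(s) = a s^β exp(-b s^2) to have unit area and unit mean (s is already normalized by its average) fixes both constants in terms of β. The expressions for a and b below are derived by us from those two constraints, with a numerical check.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def wigner_surmise_constants(beta):
    """Constants of P(s) = a * s**beta * exp(-b * s**2) under the
    constraints integral(P) = 1 and integral(s * P) = 1."""
    b = (gamma((beta + 2) / 2.0) / gamma((beta + 1) / 2.0)) ** 2
    a = 2.0 * b ** ((beta + 1) / 2.0) / gamma((beta + 1) / 2.0)
    return a, b

for i in (1, 2, 3):                     # candidate critical nucleus sizes
    beta = i + 2                        # the refined relation quoted above
    a, b = wigner_surmise_constants(beta)
    area, _ = quad(lambda s: a * s**beta * np.exp(-b * s * s), 0, np.inf)
    mean, _ = quad(lambda s: a * s**(beta + 1) * np.exp(-b * s * s), 0, np.inf)
    print(f"beta={beta}: a={a:.4f}, b={b:.4f}, area={area:.6f}, mean={mean:.6f}")
```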
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
Mansour, M M; Spink, A E F
2013-01-01
Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods previously developed for grid refinement have suffered from certain drawbacks, for example deficiencies in the implemented interpolation technique, non-reciprocity in head or flow calculations, lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flow in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows potential for application to solving groundwater heads over nested meshes with irregular shapes.
Some observations on mesh refinement schemes applied to shock wave phenomena
NASA Technical Reports Server (NTRS)
Quirk, James J.
1995-01-01
This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.
An Efficient Means of Adaptive Refinement Within Systems of Overset Grids
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1996-01-01
An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.
A new adaptive mesh refinement strategy for numerically solving evolutionary PDE's
NASA Astrophysics Data System (ADS)
Burgarelli, Denise; Kischinhevsky, Mauricio; Biezuner, Rodney Josue
2006-11-01
A graph-based implementation of quadtree meshes for dealing with adaptive mesh refinement (AMR) in the numerical solution of evolutionary partial differential equations is discussed using finite volume methods. The technique displays a plug-in feature that allows replacement of a group of cells in any region of interest by another one with arbitrary refinement, with only local changes occurring in the data structure. The data structure is also specially designed to minimize the number of operations needed in the AMR. Implementation of the new scheme allows flexibility in the levels of refinement of adjacent regions. Moreover, storage requirements and computational cost compare competitively with mesh refinement schemes based on hierarchical trees. Low storage is achieved because only the child nodes are stored when a refinement takes place. These nodes become part of a graph structure, motivating the denomination autonomous leaves graph (ALG) for the new scheme. Neighbors can then be reached without accessing their parent nodes. Additionally, linear-system solvers based on the minimization of functionals can be easily employed. ALG was not conceived with any particular problem or geometry in mind and can thus be applied to the study of several phenomena. Some test problems are used to illustrate the effectiveness of the technique.
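A 1D cartoon of the autonomous-leaves idea (ours; the paper works with 2D quadtrees) makes the locality concrete: leaves carry neighbor links and no parent pointers, so refining a cell only rewires links in its immediate vicinity.

```python
class Cell:
    """A leaf in a 1D autonomous-leaves mesh: leaves form a doubly linked
    graph, so neighbors are reached without parent nodes."""
    def __init__(self, x0, x1, level):
        self.x0, self.x1, self.level = x0, x1, level
        self.prev = self.next = None

def refine(cell):
    """Replace `cell` by two half-size children; only local links change,
    and the parent is not kept -- low storage, as described above."""
    xm = 0.5 * (cell.x0 + cell.x1)
    left = Cell(cell.x0, xm, cell.level + 1)
    right = Cell(xm, cell.x1, cell.level + 1)
    left.next, right.prev = right, left
    left.prev, right.next = cell.prev, cell.next
    if cell.prev: cell.prev.next = left
    if cell.next: cell.next.prev = right
    return left

# four uniform cells, then refine the second one
head = Cell(0.0, 0.25, 0)
prev = head
for i in range(1, 4):
    n = Cell(i * 0.25, (i + 1) * 0.25, 0)
    n.prev, prev.next = prev, n
    prev = n
refine(head.next)

c = head
while c:
    print(f"[{c.x0:.3f}, {c.x1:.3f}] level {c.level}")
    c = c.next
```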
NASA Astrophysics Data System (ADS)
Hamimi, Z.; Kassem, O. M. K.; El-Sabrouty, M. N.
2015-09-01
The rotation of rigid objects within a flowing viscous medium is a function of several factors, including the degree of non-coaxiality. The relationship between the orientation of such objects and their aspect ratio can be used in vorticity analyses in a variety of geological settings. A vorticity-analysis method for quantitative estimation of the kinematic vorticity number (Wm) has been applied using rotated rigid objects such as quartz and feldspar grains. The kinematic vorticity number determined for the high-temperature mylonitic Abt schist in the Al Amar area, extreme eastern Arabian Shield, ranges from ~0.8 to 0.9. Results from the vorticity and strain analyses indicate that deformation in the area deviated from simple shear. It is concluded that nappe stacking occurred early, during an earlier thrusting event, probably by brittle imbrication. Ductile strain was superimposed on the nappe structure at high pressure, as revealed by a penetrative subhorizontal foliation developed subparallel to the tectonic contacts with the underlying and overlying nappes. Accumulation of ductile strain during underplating was not by simple shear but involved a component of vertical shortening, which produced the subhorizontal foliation in the Al Amar area. In most cases this foliation formed concurrently with thrust-sheet imbrication, indicating that nappe stacking was associated with vertical shortening.
Felsic plutonism in the Al Amar—Idsas area, Kingdom of Saudi Arabia
NASA Astrophysics Data System (ADS)
Le Bel, L.; Laval, M.
A tonalite-trondhjemite suite, calc-alkalic plutons and alkali-feldspar granites dated at 670 and 580 Ma intrude thick volcano-sedimentary rocks of the Al Amar group east of the Al Amar fault and the Abt schist west of the fault. The tonalite-trondhjemite suite (group I) is characterized by low Rb (50 ppm) and Sr (100-400 ppm) and by weakly fractionated rare-earth patterns ((La/Yb)N ca 2-3) with a weak negative Eu anomaly. Calc-alkalic plutons (group II) are richer in Rb (50-150 ppm), contain variable Sr (50-1000 ppm), and have strongly fractionated rare-earth patterns ((La/Yb)N ca 6-22) with no Eu anomaly. Alkali-feldspar granite (group III) is characterized by high Rb (150-200 ppm) and shows fractionated rare-earth patterns ((La/Yb)N ca 6-18) with a well-developed Eu anomaly. Group III includes 'specialized granites' with high Rb (300-400 ppm) and Sn (28-66 ppm), and rare-earth patterns showing a distinctive 'sea gull' profile with a very strong Eu anomaly (Eu*/Eu = 20). Oxygen isotope geochemistry suggests that group I rocks (mean δ18O ca 7.0) were mantle-derived, and that group II and III rocks intruding the Al Amar group (δ18O ca 7.9 and 8.8, respectively) were derived by remelting of group I, whereas those intruding the Abt schist (δ18O ca 8.7 and 10.8, respectively) were partially derived by anatexis of the Afif block. Magmatogenesis reflects island-arc development. Rocks of group I represent the initial subduction phase. Syn- to late-tectonic plutons of group II intruded the arc east of the Al Amar fault and the accretionary prism (Abt schist) to the west, which was in collision with the older Afif block. Post-tectonic group III rocks were emplaced in an already cratonized area.
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
NASA Technical Reports Server (NTRS)
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
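The adaptive quadrature component can be illustrated with the classical recursive Simpson rule; this is a generic textbook scheme, not necessarily the procedure used in the paper.

```python
def adaptive_simpson(f, a, b, tol=1e-9):
    """Recursive adaptive Simpson quadrature: bisect any subinterval whose
    Richardson error estimate exceeds its share of the tolerance."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, m, b, fa, fm, fb, whole, tol):
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        err = (left + right - whole) / 15.0   # Richardson estimate
        if abs(err) <= tol:
            return left + right + err
        return (recurse(a, lm, m, fa, flm, fm, left, 0.5 * tol)
                + recurse(m, rm, b, fm, frm, fb, right, 0.5 * tol))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, m, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

import math
print(adaptive_simpson(math.sin, 0.0, math.pi))  # ~2.0
```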
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collision detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results for flow over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
Modeling flow at the nozzle of a solid rocket motor
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1991-01-01
Modeling the internal flow field of a rocket motor results in a system of nonlinear partial differential equations that can be solved numerically. The accuracy and convergence of the solution of the system of equations depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of numerical modeling. With this scheme, the grid is refined as the solution evolves. This significantly improves the methodology for solving flow problems in rocket nozzles by putting the refinement part of grid generation into the computer algorithm.
NASA Astrophysics Data System (ADS)
Pappalardo, Francesco; Pennisi, Marzio
2016-07-01
Fibrosis is a process in which excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give us a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent-based models (ABM) can give a valuable contribution to the understanding and better management of fibrotic diseases.
NASA Astrophysics Data System (ADS)
Semplice, Matteo; Loubère, Raphaël
2018-02-01
In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaN's. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover numerical evidences show that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
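A minimal 1D analogue of the detect-and-recompute loop (ours; the paper works with third-order polynomial reconstructions on 2D AMR meshes): a second-order Lax-Wendroff candidate everywhere, a NaN/discrete-maximum-principle detector, and a first-order upwind fallback recomputed only in the troubled cells.

```python
import numpy as np

def step_mood(u, cfl):
    """One step of periodic 1D linear advection in the a posteriori spirit:
    high-order candidate, troubled-cell detection, local low-order redo."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    # candidate: second-order Lax-Wendroff (oscillates near jumps)
    cand = u - 0.5 * cfl * (up1 - um1) + 0.5 * cfl**2 * (up1 - 2.0 * u + um1)
    # detector: NaN or violation of the local discrete maximum principle
    lo = np.minimum(np.minimum(um1, u), up1)
    hi = np.maximum(np.maximum(um1, u), up1)
    bad = ~np.isfinite(cand) | (cand < lo - 1e-12) | (cand > hi + 1e-12)
    # fallback: first-order upwind, applied only where detection fired
    cand[bad] = u[bad] - cfl * (u[bad] - um1[bad])
    return cand

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # square wave
for _ in range(100):
    u = step_mood(u, cfl=0.5)
print("min %.3f, max %.3f (stays within [0, 1])" % (u.min(), u.max()))
```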
Jeribi, Aref; Krichen, Bilel; Mefteh, Bilel
2013-01-01
In the paper [A. Ben Amar, A. Jeribi, and B. Krichen, Fixed point theorems for block operator matrix and an application to a structured problem under boundary conditions of Rotenberg's model type, to appear in Math. Slovaca. (2014)], the existence of solutions of the two-dimensional boundary value problem (1) and (2) was discussed in the product Banach space L^p × L^p for p ∈ (1, ∞). Due to the lack of compactness in L^1 spaces, the analysis did not cover the case p = 1. The purpose of this work is to extend the results of Ben Amar et al. to the case p = 1 by establishing new variants of fixed-point theorems for a 2×2 operator matrix involving weakly compact operators.
A Cartesian grid approach with hierarchical refinement for compressible flows
NASA Technical Reports Server (NTRS)
Quirk, James J.
1994-01-01
Many numerical studies of flows that involve complex geometries are limited by the difficulties in generating suitable grids. We present a Cartesian boundary scheme for two-dimensional, compressible flows that is unfettered by the need to generate a computational grid and so it may be used, routinely, even for the most awkward of geometries. In essence, an arbitrary-shaped body is allowed to blank out some region of a background Cartesian mesh and the resultant cut-cells are singled out for special treatment. This is done within a finite-volume framework and so, in principle, any explicit flux-based integration scheme can take advantage of this method for enforcing solid boundary conditions. For best effect, the present Cartesian boundary scheme has been combined with a sophisticated, local mesh refinement scheme, and a number of examples are shown in order to demonstrate the efficacy of the combined algorithm for simulations of shock interaction phenomena.
NASA Astrophysics Data System (ADS)
Kachapova, Farida
2016-07-01
Mathematical and computational models in biology and medicine help to improve diagnostics and medical treatments. Modeling of pathological fibrosis is reviewed by M. Ben Amar and C. Bianca in [4]. Pathological fibrosis is the process in which excessive fibrous tissue is deposited on an organ or tissue during wound healing and can obliterate its normal function. In [4] the phenomena of fibrosis are briefly explained, including the causes, mechanism and management; research models of pathological fibrosis are described, compared and critically analyzed. Different models are suitable at different levels: molecular, cellular and tissue. The main goal of mathematical modeling of fibrosis is to predict the long-term behavior of the system depending on bifurcation parameters; there are two main trends: inhibition of fibrosis due to an active immune system, and swelling of fibrosis because of a weak immune system.
NASA Astrophysics Data System (ADS)
Wu, Min
2016-07-01
The development of anti-fibrotic therapies for a variety of diseases has recently become more and more urgent, for example in pulmonary, renal and liver fibrosis [1,2], as well as in malignant tumor growth [3]. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental and in-silico models have been developed to understand the fibrosis process, and their implications for therapeutic strategies have also been frequently demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4] a unified multi-scale approach is proposed for understanding fibrosis. Since one of the major purposes of extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental and in-silico studies of anti-fibrosis therapies should be conducted more intensively.
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
Modified Mean-Pyramid Coding Scheme
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Romer, Richard
1996-01-01
The modified mean-pyramid coding scheme requires transmission of slightly less data; the data-expansion factor is reduced from 1/3 to 1/12. Schemes of this kind provide progressive transmission of image data in a sequence of frames, such that a coarse version of the image can be reconstructed after receipt of the first frame and an increasingly refined version after receipt of each subsequent frame.
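For orientation, a sketch of the plain (unmodified) mean pyramid; the coarser levels sum to the 1/3 data-expansion factor that the modified scheme reduces to 1/12 by exploiting values the receiver can already reconstruct. The code and names are ours.

```python
import numpy as np

def mean_pyramid(img):
    """Build a mean pyramid: each level halves the resolution by 2x2
    averaging. Transmitting coarsest-to-finest gives progressive refinement."""
    levels = [img.astype(float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2]
                              + a[0::2, 1::2] + a[1::2, 1::2]))
    return levels[::-1]                  # coarsest first

img = np.random.default_rng(1).integers(0, 256, (64, 64))
pyr = mean_pyramid(img)
extra = sum(level.size for level in pyr[:-1]) / pyr[-1].size
print("expansion factor of the plain pyramid: %.4f (about 1/3)" % extra)

# first-frame preview: upsample the coarsest level to full resolution
preview = np.kron(pyr[0], np.ones(img.shape))
print("preview shape:", preview.shape)
```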
Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
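The undivided second-difference sensor is simple to state: flag a cell when |q[j+1] - 2q[j] + q[j-1]| (no division by the mesh spacing) exceeds a threshold. A 1D sketch, ours rather than OVERFLOW code:

```python
import numpy as np

def flag_cells(q, threshold):
    """Flag interior cells whose undivided second difference of the flow
    quantity q exceeds the threshold; flagged cells would get finer grids."""
    d2 = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2])
    flags = np.zeros(q.shape, dtype=bool)
    flags[1:-1] = d2 > threshold
    return flags

x = np.linspace(0.0, 1.0, 101)
q = np.tanh((x - 0.5) / 0.02)            # sharp layer mimicking a shock
print("cells flagged for refinement:", np.flatnonzero(flag_cells(q, 0.05)))
```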
A Viable Scheme for Elemental Extraction and Purification Using In-Situ Planetary Resources
NASA Technical Reports Server (NTRS)
Sen, S.; Schofield, E.; ODell, S.; Ray, C. S.
2005-01-01
NASA's new strategic direction includes establishing a self-sufficient, affordable and safe human and robotic presence outside low Earth orbit. Items required for a self-sufficient extraterrestrial habitat will include materials for power generation (e.g., Si for solar cells) and habitat construction (e.g., Al, Fe, and Ti). In this paper we present a viable elemental extraction and refining process from in-situ regolith which would be optimally continuous, robotically automated, and require a minimum amount of astronaut supervision and containment facilities. The approach is based on using a concentrated heat source and a translating sample geometry to enable simultaneous oxide reduction and elemental refining. Preliminary results are presented to demonstrate that the proposed zone refining process is capable of segregating or refining important elements such as Si (for solar cell fabrication) and Fe (for habitat construction). A conceptual scheme is presented whereby such a process could be supported by solar energy and a precursor robotic mission on the surface of the Moon.
A conservation and biophysics guided stochastic approach to refining docked multimeric proteins.
Akbal-Delibas, Bahar; Haspel, Nurit
2013-01-01
We introduce a protein docking refinement method that accepts complexes consisting of any number of monomeric units. The method uses a scoring function based on a tight coupling between evolutionary conservation, geometry and physico-chemical interactions. Understanding the role of protein complexes in the basic biology of organisms relies heavily on the detection of protein complexes and their structures. Different computational docking methods have been developed for this purpose; however, these methods are often not accurate, and their results need to be further refined to improve the geometry and the energy of the resulting complexes. Also, despite the fact that complexes in nature often have more than two monomers, most docking methods focus on dimers, since the computational complexity increases exponentially with the addition of monomeric units. Our results show that the refinement scheme can efficiently handle complexes with more than two monomers by biasing the results towards complexes with native interactions, filtering out false positives. Our refined complexes have better IRMSDs with respect to the known complexes and lower energies than the initial docked structures. Evolutionary conservation information allows us to bias our results towards possible functional interfaces, and the probabilistic selection scheme helps us to escape local energy minima. We aim to incorporate our refinement method into a larger framework which also enables docking of multimeric complexes given only monomeric structures.
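The abstract does not spell out the probabilistic selection rule, but a Metropolis-style acceptance criterion is the standard way such a scheme escapes local energy minima; the sketch below uses a toy 1D scoring function of our own in place of the conservation/biophysics score.

```python
import numpy as np

def metropolis_refine(score, x0, propose, n_steps=3000, temp=1.0, seed=0):
    """Stochastic refinement: always accept downhill moves (lower score =
    better complex), sometimes accept uphill ones, so the search can
    climb out of local minima; return the best state visited."""
    rng = np.random.default_rng(seed)
    x = best = x0
    for _ in range(n_steps):
        cand = propose(x, rng)
        if (score(cand) < score(x)
                or rng.random() < np.exp((score(x) - score(cand)) / temp)):
            x = cand
        if score(x) < score(best):
            best = x
    return best

score = lambda x: 0.05 * (x - 3.0) ** 2 - np.cos(2.0 * x)  # toy energy
propose = lambda x, rng: x + 0.5 * rng.standard_normal()
print("refined state: %.2f (global minimum near 3.1)"
      % metropolis_refine(score, x0=-2.0, propose=propose))
```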
Thoughts on the chimera method of simulation of three-dimensional viscous flow
NASA Technical Reports Server (NTRS)
Steger, Joseph L.
1991-01-01
The chimera overset grid method is reviewed and discussed relative to other procedures for simulating flow about complex configurations. It is argued that while more refinement of the technique is needed, current schemes are competitive with unstructured grid schemes and should ultimately prove more useful.
Multiple Grammars: Old Wine in Old Bottles
ERIC Educational Resources Information Center
Sorace, Antonella
2014-01-01
Amaral and Roeper (this issue; henceforth A&R) argue that all speakers -- regardless of whether monolingual or bilingual -- have multiple grammars in their mental language representations. They further claim that this simple assumption can explain many things: optionality in second language (L2) language behaviour, multilingualism, language…
Constraining Multiple Grammars
ERIC Educational Resources Information Center
Hopp, Holger
2014-01-01
This article offers the author's commentary on the Multiple Grammars (MG) language acquisition theory proposed by Luiz Amaral and Tom Roeper in the present issue. Multiple Grammars advances the claim that optionality is a constitutive characteristic of any one grammar, with interlanguage grammars being perhaps the clearest examples of a…
Final Report: Fourth Peer Review of the CMAQ Model
The CMAQ Model External Peer Review Panel conducted a two-and-a-half-day review on June 27, 28, and 29, 2011. This report summarizes its findings and follows other reviews conducted in 2004, 2005, and 2006 [Amar et al., 2004, 2005, and 2007].
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
Oxygen-Induced Cracking Distillation of Oil in the Continuous Flow Tank Reactor
ERIC Educational Resources Information Center
Shvets, Valeriy F.; Kozlovskiy, Roman A.; Luganskiy, Artur I.; Gorbunov, Andrey V.; Suchkov, Yuriy P.; Ushin, Nikolay S.; Cherepanov, Alexandr A.
2016-01-01
The article analyses problems of processing black oil fuel and addresses the possibility of increasing the depth of oil refining with a new processing scheme. The study examines various methods of increasing the depth of oil refining, reveals their inadequacies, and highlights the need to introduce a new method of processing atmospheric and vacuum…
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
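The first step, building the initial MSM from simulation trajectories, reduces to counting lagged transitions and row-normalizing. A minimal sketch (ours; the paper's machine-learning refinement against experimental data is not reproduced here):

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Estimate an MSM from a discrete state trajectory: transition counts
    at the given lag time, row-normalized into probabilities, plus the
    equilibrium populations from the stationary left eigenvector."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1.0
    T = C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return T, pi / pi.sum()

rng = np.random.default_rng(0)
true_T = np.array([[0.95, 0.05], [0.10, 0.90]])
traj = [0]
for _ in range(20000):
    traj.append(rng.choice(2, p=true_T[traj[-1]]))
T, pi = estimate_msm(np.array(traj), n_states=2)
print("estimated T:\n", np.round(T, 3))
print("populations:", np.round(pi, 3))    # close to [2/3, 1/3]
```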
Complexity and Conflicting Grammars in Language Acquisition
ERIC Educational Resources Information Center
Westergaard, Marit
2014-01-01
The article by Amaral and Roeper (this issue; henceforth A&R) presents many interesting ideas about first and second language acquisition as well as some experimental data convincingly illustrating the difference between production and comprehension. The article extends the concept of Universal Bilingualism proposed in Roeper (1999) to second…
Doebrich, J.L.; Al-Jehani, A. M.; Siddiqui, A.A.; Hayes, T.S.; Wooden, J.L.; Johnson, P.R.
2007-01-01
The Neoproterozoic Ar Rayn terrane is exposed along the eastern margin of the Arabian shield. The terrane is bounded on the west by the Ad Dawadimi terrane across the Al Amar fault zone (AAF), and is nonconformably overlain on the east by Phanerozoic sedimentary rocks. The terrane is composed of a magmatic arc complex and syn- to post-orogenic intrusions. The layered rocks of the arc, the Al Amar group (>689 Ma to ca. 625 Ma), consist of tholeiitic to calc-alkaline basaltic to rhyolitic volcanic and volcaniclastic rocks with subordinate tuffaceous sedimentary rocks and carbonates, and are divided into an eastern and a western sequence. Plutonic rocks of the terrane form three distinct lithogeochemical groups: (1) low-Al trondhjemite-tonalite-granodiorite (TTG) of arc affinity (632-616 Ma) in the western part of the terrane, (2) high-Al TTG/adakite of arc affinity (689-617 Ma) in the central and eastern part of the terrane, and (3) syn- to post-orogenic alkali granite (607-583 Ma). West-dipping subduction along a trench east of the terrane is inferred from high-Al TTG/adakite emplaced east of low-Al TTG. The Ar Rayn terrane contains significant resources in epithermal Au-Ag-Zn-Cu-barite, enigmatic stratiform volcanic-hosted Khnaiguiyah-type Zn-Cu-Fe-Mn, and orogenic Au vein deposits, and the potential for significant resources in Fe-oxide Cu-Au (IOCG) and porphyry Cu deposits. Khnaiguiyah-type deposits formed before or during early deformation of the Al Amar group eastern sequence. Epithermal and porphyry deposits formed proximal to volcanic centers in the Al Amar group western sequence. IOCG deposits are largely structurally controlled and hosted by group-1 intrusions and Al Amar group volcanic rocks in the western part of the terrane. Orogenic gold veins are largely associated with north-striking faults, particularly in and near the AAF, and are presumably related to amalgamation of the Ar Rayn and Ad Dawadimi terranes. Geologic, structural, and metallogenic characteristics of the Ar Rayn terrane are analogous to the Andean continental margin of Chile, with opposite subduction polarity. The Ar Rayn terrane represents a continental margin arc that lay above a west-dipping subduction zone along a continental block represented by the Afif composite terrane. The concentration of epithermal, porphyry Cu, and IOCG mineral systems, of central arc affiliation, along the AAF suggests that the AAF is not an ophiolitic suture zone, but originated as a major intra-arc fault that localized magmatism and mineralization. West-directed oblique subduction and ultimate collision with a landmass from the east (East Gondwana?) resulted in major transcurrent displacement along the AAF, bringing the eastern part of the arc terrane to its present exposed position, juxtaposed across the AAF against a back-arc basin assemblage represented by the Abt schist of the Ad Dawadimi terrane. Our findings indicate that arc formation and accretionary processes in the Arabian shield were still ongoing into the latest Neoproterozoic (Ediacaran), to about 620-600 Ma, and lead us to conclude that evolution of the Ar Rayn terrane (arc formation, accretion, syn- to post-orogenic plutonism) defines a final stage of assembly of the Gondwana supercontinent along the northeastern margin of the East African orogen.
In Search of Grid Converged Solutions
NASA Technical Reports Server (NTRS)
Lockard, David P.
2010-01-01
Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
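The observed order of accuracy extracted from such a mesh sequence comes from the error ratio between grids, p = log(e_2h / e_h) / log(r) for refinement ratio r. A small self-contained check of ours, using central differencing whose design order is 2:

```python
import numpy as np

def observed_order(err_coarse, err_fine, r=2.0):
    """Observed order from errors on two grids with spacing ratio r."""
    return np.log(err_coarse / err_fine) / np.log(r)

x = 1.0
errs = [abs((np.sin(x + h) - np.sin(x - h)) / (2.0 * h) - np.cos(x))
        for h in (0.1, 0.05, 0.025)]
for e2h, eh in zip(errs, errs[1:]):
    print("observed order: %.3f" % observed_order(e2h, eh))   # ~2
```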
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov’s method for hydrodynamics; a symmetric, time centered modified symplectic scheme for collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
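The time-centered update for the collisionless component can be illustrated by the standard kick-drift-kick leapfrog, a symplectic scheme; this single-particle sketch is ours and omits the AMR coupling.

```python
import numpy as np

def kdk_step(x, v, accel, dt):
    """One kick-drift-kick leapfrog step (time-centered, symplectic)."""
    v_half = v + 0.5 * dt * accel(x)           # kick
    x_new = x + dt * v_half                    # drift
    v_new = v_half + 0.5 * dt * accel(x_new)   # kick
    return x_new, v_new

accel = lambda x: -x                           # harmonic oscillator test
x, v = 1.0, 0.0
for _ in range(10000):
    x, v = kdk_step(x, v, accel, dt=0.05)
print("energy drift after 10000 steps:", 0.5 * (v * v + x * x) - 0.5)
```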
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
Heterogeneous Systems for Information-Variable Environments (HIVE)
2017-05-01
ARL-TR-8027 ● May 2017 ● US Army Research Laboratory. Heterogeneous Systems for Information-Variable Environments (HIVE), by Amar... Computational and Information Sciences Directorate, ARL. Approved for public release; distribution is unlimited.
NASA Astrophysics Data System (ADS)
Aftosmis, Michael J.
1992-10-01
A new node based upwind scheme for the solution of the 3D Navier-Stokes equations on adaptively refined meshes is presented. The method uses a second-order upwind TVD scheme to integrate the convective terms, and discretizes the viscous terms with a new compact central difference technique. Grid adaptation is achieved through directional division of hexahedral cells in response to evolving features as the solution converges. The method is advanced in time with a multistage Runge-Kutta time stepping scheme. Two- and three-dimensional examples establish the accuracy of the inviscid and viscous discretization. These investigations highlight the ability of the method to produce crisp shocks, while accurately and economically resolving viscous layers. The representation of these and other structures is shown to be comparable to that obtained by structured methods. Further 3D examples demonstrate the ability of the adaptive algorithm to effectively locate and resolve multiple scale features in complex 3D flows with many interacting, viscous, and inviscid structures.
Random Matrix Theory and Econophysics
NASA Astrophysics Data System (ADS)
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system specific property, i.e., as containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory Analysis of Diffusion in Stock Price Dynamics'', preprint.
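The two diagnostics described here, eigenvalue comparison against the random-matrix null and the inverse participation ratio, are straightforward to reproduce. Below is a sketch on synthetic i.i.d. returns (real market data would show the reported deviations); the Marchenko-Pastur bounds are the standard RMT prediction for such correlation matrices.

    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 2000, 400                       # time points, assets
    returns = rng.standard_normal((T, N))  # i.i.d. null; real data would deviate

    C = np.corrcoef(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)

    q = T / N
    lam_min, lam_max = (1 - q**-0.5) ** 2, (1 + q**-0.5) ** 2
    outliers = eigvals[(eigvals < lam_min) | (eigvals > lam_max)]
    print(f"MP bounds [{lam_min:.3f}, {lam_max:.3f}], outliers: {outliers.size}")

    # Inverse participation ratio: large values indicate localized eigenvectors.
    ipr = (eigvecs**4).sum(axis=0)
    print(f"IPR range: {ipr.min():.4f} .. {ipr.max():.4f} (1/N = {1/N:.4f})")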
Sparse spikes super-resolution on thin grids II: the continuous basis pursuit
NASA Astrophysics Data System (ADS)
Duval, Vincent; Peyré, Gabriel
2017-09-01
This article analyzes the performance of the continuous basis pursuit (C-BP) method for sparse super-resolution. The C-BP was recently proposed by Ekanadham, Tranchina and Simoncelli as a refined discretization scheme for the recovery of spikes in inverse problem regularization. One of the most well-known discretization schemes is the basis pursuit (BP, also known as the Lasso).
AMAR: A Computational Model of Autosegmental Phonology
1993-10-01
SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
2015-06-15
Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computational burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme that achieves computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection that trims the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is then refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, from which the augmented subset size is rigorously derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) versus (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme that achieves significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
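The two-step selection logic reduces to: rank cheaply, keep an augmented subset, re-rank expensively, keep the fusion set. A toy sketch with random stand-ins for the two relevance metrics (the paper derives the subset size m from an inference model; here it is fixed by hand):

    import numpy as np

    rng = np.random.default_rng(2)
    n_atlases, m, k = 100, 20, 5

    cheap = rng.random(n_atlases)                 # e.g. from a simple registration
    augmented = np.argsort(cheap)[-m:]            # stage 1: preliminary selection

    # Stage 2: re-rank the survivors with the expensive (full-fledged) metric.
    expensive = {i: cheap[i] + 0.1 * rng.standard_normal() for i in augmented}
    fusion_set = sorted(expensive, key=expensive.get)[-k:]
    print("fusion set:", fusion_set)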
Testing hydrodynamics schemes in galaxy disc simulations
NASA Astrophysics Data System (ADS)
Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.
2016-08-01
We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve results more similar to those of the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests that differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.
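The Jeans-length refinement criterion mentioned above is simple to state: refine any cell whose spacing fails to resolve lambda_J = sqrt(pi c_s^2 / (G rho)) with some minimum number of cells. An illustrative calculation with placeholder values:

    import numpy as np

    G = 6.674e-11              # m^3 kg^-1 s^-2
    c_s = 200.0                # isothermal sound speed [m/s]
    n_J = 4                    # required cells per Jeans length
    dx = 3.0e15                # cell size [m]

    rho = np.logspace(-22, -16, 7)                 # gas densities [kg/m^3]
    lambda_J = np.sqrt(np.pi * c_s**2 / (G * rho)) # local Jeans length
    refine = dx > lambda_J / n_J                   # flag cells for refinement
    for r, lj, f in zip(rho, lambda_J, refine):
        print(f"rho={r:.1e}  lambda_J={lj:.2e} m  refine={f}")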
Ryu, Hyojung; Lim, GyuTae; Sung, Bong Hyun; Lee, Jinhyuk
2016-02-15
Protein structure refinement is a necessary step for the study of protein function. In particular, some nuclear magnetic resonance (NMR) structures are of lower quality than X-ray crystallographic structures. Here, we present NMRe, a web-based server for NMR structure refinement. The previously developed knowledge-based energy function STAP (Statistical Torsion Angle Potential) was used for NMRe refinement. With STAP, NMRe provides two refinement protocols using two types of distance restraints. If a user provides NOE (Nuclear Overhauser Effect) data, the refinement is performed with the NOE distance restraints as a conventional NMR structure refinement. Additionally, NMRe generates NOE-like distance restraints based on the inter-hydrogen distances derived from the input structure. The efficiency of NMRe refinement was validated on 20 NMR structures. Most of the quality assessment scores of the refined NMR structures were better than those of the original structures. The refinement results are provided as a three-dimensional structure view, a secondary structure scheme, and numerical and graphical structure validation scores. NMRe is available at http://psb.kobic.re.kr/nmre/.
Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny
2018-02-01
A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.
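The two numerical building blocks named here, Fourier-collocation derivatives computed via the FFT and explicit third-order Runge-Kutta time stepping, can be demonstrated on a much simpler model problem. A sketch on 1-D periodic linear advection (the paper applies them to the stress-velocity equations of elastodynamics):

    import numpy as np

    N, L, c = 256, 2 * np.pi, 1.0
    x = np.linspace(0.0, L, N, endpoint=False)
    k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi      # wavenumbers

    def dudx(u):                                    # spectral derivative
        return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

    def rhs(u):                                     # u_t = -c u_x
        return -c * dudx(u)

    u = np.exp(-40 * (x - np.pi) ** 2)
    dt, steps = 1e-3, 1000
    for _ in range(steps):                          # SSP-RK3 (Shu-Osher form)
        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        u = u / 3 + 2 / 3 * (u2 + dt * rhs(u2))

    d = (x - np.pi - c * dt * steps) % L            # exact advected Gaussian
    d = np.where(d > L / 2, d - L, d)
    print("max error:", np.abs(u - np.exp(-40 * d**2)).max())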
NASA Astrophysics Data System (ADS)
Wei, Hai-Rui; Liu, Ji-Zhen
2017-02-01
It is very important to seek efficient and robust quantum algorithms demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch-Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely from linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the necessary photons are reduced from three to one. Our linear-optic schemes work in a deterministic way, and they are feasible with current experimental technology.
Multistage Estimation Of Frequency And Phase
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1991-01-01
A conceptual two-stage software scheme serves as the prototype of a multistage scheme for digital estimation of the phase, frequency, and rate of change of frequency ("Doppler rate") of a possibly phase-modulated received sinusoidal signal in a communication system in which the transmitter and/or receiver is traveling rapidly, accelerating, and/or jerking severely. Each additional stage of the multistage scheme provides an increasingly refined estimate of the frequency and phase of the signal. Conceived for use in estimating parameters of signals from spacecraft and high-dynamic GPS signal parameters, it is also applicable to terrestrial stationary/mobile (e.g., cellular radio) and land-mobile/satellite communication systems.
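A minimal two-stage estimator in the same spirit (a generic illustration, not the NASA scheme itself): a coarse FFT stage locates the frequency to within one bin, and a refinement stage recovers the residual frequency and the phase from the unwrapped phase of the down-mixed signal.

    import numpy as np

    rng = np.random.default_rng(3)
    fs, n = 1000.0, 4096
    f_true, phi_true = 123.4567, 0.7
    t = np.arange(n) / fs
    sig = np.exp(1j * (2 * np.pi * f_true * t + phi_true))
    sig += 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    # Stage 1: coarse estimate from the FFT peak (resolution fs/n ~ 0.24 Hz).
    spec = np.fft.fft(sig)
    f_coarse = np.fft.fftfreq(n, 1 / fs)[np.argmax(np.abs(spec))]

    # Stage 2: mix down by the coarse estimate; the residual frequency is the
    # slope of the unwrapped phase, and the phase is its intercept.
    base = sig * np.exp(-2j * np.pi * f_coarse * t)
    phase = np.unwrap(np.angle(base))
    slope, intercept = np.polyfit(t, phase, 1)
    f_fine = f_coarse + slope / (2 * np.pi)
    print(f"coarse {f_coarse:.4f} Hz, refined {f_fine:.4f} Hz, phase {intercept:.3f} rad")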
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
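The space-filling-curve decomposition works by sorting cells along a Morton (Z-order) curve and cutting the curve into contiguous chunks, one per processor. A small illustrative sketch (the production partitioner is more elaborate):

    import numpy as np

    def morton2d(ix, iy, bits=16):
        """Interleave the bits of (ix, iy) into a Z-order key."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
        return key

    nx = ny = 64
    cells = [(i, j) for i in range(nx) for j in range(ny)]
    order = sorted(range(len(cells)), key=lambda c: morton2d(*cells[c]))

    n_proc = 8
    chunks = np.array_split(np.array(order), n_proc)  # contiguous curve segments
    for rank, chunk in enumerate(chunks):
        print(f"rank {rank}: {len(chunk)} cells")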
Genetics Research Discovered in a Bestseller | Poster
By Nancy Parrish, Staff Writer. One morning in early January, Amar Klar sat down at his computer and found an e-mail with a curious message from a colleague. While reading a bestselling novel, The Marriage Plot by Jeffrey Eugenides, his colleague, a professor at Princeton University, found a description of research on yeast genetics that was surprisingly similar to Klar's early…
Caires-Júnior, Luiz Carlos; Goulart, Ernesto; Melo, Uirá Souto; Araujo, Bruno Henrique Silva; Alvizi, Lucas; Soares-Schanoski, Alessandra; de Oliveira, Danyllo Felipe; Kobayashi, Gerson Shigeru; Griesi-Oliveira, Karina; Musso, Camila Manso; Amaral, Murilo Sena; daSilva, Lucas Ferreira; Astray, Renato Mancini; Suárez-Patiño, Sandra Fernanda; Ventini, Daniella Cristina; da Silva, Sérgio Gomes; Yamamoto, Guilherme Lopes; Ezquina, Suzana; Naslavsky, Michel Satya; Telles-Silva, Kayque Alves; Weinmann, Karina; van der Linden, Vanessa; van der Linden, Helio; de Oliveira, João Ricardo Mendes; Arrais, Nivia Maria Rodrigues; Melo, Adriana; Figueiredo, Thalita; Santos, Silvana; Meira, Joanna Goes Castro; Passos, Saulo Duarte; de Almeida, Roque Pacheco; Bispo, Ana Jovina Barreto; Cavalheiro, Esper Abrão; Kalil, Jorge; Cunha-Neto, Edécio; Nakaya, Helder; Andreata-Santos, Robert; de Souza Ferreira, Luis Carlos; Verjovski-Almeida, Sergio; Ho, Paulo Lee; Passos-Bueno, Maria Rita; Zatz, Mayana
2018-03-13
The original PDF version of this Article contained errors in the spelling of Luiz Carlos Caires-Júnior, Uirá Souto Melo, Bruno Henrique Silva Araujo, Alessandra Soares-Schanoski, Murilo Sena Amaral, Kayque Alves Telles-Silva, Vanessa van der Linden, Helio van der Linden, João Ricardo Mendes de Oliveira, Nivia Maria Rodrigues Arrais, Joanna Goes Castro Meira, Ana Jovina Barreto Bispo, Esper Abrão Cavalheiro, and Robert Andreata-Santos, which were incorrectly given as Luiz Carlos de Caires Jr., UiráSouto Melo, Bruno Silva Henrique Araujo, Alessandra Soares Schanoski, MuriloSena Amaral, Kayque Telles Alves Silva, Vanessa Van der Linden, Helio Van der Linden, João Mendes Ricardo de Oliveira, Nivia Rodrigues Maria Arrais, Joanna Castro Goes Meira, Ana JovinaBarreto Bispo, EsperAbrão Cavalheiro, and Robert Andreata Santos. Furthermore, in both the PDF and HTML versions of the Article, the top panel of Fig. 3e was incorrectly labeled '10608-1' and should have been '10608-4', and financial support from CAPES and DECIT-MS was inadvertently omitted from the Acknowledgements section. These errors have now been corrected in both the PDF and HTML versions of the Article.
Reaching extended length-scales with accelerated dynamics
NASA Astrophysics Data System (ADS)
Hubartt, Bradley; Shim, Yunsic; Amar, Jacques
2012-02-01
While temperature-accelerated dynamics (TAD) has been quite successful in extending the time-scales for non-equilibrium simulations of small systems, the computational time increases rapidly with system size. One possible solution to this problem, which we refer to as parTAD [1], is to use spatial decomposition combined with our previously developed semi-rigorous synchronous sublattice algorithm [2]. However, while such an approach leads to significantly better scaling as a function of system-size, it also artificially limits the size of activated events and is not completely rigorous. Here we discuss progress we have made in developing an alternative approach in which localized saddle-point searches are combined with parallel GPU-based molecular dynamics in order to improve the scaling behavior. By using this method, along with the use of an adaptive method to determine the optimal high temperature [3], we have been able to significantly increase the range of time- and length-scales over which accelerated dynamics simulations may be carried out. [1] Y. Shim et al, Phys. Rev. B 76, 205439 (2007); ibid, Phys. Rev. Lett. 101, 116101 (2008). [2] Y. Shim and J.G. Amar, Phys. Rev. B 71, 125432 (2005). [3] Y. Shim and J.G. Amar, J. Chem. Phys. 134, 054127 (2011).
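The arithmetic at the heart of temperature-accelerated dynamics is the Arrhenius extrapolation of event times from the high simulation temperature to the low temperature of interest. A sketch with made-up barriers and times:

    import numpy as np

    # An event seen at time t_high in a high-temperature MD run is extrapolated
    # to the low temperature via its Arrhenius barrier E_a:
    #   t_low = t_high * exp[(E_a / k_B) * (1/T_low - 1/T_high)]
    k_B = 8.617e-5            # Boltzmann constant [eV/K]
    T_low, T_high = 300.0, 900.0

    events = [(0.5e-9, 0.45), (2.1e-9, 0.62), (3.8e-9, 0.38)]  # (t_high [s], E_a [eV])
    t_low = [t * np.exp((Ea / k_B) * (1 / T_low - 1 / T_high)) for t, Ea in events]

    # Accept whichever event would occur first at the low temperature.
    first = int(np.argmin(t_low))
    print(f"event {first} fires first at T_low, after {t_low[first]:.3e} s")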
Taxonomic revision of Drymoluber Amaral, 1930 (Serpentes: Colubridae).
Costa, Henrique Caldeira; Moura, Mário Ribeiro; Feio, Renato Neves
2013-01-01
The present study is a taxonomic revision of the genus Drymoluber Amaral, 1930, using meristic and morphometric characters, aspects of external hemipenial morphology and body coloration. Sexual dimorphism occurs in D. dichrous and D. brazili but was not detected in D. apurimacensis. Morphological variation within D. dichrous is related to geographic distance between populations. Furthermore, variation in the number of ventrals and subcaudals in D. dichrous and D. brazili follows latitudinal and longitudinal clinal patterns. Drymoluber dichrous is diagnosed by the presence of 15-15-15 smooth dorsal scale rows with two apical pits, and 157-180 ventrals and 86-110 subcaudals; it occurs along the eastern versant of the Andes, in the Amazon forest, on the Guiana Shield, in the Atlantic forest, and its transitional areas with the Caatinga and Cerrado. Drymoluber brazili has 17-17-15 smooth dorsal scale rows with two apical pits, 182-202 ventrals and 109-127 subcaudals, and ranges throughout the Caatinga, Cerrado, Atlantic forest and transitional areas between these last two domains. Drymoluber apurimacensis has 13-13-13 smooth dorsal scale rows without apical pits, 158-182 ventrals and 84-93 subcaudals, and occurs in the Apurimac Valley, south of the Apurimac and Pampas rivers in Peru.
Meshfree truncated hierarchical refinement for isogeometric analysis
NASA Astrophysics Data System (ADS)
Atri, H. R.; Shojaee, S.
2018-05-01
In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can easily be defined, which provides a genuine meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method provides efficient approximation schemes for numerical simulations and shows promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.
A semi-implicit level set method for multiphase flows and fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri; Maitre, Emmanuel
2016-06-01
In this paper we present a novel semi-implicit time-discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.
NASA Technical Reports Server (NTRS)
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a different formulation of the viscous terms are shown to be necessary. A hybrid Cartesian/body-fitted grid generation approach is demonstrated. In addition, a grid-generation procedure based on body-aligned cell cutting coupled with a viscous stencil-construction procedure based on quadratic programming is presented.
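The recursive-subdivision grid generation described here is easy to sketch. The following toy quadtree refines toward a hypothetical circular body (the thesis stores cells in a binary tree and performs cut-cell clipping, both omitted here):

    class Cell:
        def __init__(self, x0, y0, size, depth=0):
            self.x0, self.y0, self.size, self.depth = x0, y0, size, depth
            self.children = []

        def refine(self, needs_refinement, max_depth=6):
            """Recursively split any cell flagged by the refinement test."""
            if self.depth < max_depth and needs_refinement(self):
                h = self.size / 2
                self.children = [Cell(self.x0 + dx * h, self.y0 + dy * h, h,
                                      self.depth + 1)
                                 for dx in (0, 1) for dy in (0, 1)]
                for c in self.children:
                    c.refine(needs_refinement, max_depth)

        def leaves(self):
            if not self.children:
                yield self
            for c in self.children:
                yield from c.leaves()

    # Refine toward a hypothetical body boundary: a circle of radius 0.3.
    def near_circle(cell):
        cx, cy = cell.x0 + cell.size / 2, cell.y0 + cell.size / 2
        return abs((cx - 0.5) ** 2 + (cy - 0.5) ** 2 - 0.3 ** 2) < cell.size

    root = Cell(0.0, 0.0, 1.0)
    root.refine(near_circle)
    print(sum(1 for _ in root.leaves()), "leaf cells")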
An upwind multigrid method for solving viscous flows on unstructured triangular meshes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl Lawrence
1993-01-01
A multigrid algorithm is combined with an upwind scheme for solving the two-dimensional Reynolds-averaged Navier-Stokes equations on triangular meshes, resulting in an efficient, accurate code for solving complex flows around multiple bodies. The relaxation scheme uses a backward-Euler time difference and relaxes the resulting linear system using a red-black procedure. Roe's flux-splitting scheme is used to discretize convective and pressure terms, while a central difference is used for the diffusive terms. The multigrid scheme is demonstrated for several flows around single and multi-element airfoils, including inviscid, laminar, and turbulent flows. The results show an appreciable speed-up of the scheme for inviscid and laminar flows, and dramatic increases in efficiency for turbulent cases, especially those on increasingly refined grids.
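The red-black relaxation pattern referred to here updates the two colors of a checkerboard in alternating half-sweeps, so each update uses only values of the opposite color. A minimal sketch on a 2-D Poisson model problem (rather than the Navier-Stokes system):

    import numpy as np

    # Red-black Gauss-Seidel for the Poisson equation laplacian(u) = f
    # on the unit square with u = 0 on the boundary.
    n, h = 65, 1.0 / 64
    u = np.zeros((n, n))
    f = np.ones((n, n))

    for sweep in range(200):
        for color in (0, 1):                      # red cells, then black cells
            for i in range(1, n - 1):
                j0 = 1 + (i + color) % 2          # checkerboard offset
                u[i, j0:n-1:2] = 0.25 * (u[i-1, j0:n-1:2] + u[i+1, j0:n-1:2]
                                         + u[i, j0-1:n-2:2] + u[i, j0+1:n:2]
                                         - h * h * f[i, j0:n-1:2])
    print("u(center) =", u[n // 2, n // 2])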
Characterization of the Boundary Layers on Full-Scale Bluefin Tuna
2014-09-30
NUWC-NPT Technical Report 12,163, 30 September 2014: Characterization of the Boundary Layers on Full-Scale Bluefin Tuna, by Kimberly M. Cipolla ... Naval Undersea Warfare Center Division Newport, under Section 219 Research Project, “Characterization of the Boundary Layers on Full-Scale Bluefin Tuna,” principal ... K. Amaral (Code 1522). The author thanks Barbara Block (Stanford University), head of the Tuna Research and Conservation Center (TRCC).
2018-05-04
ARL-TR-8359 ● May 2018 ● US Army Research Laboratory. Enhancing Human–Agent Teaming with Individualized, Adaptive Technologies: A Discussion of Critical Scientific Questions, by Arwen H DeCostanza, Amar R Marathe, Addison Bohannon ...
ERIC Educational Resources Information Center
Unsworth, Sharon
2014-01-01
The central claim in Amaral and Roeper's (this issue; henceforth A&R) keynote article is that everyone is multilingual, whether they speak one or more languages. In a nutshell, the idea is that each speaker has multiple grammars or "sub-sets of rules (or sub-grammars) that co-exist". Thus, rather than positing complex rules to…
Why Minimal Multiple Rules Provide a Unique Window into UG and L2
ERIC Educational Resources Information Center
Amaral, Luiz; Roeper, Tom
2014-01-01
This article clarifies some ideas presented in this issue's keynote article (Amaral and Roeper, this issue) and discusses several issues raised by the contributors' comments on the nature of the Multiple Grammars (MG) theory. One of the key goals of the article is to unequivocally state that MG is not a parametric theory and that its…
Iqbal, Zohaib; Wilson, Neil E; Keller, Margaret A; Michalik, David E; Church, Joseph A; Nielsen-Saines, Karin; Deville, Jaime; Souza, Raissa; Brecht, Mary-Lynn; Thomas, M Albert
2016-01-01
To measure cerebral metabolite levels in perinatally HIV-infected youths and healthy controls using the accelerated five-dimensional (5D) echo planar J-resolved spectroscopic imaging (EP-JRESI) sequence, which is capable of obtaining two-dimensional (2D) J-resolved spectra from three spatial dimensions (3D). After acquisition and reconstruction of the 5D EP-JRESI data, T1-weighted MRIs were used to classify brain regions of interest for HIV patients and healthy controls: right frontal white (FW), medial frontal gray (FG), right basal ganglia (BG), right occipital white (OW), and medial occipital gray (OG). From these locations, respective J-resolved and TE-averaged spectra were extracted and fit using two different quantitation methods. The J-resolved spectra were fit using prior knowledge fitting (ProFit), while the TE-averaged spectra were fit using the advanced method for accurate, robust and efficient spectral fitting (AMARES). Quantitation of the 5D EP-JRESI data using the ProFit algorithm yielded significant metabolic differences in two spatial locations of the perinatally HIV-infected youths compared to controls: elevated NAA/(Cr+Ch) in the FW and elevated Asp/(Cr+Ch) in the BG. Using the TE-averaged data quantified by AMARES, an increase of Glu/(Cr+Ch) was shown in the FW region. A strong negative correlation (r < -0.6) was shown between tCh/(Cr+Ch) quantified using ProFit in the FW and CD4 counts. Also, strong positive correlations (r > 0.6) were shown between Asp/(Cr+Ch) and CD4 counts in the FG and BG. The complementary results using ProFit fitting of J-resolved spectra and AMARES fitting of TE-averaged spectra, which are a subset of the 5D EP-JRESI acquisition, demonstrate an abnormal energy metabolism in the brains of perinatally HIV-infected youths. This may be a result of the HIV pathology and long-term combination antiretroviral therapy (cART). Further studies of larger perinatally HIV-infected cohorts are necessary to confirm these findings.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology mean that a high degree of optimization can be achieved on computers with vector processors.
A new parallelization scheme for adaptive mesh refinement
Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...
2016-05-06
Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e., wall time × processor count) of subcycling in time, but with the runtime performance (i.e., smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high-performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.
Adaptive elimination of synchronization in coupled oscillator
NASA Astrophysics Data System (ADS)
Zhou, Shijie; Ji, Peng; Zhou, Qing; Feng, Jianfeng; Kurths, Jürgen; Lin, Wei
2017-08-01
We present here an adaptive control scheme with a feedback delay to achieve elimination of synchronization in a large population of coupled and synchronized oscillators. We validate the feasibility of this scheme not only in coupled Kuramoto oscillators with a unimodal or bimodal distribution of natural frequencies, but also in two representative models of neuronal networks, namely, the FitzHugh-Nagumo spiking oscillators and the Hindmarsh-Rose bursting oscillators. More significantly, we analytically illustrate the feasibility of the proposed scheme with a feedback delay and reveal how the exact topological form of the bimodal natural frequency distribution influences the scheme's performance. We anticipate that our developed scheme will deepen the understanding and refinement of those controllers, e.g. techniques of deep brain stimulation, which have been implemented in remedying some synchronization-induced mental disorders, including Parkinson's disease and epilepsy.
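A toy version of delayed mean-field feedback on a Kuramoto population illustrates the flavor of such a controller (a generic illustration with ad hoc gains, not the authors' exact adaptive scheme):

    import numpy as np

    rng = np.random.default_rng(4)
    N, K, dt, tau_steps = 200, 2.0, 0.01, 50        # delay tau = 0.5 time units
    omega = rng.normal(0.0, 1.0, N)                 # unimodal natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)

    hist = []                                       # buffer of past mean fields
    gain = 0.0
    for step in range(5000):
        z = np.mean(np.exp(1j * theta))             # complex order parameter
        hist.append(z)
        z_del = hist[-tau_steps] if len(hist) >= tau_steps else 0.0
        r = abs(z)
        gain = max(gain + dt * 5.0 * (r - 0.1), 0.0)  # grow gain while synchrony persists
        coupling = K * np.imag(z * np.exp(-1j * theta))
        control = -gain * np.imag(z_del * np.exp(-1j * theta))
        theta += dt * (omega + coupling + control)
    print(f"final order parameter r = {abs(np.mean(np.exp(1j * theta))):.3f}")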
Gaussian entanglement distribution via satellite
NASA Astrophysics Data System (ADS)
Hosseinidehaj, Nedasadat; Malaney, Robert
2015-02-01
In this work we analyze three quantum communication schemes for the generation of Gaussian entanglement between two ground stations. Communication occurs via a satellite over two independent atmospheric fading channels dominated by turbulence-induced beam wander. In our first scheme, the engineering complexity remains largely on the ground transceivers, with the satellite acting simply as a reflector. Although the channel state information of the two atmospheric channels remains unknown in this scheme, the Gaussian entanglement generation between the ground stations can still be determined. On the ground, distillation and Gaussification procedures can be applied, leading to a refined Gaussian entanglement generation rate between the ground stations. We compare the rates produced by this first scheme with two competing schemes in which quantum complexity is added to the satellite, thereby illustrating the tradeoff between space-based engineering complexity and the rate of ground-station entanglement generation.
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
Wang, Jinke; Guo, Haoyan
2016-01-01
This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operations, and connective-region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum-cost-path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm3, volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.
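The overlap and volume metrics quoted above are defined directly on binary masks; for example, VOE = 1 - |A ∩ B| / |A ∪ B|. A sketch with synthetic spheres standing in for segmentations (surface-distance metrics such as ASD and MSD require surface extraction and are omitted):

    import numpy as np

    def metrics(seg, ref, voxel_cm3):
        inter = np.logical_and(seg, ref).sum()
        union = np.logical_or(seg, ref).sum()
        vd = (seg.sum() - ref.sum()) * voxel_cm3       # volume difference [cm^3]
        voe = 100.0 * (1.0 - inter / union)            # volume overlap error [%]
        dice = 2.0 * inter / (seg.sum() + ref.sum())   # Dice similarity coefficient
        return vd, voe, dice

    z, y, x = np.mgrid[:64, :64, :64]
    ref = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 20 ** 2
    seg = (z - 33) ** 2 + (y - 32) ** 2 + (x - 31) ** 2 < 19 ** 2
    print(metrics(seg, ref, voxel_cm3=0.001))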
Yeast Studies Lead to a New DNA-Based Model for Research on Development | Poster
A paper from Amar J. S. Klar, Ph.D., with the RNA Biology Laboratory in NCI’s Center for Cancer Research, has identified a model for DNA research that explains the congenital disorder of mirror hand movements in humans. A mirror movement is when an intentional movement on one side of the body is mirrored by an involuntary movement on the other.
Development of Novel Noninvasive Methods of Stress Assessment in Baleen Whales
2013-09-30
adrenal hormone (aldosterone) that has not been adequately studied in baleen whales. Respiratory sampling is a novel method of physiological ... physiological stress levels of free-swimming cetaceans (Amaral 2010, ONR 2010, Hunt et al. 2013a). We have previously demonstrated that respiratory vapor...hormones have not yet been tested in either feces or blow, particularly aldosterone. Our aim in this project is to further develop both techniques
Development of Novel Noninvasive Methods of Stress Assessment in Baleen Whales
2014-09-30
large whales. Few methods exist for assessment of physiological stress levels of free-swimming cetaceans (Amaral 2010, ONR 2010, Hunt et al. 2013...hormone aldosterone. Our aim in this project is to further develop both techniques - respiratory hormone analysis and fecal hormone analysis - for use...noninvasive aldosterone assay (for both feces and blow) that can be used as an alternative measure of adrenal gland activation relative to stress
1987-02-26
de Almeida, the shadow of Joao Bosco Mota Amaral, whom de Almeida accused of having drawn up the FLA manifesto during the turbulent era of the PREC...followed by the sycophantic scene staged by separatism at Jose de Almeida's press conference. On the other hand the near spontaneity with which Alberto ... Joao Jardim received Pieter Botha, the South African president—a spontaneity in keeping with the frankness with which he confesses to preferring
Ratnarajan, Gokulan; Kean, Jane; French, Karen; Parker, Mike; Bourne, Rupert
2015-09-01
To establish the safety of the CHANGES glaucoma referral refinement scheme (GRRS). The CHANGES scheme risk-stratifies glaucoma referrals, with low-risk referrals seen by a community-based specialist optometrist (OSI) while high-risk referrals are referred directly to the hospital. In this study, patients discharged by the OSI were reviewed by the consultant ophthalmologist to establish a 'false negative' rate (Study 1). Virtual review of optic disc photographs was carried out both by a hospital-based specialist optometrist and by the consultant ophthalmologist (Study 2). None of the 34 discharged patients seen by the consultant were found to have glaucoma or were started on treatment to lower the intra-ocular pressure. Five of the 34 (15%) were classified as 'glaucoma suspect' based on the appearance of the optic disc and offered a follow-up appointment. Virtual review by both the consultant and the optometrist had a sensitivity of 80%, whilst the false positive rate was 3.4% for the optometrist and 32% for the consultant (p < 0.05). The false negative rate of the OSIs in the CHANGES scheme was 15%; however, there were no patients in whom glaucoma was missed. Virtual review in experienced hands can be as effective as clinical review by a consultant, and is a valid method to ensure glaucoma is not missed in GRRS. The CHANGES scheme, which includes virtual review, is effective at reducing referrals to the hospital whilst not compromising patient safety.
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard
2010-03-01
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour-evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with "gold-standard" manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
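The Richardson-extrapolation error estimate mentioned here compares solutions on two grids: for a scheme of order p, err(u_h) ≈ (u_2h - u_h) / (2^p - 1). A worked example on a second-order central difference (the conservation-law setting is analogous):

    import numpy as np

    f, x0, p = np.sin, 1.0, 2

    def dfdx(h):
        return (f(x0 + h) - f(x0 - h)) / (2 * h)   # O(h^2) central difference

    h = 0.1
    u_h, u_2h = dfdx(h), dfdx(2 * h)
    est = (u_2h - u_h) / (2 ** p - 1)              # Richardson error estimate
    true_err = u_h - np.cos(x0)                    # exact derivative is cos(x0)
    print(f"estimated error {est:.3e}, true error {true_err:.3e}")

Running this gives an estimate within a few percent of the true error, which is why the indicator is cheap and reliable enough to drive refinement decisions.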
Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh
NASA Astrophysics Data System (ADS)
Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru
2017-11-01
We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing, and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement where a non-dimensional wall distance y+ near the sphere is used as a criterion of mesh refinement. The results showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, where explicit and implicit time-integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
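The cube-wise switching criterion is just a per-cube CFL test. A schematic sketch with placeholder cube data (a real solver would use each cube's actual spacing and wave speeds):

    import numpy as np

    rng = np.random.default_rng(5)
    cubes = [{"dx": dx, "umax": u} for dx, u in
             zip(rng.choice([0.1, 0.05, 0.025], 20), rng.uniform(0.5, 5.0, 20))]

    dt = 0.02
    for cube in cubes:
        cfl = cube["umax"] * dt / cube["dx"]       # local CFL number
        cube["scheme"] = "explicit" if cfl <= 1.0 else "implicit"
    print(sum(c["scheme"] == "implicit" for c in cubes), "of", len(cubes), "cubes implicit")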
Parallel-Vector Algorithm For Rapid Structural Anlysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
The Predatory Bird Monitoring Scheme: identifying chemical risks to top predators in Britain.
Walker, Lee A; Shore, Richard F; Turk, Anthony; Pereira, M Glória; Best, Jennifer
2008-09-01
The Predatory Bird Monitoring Scheme (PBMS) is a long term (>40 y), UK-wide, exposure monitoring scheme that determines the concentration of selected pesticides and pollutants in the livers and eggs of predatory birds. This paper describes how the PBMS works, and in particular highlights some of the key scientific and policy drivers for monitoring contaminants in predatory birds and describes the specific aims, scope, and methods of the PBMS. We also present previously unpublished data that illustrates how the PBMS has been used to demonstrate the success of mitigation measures in reversing chemical-mediated impacts; identify and evaluate chemical threats to species of high conservation value; and finally to inform and refine monitoring methodologies. In addition, we discuss how such schemes can also address wider conservation needs.
Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Colella, Phillip
1995-01-01
To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.
Development of Novel Noninvasive Methods of Stress Assessment in Baleen Whales
2015-09-30
large whales. Few methods exist for assessment of physiological stress levels of free-swimming cetaceans (Amaral 2010, ONR 2010, Hunt et al. 2013...adrenal hormone aldosterone. Our aim in this project is to further develop both techniques - respiratory hormone analysis and fecal hormone analysis...development of a noninvasive aldosterone assay (for both feces and blow) that can be used as an alternative measure of adrenal gland activation relative to
High Order Schemes in Bats-R-US for Faster and More Accurate Predictions
NASA Astrophysics Data System (ADS)
Chen, Y.; Toth, G.; Gombosi, T. I.
2014-12-01
BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second-order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented the fifth-order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. Now the high-order schemes have been extended to generalized coordinates, including spherical grids, and to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust, oscillation-free behavior near discontinuities. We apply the new high-order accurate schemes to both heliospheric and magnetospheric simulations and show that the approach is robust and can achieve the same accuracy as the second-order scheme with much less computational resources. This is especially important for space weather prediction, which requires faster than real-time code execution.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator was adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on the initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
An exact peak capturing and essentially oscillation-free (EPCOF) algorithm, consisting of advection-dispersion decoupling, backward method of characteristics, forward node tracking, and adaptive local grid refinement, is developed to solve transport equations. This algorithm repr...
The Refinement of The Preliminary Genetic Decomposition of Group
NASA Astrophysics Data System (ADS)
Wijayanti, K.
2017-04-01
Mathematical proof is one of the characteristics of advanced mathematical thinking, and proof plays an important role in learning abstract algebra, including group theory. The depth and complexity of an individual's understanding of a concept depend on his/her ability to establish connections among the mental structures that constitute it. One dimension of cognitive style is field dependence/independence; field independent (FI) and field dependent (FD) learners have different characteristics. Our research questions are: (1) How does the proposed refinement of the preliminary genetic decomposition of group, designed with a preliminary study of learning with APOS, work? (2) What understanding of group is generated by students (Field Independent, Field Neutral, and Field Dependent) when learning through the designed material? This study was descriptive and qualitative. The participants were nine undergraduate students taking Introduction to Algebraic Structures 1, which covers groups, in the even semester of academic year 2015/2016 at Universitas Negeri Semarang; each cognitive style was represented by three participants. Two instruments were used to gather data: a written examination in the course and a set of interviews. The FD and FN participants generated an Action conception for a binary operation, while the FI participants generated Action and Process conceptions. The FD, FN and FI participants generated Action, Process, Object, and Scheme conceptions for the set. The FD and FN participants did not generate a mental structure for the axioms, whereas the FI participants generated a Scheme for the axioms. The FD, FN and FI participants tended to lack Coherence of the Scheme of group. Not all mental structures in the refinement of the preliminary genetic decomposition could be constructed by the participants, so obstacles remain in the process of proving.
An advanced teaching scheme for integrating problem-based learning in control education
NASA Astrophysics Data System (ADS)
Juuso, Esko K.
2018-03-01
Engineering education needs to provide both theoretical knowledge and problem-solving skills. Many topics can be presented in lectures, and computer exercises are good tools for teaching the skills. Learning by doing is combined with lectures, which provide additional material and perspectives. The teaching scheme includes lectures, computer exercises, case studies, seminars and reports, organized as a problem-based learning process. In the gradually refined learning material, each teaching method has its own role. The scheme, which has been used in teaching two 4th-year courses, is beneficial for overall learning progress, especially in bilingual courses. The students become familiar with new perspectives and are ready to use the course material in application projects.
Moving overlapping grids with adaptive mesh refinement for high-speed reactive and non-reactive flow
NASA Astrophysics Data System (ADS)
Henshaw, William D.; Schwendeman, Donald W.
2006-08-01
We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid, which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows that demonstrate the use and accuracy of the numerical approach.
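As a rough illustration of the second-order predictor-corrector used for the rigid-body motion, here is a minimal Heun-type step for the translational part of the Newton-Euler equations; the `force` callback is a stand-in for the surface-stress integral, which the paper evaluates from the fluid solution.

```python
import numpy as np

def advance_body(x, v, t, dt, force, mass):
    """One step of a second-order predictor-corrector (Heun) scheme for
    the translational Newton-Euler equations m dv/dt = F, dx/dt = v.
    `force(t, x, v)` is a hypothetical callback; in the paper the force
    comes from integrating the fluid surface stress over the body."""
    # predictor (forward Euler)
    a0 = force(t, x, v) / mass
    xp = x + dt * v
    vp = v + dt * a0
    # corrector (trapezoidal average of the slopes)
    a1 = force(t + dt, xp, vp) / mass
    x_new = x + 0.5 * dt * (v + vp)
    v_new = v + 0.5 * dt * (a0 + a1)
    return x_new, v_new
```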
A mass and momentum conserving unsplit semi-Lagrangian framework for simulating multiphase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owkes, Mark, E-mail: mark.owkes@montana.edu; Desjardins, Olivier
In this work, we present a computational methodology for convection and advection that handles discontinuities with second order accuracy and maintains conservation to machine precision. This method can transport a variety of discontinuous quantities and is used in the context of an incompressible gas–liquid flow to transport the phase interface, momentum, and scalars. The proposed method provides a modification to the three-dimensional, unsplit, second-order semi-Lagrangian flux method of Owkes & Desjardins (JCP, 2014). The modification adds a refined grid that provides consistent fluxes of mass and momentum defined on a staggered grid and discrete conservation of mass and momentum, even for flows with large density ratios. Additionally, the refined grid doubles the resolution of the interface without significantly increasing the computational cost over previous non-conservative schemes. This is possible due to a novel partitioning of the semi-Lagrangian fluxes into a small number of simplices. The proposed scheme is tested using canonical verification tests, rising bubbles, and an atomizing liquid jet.
Structure-based coarse-graining for inhomogeneous liquid polymer systems.
Fukuda, Motoo; Zhang, Hedong; Ishiguro, Takahiro; Fukuzawa, Kenji; Itoh, Shintaro
2013-08-07
The iterative Boltzmann inversion (IBI) method is used to derive interaction potentials for coarse-grained (CG) systems by matching structural properties of a reference atomistic system. However, because it depends on such thermodynamic conditions as density and pressure of the reference system, the derived CG nonbonded potential is probably not applicable to inhomogeneous systems containing different density regimes. In this paper, we propose a structure-based coarse-graining scheme to devise CG nonbonded potentials that are applicable to different density bulk systems and inhomogeneous systems with interfaces. Similar to the IBI, the radial distribution function (RDF) of a reference atomistic bulk system is used for iteratively refining the CG nonbonded potential. In contrast to the IBI, however, our scheme employs an appropriately estimated initial guess and a small amount of refinement to suppress transfer of the many-body interaction effects included in the reference RDF into the CG nonbonded potential. To demonstrate the application of our approach to inhomogeneous systems, we perform coarse-graining for a liquid perfluoropolyether (PFPE) film coated on a carbon surface. The constructed CG PFPE model favorably reproduces structural and density distribution functions, not only for bulk systems, but also at the liquid-vacuum and liquid-solid interfaces, demonstrating that our CG scheme offers an easy and practical way to accurately determine nonbonded potentials for inhomogeneous systems.
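For reference, the core IBI correction has a standard closed form; the sketch below applies it with a damping factor in the spirit of the paper's "small amount of refinement" (the exact damping and initial guess used by the authors may differ).

```python
import numpy as np

def ibi_update(V, g_cg, g_ref, kT, alpha=0.2, eps=1e-12):
    """One iterative Boltzmann inversion step:
        V_{n+1}(r) = V_n(r) + alpha * kT * ln(g_cg(r) / g_ref(r))
    where g_cg is the RDF of the current CG model and g_ref the
    reference atomistic RDF. alpha < 1 damps the correction; eps guards
    against log(0) where the RDFs vanish."""
    return V + alpha * kT * np.log((g_cg + eps) / (g_ref + eps))

# A common initial guess is the potential of mean force of the
# reference system:  V0 = -kT * np.log(g_ref + eps)
```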
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
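A minimal sketch of the offline tree construction by recursive k-means is given below; for simplicity it clusters state indices by spatial coordinates, whereas the paper clusters the state variables using snapshot data.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_split_tree(coords, depth=3, seed=0):
    """Offline construction of a splitting tree by recursive 2-means
    clustering. Each node holds a set of state-variable indices with
    (eventually) disjoint support; splitting a basis vector restricts
    it to the index sets of the node's children."""
    def recurse(idx, d):
        node = {"indices": idx, "children": []}
        if d == 0 or len(idx) < 2:
            return node
        labels = KMeans(n_clusters=2, n_init=10,
                        random_state=seed).fit_predict(coords[idx])
        for c in (0, 1):
            node["children"].append(recurse(idx[labels == c], d - 1))
        return node
    return recurse(np.arange(len(coords)), depth)
```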
Application of high-order numerical schemes and Newton-Krylov method to two-phase drift-flux model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
This study concerns the application and solver robustness of the Newton-Krylov method in solving two-phase flow drift-flux model problems using high-order numerical schemes. In our previous studies, the Newton-Krylov method has been proven a promising solver for two-phase flow drift-flux model problems. However, these studies were limited to first-order numerical schemes only. Moreover, the previous approach to treating the drift-flux closure correlations was later revealed to cause deteriorated solver convergence performance when the mesh was highly refined, and also when higher-order numerical schemes were employed. In this study, a second-order spatial discretization scheme that had been tested with the two-fluid two-phase flow model was extended to solve drift-flux model problems. In order to improve solver robustness, and therefore efficiency, a new approach was proposed that treats the mean drift velocity of the gas phase as a primary nonlinear variable of the equation system. With this new approach, significant improvement in solver robustness was achieved. With highly refined meshes, the proposed treatment along with the Newton-Krylov solver was extensively tested with two-phase flow problems that cover a wide range of thermal-hydraulics conditions. Satisfactory convergence performance was observed for all test cases. Numerical verification was then performed in the form of mesh convergence studies, from which the expected orders of accuracy were obtained for both the first-order and the second-order spatial discretization schemes. Finally, the drift-flux model, along with the numerical methods presented, was validated with three sets of flow boiling experiments that cover different flow channel geometries (round tube, rectangular tube, and rod bundle) and a wide range of test conditions (pressure, mass flux, wall heat flux, inlet subcooling and outlet void fraction).
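SciPy exposes a matrix-free Newton-Krylov solver that can illustrate the solver layer of such a scheme; the residual below is a toy placeholder, not the drift-flux discretization, and in the paper's approach the unknown vector would also carry the gas mean drift velocity as a primary variable.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Toy nonlinear residual F(u) = 0 standing in for the discretized
    equation system (the actual drift-flux residual is far richer)."""
    r = np.empty_like(u)
    r[0] = u[0] - 1.0
    r[1:] = u[1:] - 0.5 * u[:-1] ** 2   # simple lower-triangular coupling
    return r

u0 = np.ones(10)                         # initial guess
sol = newton_krylov(residual, u0, f_tol=1e-10)
```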
Two-stage atlas subset selection in multi-atlas based image segmentation.
Zhao, Tingting; Ruan, Dan
2015-06-01
Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
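A schematic of the two-stage selection logic, with hypothetical scoring callbacks for the two relevance metrics, is sketched below; the inference model that sizes the augmented subset is omitted.

```python
def two_stage_select(atlases, target, cheap_score, full_score,
                     augmented_size=20, fusion_size=5):
    """Hypothetical sketch of the two-stage scheme: a low-cost relevance
    metric prunes the atlas pool, then full-fledged registration
    re-ranks only the survivors. `cheap_score` and `full_score` return
    higher-is-better relevance of an atlas to the target image."""
    # stage 1: preliminary selection with the low-cost metric
    ranked = sorted(atlases, key=lambda a: cheap_score(a, target),
                    reverse=True)
    augmented = ranked[:augmented_size]
    # stage 2: expensive registration on the augmented subset only
    refined = sorted(augmented, key=lambda a: full_score(a, target),
                     reverse=True)
    return refined[:fusion_size]
```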
Fan, Zhaoyang; Hodnett, Philip A; Davarpanah, Amir H; Scanlon, Timothy G; Sheehan, John J; Varga, John; Carr, James C; Li, Debiao
2011-08-01
To develop a flow-sensitive dephasing (FSD) preparative scheme to facilitate multidirectional flow-signal suppression in 3-dimensional balanced steady-state free precession imaging, and to validate the feasibility of the refined sequence for noncontrast magnetic resonance angiography (NC-MRA) of the hand. A new FSD preparative scheme was developed that combines 2 conventional FSD modules. Studies using a flow phantom (gadolinium-doped water, 15 cm/s) and the hands of 11 healthy volunteers (6 males and 5 females) were performed to compare the proposed FSD scheme with its conventional counterpart with respect to the signal suppression of multidirectional flow. In 9 of the 11 healthy subjects and in 2 patients with suspected vasculitis and documented Raynaud phenomenon, respectively, 3-dimensional balanced steady-state free precession imaging coupled with the new FSD scheme was compared with spatial-resolution-matched (0.94 × 0.94 × 0.94 mm) contrast-enhanced magnetic resonance angiography (0.15 mmol/kg gadopentetate dimeglumine) in terms of overall image quality, venous contamination, motion degradation, and arterial conspicuity. The proposed FSD scheme was able to suppress 2-dimensional flow signal in the flow phantom and hands, and yielded significantly higher arterial conspicuity scores than the conventional scheme did on NC-MRA at the regions of the common digital and proper digital arteries. Compared with contrast-enhanced magnetic resonance angiography, the refined NC-MRA technique yielded comparable overall image quality and motion degradation, significantly less venous contamination, and significantly higher arterial conspicuity scores at the digital arteries. The FSD-based NC-MRA technique improves the depiction of multidirectional flow by applying a 2-module FSD preparation, which enhances its potential to serve as an alternative magnetic resonance angiography technique for the assessment of hand vascular abnormalities.
Southeast Asia: A Climatological Study
1997-05-01
forms an obstacle that prevents invasions of air from this high into southeast Asia. Australian Heat Low. This low develops during However, modified air is...Tibetan Plateau protects southeast Asia from the direct invasion of cold air from the Asiatic high and causes a very I n-AMar, V strong baroclinic zone...of Tonkin. more than 1,500 species of woody plants in Vietnam. At the Gulf, the Red River valley-the economic There are also numerous species of woody
Bridge deficiency metric refinement : longer-term planning for state and locally-owned bridges.
DOT National Transportation Integrated Search
2010-06-01
The focus of this study was to devise a prioritization scheme for locally-owned bridges and to extend the planning time horizon for state-owned bridges. The inherent nature of the local and county structures prevents a direct application of the formu...
NASA Astrophysics Data System (ADS)
Ison, Mark; Artemiadis, Panagiotis
2014-10-01
Myoelectric control is filled with potential to significantly change human-robot interaction due to the ability to non-invasively measure human motion intent. However, current control schemes have struggled to achieve the robust performance that is necessary for use in commercial applications. As demands in myoelectric control trend toward simultaneous multifunctional control, multi-muscle coordinations, or synergies, play larger roles in the success of the control scheme. Detecting and refining patterns in muscle activations robust to the high variance and transient changes associated with surface electromyography is essential for efficient, user-friendly control. This article reviews the role of muscle synergies in myoelectric control schemes by dissecting each component of the scheme with respect to associated challenges for achieving robust simultaneous control of myoelectric interfaces. Electromyography recording details, signal feature extraction, pattern recognition and motor learning based control schemes are considered, and future directions are proposed as steps toward fulfilling the potential of myoelectric control in clinically and commercially viable applications.
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence, combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are applied after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping. We could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as sensors. Here, the method is briefly described with selected numerical experiments.
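To make the post-step filtering concrete, the sketch below adds a small second-difference dissipation only where a discontinuity sensor fires; a plain gradient sensor stands in for the wavelet sensor used in the paper.

```python
import numpy as np

def nonlinear_filter(u, eps=0.5, tol=0.1):
    """Applied after each Runge-Kutta step: a sensor flags cells with
    large local jumps, and an undivided second-difference dissipation
    is added only there, leaving smooth regions untouched. (A simple
    gradient sensor replaces the paper's wavelet sensor.)"""
    d2 = np.zeros_like(u)
    d2[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]      # undivided Laplacian
    jump = np.abs(np.gradient(u))
    flagged = jump > tol * (jump.max() + 1e-30)    # sensor decision
    return u + eps * np.where(flagged, d2, 0.0)
```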
Matching by linear programming and successive convexification.
Jiang, Hao; Drew, Mark S; Li, Ze-Nian
2007-06-01
We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.
Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method
Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan
2012-01-01
The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine the low resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested the constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. In the eight proteins with three decoys for each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and possibly applicable for high throughput protein structure refinement. PMID:22260550
Talking With Your Doctor - Multiple Languages
Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Chinese, Simplified (Mandarin dialect) ( ... Karen (S’gaw Karen) Kinyarwanda (Rwanda) Korean (한국어) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Pashto (Pax̌tō / ...
Poisoning - Multiple Languages
Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Dari (دری) Farsi (فارسی) ... Chin (Laiholh) Karen (S’gaw Karen) Kinyarwanda (Rwanda) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Pashto (Pax̌tō / ...
Vital Signs - Multiple Languages
Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Dari (دری) Farsi (فارسی) ... Chin (Laiholh) Karen (S’gaw Karen) Kinyarwanda (Rwanda) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Pashto (Pax̌tō / ...
Infection Control - Multiple Languages
Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Dari (دری) Farsi (فارسی) ... Hmong (Hmoob) Karen (S’gaw Karen) Kinyarwanda (Rwanda) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Pashto (Pax̌tō / ...
Over-the-Counter Medicines - Multiple Languages
Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Dari (دری) Farsi (فارسی) Karen (S’gaw Karen) Kinyarwanda (Rwanda) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Pashto (Pax̌tō / پښتو ) Somali (Af- ...
Health Facilities - Multiple Languages
Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Dari (دری) Farsi (فارسی) ... Chin (Laiholh) Karen (S’gaw Karen) Kinyarwanda (Rwanda) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Pashto (Pax̌tō / ...
High Blood Pressure - Multiple Languages
Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Chinese, Simplified (Mandarin dialect) ( ... Karen (S’gaw Karen) Kinyarwanda (Rwanda) Korean (한국어) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Pashto (Pax̌tō / ...
Mental Health - Multiple Languages
Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Dari (دری) Farsi (فارسی) ... Chin (Laiholh) Karen (S’gaw Karen) Kinyarwanda (Rwanda) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Oromo (Afan ...
1988-09-16
Precipitates in Carbon Steel by Low Dose alpha-particle bombardment, M.M. Ramos, L. Amaral, M. Behar, A. Vasquez, G. Marest and F.C. Zawislak...planted martensitic low carbon steel (C - 0.2 wt%). The characterization of the precipitates is done via the Conversion Electron Mössbauer technique (CEMS)... PHASE TRANSFORMATIONS OF A NITROGEN IMPLANTED AUSTENITIC STAINLESS STEEL (X10 CrNiTi 18 9) by R. Leutenecker, Fraunhofer-Institut for
Simulation of violent free surface flow by AMR method
NASA Astrophysics Data System (ADS)
Hu, Changhong; Liu, Cheng
2018-05-01
A novel CFD approach based on the adaptive mesh refinement (AMR) technique is being developed for the numerical simulation of violent free surface flows. The CIP method is applied in the flow solver, and the tangent of hyperbola for interface capturing with slope weighting (THINC/SW) scheme is implemented as the free surface capturing scheme. The PETSc library is adopted to solve the linear system; the linear solver is redesigned and modified to satisfy the requirements of the AMR mesh topology. In this paper, our CFD method is outlined and newly obtained results on the numerical simulation of violent free surface flows are presented.
Recent developments in the structural design and optimization of ITER neutral beam manifold
NASA Astrophysics Data System (ADS)
Chengzhi, CAO; Yudong, PAN; Zhiwei, XIA; Bo, LI; Tao, JIANG; Wei, LI
2018-02-01
This paper describes a new design of the neutral beam manifold based on a more optimized support system. An alternative scheme is proposed to replace the former complex manifold supports and internal pipe supports in the final design phase. Both the structural reliability and the feasibility were confirmed with detailed analyses. Comparative analyses between two typical types of manifold support scheme were performed. All relevant results of the mechanical analyses for typical operation scenarios and fault conditions are presented. Future optimization activities are described, which will give useful information for a refined setting of components in the next phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Yulong; Shu, Chi-wang; Noelle, Sebastian
This note aims at demonstrating the advantage of moving-water well-balanced schemes over still-water well-balanced schemes for the shallow water equations. We concentrate on numerical examples with solutions near a moving-water equilibrium. For such examples, still-water well-balanced methods are not capable of capturing the small perturbations of the moving-water equilibrium and may generate significant spurious oscillations, unless an extremely refined mesh is used. On the other hand, moving-water well-balanced methods perform well in these tests. The numerical examples in this note clearly demonstrate the importance of utilizing moving-water well-balanced methods for solutions near a moving-water equilibrium.
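As a quick illustration, a moving-water equilibrium of the 1-D shallow water equations is characterized by constant discharge and constant energy; the helper below measures a numerical state's departure from such an equilibrium, which is exactly what a moving-water well-balanced scheme preserves discretely (a sketch, not the schemes compared in the note).

```python
import numpy as np

def moving_water_invariants(h, u, b, g=9.81):
    """For 1-D steady shallow-water flow, the discharge q = h*u and the
    energy E = u**2/2 + g*(h + b) are constant in space (h: depth,
    u: velocity, b: bottom topography). The spreads returned here are
    zero at an exact moving-water equilibrium."""
    q = h * u
    E = 0.5 * u ** 2 + g * (h + b)
    return np.ptp(q), np.ptp(E)   # max-minus-min spread of each invariant
```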
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress in SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
Spatial correlation-based side information refinement for distributed video coding
NASA Astrophysics Data System (ADS)
Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin
2013-12-01
Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefitted from significant progresses lately, notably in terms of achievable rate-distortion performances. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the video sequence temporal correlation properties during the generation of side information (SI). In fact, the decoder side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as long as these groups are being decoded, thus providing more accurate SI for the next groups. It is shown that the proposed progressive SIR method leads to significant improvements over the Discover DVC codec as well as other SIR schemes recently introduced in the literature.
Bailly, Sébastien; Leroy, Olivier; Azoulay, Elie; Montravers, Philippe; Constantin, Jean-Michel; Dupont, Hervé; Guillemot, Didier; Lortholary, Olivier; Mira, Jean-Paul; Perrigault, Pierre-François; Gangneux, Jean-Pierre; Timsit, Jean-François
2017-04-01
Guidelines recommend first-line systemic antifungal therapy (SAT) with echinocandins in invasive candidiasis (IC), especially in critically ill patients. This study aimed at assessing the impact of echinocandins compared to azoles as initial SAT on the 28-day prognosis of adult ICU patients. From the prospective multicenter AmarCAND2 cohort (835 patients), we selected those with documented IC who were treated with echinocandins (ECH) or azoles (AZO). The average causal effect of echinocandins on 28-day mortality was assessed using an inverse probability of treatment weighting (IPTW) estimator. In total, 397 patients were selected, treated with echinocandins (242 patients, 61%) or azoles (155 patients, 39%); 179 patients (45%) had septic shock. The median SAPS II was higher in the ECH group (48 [35; 62] vs. 43 [31; 58], p = 0.01). Crude mortality was 34% (ECH group) vs. 25% (AZO group). After adjustment for baseline confounders, no significant association emerged between initial SAT with echinocandins and 28-day mortality (HR: 0.95; 95% CI: [0.60; 1.49]; p = 0.82). However, echinocandins tended to benefit patients with septic shock (HR: 0.46 [0.19; 1.07]; p = 0.07). Patients who received echinocandins were more severely ill. Echinocandin use was associated with a non-significant 7% decrease in 28-day mortality and a trend toward a beneficial effect for patients with septic shock. Copyright © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
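One plausible reading of the IPTW estimator, sketched with a logistic propensity model and stabilized weights (the study's actual propensity model and covariates are not described in the abstract):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_weights(X, treated):
    """Stabilized inverse-probability-of-treatment weights.
    X       : (n, p) array of baseline confounders
    treated : (n,) array, 1 for echinocandins, 0 for azoles
    Weighting by 1/P(treatment received | X) balances the two groups
    on the measured confounders before comparing 28-day mortality."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated) \
                                          .predict_proba(X)[:, 1]
    p_treat = treated.mean()   # marginal treatment probability (stabilizer)
    return np.where(treated == 1, p_treat / ps,
                    (1.0 - p_treat) / (1.0 - ps))
```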
Adaptation of abbreviated mathematics anxiety rating scale for engineering students
NASA Astrophysics Data System (ADS)
Nordin, Sayed Kushairi Sayed; Samat, Khairul Fadzli; Sultan, Al Amin Mohamed; Halim, Bushra Abdul; Ismail, Siti Fatimah; Mafazi, Nurul Wirdah
2015-05-01
Mathematics is an essential and fundamental tool used by engineers to analyse and solve problems in their field. Because of this, most engineering education programs involve a concentration of study in mathematics: engineering students have to take courses such as numerical methods, differential equations and calculus in the first two years and continue until the completion of the sequence. However, students struggle and have difficulties in learning courses that require mathematical abilities. This study presents the factors that cause mathematics anxiety among engineering students, using the Abbreviated Mathematics Anxiety Rating Scale (AMARS) administered to 95 students of Universiti Teknikal Malaysia Melaka (UTeM). From the 25 items in AMARS, principal component analysis (PCA) suggested four mathematics anxiety factors, namely experiences of learning mathematics, cognitive skills, mathematics evaluation anxiety and students' perception of mathematics. Minitab 16 software was used for the nonparametric statistics. The Kruskal-Wallis test indicated a significant difference in the experience of learning mathematics and mathematics evaluation anxiety among races. The chi-square test of independence revealed that the experience of learning mathematics, cognitive skills and mathematics evaluation anxiety depend on the results of their SPM additional mathematics. Based on this study, it is recommended to address anxiety problems among engineering students at an early stage of university study. Thus, lecturers should play their part by ensuring a positive classroom environment that encourages students to study mathematics without fear.
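The two tests named above are available in SciPy; the snippet below shows their basic use on hypothetical data (the study's actual scores and contingency table are not reproduced here).

```python
import numpy as np
from scipy.stats import kruskal, chi2_contingency

# hypothetical AMARS factor scores for three groups (e.g., three races)
group_a = np.array([2.1, 3.4, 2.8, 3.9])
group_b = np.array([4.2, 3.8, 4.5, 4.0])
group_c = np.array([2.9, 3.1, 3.3, 2.7])
H, p_kw = kruskal(group_a, group_b, group_c)   # Kruskal-Wallis test

# hypothetical contingency table: anxiety level x SPM maths grade band
table = np.array([[12, 8,  5],
                  [ 6, 9, 10],
                  [ 4, 7, 14]])
chi2, p_chi, dof, expected = chi2_contingency(table)
```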
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods, and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
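A minimal sketch of the back-fitting recalibration loop of scheme V, with user-supplied stand-ins for the hydrological calibration and AR(1) error-model fit (the paper's actual models and convergence test are not specified here):

```python
import numpy as np

def backfit(calibrate_hydro, fit_ar, simulate, obs, n_iter=10):
    """Alternately recalibrate the hydrological model against the
    AR-corrected target, then refit the AR(1) error model on the new
    residuals. `calibrate_hydro(target)` returns model parameters,
    `simulate(theta)` returns simulated streamflow, and `fit_ar(resid)`
    returns an AR(1) coefficient; all are hypothetical callbacks."""
    ar_correction = np.zeros_like(obs)
    theta, rho = None, 0.0
    for _ in range(n_iter):
        theta = calibrate_hydro(obs - ar_correction)   # hydrological step
        resid = obs - simulate(theta)
        rho = fit_ar(resid)                            # AR(1) step
        ar_correction = np.concatenate(([0.0], rho * resid[:-1]))
    return theta, rho
```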
New high order schemes in BATS-R-US
NASA Astrophysics Data System (ADS)
Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.
2013-12-01
The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, together with a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997) and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and it requires some tricks and effort to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three dimensional time dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.
NASA Astrophysics Data System (ADS)
Qi, Bing; Lim, Charles Ci Wen
2018-05-01
Recently, we proposed a simultaneous quantum and classical communication (SQCC) protocol where random numbers for quantum key distribution and bits for classical communication are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Such a scheme could be appealing in practice since a single coherent communication system can be used for multiple purposes. However, previous studies show that the SQCC protocol can tolerate only very small phase noise. This makes it incompatible with the coherent communication scheme using a true local oscillator (LO), which presents a relatively high phase noise due to the fact that the signal and the LO are generated from two independent lasers. We improve the phase noise tolerance of the SQCC scheme using a true LO by adopting a refined noise model where phase noises originating from different sources are treated differently: on the one hand, phase noise associated with the coherent receiver may be regarded as trusted noise since the detector can be calibrated locally and the photon statistics of the detected signals can be determined from the measurement results; on the other hand, phase noise due to the instability of fiber interferometers may be regarded as untrusted noise since its randomness (from the adversary's point of view) is hard to justify. Simulation results show the tolerable phase noise in this refined noise model is significantly higher than that in the previous study, where all of the phase noises are assumed to be untrusted. We conduct an experiment to show that the required phase stability can be achieved in a coherent communication system using a true LO.
xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures.
McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus
2014-09-01
X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of D-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.
ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Fambri, F.; Dumbser, M.; Köppel, S.; Rezzolla, L.; Zanotti, O.
2018-07-01
We present a new class of high-order accurate numerical algorithms for solving the equations of general-relativistic ideal magnetohydrodynamics in curved space-times. In this paper, we assume the background space-time to be given and static, i.e. we make use of the Cowling approximation. The governing partial differential equations are solved via a new family of fully discrete and arbitrary high-order accurate path-conservative discontinuous Galerkin (DG) finite-element methods combined with adaptive mesh refinement and time accurate local time-stepping. In order to deal with shock waves and other discontinuities, the high-order DG schemes are supplemented with a novel a posteriori subcell finite-volume limiter, which makes the new algorithms as robust as classical second-order total-variation diminishing finite-volume methods at shocks and discontinuities, but also as accurate as unlimited high-order DG schemes in smooth regions of the flow. We show the advantages of this new approach by means of various classical two- and three-dimensional benchmark problems on fixed space-times. Finally, we present a performance and accuracy comparisons between Runge-Kutta DG schemes and ADER high-order finite-volume schemes, showing the higher efficiency of DG schemes.
Hagen, Wim J H; Wan, William; Briggs, John A G
2017-02-01
Cryo-electron tomography (cryoET) allows 3D structural information to be obtained from cells and other biological samples in their close-to-native state. In combination with subtomogram averaging, detailed structures of repeating features can be resolved. CryoET data is collected as a series of images of the sample from different tilt angles; this is performed by physically rotating the sample in the microscope between each image. The angles at which the images are collected, and the order in which they are collected, together are called the tilt-scheme. Here we describe a "dose-symmetric tilt-scheme" that begins at low tilt and then alternates between increasingly positive and negative tilts. This tilt-scheme maximizes the amount of high-resolution information maintained in the tomogram for subsequent subtomogram averaging, and may also be advantageous for other applications. We describe implementation of the tilt-scheme in combination with further data-collection refinements including setting thresholds on acceptable drift and improving focus accuracy. Requirements for microscope set-up are introduced, and a macro is provided which automates the application of the tilt-scheme within SerialEM. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
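A dose-symmetric tilt series is straightforward to generate; the sketch below produces the simple alternating order described above (real implementations, such as the SerialEM macro mentioned in the abstract, may additionally group consecutive tilts per side to reduce stage movement).

```python
def dose_symmetric_tilts(max_tilt=60, step=3):
    """Generate a dose-symmetric tilt series: start at 0 degrees, then
    alternate between increasingly positive and negative tilts
    (0, +3, -3, +6, -6, ...), so the lowest, most information-rich
    tilts receive the least accumulated electron dose."""
    angles = [0]
    tilt = step
    while tilt <= max_tilt:
        angles.extend([tilt, -tilt])
        tilt += step
    return angles

# e.g. dose_symmetric_tilts(9, 3) -> [0, 3, -3, 6, -6, 9, -9]
```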
Some results on numerical methods for hyperbolic conservation laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang Huanan.
1989-01-01
This dissertation contains some results on the numerical solution of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions; otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method; in this way, he dramatically improves the resolution of the contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time dependent problems.
Using EMAP data from the NE Wadeable Stream Survey and state datasets (CT, ME), assessment tools were developed to predict diffuse NPS effects from watershed development and distinguish these from local impacts (point sources, contaminated sediments). Classification schemes were...
Health Screening - Multiple Languages
Albanian (Gjuha Shqipe) Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Bengali (Bangla / বাংলা) Burmese (myanma bhasa) ... សាខ្មែរ) Korean (한국어) Lao (ພາສາລາວ) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Polish (polski) ...
Health Information in Multiple Languages
Albanian (Gjuha Shqipe) Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Armenian (Հայերեն) Bengali (Bangla / বাংলা) Bosnian ( ... Korean (한국어) Kurdish (Kurdî / کوردی) Lao (ພາສາລາວ) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Malay (Bahasa Malaysia) Marshallese (Ebon) ...
Women's Health Checkup - Multiple Languages
... Amharic (Amarɨñña / አማርኛ ) Arabic (العربية) Burmese (myanma bhasa) Chinese, Simplified (Mandarin dialect) ( ... សាខ្មែរ) Korean (한국어) Lao (ພາສາລາວ) Levantine (Arabic dialect) (الـلَّـهْـجَـةُ الـشَّـامِـيَّـة) Nepali (नेपाली) Russian (Русский) ...
2008-08-04
Army Research Office (ARO) grants DAAD 19-02-1-0383 and W911NF-06-1-0076. Stéphane Coulombe, Sharon Core, Amar Mukherjee and David Chester played a ... the goodness of the final outcome of the joint alignment is critically dependent on the appropriate choice of penalty in the objective function ... C. Dahlke, LB. Davenport, P. Davies, B. de Pablos, A. Delcher, Z. Deng, AD. Mays, I. Dew, SM. Dietz, K. Dodson, LE. Doup, M. Downes, S. Dugan-Rocha
[Review of dynamic global vegetation models (DGVMs)].
Che, Ming-Liang; Chen, Bao-Zhang; Wang, Ying; Guo, Xiang-Yun
2014-01-01
The dynamic global vegetation model (DGVM) is an important and efficient tool for studying terrestrial carbon cycle processes and vegetation dynamics. This paper reviewed the development history of DGVMs, introduced their basic structure, and outlined several widely used DGVMs, including CLM-DGVM, LPJ, IBIS and SEIB. The shortcomings of the representation of vegetation dynamics in current DGVMs were discussed, including the plant functional type (PFT) scheme, vegetation competition, disturbance, and phenology. Future research directions for DGVMs were then pointed out, i.e., improving the PFT scheme, refining the vegetation dynamics mechanisms, and implementing a model inter-comparison project.
NASA Astrophysics Data System (ADS)
Czernek, Jiří; Pawlak, Tomasz; Potrzebowski, Marek J.; Brus, Jiří
2013-01-01
The 13C and 15N CPMAS SSNMR measurements were accompanied by a theoretical description of the solid-phase environment, provided by density functional theory in the pseudopotential plane-wave scheme, and employed in refining the atomic coordinates of the crystal structures of thiamine chloride hydrochloride and of its monohydrate. Thus, using the DFT functionals PBE, PW91 and RPBE, SSNMR-consistent solid-phase structures of these compounds are derived from geometrical optimization, followed by an assessment of how well the GIPAW-predicted values of the chemical shielding parameters fit their experimental counterparts.
Study designs appropriate for the workplace.
Hogue, C J
1986-01-01
Carlo and Hearn have called for "refinement of old [epidemiologic] methods and an ongoing evaluation of where methods fit in the overall scheme as we address the multiple complexities of reproductive hazard assessment." This review is an attempt to bring together the current state-of-the-art methods for problem definition and hypothesis testing available to the occupational epidemiologist. For problem definition, meta-analysis can be utilized to narrow the field of potential causal hypotheses. Passive and active surveillance may further refine issues for analytic research. Within analytic epidemiology, several methods may be appropriate for the workplace setting. Those discussed here may be used to estimate the risk ratio in either a fixed or dynamic population.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mignone, A.; Tzeferacos, P.; Zanni, C.
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where the piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potential of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
NASA Technical Reports Server (NTRS)
Mocko, David M.; Sud, Y. C.
2000-01-01
Refinements to the snow-physics scheme of SSiB (Simplified Simple Biosphere Model) are described and evaluated. The upgrades include a partial redesign of the conceptual architecture to better simulate the diurnal temperature of the snow surface. For a deep snowpack, there are two separate prognostic temperature snow layers: the top layer responds to diurnal fluctuations in the surface forcing, while the deep layer exhibits a slowly varying response. In addition, the use of a very deep soil temperature and a treatment of snow aging, with its influence on snow density, is parameterized and evaluated. The upgraded snow scheme produces better timing of snow melt in GSWP-style simulations using ISLSCP Initiative I data for 1987-1988 in the Russian Wheat Belt region. To simulate more realistic runoff in regions with high orographic variability, additional improvements are made to SSiB's soil hydrology. These improvements include an orography-based surface runoff scheme as well as interaction with a water table below SSiB's three soil layers. The addition of these parameterizations further helps to simulate more realistic runoff and accompanying prognostic soil moisture fields in the GSWP-style simulations. In intercomparisons of the performance of the new snow-physics SSiB with its earlier versions using an 18-year single-site dataset from Valdai, Russia, the version of SSiB described in this paper again produces the earliest onset of snow melt. Soil moisture and deep soil temperatures also compare favorably with observations.
Methods for prismatic/tetrahedral grid generation and adaptation
NASA Technical Reports Server (NTRS)
Kallinderis, Y.
1995-01-01
The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.
NASA Astrophysics Data System (ADS)
Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian
2011-11-01
Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory-scale experiments is essential for proper design that will minimize this undesired phenomenon. Cavitation onset is more accurately determined acoustically than visually. However, if other parts of the model begin to cavitate prior to the component of interest, the acoustic data are contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims at developing an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are being induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located with the use of acoustic data collected with hydrophones and analyzed using signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with the use of a high-speed camera. Once refined, testing will be conducted in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).
Universal Blind Quantum Computation
NASA Astrophysics Data System (ADS)
Fitzsimons, Joseph; Kashefi, Elham
2012-02-01
Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's inputs, outputs and computation remain private. Recently we proposed a universal unconditionally secure BQC scheme, based on the conceptual framework of the measurement-based quantum computing model, where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Here we present a refinement of the scheme which vastly expands the class of quantum circuits which can be directly implemented as a blind computation, by introducing a new class of resource states which we term dotted-complete graph states and expanding the set of single qubit states the client is required to prepare. These two modifications significantly simplify the overall protocol and remove the previously present restriction that only nearest-neighbor circuits could be implemented as blind computations directly. As an added benefit, the refined protocol admits a substantially more intuitive and simplified verification mechanism, allowing the correctness of a blind computation to be verified with arbitrarily small probability of error.
Tectonic Evolution of Jabal Tays Ophiolite Complex, Eastern Arabian Shield, Saudi Arabia
NASA Astrophysics Data System (ADS)
AlHumidan, Saad; Kassem, Osama; Almutairi, Majed; Al-Faifi, Hussain; Kahal, Ali
2017-04-01
Microstructural analysis is important for investigating the tectonic evolution of the Jabal Tays area. The Jabal Tays ophiolite complex is affected by the Al Amar-Idsas fault, which is part of the Eastern Arabian Shield and has been subject to multiple interpretations. Through fieldwork, microscopic examination, and microstructural analysis, we aim to understand the evolution and tectonic setting of the Jabal Tays area. Finite-strain data show that the Abt schist, the metavolcanics and the metagranites are moderately to highly deformed. The axial ratios in the XZ section range from 1.40 to 2.20. The long axes of the finite-strain ellipsoids trend NW-SE and W-E in the Jabal Tays area, while their short axes are subvertical to subhorizontal foliations. The strain magnitude does not increase towards the tectonic contacts between the Abt schist and the metavolcano-sedimentary rocks. The majority of the obtained data indicate dominantly oblate, with minor prolate, strain symmetries in the Abt schist, metavolcano-sedimentary rocks and metagranites. The strain data also indicate flattening with some constriction. We assume that the Abt schist and the metavolcano-sedimentary rocks have similar deformation behavior. The finite strain in the studied rocks accumulated during metamorphism that was affected by thrusting activity. Based on these results, we conclude that the contact between the Abt schist and the metavolcano-sedimentary rocks formed during progressive thrusting under brittle to semi-ductile deformation conditions by simple shear that also involved a component of vertical shortening, causing the subhorizontal foliation in the Jabal Tays area.
A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks.
Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen
2018-05-12
Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach has a simple construction that provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which is a construction of µ-PBIBD extended from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while maintaining good key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, achieving a trade-off between network resilience and network connectivity.
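The µ-PBIBD construction itself is involved, but the way a combinatorial design becomes key rings can be illustrated with a far simpler toy layout. The sketch below uses a plain 2-D grid design, which is hypothetical and not the paper's µ-PBIBD: node (i, j) holds the key of its row and the key of its column, so any two nodes in the same row or column share a key.

    # Toy key pre-distribution from a 2-D grid layout (illustrative only;
    # the paper's mu-PBIBD construction is more elaborate and gives
    # different connectivity/resilience trade-offs).
    def key_ring(i, j):
        return {("row", i), ("col", j)}

    a = key_ring(0, 1)
    b = key_ring(0, 3)
    print(a & b)   # {('row', 0)}: the shared key can secure the a-b link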
A fast and accurate dihedral interpolation loop subdivision scheme
NASA Astrophysics Data System (ADS)
Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan
2018-04-01
In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
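The refinement criterion described above reduces to a simple test on adjacent face normals. Here is a minimal sketch of such a dihedral-angle test; the 20° threshold is a hypothetical example value, not one taken from the paper.

    import numpy as np

    # Sketch: flag a face for refinement when the angle between its normal
    # and an adjacent face's normal exceeds a threshold (example value).
    def unit_normal(p0, p1, p2):
        n = np.cross(p1 - p0, p2 - p0)
        return n / np.linalg.norm(n)

    def needs_refinement(tri_a, tri_b, threshold_deg=20.0):
        na, nb = unit_normal(*tri_a), unit_normal(*tri_b)
        angle = np.degrees(np.arccos(np.clip(np.dot(na, nb), -1.0, 1.0)))
        return angle > threshold_deg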
MEDICINAL CANNABIS LAW REFORM: LESSONS FROM CANADIAN LITIGATION.
Freckelton, Ian
2015-06-01
This editorial reviews medicinal cannabis litigation in Canada's superior courts between 1998 and 2015. It reflects upon the outcomes of the decisions and the reasoning within them. It identifies the issues that have driven Canada's jurisprudence in relation to access to medicinal cannabis, particularly insofar as it has engaged patients' rights to liberty and security of the person. It argues that the sequence of medicinal schemes adopted and refined in Canada provides constructive guidance for countries such as Australia which are contemplating introduction of medicinal cannabis as a therapeutic option in compassionate circumstances for patients. In particular, it contends that Canada's experience suggests that strategies calculated to introduce such schemes in a gradualist way, enabling informed involvement by medical practitioners and pharmacists, and that provide for safe and inexpensive accessibility to forms of medicinal cannabis that are clearly distinguished from recreational use and unlikely to be diverted criminally maximise the chances of such schemes being accepted by key stakeholders.
A Lagrangian particle method with remeshing for tracer transport on the sphere
Bosler, Peter Andrew; Kent, James; Krasny, Robert; ...
2017-03-30
A Lagrangian particle method (called LPM) based on the flow map is presented for tracer transport on the sphere. The particles carry tracer values and are located at the centers and vertices of triangular Lagrangian panels. Remeshing is applied to control particle disorder and two schemes are compared, one using direct tracer interpolation and another using inverse flow map interpolation with sampling of the initial tracer density. Test cases include a moving-vortices flow and reversing-deformational flow with both zero and nonzero divergence, as well as smooth and discontinuous tracers. We examine the accuracy of the computed tracer density and tracer integral, and preservation of nonlinear correlation in a pair of tracers. Here, we compare results obtained using LPM and the Lin–Rood finite-volume scheme. An adaptive particle/panel refinement scheme is demonstrated.
Parametric Human Body Reconstruction Based on Sparse Key Points.
Cheng, Ke-Li; Tong, Ruo-Feng; Tang, Min; Qian, Jing-Ye; Sarkis, Michel
2016-11-01
We propose an automatic parametric human body reconstruction algorithm which can efficiently construct a model using a single Kinect sensor. A user needs to stand still in front of the sensor for a couple of seconds to measure the range data. The user's body shape and pose will then be automatically constructed in several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our proposed scheme relies on sparse key points for the reconstruction. It employs regression to find the corresponding key points between the scanned range data and some annotated training data. We design two kinds of feature descriptors as well as corresponding regression stages to make the regression robust and accurate. Our scheme follows with dense refinement where a pre-factorization method is applied to improve the computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.
Hierarchical Volume Representation with ³√2 Subdivision and Trivariate B-Spline Wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linsen, L; Gray, JT; Pascucci, V
2002-01-11
Multiresolution methods provide a means for representing data at multiple levels of detail. They are typically based on a hierarchical data organization scheme and update rules needed for data value computation. We use a data organization that is based on what we call ⁿ√2 subdivision. The main advantage of ⁿ√2 subdivision, compared to quadtree (n = 2) or octree (n = 3) organizations, is that the number of vertices is only doubled in each subdivision step instead of multiplied by a factor of four or eight, respectively. To update data values we use n-variate B-spline wavelets, which yields better approximations for each level of detail. We develop a lifting scheme for n = 2 and n = 3 based on the ⁿ√2-subdivision scheme. We obtain narrow masks that could also provide a basis for view-dependent visualization and adaptive refinement.
Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry
2012-08-01
To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video-recorded interactions, and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme on specialised behavioural coding software. Reliability was calculated using Cohen's Kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded by administering the scheme on The Observer XT 8.0 system. Two visualization results of interaction patterns demonstrated the scheme's capability of capturing complex interaction processes. Cohen's Kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027) predicted a child receiving the intervention. The SABICS is a unique system to record interactions between dental nurses and 3-5-year-old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings. Its development procedure may be helpful for the development of other similar coding schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator, since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load-balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye
2003-10-01
A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
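For reference, the class of schemes studied here can be illustrated with the classical fifth-order WENO reconstruction of Jiang and Shu (smoothness indicators plus nonlinear weights) for the left-biased value at a cell face. This is a generic sketch of the standard construction, not the paper's specific implementation.

    import numpy as np

    # Fifth-order WENO reconstruction of u at the interface i+1/2 from the
    # stencil (u[i-2], ..., u[i+2]), following the classical Jiang-Shu form.
    def weno5(v, eps=1e-6):
        vm2, vm1, v0, vp1, vp2 = v
        # smoothness indicators of the three candidate stencils
        b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
        b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
        b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
        # third-order candidate reconstructions
        q0 = (2*vm2 - 7*vm1 + 11*v0) / 6
        q1 = (-vm1 + 5*v0 + 2*vp1) / 6
        q2 = (2*v0 + 5*vp1 - vp2) / 6
        # nonlinear weights built from the linear weights (1/10, 6/10, 3/10)
        a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
        w = a / a.sum()
        return w[0]*q0 + w[1]*q1 + w[2]*q2

    print(weno5([1.0, 2.0, 3.0, 4.0, 5.0]))   # 3.5 for smooth linear data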
Martelli, Nicolas; van den Brink, Hélène; Borget, Isabelle
2016-01-01
We describe here recent modifications to the French Coverage with Evidence Development (CED) scheme for innovative medical devices. CED can be defined as temporary coverage for a novel health product during collection of the additional evidence required to determine whether definitive coverage is possible. The principal refinements to the scheme include a more precise definition of what may be considered an innovative product, the possibility for device manufacturers to request CED either independently or in partnership with hospitals, and the establishment of processing deadlines for health authorities. In the long term, these modifications may increase the number of applications to the CED scheme, which could lead to unsustainable funding for future projects. It will also be necessary to ensure that the study conditions required by national health authorities are suitable for medical devices and that processing deadlines are met for the scheme to be fully operational. Overall, the modifications recently applied to the French CED scheme for innovative medical devices should increase the transparency of the process, and therefore be more appealing to medical device manufacturers. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Study of the Characteristics of Elementary Processes in a Chain Hydrogen Burning Reaction in Oxygen
NASA Astrophysics Data System (ADS)
Bychkov, M. E.; Petrushevich, Yu. V.; Starostin, A. N.
2017-12-01
The characteristics of possible chain explosive hydrogen burning reactions in an oxidizing medium are calculated on the potential energy surface. Specifically, the reactions H2 + O2 → H2O + O, H2 + O2 → HO2 + H, and H2 + O2 → OH + OH are considered. Special attention is devoted to the production of a pair of fast, highly reactive OH radicals. Because of the high activation threshold, this reaction is often excluded from the known kinetic scheme of hydrogen burning. However, a spread in estimates of kinetic characteristics and a disagreement between theoretical predictions and experimental results suggest that the kinetic scheme should be refined.
A parallel finite-difference method for computational aerodynamics
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.
Accurate solutions for transonic viscous flow over finite wings
NASA Technical Reports Server (NTRS)
Vatsa, V. N.
1986-01-01
An explicit multistage Runge-Kutta type time-stepping scheme is used for solving the three-dimensional, compressible, thin-layer Navier-Stokes equations. A finite-volume formulation is employed to facilitate treatment of complex grid topologies encountered in three-dimensional calculations. Convergence to steady state is expedited through usage of acceleration techniques. Further numerical efficiency is achieved through vectorization of the computer code. The accuracy of the overall scheme is evaluated by comparing the computed solutions with the experimental data for a finite wing under different test conditions in the transonic regime. A grid refinement study is conducted to estimate the grid requirements for adequate resolution of salient features of such flows.
A refined mixed shear flexible finite element for the nonlinear analysis of laminated plates
NASA Technical Reports Server (NTRS)
Putcha, N. S.; Reddy, J. N.
1986-01-01
The present study is concerned with the development of a mixed shear flexible finite element with relaxed continuity for the geometrically linear and nonlinear analysis of laminated anisotropic plates. The formulation of the element is based on a refined higher-order theory. This theory satisfies the zero transverse shear stress boundary conditions on the top and bottom faces of the plate. Shear correction coefficients are not needed. The developed element consists of 11 degrees of freedom per node, taking into account three displacements, two rotations, and six moment resultants. An evaluation of the element is conducted with respect to the accuracy obtained in the bending of laminated anisotropic rectangular plates with different lamination schemes, loadings, and boundary conditions.
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
Adaptive computational methods for aerothermal heating analysis
NASA Technical Reports Server (NTRS)
Price, John M.; Oden, J. Tinsley
1988-01-01
The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.
Virtual Round Table on ten leading questions for network research
NASA Astrophysics Data System (ADS)
2004-03-01
The following discussion is an edited summary of the public debate started during the conference “Growing Networks and Graphs in Statistical Physics, Finance, Biology and Social Systems” held in Rome in September 2003. Draft documents were circulated electronically among experts in the field, and additions and follow-ups to the original discussion have been included. Among the scientists participating in the discussion, L.A.N. Amaral, A. Barrat, A.L. Barabasi, G. Caldarelli, P. De Los Rios, A. Erzan, B. Kahng, R. Mantegna, J.F.F. Mendes, R. Pastor-Satorras, A. Vespignani are acknowledged for their contributions and editing.
Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes
Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...
2016-08-18
In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial coarse mesh that can be used for a variety of purposes such as in multigrid solvers/preconditioners, to do solution convergence and verification studies and to improve overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and in-memory refinement). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection in order to minimize and provide more control over geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB; Tautges et al. 2004). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.
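One uniform refinement step of the kind such hierarchies are built from can be illustrated with the standard 1:4 split of each triangle at its edge midpoints. This is a minimal sketch of the geometric operation only, not MOAB's array-based implementation.

    # Sketch: one level of uniform 1:4 refinement of a triangle mesh.
    # Shared edge midpoints are cached so neighbouring triangles reuse them.
    def refine_uniform(verts, tris):
        verts = [tuple(v) for v in verts]
        cache, out = {}, []
        def mid(a, b):
            key = (min(a, b), max(a, b))
            if key not in cache:
                cache[key] = len(verts)
                verts.append(tuple((p + q) / 2.0
                                   for p, q in zip(verts[a], verts[b])))
            return cache[key]
        for a, b, c in tris:
            ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
            out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        return verts, out

    verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    print(refine_uniform(verts, [(0, 1, 2)])[1])   # four child triangles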
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain
In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data, saving memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase in the total number of points of the phase-space grid as the filaments get finer over time. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to increase the local character in phase space of the numerical scheme, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with more local (in space) numerical schemes such as compact finite-difference schemes, the discontinuous Galerkin method or finite element residual schemes, which are well suited for parallel domain decomposition techniques.
ERIC Educational Resources Information Center
Fletcher-Wood, Harry
2016-01-01
Readers of "Teaching History" will be familiar with the benefits and difficulties of cross-curricular planning, and the pages of this journal have often carried analysis of successful collaborations with the English department, or music, or geography. Harry Fletcher-Wood describes in this article a collaboration involving maths,…
NASA Astrophysics Data System (ADS)
Sauer, Roger A.
2013-08-01
Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum (Sauer, Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.
Adaptive mesh fluid simulations on GPU
NASA Astrophysics Data System (ADS)
Wang, Peng; Abel, Tom; Kaehler, Ralf
2010-10-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
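The second-order total variation diminishing Runge-Kutta step mentioned above has a particularly compact form. Below is a generic sketch; L stands in for the spatial operator assembled from reconstruction and the Riemann solver, and the toy advection operator and step sizes are hypothetical.

    import numpy as np

    # Sketch of second-order TVD (strong-stability-preserving) Runge-Kutta:
    # a full forward-Euler predictor followed by an averaging corrector.
    def tvd_rk2_step(u, dt, L):
        u1 = u + dt * L(u)                        # predictor (Euler step)
        return 0.5 * u + 0.5 * (u1 + dt * L(u1))  # corrector (average)

    # toy usage: linear advection with periodic first-order upwinding
    def L(u, dx=0.01, a=1.0):
        return -a * (u - np.roll(u, 1)) / dx

    u = np.exp(-((np.linspace(0, 1, 100) - 0.5) / 0.1) ** 2)
    u = tvd_rk2_step(u, 0.005, L)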
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
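The primal-dual active set idea described above can be made concrete on a discrete obstacle problem. The following is a minimal numpy sketch under assumed data: the load f, the obstacle psi and the constant c are hypothetical example values, and the problem is the simplest 1-D membrane-over-obstacle case rather than a full contact problem.

    import numpy as np

    # Sketch: primal-dual active set (semi-smooth Newton) iteration for the
    # 1-D obstacle problem  A u = f + lam,  u >= psi,  lam >= 0,
    # lam^T (u - psi) = 0, with A the finite-difference Laplacian.
    def obstacle_pdas(n=100, c=1e2, maxit=50):
        h = 1.0 / (n + 1)
        A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        f = -8.0 * np.ones(n)          # hypothetical downward load
        psi = -0.2 * np.ones(n)        # hypothetical obstacle height
        u, lam = np.zeros(n), np.zeros(n)
        active = np.zeros(n, dtype=bool)
        for it in range(maxit):
            new_active = lam + c * (psi - u) > 0
            if it > 0 and (new_active == active).all():
                break                  # active set settled: converged
            active = new_active
            M, rhs = A.copy(), f.copy()
            M[active] = 0.0
            M[active, active] = 1.0    # enforce u = psi on the active set
            rhs[active] = psi[active]
            u = np.linalg.solve(M, rhs)
            lam = A @ u - f            # multiplier from the PDE residual
            lam[~active] = 0.0
        return u, lam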
NASA Astrophysics Data System (ADS)
Bhat, Himanshu; Sajja, Balasrinivasa Rao; Narayana, Ponnada A.
2006-11-01
Accurate quantification of the MRSI-observed regional distribution of metabolites involves relatively long processing times. This is particularly true in dealing with the large amount of data that is typically acquired in multi-center clinical studies. To significantly shorten the processing time, an artificial neural network (ANN)-based approach was explored for quantifying the phase-corrected (as opposed to magnitude) spectra. Specifically, a radial basis function neural network (RBFNN) was used in these studies. This method was tested on simulated and normal human brain data acquired at 3T. The N-acetyl aspartate (NAA)/creatine (Cr), choline (Cho)/Cr, glutamate + glutamine (Glx)/Cr, and myo-inositol (mI)/Cr ratios in normal subjects were compared with the line fitting (LF) technique and jMRUI-AMARES analysis, and published values. The average NAA/Cr, Cho/Cr, Glx/Cr and mI/Cr ratios in normal controls were found to be 1.58 ± 0.13, 0.9 ± 0.08, 0.7 ± 0.17 and 0.42 ± 0.07, respectively. The corresponding ratios using the LF and jMRUI-AMARES methods were 1.6 ± 0.11, 0.95 ± 0.08, 0.78 ± 0.18, 0.49 ± 0.1 and 1.61 ± 0.15, 0.78 ± 0.07, 0.61 ± 0.18, 0.42 ± 0.13, respectively. These results agree with those published in the literature. Bland-Altman analysis indicated an excellent agreement and minimal bias between the results obtained with RBFNN and the other methods. The computational time for the current method was 15 s, compared to approximately 10 min for the LF-based analysis.
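To make the RBFNN idea concrete, here is a minimal sketch of radial basis function regression of the kind such a network performs: a Gaussian basis expansion whose output-layer weights are fitted by linear least squares. This is a generic illustration, not the authors' trained network, and its inputs, centres and width are hypothetical toy values rather than their spectral features.

    import numpy as np

    # Sketch: Gaussian radial basis function network fitted by least squares.
    def rbf_design(X, centers, width):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))           # toy inputs
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)
    centers = np.linspace(-1, 1, 15)[:, None]       # hypothetical centres
    Phi = rbf_design(X, centers, width=0.2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output-layer weights
    y_hat = rbf_design(X, centers, 0.2) @ w         # network prediction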
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomov, I; Pember, R; Greenough, J
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses a hierarchical, structured grid approach first developed by (Berger and Oliger 1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by (Colella et al. 1993), refined by (Greenough et al. 1995), and extended to the solution of solid mechanics problems with significant strength by (Lomov and Rubin 2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified due to the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
Little, R; Wheeler, K; Edge, S
2017-02-11
This paper examines farmer attitudes towards the development of a voluntary risk-based trading scheme for cattle in England as a risk mitigation measure for bovine tuberculosis (bTB). The research reported here was commissioned to gather evidence on the type of scheme that would have a good chance of success in improving the information farmers receive about the bTB risk of cattle they buy. Telephone interviews were conducted with a stratified random sample of 203 cattle farmers in England, splitting the interviews equally between respondents in the high-risk area and low-risk area for bTB. Supplementary interviews and focus groups with farmers were also carried out across the risk areas. Results suggest a greater enthusiasm for a risk-based trading scheme in low-risk areas compared with high-risk areas and among members of breed societies and cattle health schemes. Third-party certification of herds by private vets or the Animal and Plant Health Agency were regarded as the most credible source, with farmer self-certification being favoured by sellers, but being regarded as least credible by buyers. Understanding farmers' attitudes towards voluntary risk-based trading is important to gauge likely uptake, understand preferences for information provision and to assist in monitoring, evaluating and refining the scheme once established. British Veterinary Association.
High speed inviscid compressible flow by the finite element method
NASA Technical Reports Server (NTRS)
Zienkiewicz, O. C.; Loehner, R.; Morgan, K.
1984-01-01
The finite element method and an explicit time stepping algorithm which is based on Taylor-Galerkin schemes with an appropriate artificial viscosity is combined with an automatic mesh refinement process which is designed to produce accurate steady state solutions to problems of inviscid compressible flow in two dimensions. The results of two test problems are included which demonstrate the excellent performance characteristics of the proposed procedures.
Modelling ESCOMPTE episodes with the CTM MOCAGE. Part 2 : sensitivity studies.
NASA Astrophysics Data System (ADS)
Dufour, A.; Amodei, M.; Brocheton, F.; Michou, M.; Peuch, V.-H.
2003-04-01
The multi-scale CTM MOCAGE has been applied to study pollution episodes documented during the ESCOMPTE field campaign in June-July 2001 in south-eastern France (http://medias.obs-mip.fr/escompte). Several sensitivity studies have been performed on the basis of the 2nd IOP, covering 6 continuous days. The main objective of the present work is to investigate the question of chemical boundary conditions, both vertical and horizontal, for regional air quality simulations of several days. This issue, which has often been oversimplified (using a fixed continental climatology), is attracting increasing interest, particularly with the prospect of assimilating space-borne tropospheric chemistry data in global models. In addition, we have examined how resolution refinements affect the quality of the model outputs, at the surface and at altitude, against the observational database of dynamics and chemistry: the resolution of the model, by way of the four nested models (from 2° to 0.01°), but also the resolution of the emission inventories (from 1° to 0.01°). Lastly, the impact of the refinement in the representation of chemistry has been assessed by using either detailed chemical schemes, such as RAM or SAPRC, or schemes used in global modelling, which account for only a limited number of volatile hydrocarbons.
NASA Astrophysics Data System (ADS)
Dönmez, Orhan
2004-09-01
In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive mesh refinement (AMR) is presented. To this end, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid performs better as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.
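The Strang coupling of the flux and source parts mentioned above can be sketched generically as follows; advance_flux and advance_source are hypothetical stand-ins for the HRSC update and the source-term integrator, not functions from the paper.

    # Sketch of Strang splitting: a half step of the source term, a full
    # step of the flux (hyperbolic) part, then another source half step.
    # This symmetric arrangement retains second-order accuracy in time.
    def strang_step(u, dt, advance_flux, advance_source):
        u = advance_source(u, 0.5 * dt)
        u = advance_flux(u, dt)
        u = advance_source(u, 0.5 * dt)
        return u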
Simulations of viscous and compressible gas-gas flows using high-order finite difference schemes
NASA Astrophysics Data System (ADS)
Capuano, M.; Bogey, C.; Spelt, P. D. M.
2018-05-01
A computational method for the simulation of viscous and compressible gas-gas flows is presented. It consists in solving the Navier-Stokes equations associated with a convection equation governing the motion of the interface between two gases using high-order finite-difference schemes. A discontinuity-capturing methodology based on sensors and a spatial filter enables capturing shock waves and deformable interfaces. One-dimensional test cases are performed as validation and to justify choices in the numerical method. The results compare well with analytical solutions. Shock waves and interfaces are accurately propagated and remain sharp. Subsequently, two-dimensional flows are considered, including viscosity and thermal conductivity. In the Richtmyer-Meshkov instability generated on an air-SF6 interface, the influence of the mesh refinement on the instability shape is studied, and the temporal variations of the instability amplitude are compared with experimental data. Finally, for a plane shock wave propagating in air and impacting a cylindrical bubble filled with helium or R22, numerical Schlieren pictures obtained using different grid refinements are found to compare well with experimental shadow-photographs. Mass conservation is verified from the temporal variations of the mass of the bubble. The mean velocities of pressure waves and the bubble interface are similar to those obtained experimentally.
Thermodynamical effects and high resolution methods for compressible fluid flows
NASA Astrophysics Data System (ADS)
Li, Jiequan; Wang, Yue
2017-08-01
One of the fundamental differences of compressible fluid flows from incompressible fluid flows is the involvement of thermodynamics. This difference should be manifested in the design of numerical schemes. Unfortunately, the role of entropy, expressing irreversibility, is often neglected even though the entropy inequality, as a conceptual derivative, is verified for some first order schemes. In this paper, we refine the GRP solver to illustrate how the thermodynamical variation is integrated into the design of high resolution methods for compressible fluid flows and demonstrate numerically the importance of thermodynamic effects in the resolution of strong waves. As a by-product, we show that the GRP solver works for generic equations of state, and is independent of technical arguments.
Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme
NASA Astrophysics Data System (ADS)
McMahon, Sean
The purpose of this thesis was to develop a code that could be used to develop a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory scheme on an adaptive mesh, with the smallest length scales being equal to 2-3 flamelet lengths. The code development linking Chemkin for chemical kinetics to the adaptive mesh refinement flow solver was completed. The detonation evolved in a way that qualitatively matched the experimental observations; however, the simulation was unable to progress past the formation of the triple point.
Optimal guidance with obstacle avoidance for nap-of-the-earth flight
NASA Technical Reports Server (NTRS)
Pekelsma, Nicholas J.
1988-01-01
The development of automatic guidance is discussed for helicopter Nap-of-the-Earth (NOE) and near-NOE flight. It deals with algorithm refinements relating to automated real-time flight path planning and to mission planning. With regard to path planning, it relates rotorcraft trajectory characteristics to the NOE computation scheme and addresses real-time computing issues and both ride quality issues and pilot-vehicle interfaces. The automated mission planning algorithm refinements include route optimization, automatic waypoint generation, interactive applications, and provisions for integrating the results into the real-time path planning software. A microcomputer based mission planning workstation was developed and is described. Further, the application of Defense Mapping Agency (DMA) digital terrain to both the mission planning workstation and to automatic guidance is both discussed and illustrated.
An accuracy assessment of Cartesian-mesh approaches for the Euler equations
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, W.; Almgren, A.; Bell, J.
We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
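The hyperbolic/parabolic split can be illustrated with a one-dimensional toy analogue: an explicit upwind step for the advective part followed by a backward-Euler solve for the diffusive part. This is only a schematic stand-in for CASTRO's high-order Godunov plus implicit radiation-diffusion update.

```python
import numpy as np

def split_step(u, c, nu, dx, dt):
    """One operator-split step: explicit upwind advection (hyperbolic part)
    followed by implicit backward-Euler diffusion (parabolic part).
    Periodic boundaries; c > 0 assumed for the upwind stencil."""
    n = u.size
    # Explicit hyperbolic update (first-order upwind).
    u_star = u - c * dt / dx * (u - np.roll(u, 1))
    # Implicit parabolic update: (I - dt*nu*L) u_new = u_star,
    # with L the periodic second-difference operator.
    L = (np.roll(np.eye(n), 1, axis=1) - 2*np.eye(n)
         + np.roll(np.eye(n), -1, axis=1)) / dx**2
    return np.linalg.solve(np.eye(n) - dt * nu * L, u_star)
```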
Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Martin, Daniel F.
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
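A quick way to see what "maintaining a solenoidal field" means in practice: with face-centered field components, the discrete cell-wise divergence below should remain at round-off level after every constrained-transport update. This is a generic check, not CHARM code.

```python
import numpy as np

def max_divB(Bx, By, Bz, dx, dy, dz):
    """Discrete divergence of a face-centered magnetic field on an
    (nx, ny, nz) cell grid; Bx has shape (nx+1, ny, nz), By (nx, ny+1, nz),
    Bz (nx, ny, nz+1). CT updates should keep this at machine precision."""
    div = ((Bx[1:, :, :] - Bx[:-1, :, :]) / dx
           + (By[:, 1:, :] - By[:, :-1, :]) / dy
           + (Bz[:, :, 1:] - Bz[:, :, :-1]) / dz)
    return np.max(np.abs(div))
```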
NASA Astrophysics Data System (ADS)
Rewieński, M.; Lamecki, A.; Mrozowski, M.
2013-09-01
This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement and a preconditioned conjugate-gradient (PCG) linear solver with a multilevel preconditioner, for finding several eigenvalues of generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. The presented numerical experiments confirm that the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions that are large with respect to the wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance compared with methods that rely on exact linear solves, indicating the tremendous potential of the 'inexact solve' concept. Finally, the scheme that generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
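The shift-invert idea is easy to experiment with; SciPy's eigsh exposes it directly. Note that SciPy factors the shifted matrix exactly, whereas the paper's contribution is precisely to replace such exact solves with inexact preconditioned CG solves. The tridiagonal K below is a placeholder problem, not a cavity model.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Generalized symmetric eigenproblem K x = lam M x.
n = 2000
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
M = sp.identity(n, format='csc')

# Shift-invert about sigma: converges to eigenvalues nearest the shift,
# which is what makes the approach effective in dense spectra.
vals, vecs = eigsh(K, k=6, M=M, sigma=0.01, which='LM')
print(vals)
```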
NASA Astrophysics Data System (ADS)
Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven
2018-07-01
We develop efficient energy-stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite-difference method, is constructed based on a convex-splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh size. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing and that the system can be solved efficiently using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on these topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
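The convex-splitting idea underlying the scheme can be stated compactly: write the free energy as a difference of convex functionals and treat the convex part implicitly and the concave part explicitly in the conserved gradient flow,

```latex
E(\phi) = E_c(\phi) - E_e(\phi), \qquad
\frac{\phi^{n+1}-\phi^{n}}{\Delta t}
 = \nabla\cdot\!\Big(M\,\nabla\big(\delta_\phi E_c(\phi^{n+1})
 - \delta_\phi E_e(\phi^{n})\big)\Big).
```

This is the standard construction guaranteeing a non-increasing discrete energy for any time step; the anisotropic and Willmore-regularized terms of the paper require a more careful splitting.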
Gangodagamage, Chandana; Wullschleger, Stan
2014-07-03
The dataset represents a microtopographic characterization of the ice-wedge polygon landscape in Barrow, Alaska. Three microtopographic features are delineated using a 0.25 m high-resolution digital elevation dataset derived from LiDAR. The troughs, rims, and centers are the three categories in this classification scheme. The polygon troughs are the surface expression of the ice wedges and lie at lower elevations than the polygon interior. The elevated shoulders of the polygon interior immediately adjacent to the polygon troughs are the polygon rims for the low-center polygons. In the case of high-center polygons, these features are the topographic highs. In this classification scheme, both topographic highs and rims are considered polygon rims. The next version of the dataset will include a more refined classification scheme with separate classes for rims and topographic highs. The interior part of the polygon just adjacent to the polygon rims constitutes the polygon centers.
NASA Astrophysics Data System (ADS)
Al-Fadhalah, Khaled; Aleem, Muhammad
2018-04-01
Repetitive thermomechanical processing (TMP) was applied to evaluate the effect of strain-induced α'-martensite transformation and reversion annealing on microstructure refinement and mechanical properties of 304 austenitic stainless steel. The first TMP scheme consisted of four cycles of tensile deformation to a strain of 0.4, while the second TMP scheme applied two cycles of tensile straining to 0.6. For both schemes, tensile tests were conducted at 173 K (-100 °C) followed by 5-minute annealing at 1073 K (800 °C). The volume fraction of α'-martensite in deformed samples increased with increasing cycles, reaching a maximum of 98 vol pct. Examination of the annealed microstructure by electron backscattered diffraction indicated that increasing strain and/or number of cycles resulted in stronger reversion to austenite with a finer grain size of 1 μm. Yet, increasing strain reduced the formation of Σ3 boundaries. The annealing textures generally show reversion of α'-martensite texture components to the austenite texture of brass and copper orientations. The increase in strain and/or number of cycles resulted in a stronger intensity of the copper orientation, accompanied by the formation of the recrystallization texture components of Goss, cube, and rotated cube. The reduction in grain size with increasing cycles caused an increase in yield strength. It also resulted in an increase in strain hardening rate during deformation due to the increased formation of α'-martensite. The increase in strain hardening rate occurred in two consecutive stages, marked as stages II and III. The strain hardening in stage II is due to the formation of α'-martensite from either austenite or ɛ-martensite, while the stage-III strain hardening is attributed to the need to break up the banded α'-martensite structure to form block-type martensite at high strains.
Galaxy Tagging: photometric redshift refinement and group richness enhancement
NASA Astrophysics Data System (ADS)
Kafle, P. R.; Robotham, A. S. G.; Driver, S. P.; Deeley, S.; Norberg, P.; Drinkwater, M. J.; Davies, L. J.
2018-06-01
We present a new scheme, galtag, for refining the photometric redshift measurements of faint galaxies by probabilistically tagging them to observed galaxy groups constructed from a brighter, magnitude-limited spectroscopic survey. First, this method is tested on the DESI light-cone data constructed on the GALFORM galaxy formation model to test its validity. We then apply it to the photometric observations of galaxies in the Kilo-Degree Survey (KiDS) over a 1 deg² region centred at 15h. This region contains Galaxy and Mass Assembly (GAMA) deep spectroscopic observations (i-band < 22) and an accompanying group catalogue to r-band < 19.8. We demonstrate that, even with some trade-off in sample size, an order-of-magnitude improvement in the accuracy of photometric redshifts is achievable when using galtag. This approach provides both refined photometric redshift measurements and group richness enhancement. In combination, these products will hugely improve the scientific potential of both photometric and spectroscopic datasets. The galtag software will be made publicly available at https://github.com/pkaf/galtag.git.
Ratnarajan, Gokulan; Newsom, Wendy; French, Karen; Kean, Jane; Chang, Lydia; Parker, Mike; Garway-Heath, David F; Bourne, Rupert R A
2013-03-01
To assess the impact of referral refinement criteria on the number of patients referred to, and first-visit discharges from, the Hospital Eye Service (HES) in relation to the National Institute for Health & Clinical Excellence (NICE) Glaucoma Guidelines, Joint College Group Guidance (JCG) and the NICE commissioning guidance. All low-risk (one risk factor: suspicious optic disc, abnormal visual field (VF), raised intra-ocular pressure (IOP) (22-28 mmHg) or IOP asymmetry (>5 mmHg)) and high-risk (more than one risk factor, shallow anterior chamber or IOP >28 mmHg) referrals to the HES from 2006 to 2011 were analysed. Low-risk referrals were seen by optometrists with a specialist interest in glaucoma (OSIs) and high-risk referrals were referred directly to the HES. Two thousand nine hundred and twelve patient records were analysed. The highest consultant first-visit discharge rates were for referrals based on IOP alone (45% for IOP 22-28 mmHg), IOP asymmetry (53%), VF defect alone (46%), and abnormal IOP and VF (54%). The lowest first-visit discharge rates were for referrals for suspicious optic disc (19%) and IOP >28 mmHg (22%). 73% of patients aged 65-80 and 60% of patients aged >80 who were referred by an OSI due to an IOP of 22-28 mmHg would have satisfied the JCG criteria for non-referral. For patients referred with an IOP >28 mmHg and an otherwise normal examination, adherence to the NICE commissioning guidance would have resulted in 6% fewer referrals. In 2010 this scheme reduced the number of patients attending the HES by 15%, which resulted in a saving of £16,258 (13%). The results support classifying referrals for a raised IOP, alone or in combination with an abnormal VF, as low-risk and subjecting them to referral refinement. Adherence to the JCG and the NICE commissioning guidance as onward referral criteria for specialist optometrists in this referral refinement scheme would result in fewer referrals.
Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
Grid convergence of several high-order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high-order methods, referred to as the ACM and wavelet filter schemes, are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered have no known analytical solutions or experimental data. Fine-grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine-scale structures and are stiff in the sense that, even though the flows are rapidly developing, extreme grid refinement and small time steps are needed to resolve all the flow scales as well as the chemical reaction scales.
Recent Progress on the Parallel Implementation of Moving-Body Overset Grid Schemes
NASA Technical Reports Server (NTRS)
Wissink, Andrew; Allen, Edwin (Technical Monitor)
1998-01-01
Viscous calculations about geometrically complex bodies in which there is relative motion between component parts are among the most computationally demanding problems facing CFD researchers today. This presentation documents results from the first two years of a CHSSI-funded effort within the U.S. Army AFDD to develop scalable dynamic overset grid methods for unsteady viscous calculations with moving-body problems. The first part of the presentation will focus on results from OVERFLOW-D1, a parallelized moving-body overset grid scheme that employs traditional Chimera methodology. The two processes that dominate the cost of such problems are the flow solution on each component and the intergrid connectivity solution. Parallel implementations of the OVERFLOW flow solver and DCF3D connectivity software are coupled with a proposed two-part static-dynamic load balancing scheme and tested on the IBM SP and Cray T3E multiprocessors. The second part of the presentation will cover some recent results from OVERFLOW-D2, a new flow solver that employs Cartesian grids with various levels of refinement, facilitating solution adaption. A study of the parallel performance of the scheme on large distributed-memory multiprocessor computer architectures will be reported.
NASA Astrophysics Data System (ADS)
Gaffney, Kevin P.; Aghaei, Faranak; Battiste, James; Zheng, Bin
2017-03-01
Detection of residual brain tumor is important to evaluate the efficacy of brain cancer surgery, determine the optimal strategy for further radiation therapy if needed, and assess the ultimate prognosis of the patient. Brain MR is a commonly used imaging modality for this task. In order to distinguish between residual tumor and surgery-induced scar tissue, two sets of MR scans are acquired pre- and post-gadolinium contrast injection. The residual tumors are enhanced only in the post-contrast-injection images. However, subjectively reading and quantifying this type of brain MR image makes it difficult to detect real residual tumor regions and to measure the total volume of the residual tumor. To help address this clinical difficulty, we developed and tested a new interactive computer-aided detection scheme, which consists of three consecutive image processing steps, namely: 1) segmentation of the intracranial region, 2) image registration and subtraction, and 3) tumor segmentation and refinement. The scheme also includes a specially designed and implemented graphical user interface (GUI) platform. When using this scheme, the two sets of pre- and post-contrast-injection images are first automatically processed to detect and quantify residual tumor volume. Then, a user can visually examine the segmentation results and conveniently guide the scheme to correct any detection or segmentation errors if needed. The scheme has been repeatedly tested using five cases. Given the high performance and robustness observed in these tests, the scheme is ready for clinical studies to help clinicians investigate the association between this quantitative image marker and patient outcome.
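A toy numpy/scipy sketch of steps 2-3 (subtraction and tumor-mask refinement), assuming the pre- and post-contrast volumes are already registered; the threshold and minimum component size are illustrative placeholders, not the paper's values.

```python
import numpy as np
from scipy import ndimage

def residual_tumor_mask(pre, post, z=2.0, min_voxels=50):
    """Subtract pre- from post-contrast volume, keep strongly enhancing
    voxels, and clean the mask morphologically (illustrative only)."""
    diff = post.astype(float) - pre.astype(float)
    mask = diff > diff.mean() + z * diff.std()    # enhancing voxels
    mask = ndimage.binary_opening(mask)           # remove speckle
    labels, n = ndimage.label(mask)               # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = 1 + np.flatnonzero(sizes >= min_voxels)
    return np.isin(labels, keep_ids)
```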
Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows
NASA Astrophysics Data System (ADS)
Xiao, Xudong
Three LES/RANS hybrid schemes have been proposed for the prediction of high-speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25-degree compression-expansion ramp and a Mach 2.79 flow over a 20-degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for running on an IBM SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent RANS regions from appearing inside the LES region. In the 25-degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and showed further improved agreement with experiment in the recovery region. In the 20-degree ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
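Blending functions of this kind typically map a ratio of modeled turbulence length scale to local resolution into [0, 1]. The sketch below shows the grid-dependent variant for concreteness; the thesis's third, grid-independent function replaces the grid size Δ with a flow-based scale. The functional form and constants are illustrative, not the thesis's.

```python
import numpy as np

def rans_les_blend(l_turb, delta, c=1.0, p=4.0):
    """Monotonic blend: F -> 1 (RANS) where the modeled turbulence length
    scale is unresolved by the grid, F -> 0 (LES) where it is resolved."""
    return 0.5 * (1.0 - np.tanh(p * (l_turb / (c * delta) - 1.0)))
```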
NASA Technical Reports Server (NTRS)
Putcha, N. S.; Reddy, J. N.
1986-01-01
A mixed shear-flexible finite element, with relaxed continuity, is developed for the geometrically linear and nonlinear analysis of layered anisotropic plates. The element formulation is based on a refined higher-order theory which satisfies the zero transverse shear stress boundary conditions on the top and bottom faces of the plate and requires no shear correction coefficients. The mixed finite element developed herein has eleven degrees of freedom per node: three displacements, two rotations, and six moment resultants. The element is evaluated for its accuracy in the analysis of the stability and vibration of anisotropic rectangular plates with different lamination schemes and boundary conditions. The mixed finite element described here for the higher-order theory gives very accurate results for buckling loads and natural frequencies.
There is No Free Lunch: Tradeoffs in the Utility of Learned Knowledge
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; McKusick, Kathleen B.
1992-01-01
With the recent introduction of learning in integrated systems, there is a need to measure the utility of learned knowledge for these more complex systems. A difficulty arises when there are multiple, possibly conflicting, utility metrics to be measured. In this paper, we present schemes which trade off conflicting utility metrics in order to achieve some global performance objectives. In particular, we present a case study of a multi-strategy machine learning system, mutual theory refinement, which refines world models for an integrated reactive system, the Entropy Reduction Engine. We provide experimental results on the utility of learned knowledge in two conflicting metrics - improved accuracy and degraded efficiency. We then demonstrate two ways to trade off these metrics. In each, some learned knowledge is either approximated or dynamically 'forgotten' so as to improve efficiency while degrading accuracy only slightly.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted-residual finite-element method but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.
Collaborative Localization and Location Verification in WSNs
Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang
2015-01-01
Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive.
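The virtual-force model can be sketched in a few lines: each anchor pulls or pushes the position estimate along the line of sight in proportion to the range mismatch, and the estimate is refined incrementally. Anchor positions, step size, and iteration count below are illustrative, not taken from the paper.

```python
import numpy as np

def virtual_force_localize(x0, anchors, dists, step=0.1, iters=200):
    """Incrementally refine a node position: each anchor exerts a
    spring-like virtual force proportional to the range mismatch."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        f = np.zeros_like(x)
        for a, d in zip(anchors, dists):
            v = np.asarray(a, dtype=float) - x
            r = np.linalg.norm(v)
            if r > 1e-12:
                f += (r - d) * v / r   # pull/push along anchor direction
        x += step * f
    return x

# Example: true position (3, 4) recovered from three range measurements.
anchors = [(0, 0), (10, 0), (0, 10)]
dists = [5.0, 8.06, 6.71]
print(virtual_force_localize((1.0, 1.0), anchors, dists))
```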
On some Approximation Schemes for Steady Compressible Viscous Flow
NASA Astrophysics Data System (ADS)
Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.
This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes-like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12] to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special "effective pressure" in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2] in an ongoing investigation.
Interpolation Method Needed for Numerical Uncertainty
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson extrapolation, a method based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or another uncertainty method to approximate errors.
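For concreteness, the standard Richardson procedure on three systematically refined grids recovers both the observed order of accuracy and an extrapolated estimate; interpolating non-nested grid solutions onto common points, the paper's actual concern, happens before this step.

```python
import numpy as np

def richardson(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy and extrapolated value from solutions on
    three grids with constant refinement ratio r."""
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_exact

# Example: a second-order quantity, f = 1 + h**2, on h = 0.4, 0.2, 0.1.
p, f = richardson(1.16, 1.04, 1.01, r=2.0)
print(p, f)   # -> p = 2.0, extrapolated f = 1.0
```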
Computerized Liver Volumetry on MRI by Using 3D Geodesic Active Contour Segmentation
Huynh, Hieu Trung; Karademir, Ibrahim; Oto, Aytekin; Suzuki, Kenji
2014-01-01
OBJECTIVE Our purpose was to develop an accurate automated 3D liver segmentation scheme for measuring liver volumes on MRI. SUBJECTS AND METHODS Our scheme for MRI liver volumetry consisted of three main stages. First, the preprocessing stage was applied to T1-weighted MRI of the liver in the portal venous phase to reduce noise and produce the boundary-enhanced image. This boundary-enhanced image was used as a speed function for a 3D fast-marching algorithm to generate an initial surface that roughly approximated the shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the initial surface to precisely determine the liver boundaries. The liver volumes determined by our scheme were compared with those manually traced by a radiologist, used as the reference standard. RESULTS The two volumetric methods reached excellent agreement (intraclass correlation coefficient, 0.98) without statistical significance (p = 0.42). The average (± SD) accuracy was 99.4% ± 0.14%, and the average Dice overlap coefficient was 93.6% ± 1.7%. The mean processing time for our automated scheme was 1.03 ± 0.13 minutes, whereas that for manual volumetry was 24.0 ± 4.4 minutes (p < 0.001). CONCLUSION The MRI liver volumetry based on our automated scheme agreed excellently with reference-standard volumetry, and it required substantially less completion time.
Modeling of shock wave propagation in large amplitude ultrasound.
Pinton, Gianmarco F; Trahey, Gregg E
2008-01-01
The Rankine-Hugoniot relation for shock wave propagation describes the shock speed of a nonlinear wave. This paper investigates time-domain numerical methods that solve the nonlinear parabolic wave equation, or the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the conditions they require to satisfy the Rankine-Hugoniot relation. Two numerical methods commonly used in hyperbolic conservation laws are adapted to solve the KZK equation: Godunov's method and the monotonic upwind scheme for conservation laws (MUSCL). It is shown that they satisfy the Rankine-Hugoniot relation regardless of attenuation. These two methods are compared with the current implicit solution based method. When the attenuation is small, such as in water, the current method requires a degree of grid refinement that is computationally impractical. All three numerical methods are compared in simulations for lithotripters and high intensity focused ultrasound (HIFU) where the attenuation is small compared to the nonlinearity because much of the propagation occurs in water. The simulations are performed on grid sizes that are consistent with present-day computational resources but are not sufficiently refined for the current method to satisfy the Rankine-Hugoniot condition. It is shown that satisfying the Rankine-Hugoniot conditions has a significant impact on metrics relevant to lithotripsy (such as peak pressures) and HIFU (intensity). Because the Godunov and MUSCL schemes satisfy the Rankine-Hugoniot conditions on coarse grids, they are particularly advantageous for three-dimensional simulations.
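The role of the Rankine-Hugoniot condition is easy to demonstrate on the inviscid Burgers equation, for which Godunov's method propagates a shock at exactly the RH speed s = (uL + uR)/2. This is a generic conservation-law demo, not the paper's KZK solver.

```python
import numpy as np

def godunov_flux(ul, ur):
    """Exact Godunov flux for Burgers' equation, f(u) = u**2 / 2."""
    return max(max(ul, 0.0)**2, min(ur, 0.0)**2) / 2.0

nx = 400
dx = 1.0 / nx
dt = 0.25 * dx                      # CFL = 0.5 for max |u| = 2
u = np.where(np.arange(nx) * dx < 0.25, 2.0, 0.0)

for _ in range(400):                # integrate to t = 0.25
    F = np.array([godunov_flux(u[i], u[i + 1]) for i in range(nx - 1)])
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])

# Rankine-Hugoniot speed s = (2 + 0)/2 = 1, so the shock that started
# at x = 0.25 should now sit near x = 0.5.
print("shock at x ~", np.argmax(u < 1.0) * dx)
```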
NASA Astrophysics Data System (ADS)
Liu, Tao; Im, Jungho; Quackenbush, Lindi J.
2015-12-01
This study provides a novel approach to individual tree crown delineation (ITCD) using airborne Light Detection and Ranging (LiDAR) data in dense natural forests using two main steps: crown boundary refinement based on a proposed Fishing Net Dragging (FiND) method, and segment merging based on boundary classification. FiND starts with approximate tree crown boundaries derived using a traditional watershed method with Gaussian filtering and refines these boundaries using an algorithm that mimics how a fisherman drags a fishing net. Random forest machine learning is then used to classify boundary segments into two classes: boundaries between trees and boundaries between branches that belong to a single tree. Three groups of LiDAR-derived features were used in the classification: two from the pseudo waveform generated along with crown boundaries and one from a canopy height model (CHM). The proposed ITCD approach was tested using LiDAR data collected over a mountainous region in the Adirondack Park, NY, USA. The overall accuracy of boundary classification was 82.4%. Features derived from the CHM were generally more important in the classification than the features extracted from the pseudo waveform. A comprehensive accuracy assessment scheme for ITCD was also introduced by considering both the area of crown overlap and crown centroids. Accuracy assessment using this new scheme shows the proposed ITCD achieved overall accuracies of 74% and 78% for deciduous and mixed forests, respectively.
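The boundary-classification step is a standard supervised setup. A hedged sklearn sketch with synthetic stand-in features follows; the feature names and data are illustrative placeholders, not the paper's features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row describes one candidate boundary segment, e.g. pseudo-waveform
# depth and width plus a CHM height drop (illustrative names only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)   # 1: between-tree boundary

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.feature_importances_)   # relative importance of the 3 features
```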
Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2014-11-01
Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue, mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method is presented that enables the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions. Second-order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award Number NSF PFI:BIC 1318201.
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high-speed turbulent combustion and acoustics, demand high-order schemes with adaptive numerical dissipation controls. Standard high-resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinement and small time steps. An integrated approach for the control of numerical dissipation in high-order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high-speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and of main-sequence stars. Although a few well-studied second- and third-order high-resolution shock-capturing schemes for MHD exist in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence-free condition of the magnetic fields for high-order methods has been a stumbling block. Lower-order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.
Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity
NASA Astrophysics Data System (ADS)
Hamon, F. P.; Mallison, B.; Tchelepi, H.
2016-12-01
In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.
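In code form, the viscous/buoyancy decomposition reads roughly as below: the viscous flux upwinds saturation with the total velocity, while the counter-current buoyancy flux upwinds each phase mobility by the direction the density difference drives that phase. This is a simplified sketch of the IHU idea that omits capillarity and the local entry-pressure solve described above; names and mobilities are illustrative.

```python
def ihu_wetting_flux(sw_l, sw_r, u_total, rho_w, rho_n, g_term,
                     mob_w, mob_n):
    """Simplified IHU-style wetting-phase numerical flux (no capillarity).
    mob_w/mob_n are relative mobility functions of wetting saturation."""
    # Viscous part: saturation upstream w.r.t. the *total* velocity.
    s_up = sw_l if u_total >= 0.0 else sw_r
    lw, ln = mob_w(s_up), mob_n(s_up)
    f_visc = lw / (lw + ln) * u_total
    # Buoyancy part: phases move counter-currently, so each mobility is
    # upwinded by the direction its buoyancy force points.
    drho_g = (rho_w - rho_n) * g_term   # > 0: wetting driven left -> right
    lw_b = mob_w(sw_l if drho_g >= 0.0 else sw_r)
    ln_b = mob_n(sw_r if drho_g >= 0.0 else sw_l)
    f_grav = lw_b * ln_b / (lw_b + ln_b) * drho_g
    return f_visc + f_grav

# Example with quadratic relative permeabilities and unit viscosities.
flux = ihu_wetting_flux(0.8, 0.2, 1.0, 1000.0, 800.0, 1e-3,
                        lambda s: s**2, lambda s: (1.0 - s)**2)
```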
Konstantakopoulou, E; Harper, R A; Edgar, D F; Lawrenson, J G
2014-05-29
To explore the views of optometrists, general practitioners (GPs) and ophthalmologists regarding the development and organisation of community-based enhanced optometric services. Qualitative study using free-text questionnaires and telephone interviews. A minor eye conditions scheme (MECS) and a glaucoma referral refinement scheme (GRRS) are based on accredited community optometry practices. 41 optometrists, 6 ophthalmologists and 25 GPs. The most common reason given by optometrists for participation in enhanced schemes was to further their professional development; however, as providers of 'for-profit' healthcare, it was clear that participants had also considered the impact of the schemes on their business. Lack of fit with the 'retail' business model of optometry was a frequently given reason for non-participation. The methods used for training and accreditation were generally thought to be appropriate, and participating optometrists welcomed the opportunities for ongoing training. The ophthalmologists involved in the MECS and GRRS expressed very positive views regarding the schemes and widely acknowledged that the new care pathways would reduce unnecessary referrals and shorten patient waiting times. GPs involved in the MECS were also very supportive. They felt that the scheme provided an 'expert' local opinion that could potentially reduce the number of secondary care referrals. The results of this study demonstrated strong stakeholder support for the development of community-based enhanced optometric services. Although optometrists welcomed the opportunity to develop their professional skills and knowledge, enhanced schemes must also provide a sufficient financial incentive so as not to compromise the profitability of their business.
An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.
2016-06-01
Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.
A multistage motion vector processing method for motion-compensated frame interpolation.
Huang, Ai-Mei; Nguyen, Truong Q
2008-05-01
In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation, or frame-rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid choosing identical unreliable ones. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
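The constrained vector-median step can be sketched as follows: among the candidate motion vectors flagged reliable, pick the one minimizing the summed distance to all candidates. This mirrors the constraint in spirit; the paper's distance weighting and block hierarchy are omitted.

```python
import numpy as np

def constrained_vector_median(candidates, reliable):
    """Vector median restricted to reliable candidates: return the
    reliable vector minimizing the summed distance to all candidates."""
    candidates = np.asarray(candidates, dtype=float)
    pool = candidates[np.asarray(reliable, dtype=bool)]
    costs = [np.sum(np.linalg.norm(candidates - v, axis=1)) for v in pool]
    return pool[int(np.argmin(costs))]

mvs = [(1, 0), (1, 1), (8, -7), (2, 1)]   # one outlier motion vector
ok = [True, True, False, True]            # reliability flags
print(constrained_vector_median(mvs, ok))  # -> [1. 0.]
```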
Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos
2016-01-01
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation
NASA Technical Reports Server (NTRS)
Tessler, A.; Riggs, H. R.; Dambach, M.
1998-01-01
A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from enforcing explicitly a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in the robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.
Nagy, Szilvia; Pipek, János
2015-12-21
In wavelet-based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to compute with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer resolution coefficients in a very economic way. In the present contribution, we determine whether the method can be applied to predicting not only the first but also the higher resolution level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine resolution limit.
Toward Automatic Verification of Goal-Oriented Flow Simulations
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2014-01-01
We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
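The adjoint-weighted residual estimate at the heart of this approach can be written, up to sign conventions, as

```latex
J(Q) - J(Q_H) \;\approx\; \psi_h^{\mathsf{T}}\, R_h(Q_H),
```

where Q_H is the coarse-space solution, R_h its residual evaluated in an embedded finer space, and ψ_h the discrete adjoint associated with the output J; cells with large weighted residual contributions are flagged for refinement.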
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three-year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
First Prismatic Building Model Reconstruction from Tomosar Point Clouds
NASA Astrophysics Data System (ADS)
Sun, Y.; Shahzad, M.; Zhu, X.
2016-06-01
This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed by Dabov et al. (2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using the Tomo-GENESIS software developed at DLR.
Effect of biogenic fermentation impurities on lactic acid hydrogenation to propylene glycol.
Zhang, Zhigang; Jackson, James E; Miller, Dennis J
2008-09-01
The effect of residual impurities from glucose fermentation to lactic acid (LA) on subsequent ruthenium-catalyzed hydrogenation of LA to propylene glycol (PG) is examined. Whereas refined LA feed exhibits stable conversion to PG over carbon-supported ruthenium catalyst in a trickle bed reactor, partially refined LA from fermentation shows a steep decline in PG production over short (<40 h) reaction times followed by a further slow decay in performance. Addition of model impurities to refined LA has varying effects: organic acids, sugars, or inorganic salts have little effect on conversion; alanine, a model amino acid, results in a strong but reversible decline in conversion via competitive adsorption between alanine and LA on the Ru surface. The sulfur-containing amino acids cysteine and methionine irreversibly poison the catalyst for LA conversion. Addition of 0.1 wt% albumin as a model protein leads to slow decline in rate, consistent with pore plugging or combined pore plugging and poisoning of the Ru surface. This study points to the need for integrated design and operation of biological processes and chemical processes in the biorefinery in order to make efficient conversion schemes viable.
MODIS Solar Calibration Simulation Assisted Refinement
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Xiaoxiong, Xiong; Guenther, Bruce; Barnes, William; Moyer, David; Salomonson, Vincent V.
2004-01-01
A detailed optical radiometric model has been created of the MODIS instrument's solar calibration process. This model takes into account the orientation and distance of the spacecraft with respect to the sun, the correlated motions of the scan mirror and the sun, all of the optical elements, the detector locations on the visible and near-IR focal planes, the solar diffuser, and the attenuation screen with all of its hundreds of pinholes. An efficient computational scheme takes all of these factors into account and has produced results which reproduce the observed time-dependent intensity variations on the two focal planes with considerable fidelity. This agreement between predictions and observations has given insight into the causes of some small time-dependent variations and how to incorporate them into the overall calibration scheme. The radiometric model is described, and modeled and actual measurements are presented and compared.
NASA Technical Reports Server (NTRS)
Sjogreen, Bjoern; Yee, H. C.
2007-01-01
Flows containing steady or nearly steady strong shocks in parts of the flow field, and unsteady turbulence with shocklets in other parts, are difficult to capture accurately and efficiently employing the same numerical scheme, even under a multiblock grid or adaptive grid refinement framework. On one hand, sixth-order or higher shock-capturing methods are appropriate for unsteady turbulence with shocklets. On the other hand, lower-order shock-capturing methods are more effective for strong steady shocks in terms of convergence. In order to minimize the shortcomings of low-order and high-order shock-capturing schemes for the subject flows, a multi-block overlapping grid with different orders of accuracy on different blocks is proposed. Test cases to illustrate the performance of the new solver are included.
Leap-dynamics: efficient sampling of conformational space of proteins and peptides in solution.
Kleinjung, J; Bayley, P; Fraternali, F
2000-03-31
A molecular simulation scheme, called Leap-dynamics, that provides efficient sampling of protein conformational space in solution is presented. The scheme is a combined approach using a fast sampling method, imposing conformational 'leaps' to force the system over energy barriers, and molecular dynamics (MD) for refinement. The presence of solvent is approximated by a potential of mean force depending on the solvent-accessible surface area. The method has been successfully applied to N-acetyl-L-alanine-N-methylamide (alanine dipeptide), sampling experimentally observed conformations inaccessible to MD alone under the chosen conditions. The method correctly predicts the increased partial flexibility of the mutant Y35G compared to native bovine pancreatic trypsin inhibitor. In particular, the improvement over MD consists of the detection of conformational flexibility that corresponds closely to slow motions identified by nuclear magnetic resonance techniques.
Two-dimensional mesh embedding for Galerkin B-spline methods
NASA Technical Reports Server (NTRS)
Shariff, Karim; Moser, Robert D.
1995-01-01
A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.
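As minimal background, B-spline basis functions on a clamped knot vector can be evaluated with SciPy and properties such as the partition of unity checked numerically; this is generic B-spline machinery, not the embedding construction of the report.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # cubic B-splines
breaks = np.linspace(0.0, 1.0, 9)        # uniform breakpoints
t = np.r_[[0.0] * k, breaks, [1.0] * k]  # clamped (open) knot vector
n = len(t) - k - 1                       # number of basis functions

# Each basis function is a B-spline with a unit coefficient vector.
x = np.linspace(0.0, 1.0, 201)
B = np.array([BSpline(t, np.eye(n)[i], k)(x) for i in range(n)])
print(np.allclose(B.sum(axis=0), 1.0))   # partition of unity -> True
```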
One-loop corrections to light cone wave functions: The dipole picture DIS cross section
NASA Astrophysics Data System (ADS)
Hänninen, H.; Lappi, T.; Paatelainen, R.
2018-06-01
We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with d-dimensional tensors arising from loop integrals, in a way that can be fully automatized. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading order accuracy. Our results, obtained using the four dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.
Thompson, Bryony A; Spurdle, Amanda B; Plazzer, John-Paul; Greenblatt, Marc S; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P; Farrington, Susan M; Frayling, Ian M; Frebourg, Thierry; Goldgar, David E; Heinen, Christopher D; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J; Sijmons, Rolf; Tavtigian, Sean V; Tops, Carli M; Weber, Thomas; Wijnen, Juul; Woods, Michael O; Macrae, Finlay; Genuardi, Maurizio
2014-02-01
The clinical classification of hereditary sequence variants identified in disease-related genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch syndrome-associated genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist in variant classification and was recognized through microattribution. The scheme was refined by multidisciplinary expert committee review of the clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants that were not obviously protein truncating from nomenclature. This large-scale endeavor will facilitate the consistent management of families suspected to have Lynch syndrome and demonstrates the value of multidisciplinary collaboration in the curation and classification of variants in public locus-specific databases.
Mixed finite-difference scheme for analysis of simply supported thick plates.
NASA Technical Reports Server (NTRS)
Noor, A. K.
1973-01-01
A mixed finite-difference scheme is presented for the stress and free vibration analysis of simply supported nonhomogeneous and layered orthotropic thick plates. The analytical formulation is based on the linear, three-dimensional theory of orthotropic elasticity and a Fourier approach is used to reduce the governing equations to six first-order ordinary differential equations in the thickness coordinate. The governing equations possess a symmetric coefficient matrix and are free of derivatives of the elastic characteristics of the plate. In the finite difference discretization two interlacing grids are used for the different fundamental unknowns in such a way as to reduce both the local discretization error and the bandwidth of the resulting finite-difference field equations. Numerical studies are presented for the effects of reducing the interior and boundary discretization errors and of mesh refinement on the accuracy and convergence of solutions. It is shown that the proposed scheme, in addition to a number of other advantages, leads to highly accurate results, even when a small number of finite difference intervals is used.
NASA Astrophysics Data System (ADS)
Minotti, Luca; Savaré, Giuseppe
2018-02-01
We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As in the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
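The incremental scheme described above is easy to prototype. The following sketch applies a viscously corrected time Incremental Minimization Scheme to a toy 1D rate-independent system with a tilted double-well energy; the energy E, the distance d(x, y) = |x - y|, the quadratic correction mu*d^2, and all parameter values are illustrative choices, not taken from the paper.

```python
# Minimal sketch of a viscously corrected time Incremental Minimization Scheme
# (assumptions: 1D state space, d(x, y) = |x - y|, viscous correction mu*d^2;
# the energy E and all parameter values are illustrative, not from the paper).
import numpy as np
from scipy.optimize import minimize_scalar

def E(t, x):
    # Time-dependent double-well energy: the tilt -t*x moves the global minimum.
    return (x**2 - 1.0)**2 - t * x

def incremental_minimization(x0, times, mu=0.5):
    """At each step minimize E(t_k, x) + d(x, x_prev) + mu * d(x, x_prev)**2."""
    xs = [x0]
    for t in times[1:]:
        xp = xs[-1]
        res = minimize_scalar(
            lambda x: E(t, x) + abs(x - xp) + mu * (x - xp)**2,
            bounds=(-3.0, 3.0), method="bounded")
        xs.append(res.x)
    return np.array(xs)

times = np.linspace(0.0, 2.0, 81)
path = incremental_minimization(-1.0, times)
print("state before/after the jump:", path[0], path[-1])
```

With mu > 0, large jumps are penalized, so the computed transition follows the localized stability condition instead of jumping as early as a purely energetic scheme would.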
Adaptive CFD schemes for aerospace propulsion
NASA Astrophysics Data System (ADS)
Ferrero, A.; Larocca, F.
2017-05-01
The flow fields observed inside several components of aerospace propulsion systems are characterised by very localised phenomena (boundary layers, shock waves, ...) which can deeply influence the performance of the system. In order to evaluate these effects accurately by means of Computational Fluid Dynamics (CFD) simulations, it is necessary to locally refine the computational mesh. In this way the degrees of freedom related to the discretisation are focused in the most interesting regions and the computational cost of the simulation remains acceptable. In the present work, a discontinuous Galerkin (DG) discretisation is used to numerically solve the equations which describe the flow field. The local nature of the DG reconstruction makes it possible to efficiently exploit several adaptive schemes in which the size of the elements (h-adaptivity) and the order of reconstruction (p-adaptivity) are locally changed. After a review of the main adaptation criteria, some examples related to compressible flows in turbomachinery are presented. A hybrid hp-adaptive algorithm is also proposed and compared with a standard h-adaptive scheme in terms of computational efficiency.
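As a concrete illustration of the mark-and-refine loop that underlies h-adaptivity, the sketch below refines a 1D mesh around a sharp internal layer using a simple interpolation-error indicator. In a DG solver the indicator would instead come from residual or adjoint estimates; the function f, the marking fraction, and the cycle count here are illustrative.

```python
# Generic h-adaptation sketch: mark cells whose error indicator exceeds a
# fraction of the maximum, then bisect them. The indicator is a simple
# second-difference proxy on a model function with a sharp internal layer.
import numpy as np

f = lambda x: np.tanh(50.0 * (x - 0.5))        # sharp internal layer

def adapt(nodes, frac=0.3, cycles=6):
    for _ in range(cycles):
        mid = 0.5 * (nodes[:-1] + nodes[1:])
        # Indicator: deviation of f at the midpoint from linear interpolation.
        eta = np.abs(f(mid) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        marked = eta > frac * eta.max()         # maximum-strategy marking
        nodes = np.sort(np.concatenate([nodes, mid[marked]]))  # bisect marked cells
    return nodes

nodes = adapt(np.linspace(0.0, 1.0, 11))
print(len(nodes), "nodes; smallest cell:", np.min(np.diff(nodes)))
```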
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.
A survey of particle contamination in electronic devices
NASA Technical Reports Server (NTRS)
Adolphsen, J. W.; Kagdis, W. A.; Timmins, A. R.
1976-01-01
The experiences of a number of National Aeronautics and Space Administration (NASA) and Space and Missile System Organization (SAMSO) contractors with particle contamination are surveyed, together with the methods used for its prevention and detection. The survey evaluates the bases for the different schemes, assesses their effectiveness, and identifies the problems associated with each. It recommends specific short-range tests or approaches appropriate to individual part-type categories and recommends that specific tasks be initiated to refine techniques and to resolve technical and application facets of promising solutions.
Pricing and simulation for real estate index options: Radial basis point interpolation
NASA Astrophysics Data System (ADS)
Gong, Pu; Zou, Dong; Wang, Jiayue
2018-06-01
This study employs the meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on real estate index. This method combines radial and polynomial basis functions, which can guarantee the interpolation scheme with Kronecker property and effectively improve accuracy. An exponential change of variables, a mesh refinement algorithm and the Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.
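A hedged 1D sketch of radial basis point interpolation: augmenting multiquadric radial functions with polynomial terms yields an interpolant that reproduces the nodal values exactly (the Kronecker property mentioned above). The shape parameter c and the payoff-like test function are illustrative, not from the paper.

```python
# Hedged 1D sketch of radial basis point interpolation (RBPI): radial basis
# functions augmented with polynomial terms so the interpolant passes exactly
# through the nodes (Kronecker property). c and the test data are illustrative.
import numpy as np

def rbpi_weights(x_nodes, f_nodes, c=0.5):
    n = len(x_nodes)
    R = np.sqrt((x_nodes[:, None] - x_nodes[None, :])**2 + c**2)  # multiquadric
    P = np.column_stack([np.ones(n), x_nodes])                    # linear polynomial
    A = np.block([[R, P], [P.T, np.zeros((2, 2))]])
    rhs = np.concatenate([f_nodes, np.zeros(2)])
    return np.linalg.solve(A, rhs)

def rbpi_eval(x, x_nodes, w, c=0.5):
    R = np.sqrt((x[:, None] - x_nodes[None, :])**2 + c**2)
    P = np.column_stack([np.ones(len(x)), x])
    return R @ w[:len(x_nodes)] + P @ w[len(x_nodes):]

x_nodes = np.linspace(0.0, 1.0, 11)
f_nodes = np.maximum(x_nodes - 0.5, 0.0)      # payoff-like kink
w = rbpi_weights(x_nodes, f_nodes)
# Kronecker property: the interpolant matches the nodal values to round-off.
print(np.max(np.abs(rbpi_eval(x_nodes, x_nodes, w) - f_nodes)))
```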
[Reform and practice of teaching methods for culture of medicinal plant].
Si, Jinping; Zhu, Yuqiu; Liu, Jingjing; Bai, Yan; Zhang, Xinfeng
2012-02-01
Culture of medicinal plants is a comprehensive, multi-disciplinary subject with a long history of application. In order to improve the quality of this course, several reform measures have been carried out, including stimulating enthusiasm for learning, refining the basic concepts and theories, promoting case studies, emphasizing the latest achievements, enhancing exercises in the laboratory and at the planting base, and guiding students in scientific and technological innovation. Meanwhile, the authors point out some remaining teaching problems of this course.
NASA Astrophysics Data System (ADS)
Horstmann, Jan Tobias; Le Garrec, Thomas; Mincu, Daniel-Ciprian; Lévêque, Emmanuel
2017-11-01
Despite the efficiency and low dissipation of the stream-collide scheme of the discrete-velocity Boltzmann equation, which is nowadays implemented in many lattice Boltzmann solvers, a major drawback exists compared with alternative discretization schemes (finite-volume or finite-difference), namely the limitation to uniform Cartesian grids. In this paper, an algorithm is presented that combines the positive features of each scheme in a hybrid lattice Boltzmann method. In particular, the node-based streaming of the distribution functions is coupled with a second-order finite-volume discretization of the advection term of the Boltzmann equation under the Bhatnagar-Gross-Krook approximation. The algorithm is established on a multi-domain configuration, with the individual schemes being solved on separate sub-domains and connected by an overlapping interface of at least 2 grid cells. A critical parameter in the coupling is the CFL number equal to unity, which is imposed by the stream-collide algorithm. Nevertheless, a semi-implicit treatment of the collision term in the finite-volume formulation allows us to obtain a stable solution for this condition. The algorithm is validated in the scope of three different test cases on a 2D periodic mesh. It is shown that the accuracy of the combined discretization schemes agrees with the order of each separate scheme involved. The overall numerical error of the hybrid algorithm in the macroscopic quantities is contained between the errors of the two individual algorithms. Finally, we demonstrate how such a coupling can be used to adapt to anisotropic flows with gradual mesh refinement in the FV domain.
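For reference, the stream-collide side of such a hybrid can be written in a few lines. The sketch below advances a D2Q9 lattice Boltzmann population with BGK collision and exact one-cell streaming (which is what fixes CFL = 1); the grid size, relaxation time tau, and initial shear-wave condition are illustrative, and the finite-volume coupling is not shown.

```python
# Minimal D2Q9 stream-collide BGK step on a periodic Cartesian grid. CFL = 1
# by construction: populations hop exactly one cell per step.
import numpy as np

e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    eu = e[:, 0, None, None]*ux + e[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

def stream_collide(f, tau=0.8):
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i in range(9):                                 # exact one-cell streaming
        f[i] = np.roll(f[i], shift=(e[i, 0], e[i, 1]), axis=(0, 1))
    return f

nx = ny = 64
rho0 = np.ones((nx, ny)); ux0 = np.zeros((nx, ny))
uy0 = 0.01*np.sin(2*np.pi*np.arange(nx)/nx)[:, None] * np.ones((1, ny))
f = equilibrium(rho0, ux0, uy0)
for _ in range(100):
    f = stream_collide(f)
print("mass conserved:", np.isclose(f.sum(), nx*ny))
```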
NASA Astrophysics Data System (ADS)
Syrakos, Alexandros; Varchanis, Stylianos; Dimakopoulos, Yannis; Goulas, Apostolos; Tsamopoulos, John
2017-12-01
Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the gradient operator has received less attention despite its fundamental importance with regards to the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or Green-Gauss) scheme and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively. This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes are then used within a FVM to solve a simple diffusion equation on unstructured grids generated by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement. On the other hand, use of the LS gradient leads to second-order accurate results, as does the use of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the common DT gradient consistent at almost no extra cost. The numerical tests are performed using both an in-house code and the popular public domain partial differential equation solver OpenFOAM.
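The least-squares gradient the authors analyse can be sketched compactly: fit the gradient so that the linear expansion phi_c + grad.(x_n - x_c) best matches the neighbour values in the least-squares sense. The neighbour layout and test field below are illustrative; the example recovers a linear field exactly, consistent with the second-order behaviour reported above.

```python
# Hedged sketch of the least-squares (LS) cell gradient on an unstructured
# mesh: solve the (here unweighted) normal equations d @ g ~ b over neighbours.
import numpy as np

def ls_gradient(xc, phic, x_nbrs, phi_nbrs):
    d = x_nbrs - xc                  # displacements to neighbour centroids
    b = phi_nbrs - phic              # corresponding field differences
    g, *_ = np.linalg.lstsq(d, b, rcond=None)
    return g

rng = np.random.default_rng(0)
xc = np.array([0.3, 0.4])
x_nbrs = xc + 0.1*rng.standard_normal((6, 2))     # irregular neighbour layout
phi = lambda x: 2.0*x[..., 0] - 3.0*x[..., 1]     # exact linear field
g = ls_gradient(xc, phi(xc), x_nbrs, phi(x_nbrs))
print(g)   # recovers [2, -3] exactly for a linear field
```

Exactness for linear fields is what makes the LS gradient consistent on arbitrary cell arrangements, whereas the common Green-Gauss variant loses this property on general unstructured meshes.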
Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid
2016-01-01
In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE differential equations into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean-square sense and is formulated as a minimization problem. Optimization of the parameters of the system is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices over a sufficiently large number of independent runs.
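A hedged sketch of the overall strategy: discretize the Thomas-Fermi equation y'' = y^(3/2)/sqrt(x) with central differences, define the mean-square residual as the fitness, run a global evolutionary search, and refine with SQP. SciPy's differential_evolution is used as a stand-in for the genetic algorithm; the grid size, domain truncation, and iteration budgets are illustrative.

```python
# Residual-minimization solve of the Thomas-Fermi equation, mimicking the
# global-search-plus-SQP strategy described above (differential_evolution is
# a GA-like stand-in; all sizes and budgets are illustrative).
import numpy as np
from scipy.optimize import differential_evolution, minimize

N, L = 12, 5.0
x = np.linspace(0.0, L, N + 2)[1:-1]       # interior nodes
h = x[1] - x[0]

def fitness(y):
    yf = np.concatenate([[1.0], y, [0.0]])           # y(0)=1, y(L)~0
    d2 = (yf[2:] - 2.0*yf[1:-1] + yf[:-2]) / h**2    # central differences
    res = d2 - np.clip(yf[1:-1], 0.0, None)**1.5 / np.sqrt(x)
    return np.mean(res**2)                            # mean-square residual

bounds = [(0.0, 1.0)] * N
coarse = differential_evolution(fitness, bounds, maxiter=100, seed=1)
refined = minimize(fitness, coarse.x, method="SLSQP", bounds=bounds)
print("fitness after global search:", coarse.fun, "after SQP:", refined.fun)
```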
Cornford, S. L.; Martin, D. F.; Lee, V.; ...
2016-05-13
At least in conventional hydrostatic ice-sheet models, the numerical error associated with grounding line dynamics can be reduced by modifications to the discretization scheme. These involve altering the integration formulae for the basal traction and/or driving stress close to the grounding line and exhibit lower (if still first-order) error in the MISMIP3d experiments. MISMIP3d may not represent the variety of real ice streams, in that it lacks strong lateral stresses and imposes a large basal traction at the grounding line. We study resolution sensitivity in the context of extreme forcing simulations of the entire Antarctic ice sheet, using the BISICLES adaptive mesh ice-sheet model with two schemes: the original treatment, and a scheme which modifies the discretization of the basal traction. The second scheme does indeed improve accuracy, by around a factor of two for a given mesh spacing, but ≲1 km resolution is still necessary. For example, in coarser-resolution simulations Thwaites Glacier retreats so slowly that other ice streams divert its trunk. In contrast, with ≲1 km meshes, the same glacier retreats far more quickly and triggers the final phase of West Antarctic collapse a century before any such diversion can take place.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Ni, Ming-Jiu, E-mail: mjni@ucas.ac.cn
2014-01-01
The numerical simulation of magnetohydrodynamic (MHD) flows with complex boundaries has been a topic of great interest in the development of a fusion reactor blanket, owing to the difficulty of accurately simulating the Hartmann layers and side layers along arbitrary geometries. An adaptive version of a consistent and conservative scheme has been developed for simulating MHD flows. In addition, the present study forms the first attempt to apply the cut-cell approach to irregular wall-bounded MHD flows, which is more flexible and conveniently implemented under the adaptive mesh refinement (AMR) technique. It employs a Volume-of-Fluid (VOF) approach to represent the fluid-conducting wall interface, which makes it possible to solve fluid-solid coupled magnetic problems, with emphasis on how the electric field solver is implemented when the conductivity is discontinuous in a cut-cell. For the irregular cut-cells, a conservative interpolation technique is applied to calculate the Lorentz force at the cell center. It is also shown how the consistent and conservative scheme is implemented at fine/coarse mesh boundaries when using the AMR technique. The applied numerical schemes are validated by five test simulations; excellent agreement was obtained for all the cases considered, which simultaneously showed good consistency and conservation properties.
Student Teachers’ Proof Schemes on Proof Tasks Involving Inequality: Deductive or Inductive?
NASA Astrophysics Data System (ADS)
Rosyidi, A. H.; Kohar, A. W.
2018-01-01
Exploring student teachers' proof abilities is crucial, as it is important for improving the quality of their learning and helps their future students learn how to construct a proof. Hence, this study aims at exploring the proof schemes of student teachers at the beginning of their studies. Data were collected from 130 proofs produced by 65 Indonesian student teachers on two proof tasks involving algebraic inequality. For the analysis, the proofs were classified using the refined proof-scheme levels proposed by Lee (2016), ranging from inductive, which only provides irrelevant inferences, to deductive proofs, which address formal representation. The findings present several examples of each of Lee's levels in the student teachers' proofs, spanning irrelevant inferences, novice use of examples or logical reasoning, strategic use of examples for reasoning, deductive inferences with major and minor logical coherence, and deductive proof with informal and formal representation. It was also found that more than half of the students' proofs were coded as inductive schemes, which do not meet the requirements of the proof tasks examined in this study. This study suggests that teacher educators in teacher colleges reform the curriculum regarding proof learning to support the development of student teachers' proving ability from inductive to deductive proof, as well as from informal to formal proof.
Empirical Laws in Economics Uncovered Using Methods in Statistical Mechanics
NASA Astrophysics Data System (ADS)
Stanley, H. Eugene
2001-06-01
In recent years, statistical physicists and computational physicists have determined that physical systems which consist of a large number of interacting particles obey universal "scaling laws" that serve to demonstrate an intrinsic self-similarity operating in such systems. Further, the parameters appearing in these scaling laws appear to be largely independent of the microscopic details. Since economic systems also consist of a large number of interacting units, it is plausible that scaling theory can be usefully applied to economics. To test this possibility using realistic data sets, a number of scientists have begun analyzing economic data using methods of statistical physics [1]. We have found evidence for scaling (and data collapse), as well as universality, in various quantities, and these recent results will be reviewed in this talk--starting with the most recent study [2]. We also propose models that may lead to some insight into these phenomena. These results will be discussed, as well as the overall rationale for why one might expect scaling principles to hold for complex economic systems. The work on which this talk is based is supported by BP, and was carried out in collaboration with L. A. N. Amaral, S. V. Buldyrev, D. Canning, P. Cizeau, X. Gabaix, P. Gopikrishnan, S. Havlin, Y. Lee, Y. Liu, R. N. Mantegna, K. Matia, M. Meyer, C.-K. Peng, V. Plerou, M. A. Salinger, and M. H. R. Stanley. [1.] See, e.g., R. N. Mantegna and H. E. Stanley, Introduction to Econophysics: Correlations & Complexity in Finance (Cambridge University Press, Cambridge, 1999). [2.] P. Gopikrishnan, B. Rosenow, V. Plerou, and H. E. Stanley, "Identifying Business Sectors from Stock Price Fluctuations," e-print cond-mat/0011145; V. Plerou, P. Gopikrishnan, L. A. N. Amaral, X. Gabaix, and H. E. Stanley, "Diffusion and Economic Fluctuations," Phys. Rev. E (Rapid Communications) 62, 3023-3026 (2000); P. Gopikrishnan, V. Plerou, X. Gabaix, and H. E. Stanley, "Statistical Properties of Share Volume Traded in Financial Markets," Phys. Rev. E (Rapid Communications) 62, 4493-4496 (2000).
Anti-atherosclerotic therapy based on botanicals.
Orekhov, Alexander N; Sobenin, Igor A; Korneev, Nikolay V; Kirichenko, Tatyana V; Myasoedova, Veronika A; Melnichenko, Alexandra A; Balcells, Mercedes; Edelman, Elazer R; Bobryshev, Yuri V
2013-04-01
Natural products, including botanicals, for both therapy of clinical manifestations of atherosclerosis and reduction of atherosclerosis risk factors are topics of recent patents. Only a few recent patents are relevant to direct anti-atherosclerotic therapy leading to regression of atherosclerotic lesions. Earlier, using a cellular model, we developed and patented several anti-atherosclerotic drugs. The AMAR (Atherosclerosis Monitoring and Atherogenicity Reduction) study was designed to estimate the effect of two-year treatment with the time-released garlic-based drug Allicor on the progression of carotid atherosclerosis in 196 asymptomatic men aged 40-74 in a double-blinded placebo-controlled randomized clinical study. The primary outcome was the rate of atherosclerosis progression, measured by high-resolution B-mode ultrasonography as the increase in carotid intima-media thickness (IMT) of the far wall of the common carotid arteries. The mean rate of IMT change in the Allicor-treated group (-0.022±0.007 mm per year) was significantly different (P = 0.002) from that in the placebo group, in which there was a moderate progression of 0.015±0.008 mm at the overall mean baseline IMT of 0.931±0.009 mm. A significant correlation was found between the changes in blood serum atherogenicity (the ability of serum to induce cholesterol accumulation in cultured cells) during the study and the changes in intima-media thickness of the common carotid arteries (r = 0.144, P = 0.045). Thus, the results of the AMAR study demonstrate that long-term treatment with Allicor has a direct anti-atherosclerotic effect on carotid atherosclerosis, and this effect is likely due to inhibition of serum atherogenicity. The beneficial effects of other botanicals, including Inflaminat (calendula, elder and violet) and the phytoestrogen-rich Karinat (garlic powder, extract of grape seeds, green tea leaves, hop cones, β-carotene, α-tocopherol and ascorbic acid), on atherosclerosis have also been revealed in clinical studies, which reinforces the view that botanicals might represent promising drugs for anti-atherosclerotic therapy.
Aerosol Complexity and Implications for Predictability and Short-Term Forecasting
NASA Technical Reports Server (NTRS)
Colarco, Peter
2016-01-01
There are clear NWP and climate impacts from including aerosol radiative and cloud interactions. Changes in dynamics and cloud fields affect the aerosol lifecycle, plume height, long-range transport, overall forcing of the climate system, etc. Inclusion of aerosols in NWP systems benefits surface field biases (e.g., T2m, U10m). Including aerosol effects has an impact on analysis increments and can have statistically significant impacts on, e.g., tropical cyclogenesis. These points hold especially for aerosol radiative interactions, but aerosol-cloud interaction is a bigger signal on the global system. Many of these impacts are realized even in models with relatively simple (bulk) aerosol schemes (approx. 10-20 tracers). Simple schemes, though, imply a simple representation of aerosol absorption and, importantly for aerosol-cloud interaction, of the particle-size distribution. Even so, more complex schemes exhibit a lot of diversity between different models, with issues such as size selection both for emitted particles and for modes. There are prospects for complex sectional schemes to tune modal (and even bulk) schemes toward better selection of size representation. This is a ripe topic for more research: systematic documentation of the benefits of no vs. climatological vs. interactive (direct, then direct+indirect) aerosols; documentation of the aerosol impact on analysis increments and its inclusion in the NWP data assimilation operator; and further refinement of baseline assumptions in model design (e.g., absorption, particle-size distribution). Not covered here are model resolution and the interplay of other physical processes with aerosols (e.g., moist physics, obviously important) and chemistry.
Low-energy effective Hamiltonians for correlated electron systems beyond density functional theory
NASA Astrophysics Data System (ADS)
Hirayama, Motoaki; Miyake, Takashi; Imada, Masatoshi; Biermann, Silke
2017-08-01
We propose a refined scheme of deriving an effective low-energy Hamiltonian for materials with strong electronic Coulomb correlations beyond density functional theory (DFT). By tracing out the electronic states away from the target degrees of freedom in a controlled way by a perturbative scheme, we construct an effective Hamiltonian for a restricted low-energy target space incorporating the effects of high-energy degrees of freedom in an effective manner. The resulting effective Hamiltonian can afterwards be solved by accurate many-body solvers. We improve this "multiscale ab initio scheme for correlated electrons" (MACE) primarily in two directions by elaborating and combining two frameworks developed by Hirayama et al. [M. Hirayama, T. Miyake, and M. Imada, Phys. Rev. B 87, 195144 (2013), 10.1103/PhysRevB.87.195144] and Casula et al. [M. Casula, P. Werner, L. Vaugier, F. Aryasetiawan, T. Miyake, A. J. Millis, and S. Biermann, Phys. Rev. Lett. 109, 126408 (2012), 10.1103/PhysRevLett.109.126408]: (1) Double counting of electronic correlations between the DFT and the low-energy solver is avoided by using the constrained GW scheme; and (2) the frequency dependent interactions emerging from the partial trace summation are successfully separated into a nonlocal part that is treated following ideas by Hirayama et al. and a local part treated nonperturbatively in the spirit of Casula et al. and are incorporated into the renormalization of the low-energy dispersion. The scheme is favorably tested on the example of SrVO3.
Fully automated MR liver volumetry using watershed segmentation coupled with active contouring.
Huynh, Hieu Trung; Le-Trong, Ngoc; Bao, Pham The; Oto, Aytek; Suzuki, Kenji
2017-02-01
Our purpose is to develop a fully automated scheme for liver volume measurement in abdominal MR images, without requiring any user input or interaction. The proposed scheme is fully automatic for liver volumetry from 3D abdominal MR images, and it consists of three main stages: preprocessing, rough liver shape generation, and liver extraction. The preprocessing stage reduced noise and enhanced the liver boundaries in 3D abdominal MR images. The rough liver shape was revealed fully automatically by using the watershed segmentation, thresholding transform, morphological operations, and statistical properties of the liver. An active contour model was applied to refine the rough liver shape to precisely obtain the liver boundaries. The liver volumes calculated by the proposed scheme were compared to the "gold standard" references which were estimated by an expert abdominal radiologist. The liver volumes computed by using our developed scheme excellently agreed (Intra-class correlation coefficient was 0.94) with the "gold standard" manual volumes by the radiologist in the evaluation with 27 cases from multiple medical centers. The running time was 8.4 min per case on average. We developed a fully automated liver volumetry scheme in MR, which does not require any interaction by users. It was evaluated with cases from multiple medical centers. The liver volumetry performance of our developed system was comparable to that of the gold standard manual volumetry, and it saved radiologists' time for manual liver volumetry of 24.7 min per case.
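A toy 2D analogue of the pipeline (rough shape from thresholding and morphology, watershed to separate touching objects, active contour to refine a boundary) can be assembled from scikit-image primitives, as sketched below. The synthetic image and every parameter are illustrative and not tuned for abdominal MR data.

```python
# Toy 2D analogue of the segmentation pipeline: threshold + morphology for a
# rough mask, watershed to split touching objects, active contour to refine
# one boundary. Parameters are illustrative, not tuned for MR data.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import binary_opening, disk
from skimage.segmentation import watershed, active_contour

yy, xx = np.mgrid[0:200, 0:200]
img = ((xx - 80)**2 + (yy - 100)**2 < 40**2).astype(float)
img += ((xx - 135)**2 + (yy - 100)**2 < 35**2)      # two touching objects
img = gaussian(np.clip(img, 0, 1), 2) + 0.05*np.random.default_rng(0).random(img.shape)

mask = binary_opening(img > threshold_otsu(img), disk(3))   # rough shape
dist = ndi.distance_transform_edt(mask)
coords = peak_local_max(dist, min_distance=20, num_peaks=2) # object centers
markers = np.zeros_like(dist, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-dist, markers, mask=mask)               # split the objects

s = np.linspace(0, 2*np.pi, 200)
init = np.column_stack([100 + 55*np.sin(s), 80 + 55*np.cos(s)])  # (row, col) circle
snake = active_contour(gaussian(img, 3), init, alpha=0.015, beta=10, gamma=0.001)
print("watershed objects:", labels.max(), "snake points:", snake.shape)
```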
2010-01-01
Background: The reconstruction of protein complexes from the physical interactome of organisms serves as a building block towards understanding the higher-level organization of the cell. Over the past few years, several independent high-throughput experiments have helped to catalogue an enormous amount of physical protein interaction data from organisms such as yeast. However, these individual datasets show a lack of correlation with each other and also contain a substantial number of false positives (noise). Over these years, several affinity scoring schemes have also been devised to improve the quality of these datasets. Therefore, the challenge now is to detect meaningful as well as novel complexes from protein interaction (PPI) networks derived by combining datasets from multiple sources and by making use of these affinity scoring schemes. In the attempt towards tackling this challenge, the Markov Clustering algorithm (MCL) has proved to be a popular and reasonably successful method, mainly due to its scalability, robustness, and ability to work on scored (weighted) networks. However, MCL produces many noisy clusters, which either do not match known complexes or have additional proteins that reduce the accuracies of correctly predicted complexes. Results: Inspired by recent experimental observations by Gavin and colleagues on the modularity structure in yeast complexes and the distinctive properties of "core" and "attachment" proteins, we develop a core-attachment based refinement method coupled to MCL for the reconstruction of yeast complexes from scored (weighted) PPI networks. We combine physical interactions from two recent "pull-down" experiments to generate an unscored PPI network. We then score this network using available affinity scoring schemes to generate multiple scored PPI networks. The evaluation of our method (called MCL-CAw) on these networks shows that: (i) MCL-CAw derives a larger number of yeast complexes and with better accuracies than MCL, particularly in the presence of natural noise; (ii) affinity scoring can effectively reduce the impact of noise on MCL-CAw and thereby improve the quality (precision and recall) of its predicted complexes; (iii) MCL-CAw responds well to most available scoring schemes. We discuss several instances where MCL-CAw was successful in deriving meaningful complexes, and where it missed a few proteins or whole complexes due to affinity scoring of the networks. We compare MCL-CAw with several recent complex detection algorithms on unscored and scored networks, and assess the relative performance of the algorithms on these networks. Further, we study the impact of augmenting physical datasets with computationally inferred interactions for complex detection. Finally, we analyse the essentiality of proteins within predicted complexes to understand a possible correlation between protein essentiality and their ability to form complexes. Conclusions: We demonstrate that core-attachment based refinement in MCL-CAw improves the predictions of MCL on yeast PPI networks. We show that affinity scoring improves the performance of MCL-CAw. PMID:20939868
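For orientation, the MCL core that MCL-CAw builds on alternates expansion (matrix powering) and inflation (elementwise powering with column renormalization) on a column-stochastic matrix until it stabilizes. The sketch below runs it on a toy weighted graph of two cliques; the graph, parameters, and cluster-extraction heuristic are illustrative, and the core-attachment refinement itself is not shown.

```python
# Minimal Markov Clustering (MCL) sketch on a small weighted PPI-like graph.
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50, selfloop=1.0):
    M = adj + selfloop * np.eye(len(adj))   # self-loops aid convergence
    M = M / M.sum(axis=0)                   # make column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)   # expansion: flow spreads
        M = M ** inflation                         # inflation: strong flows win
        M = M / M.sum(axis=0)
    # Rows with remaining mass are "attractors"; their supports are clusters.
    return {tuple(map(int, np.nonzero(row > 1e-6)[0]))
            for row in M if row.sum() > 1e-6}

# Two weighted 3-cliques joined by one weak edge.
A = np.zeros((6, 6))
for i, j, wgt in [(0,1,1),(0,2,1),(1,2,1),(3,4,1),(3,5,1),(4,5,1),(2,3,0.1)]:
    A[i, j] = A[j, i] = wgt
print(mcl(A))   # expected: {(0, 1, 2), (3, 4, 5)}
```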
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
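The time-marching idea (modal synthesis plus explicit central differencing) can be sketched on a handful of decoupled modal equations q'' + 2*zeta*omega*q' + omega^2*q = f(t); the modal frequencies, damping, forcing, and step size below are illustrative, not engine data.

```python
# Explicit central-difference integration of decoupled modal equations:
# q_{n+1} = 2*q_n - q_{n-1} + dt^2 * q''_n, with a backward-difference velocity.
import numpy as np

omega = np.array([50.0, 120.0, 310.0])      # modal frequencies (rad/s), illustrative
zeta = np.array([0.02, 0.02, 0.02])         # modal damping ratios
dt = 1e-4                                   # resolves the highest mode (dt << 2/omega_max)
steps = 20000

q_prev = np.zeros(3); q = np.zeros(3)
for n in range(steps):
    t = n * dt
    f = np.array([1.0, 0.5, 0.2]) * (t > 0.01)     # suddenly applied modal load
    qdot = (q - q_prev) / dt                       # backward-difference velocity
    qddot = f - 2*zeta*omega*qdot - omega**2 * q
    q_next = 2*q - q_prev + dt**2 * qddot
    q_prev, q = q, q_next

# After the transients decay, the response approaches the static value f/omega^2.
print(q, np.array([1.0, 0.5, 0.2]) / omega**2)
```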
A Least-Squares Finite Element Method for Electromagnetic Scattering Problems
NASA Technical Reports Server (NTRS)
Wu, Jie; Jiang, Bo-nan
1996-01-01
The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement of the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.
Refining and end use study of coal liquids II - linear programming analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowe, C.; Tam, S.
1995-12-31
A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes; this work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
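To make the linear programming part concrete, the toy model below chooses crude and coal-liquid throughputs to maximize margin subject to capacity rows, using scipy.optimize.linprog. All prices, yields, and limits are invented for illustration; a real refinery LP has thousands of rows and columns.

```python
# Illustrative refinery LP: maximize product margin subject to capacity rows.
from scipy.optimize import linprog

# Variables: barrels/day of [petroleum crude, coal liquid].
margin = [-4.0, -7.0]          # negated margins: linprog minimizes
A_ub = [[1.0, 1.0],            # total crude unit capacity
        [0.2, 0.5]]            # upgrading capacity consumed per barrel
b_ub = [100_000, 30_000]
bounds = [(0, None), (0, 20_000)]   # limited coal-liquid supply

res = linprog(margin, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
crude, coal = res.x
print(f"crude {crude:.0f} bbl/d, coal liquid {coal:.0f} bbl/d, "
      f"margin ${-res.fun:,.0f}/day")
```

The marginal value of the coal liquid quoted above corresponds, in LP terms, to the difference in objective improvement per barrel between the two columns at the optimum.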
NASA Astrophysics Data System (ADS)
Cervone, A.; Manservisi, S.; Scardovelli, R.
2010-09-01
A multilevel VOF approach has been coupled to an accurate finite element Navier-Stokes solver in axisymmetric geometry for the simulation of incompressible liquid jets with high density ratios. The representation of the color function over a fine grid has been introduced to reduce the discontinuity of the interface at the cell boundary. In the refined grid the automatic breakup and coalescence occur at a spatial scale much smaller than the coarse grid spacing. To reduce memory requirements, we have implemented on the fine grid a compact storage scheme which memorizes the color function data only in the mixed cells. The capillary force is computed by using the Laplace-Beltrami operator and a volumetric approach for the two principal curvatures. Several simulations of axisymmetric jets have been performed to show the accuracy and robustness of the proposed scheme.
NASA Technical Reports Server (NTRS)
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
Brown, Kate L; Crowe, Sonya; Pagel, Christina; Bull, Catherine; Muthialu, Nagarajan; Gibbs, John; Cunningham, David; Utley, Martin; Tsang, Victor T; Franklin, Rodney
2013-08-01
Objective: to categorise records according to primary cardiac diagnosis in the United Kingdom Central Cardiac Audit Database, in order to add this information to a risk adjustment model for paediatric cardiac surgery. Methods: codes from the International Paediatric Congenital Cardiac Code were mapped to recognisable primary cardiac diagnosis groupings, allocated using a hierarchy, and to less refined diagnosis groups based on the number of functional ventricles and the presence of aortic obstruction. Setting: a national clinical audit database. Patients: children undergoing cardiac interventions; the proportions for each diagnosis scheme are presented for 13,551 first patient surgical episodes since 2004. Results: in Scheme 1, the most prevalent diagnoses nationally were ventricular septal defect (13%), patent ductus arteriosus (10.4%), and tetralogy of Fallot (9.5%). In Scheme 2, the prevalence of a biventricular heart without aortic obstruction was 64.2% and with aortic obstruction 14.1%; the prevalence of a functionally univentricular heart without aortic obstruction was 4.3% and with aortic obstruction 4.7%; the prevalence of an unknown (ambiguous) number of ventricles was 8.4%; and the prevalence of acquired heart disease only was 2.2%. Diagnostic groups added to procedural information: of the 17% of all operations classed as "not a specific procedure", 97.1% had a diagnosis identified in Scheme 1 and 97.2% in Scheme 2. Conclusions: diagnostic information adds to surgical procedural data when the complexity of case mix is analysed in a national database. These diagnostic categorisation schemes may be used for future investigation of the frequency of conditions and evaluation of long-term outcome over a series of procedures.
A PDE Sensitivity Equation Method for Optimal Aerodynamic Design
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1996-01-01
The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
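An ODE-scale analogue of the sensitivity equation method: differentiate the governing equation, rather than the discrete solver, with respect to the design parameter; integrate state and sensitivity together; and hand the resulting gradient to an optimizer. The model u' = -theta*u, the target value, and the tolerances below are illustrative.

```python
# Sensitivity-equation-style gradient for a toy design problem: fit theta so
# that u(T) matches a target, with s = du/dtheta solved alongside the state.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T, u_target = 1.0, 0.5

def objective_and_grad(theta):
    th = theta[0]
    def rhs(t, y):
        u, s = y
        return [-th * u,            # state:       u' = -theta*u
                -u - th * s]        # sensitivity: s' = d(rhs)/d(theta)
    sol = solve_ivp(rhs, [0.0, T], [1.0, 0.0], rtol=1e-10, atol=1e-12)
    uT, sT = sol.y[:, -1]
    J = (uT - u_target)**2
    return J, np.array([2.0 * (uT - u_target) * sT])   # exact continuous gradient

res = minimize(objective_and_grad, x0=[0.2], jac=True)
print("theta* =", res.x[0], "(exact: ln 2 =", np.log(2.0), ")")
```

Note that no mesh (here, time-grid) sensitivities are needed: the gradient comes from an auxiliary equation with the same structure as the forward problem, which is the computational advantage highlighted above.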
Parallel Cartesian grid refinement for 3D complex flow simulations
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2013-11-01
A second order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data-structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need to synchronize the solution between different levels of refinement and the usage of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized by using a novel second order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equation is also discretized with second order accuracy and the high performance Newton-Krylov method is used for integrating it in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables the performance of accurate multi-resolution real-life simulations. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
Electron Beam Melting and Refining of Metals: Computational Modeling and Optimization
Vutova, Katia; Donchev, Veliko
2013-01-01
Computational modeling offers an opportunity for a better understanding and investigation of thermal transfer mechanisms. It can be used for the optimization of the electron beam melting process and for obtaining new materials with improved characteristics that have many applications in the power industry, medicine, instrument engineering, electronics, etc. A time-dependent 3D axisymmetric heat model for the simulation of thermal transfer in metal ingots solidified in a water-cooled crucible during electron beam melting and refining (EBMR) is developed. The model predicts the change in the temperature field in the cast ingot during the interaction of the beam with the material. A modified Pismen-Rekford numerical scheme to discretize the analytical model is developed. The equation systems describing the thermal processes and the main characteristics of the developed numerical method are presented. In order to optimize the technological regimes, different criteria for better refinement and for obtaining dendritic crystal structures are proposed. Analytical problems of mathematical optimization are formulated, discretized and heuristically solved by cluster methods. Using simulation results that are important for practice, suggestions can be made for EBMR technology optimization. The proposed tool is important and useful for studying, controlling and optimizing EBMR process parameters and for improving the quality of the newly produced materials. PMID:28788351
Solving the transport equation with quadratic finite elements: Theory and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, J.M.
1997-12-31
At the 4th Joint Conference on Computational Mathematics, the author presented a paper introducing a new quadratic finite element scheme (QFEM) for solving the transport equation. In the ensuing year the author has obtained considerable experience in the application of this method, including solution of eigenvalue problems, transmission problems, and solution of the adjoint form of the equation as well as the usual forward solution. He will present detailed results, and will also discuss other refinements of his transport codes, particularly for 3-dimensional problems on rectilinear and non-rectilinear grids.
An alternative regionalization scheme for defining nutrient criteria for rivers and streams
Robertson, Dale M.; Saad, David A.; Wieben, Ann M.
2001-01-01
The environmental nutrient zone approach can be applied to specific states or nutrient ecoregions and used to develop criteria as a function of stream type. This approach can also be applied on the basis of environmental characteristics of the watershed alone rather than the general environmental characteristics from the region in which the site is located. The environmental nutrient zone approach will enable states to refine the basic nutrient criteria established by the USEPA by developing attainable criteria given the environmental characteristics where the streams are located.
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. One method to approximate the errors in CFD is Richardson extrapolation, which is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or another uncertainty method to approximate errors.
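For reference, once solutions on three systematically refined grids are available (by interpolation in the unstructured case), Richardson extrapolation yields an observed order and an error estimate, as in the hedged sketch below; the three values and the refinement ratio are illustrative.

```python
# Grid-convergence estimation via Richardson extrapolation from three grids.
# With a constant refinement ratio r, the observed order is
# p = ln((f3 - f2)/(f2 - f1)) / ln r, and f_exact ~ f1 + (f1 - f2)/(r**p - 1).
import math

f1, f2, f3 = 0.97050, 0.96854, 0.96071   # fine, medium, coarse grid results (made up)
r = 2.0                                  # grid refinement ratio

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
f_exact_est = f1 + (f1 - f2) / (r**p - 1.0)         # Richardson estimate
err_fine = abs(f_exact_est - f1)                    # error estimate on fine grid

print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_exact_est:.5f}, fine-grid error ~ {err_fine:.2e}")
```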
Hydrometallurgical methods of recovery of scandium from the wastes of various technologies
NASA Astrophysics Data System (ADS)
Molchanova, T. V.; Akimova, I. D.; Smirnov, K. M.; Krylova, O. K.; Zharova, E. V.
2017-03-01
The recovery of scandium from the wastes of the production of uranium, titanium, iron-vanadium, and alumina is studied. The applied acid schemes of scandium transfer to a solution followed by ion-exchange recovery and extraction concentration of scandium ensure the precipitation of crude scandium oxides containing up to 5% Sc2O3. Scandium oxides of 99.96-99.99% purity are formed after additional refining of these crude oxides according to an extraction technology using a mixture 15% multiradical phosphine oxide or Cyanex-925 + 15% tributyl phosphate in kerosene.
Boundary fitting based segmentation of fluorescence microscopy images
NASA Astrophysics Data System (ADS)
Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
2015-03-01
Segmentation is a fundamental step in quantifying characteristics, such as volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z direction refinement, branch pruning, end point matching, and boundary fitting methods to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active contours based scheme.
An adaptive method for a model of two-phase reactive flow on overlapping grids
NASA Astrophysics Data System (ADS)
Schwendeman, D. W.
2008-11-01
A two-phase model of heterogeneous explosives is handled computationally by a new numerical approach that is a modification of the standard Godunov scheme. The approach generates well-resolved and accurate solutions using adaptive mesh refinement on overlapping grids, and treats rationally the nozzling terms that render the otherwise hyperbolic model incapable of a conservative representation. The evolution and structure of detonation waves for a variety of one and two-dimensional configurations will be discussed with a focus given to problems of detonation diffraction and failure.
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling method, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservatism of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are light-weight enough not to require significant computational time and yield significant reductions in grid size.
On Spurious Numerics in Solving Reactive Equations
NASA Technical Reports Server (NTRS)
Kotov, D. V; Yee, H. C.; Wang, W.; Shu, C.-W.
2013-01-01
The objective of this study is to gain a deeper understanding of the behavior of high order shock-capturing schemes for problems with stiff source terms and discontinuities, and of the corresponding numerical prediction strategies. The studies by Yee et al. (2012) and Wang et al. (2012) focus only on solving the reactive system by the fractional step method using Strang splitting (Strang 1968). It is a common practice by developers in computational physics and engineering simulations to include a cut-off safeguard if densities are outside the permissible range. Here we compare the spurious behavior of the same schemes when solving the fully coupled reactive system without Strang splitting vs. with Strang splitting. The comparison between the two procedures and the effects of a cut-off safeguard are the focus of the present study. The comparison of the performance of these schemes is largely based on the degree to which each method captures the correct location of the reaction front for coarse grids. Here "coarse grids" means the standard mesh density required for accurate simulation of typical non-reacting flows of similar problem setup. It is remarked that, in order to resolve the sharp reaction front, local refinement beyond the standard mesh density is still needed.
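The fractional-step treatment under discussion is easy to reproduce on a model problem. The sketch below advances u_t + a*u_x = k*u*(1 - u) by Strang splitting: a half reaction step (solved exactly for this logistic source), a full first-order upwind advection step, then another half reaction step. The stiffness k, grid, and CFL are illustrative; on coarse grids with large k this setup exhibits the kind of spurious front locations the abstract describes.

```python
# Strang-split advection-reaction model problem: upwind smearing feeds the
# stiff reaction, which pushes smeared values to 1 and can move the numerical
# front at the wrong speed on coarse grids.
import numpy as np

nx, a, k, cfl = 200, 1.0, 100.0, 0.9
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = (x < 0.1).astype(float)               # discontinuous initial front

def react(u, t):                          # exact logistic reaction sub-solve
    return u / (u + (1.0 - u) * np.exp(-k * t))

def advect(u):                            # first-order upwind, periodic
    return u - cfl * (u - np.roll(u, 1))

for _ in range(100):
    u = react(u, 0.5 * dt)                # Strang: half reaction
    u = advect(u)                         #         full advection
    u = react(u, 0.5 * dt)                #         half reaction
front = x[np.argmax(u < 0.5)]             # numerical front location
print(f"front at x = {front:.3f} after t = {100*dt:.3f} (exact: 0.1 + t)")
```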
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorda, Antonius, E-mail: dorda@tugraz.at; Schürrer, Ferdinand, E-mail: ferdinand.schuerrer@tugraz.at
2015-03-01
We present a novel numerical scheme for the deterministic solution of the Wigner transport equation, especially suited to deal with situations in which strong quantum effects are present. The unique feature of the algorithm is the expansion of the Wigner function in local basis functions, similar to finite element or finite volume methods. This procedure yields a discretization of the pseudo-differential operator that conserves the particle density on arbitrarily chosen grids. The high flexibility in refining the grid spacing together with the weighted essentially non-oscillatory (WENO) scheme for the advection term allows for an accurate and well-resolved simulation of the phase space dynamics. A resonant tunneling diode is considered as test case and a detailed convergence study is given by comparing the results to a non-equilibrium Green's functions calculation. The impact of the considered domain size and of the grid spacing is analyzed. The obtained convergence of the results towards a quasi-exact agreement of the steady state Wigner and Green's functions computations demonstrates the accuracy of the scheme, as well as the high flexibility to adjust to different physical situations.
NASA Astrophysics Data System (ADS)
Juneja, A.; Lathrop, D. P.; Sreenivasan, K. R.; Stolovitzky, G.
1994-06-01
A family of schemes is outlined for constructing stochastic fields that are close to turbulence. The fields generated from the more sophisticated versions of these schemes differ little in terms of one-point and two-point statistics from velocity fluctuations in high-Reynolds-number turbulence; we shall designate such fields as synthetic turbulence. All schemes, implemented here in one dimension, consist of the following three ingredients, but differ in various details. First, a simple multiplicative procedure is utilized for generating an intermittent signal which has the same properties as those of the turbulent energy dissipation rate ɛ. Second, the properties of the intermittent signal averaged over an interval of size r are related to those of longitudinal velocity increments Δu(r), evaluated over the same distance r, through a stochastic variable V introduced in the spirit of Kolmogorov's refined similarity hypothesis. The third and final step, which partially resembles a well-known procedure for constructing fractional Brownian motion, consists of suitably combining velocity increments to construct an artificial velocity signal. Various properties of the synthetic turbulence are obtained both analytically and numerically, and found to be in good agreement with measurements made in the atmospheric surface layer. A brief review of some previous models is provided.
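The first ingredient described above, the multiplicative procedure for an intermittent dissipation surrogate, can be illustrated by a binomial cascade. The sketch below is editorial; the weights (0.7, 0.3) are illustrative choices, not values from the paper.

```python
# Binomial multiplicative cascade: an intermittent stand-in for epsilon.
import numpy as np

def cascade(levels, w=(0.7, 0.3), rng=np.random.default_rng(0)):
    eps = np.ones(1)
    for _ in range(levels):
        # each interval splits in two; the multipliers land in random order
        pairs = rng.permuted(np.tile(w, (eps.size, 1)), axis=1)
        eps = (eps[:, None] * pairs).ravel()
    return eps * eps.size / eps.sum()   # normalize to unit mean

eps = cascade(12)                        # 4096-point intermittent signal
print(eps.mean(), eps.max())             # mean ~1, with strong localized peaks
```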
Using adaptive grid in modeling rocket nozzle flow
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1992-01-01
The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and the convergence of the solution of the system of equations depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one of the schemes which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, has to be neither very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it in the computer algorithm itself.
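A minimal sketch of the self-adapting-grid idea follows; it is an editorial 1D illustration, with an arc-length monitor function and node count chosen for the example rather than taken from the paper. Equidistributing the monitor integral moves a fixed number of nodes so they cluster at sharp gradients.

```python
# 1D adaptive grid by equidistribution of an arc-length monitor.
import numpy as np

def equidistribute(x, u, n):
    """Place n nodes so equal increments of the monitor integral fall
    between consecutive nodes."""
    w = np.sqrt(1.0 + np.gradient(u, x) ** 2)     # arc-length monitor
    M = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, M[-1], n)
    return np.interp(targets, M, x)

x = np.linspace(0.0, 1.0, 201)
u = np.tanh(60.0 * (x - 0.4))                     # sharp gradient near x = 0.4
print(np.diff(equidistribute(x, u, 41)).min())    # smallest cells sit at the front
```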
A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars
2016-11-01
We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with a primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives a slower magnetic field growth rate and saturates at equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Similar differences have been observed in adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques and, when coupled to a moving mesh, can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
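Why a vector-potential representation keeps div B at machine precision can be seen in a few lines. The sketch below uses a structured 2D staggered grid for clarity, whereas the paper's scheme is unstructured and moving: face-centered fields derived from any node-centered potential make the discrete divergence a telescoping sum that cancels exactly.

```python
# Discrete div B = 0 by construction from a vector potential (2D sketch).
import numpy as np

rng = np.random.default_rng(1)
Az = rng.standard_normal((65, 65))       # arbitrary node-centered potential
h = 1.0 / 64

Bx = (Az[1:, :] - Az[:-1, :]) / h        # on x-faces: d(Az)/dy
By = -(Az[:, 1:] - Az[:, :-1]) / h       # on y-faces: -d(Az)/dx

div = (Bx[:, 1:] - Bx[:, :-1]) / h + (By[1:, :] - By[:-1, :]) / h
print(np.abs(div).max())                 # zero to round-off, for any Az
```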
Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.
Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca
2017-08-01
Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than 0.60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. The Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Scaling behavior of ground-state energy cluster expansion for linear polyenes
NASA Astrophysics Data System (ADS)
Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.
Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., the high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.
Ghost-gluon vertex in the presence of the Gribov horizon
NASA Astrophysics Data System (ADS)
Mintz, B. W.; Palhares, L. F.; Sorella, S. P.; Pereira, A. D.
2018-02-01
We consider Yang-Mills theories quantized in the Landau gauge in the presence of the Gribov horizon via the refined Gribov-Zwanziger (RGZ) framework. As the restriction of the gauge path integral to the Gribov region is taken into account, the resulting gauge field propagators display a nontrivial infrared behavior, being very close to the ones observed in lattice gauge field theory simulations. In this work, we explore a higher correlation function in the refined Gribov-Zwanziger theory: the ghost-gluon interaction vertex, at one-loop level. We show explicit compatibility with kinematical constraints, as required by the Ward identities of the theory, and obtain analytical expressions in the limit of vanishing gluon momentum. We find that the RGZ results are nontrivial in the infrared regime, being compatible with lattice Yang-Mills simulations in both SU(2) and SU(3), as well as with solutions from Schwinger-Dyson equations in different truncation schemes, Functional Renormalization Group analysis, and the renormalization group-improved Curci-Ferrari model.
Fast High Resolution Volume Carving for 3D Plant Shoot Reconstruction
Scharr, Hanno; Briese, Christoph; Embgenbroich, Patrick; Fischbach, Andreas; Fiorani, Fabio; Müller-Linow, Mark
2017-01-01
Volume carving is a well established method for visual hull reconstruction and has been successfully applied in plant phenotyping, especially for 3D reconstruction of small plants and seeds. When imaging larger plants at still relatively high spatial resolution (≤1 mm), well known implementations become slow or have prohibitively large memory needs. Here we present and evaluate a computationally efficient algorithm for volume carving, allowing, e.g., 3D reconstruction of plant shoots. It combines a well-known multi-grid representation called “Octree” with an efficient image region integration scheme called “Integral image.” Speedup with respect to less efficient octree implementations is about 2 orders of magnitude, due to the introduced refinement strategy “Mark and refine.” Speedup is about a factor of 1.6 compared to a highly optimized GPU implementation using equidistant voxel grids, even without using any parallelization. We demonstrate the application of this method for trait derivation of banana and maize plants. PMID:29033961
Multi-stakeholder perspectives of locally commissioned enhanced optometric services
Baker, H; Harper, R A; Edgar, D F; Lawrenson, J G
2016-01-01
Objectives To explore views of all stakeholders (patients, optometrists, general practitioners (GPs), commissioners and ophthalmologists) regarding the operation of community-based enhanced optometric services. Design Qualitative study using mixed methods (patient satisfaction surveys, semi-structured telephone interviews and optometrist focus groups). Setting A minor eye conditions scheme (MECS) and glaucoma referral refinement scheme (GRRS) provided by accredited community optometrists. Participants 189 patients, 25 community optometrists, 4 glaucoma specialist hospital optometrists (GRRS), 5 ophthalmologists, 6 GPs (MECS), 4 commissioners. Results Overall, 99% (GRRS) and 100% (MECS) of patients were satisfied with their optometrists’ examination. The vast majority rated the following as ‘very good’: examination duration, optometrists’ listening skills, explanations of tests and management, patient involvement in decision-making, treating the patient with care and concern. 99% of MECS patients would recommend the service. Manchester optometrists were enthusiastic about GRRS, feeling fortunate to practise in a ‘pro-optometry’ area. No major negatives were reported, although both schemes were limited to patients resident within certain postcode areas, and some inappropriate GP referrals occurred (MECS). Communication with hospitals was praised in GRRS but was variable, depending on hospital (MECS). Training for both schemes was valuable and appropriate but should be ongoing. MECS GPs were very supportive, reporting the scheme would reduce secondary care referral numbers, although some MECS patients were referred back to GPs for medication. Ophthalmologists (MECS and GRRS) expressed very positive views and widely acknowledged that these new care pathways would reduce unnecessary referrals and shorten patient waiting times. Commissioners felt both schemes met or exceeded expectations in terms of quality of care, allowing patients to be seen more quickly and more efficiently. Conclusions Locally commissioned schemes can be a positive experience for all involved. With appropriate training, clear referral pathways and good communication, community optometrists can offer high-quality services that are highly acceptable to patients, health professionals and commissioners. PMID:27798000
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
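The Jacobian-free Newton-Krylov approach named above rests on a single finite-difference identity: Krylov solvers need only the action J(u)·v, which one extra residual evaluation supplies, so the Jacobian of the implicit residual is never formed. A generic sketch, not the paper's solver; `F` below is a toy residual for checking.

```python
# Jacobian-free matrix-vector product, the kernel of JFNK solvers.
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Finite-difference approximation of J(u) @ v."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(v)
    h = eps * max(np.linalg.norm(u), 1.0) / nv
    return (F(u + h * v) - F(u)) / h

F = lambda u: u**2 + u - 1.0          # toy residual; exact J = diag(2u + 1)
u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 0.0, -1.0])
print(jfnk_matvec(F, u, v), (2 * u + 1) * v)   # close agreement
```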
NASA Astrophysics Data System (ADS)
Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.
2013-10-01
The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results are presented for the interaction of shock waves with a low-density bubble and for the problem of gas flow under the forces of gravity. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of TVD schemes, and the low diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. To improve the accuracy of the calculations, it is necessary to choose a sufficiently fine grid (with a small cell size), which has the drawback of substantially increasing the computation time. Therefore, for the calculation of complex problems it is reasonable to use the method of Adaptive Mesh Refinement: the grid is refined only in the areas of interest, where, e.g., shock waves are generated, or where complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, execution of the application on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method. Utilization of the AMR method can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models are realized for calculation on graphics processors using CUDA technology [1].
NASA Astrophysics Data System (ADS)
Petersson, Anders; Rodgers, Arthur
2010-05-01
The finite difference method on a uniform Cartesian grid is a highly efficient and easy-to-implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P, which relates the grid spacing h to the wavelength L and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wavelengths in the earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth) than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the small wavelengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to depth 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface to keep the local grid size in approximate parity with the local wavelengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points. WPP implements a new, energy-conserving coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which are seldom appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user-provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer-over-half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes.
WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion DOF) and 41,000 time steps. WPP is an open source code that is available under the GNU General Public License.
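A worked example of the resolution rule h ≤ L/P quoted above; the material speeds, frequency, and the choice P = 8 are illustrative assumptions, not WPP settings.

```python
# Admissible grid spacing from h <= L / P for two illustrative materials.
f = 2.0          # highest frequency to resolve, Hz (assumed)
P = 8            # grid points per shortest wavelength (assumed)

for name, vs in [("basin (shallow)", 400.0), ("below Moho", 4500.0)]:
    L = vs / f                   # shortest shear wavelength, m
    print(f"{name}: h <= {L / P:.0f} m")
# spacing can grow roughly tenfold with depth, motivating mesh refinement
```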
NASA Astrophysics Data System (ADS)
Abdel-Aty, Mahmoud
2016-07-01
The modeling of a complex system requires the analysis of all microscopic constituents and in particular of their interactions [1]. Interest in this research field has increased, considering also recent developments in the information sciences. However, interaction among scholars working in various fields of the applied sciences can be considered the true motor for the definition of a general framework for the analysis of complex systems. In particular, biological systems constitute the platform where many scientists have decided to collaborate in order to gain a global description of the system. Among others, cancer-immune system competition (see [2] and the review papers [3,4]) has attracted much attention.
Amaral, Margarida D; Boj, Sylvia F; Shaw, James; Leipziger, Jens; Beekman, Jeffrey M
2018-06-01
The European Cystic Fibrosis Society (ECFS) Basic Science Working Group (BSWG) organized a session on the topic "Cystic Fibrosis: Beyond the Airways", within the 15th ECFS Basic Science Conference which gathered around 200 researchers working in the basic science of CF. The session was organized and chaired by Margarida Amaral (BioISI, University of Lisboa, Portugal) and Jeffrey Beekman (University Medical Centre Utrecht, Netherlands) as Chair and Vice-Chair of the BSWG and its purpose was to bring attention of participants of the ECFS Basic Science Conference to "more forgotten" organs in CF disease. In this report we attempt to review and integrate the ideas that emerged at the session. Copyright © 2018 European Cystic Fibrosis Society. All rights reserved.
[Succession caused by beaver (Castor fiber L.) life activity: II. A refined Markov model].
Logofet; Evstigneev, O I; Aleinikov, A A; Morozova, A O
2015-01-01
The refined Markov model of cyclic zoogenic successions caused by beaver (Castor fiber L.) life activity represents a discrete chain of the following six states: flooded forest, swamped forest, pond, grassy swamp, shrubby swamp, and wet forest, which correspond to certain stages of succession. Those stages are defined, and a conceptual scheme of probable transitions between them for one time step is constructed from the knowledge of beaver behaviour in small river floodplains of "Bryanskii Les" Reserve. We calibrated the corresponding matrix of transition probabilities according to the optimization principle: minimizing differences between the model outcome and reality; the model generates a distribution of relative areas corresponding to the stages of succession, which is compared to those gained from case studies in the Reserve during 2002-2006. The time step is chosen to equal 2 years, and the first-step data in the sum of differences are given various weights, w (between 0 and 1). The value of w = 0.2 is selected due to its optimality and for some additional reasons. By the formulae of finite homogeneous Markov chain theory, we obtained the main results of the calibrated model, namely, a steady-state distribution of stage areas, indices of cyclicity, and the mean durations (M(j)) of succession stages. The results of calibration give an objective quantitative nature to the expert knowledge of the course of succession and receive a proper interpretation. The 2010 data, which are not involved in the calibration procedure, enabled assessing the quality of prediction by the homogeneous model in the short term (starting from the 2006 situation): the error of the model area distribution relative to the distribution observed in 2010 falls into the range of 9-17%, the best prognosis being given by the least optimal matrices (rejected values of w). This indicates a formally heterogeneous nature of succession processes in time. Thus, the refined version of the homogeneous Markov chain has not eliminated all the contradictions between the model results and expert knowledge, which suggests a further model development towards a "logically inhomogeneous" version and/or a refusal to postulate the Markov property in the conceptual scheme of succession.
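The steady-state computation referred to above is standard finite-Markov-chain algebra: the stationary distribution is the left eigenvector of the transition matrix for eigenvalue 1, and the mean sojourn time of stage j is 1/(1 - p_jj). The sketch below uses a random row-stochastic stand-in, not the calibrated matrix from the paper.

```python
# Stationary distribution and mean sojourn times of a 6-state Markov chain.
import numpy as np

rng = np.random.default_rng(2)
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)        # rows sum to 1 (row-stochastic)

vals, vecs = np.linalg.eig(P.T)          # left eigenvectors of P
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi)                                 # steady-state share of each stage
print(1.0 / (1.0 - np.diag(P)))           # mean sojourn time per stage, in steps
```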
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Chawdhary, Saurabh; Sotiropoulos, Fotis
2016-11-01
A novel numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations on locally refined fully unstructured Cartesian grids in domains with arbitrarily complex immersed boundaries. Owing to the utilization of the fractional step method on an unstructured Cartesian hybrid staggered/non-staggered grid layout, flux mismatch and pressure discontinuity issues are avoided and the divergence free constraint is inherently satisfied to machine zero. Auxiliary/hanging nodes are used to facilitate the discretization of the governing equations. The second-order accuracy of the solver is ensured by using multi-dimension Lagrange interpolation operators and appropriate differencing schemes at the interface of regions with different levels of refinement. The sharp interface immersed boundary method is augmented with local near-boundary refinement to handle arbitrarily complex boundaries. The discrete momentum equation is solved with the matrix free Newton-Krylov method and the Krylov-subspace method is employed to solve the Poisson equation. The second-order accuracy of the proposed method on unstructured Cartesian grids is demonstrated by solving the Poisson equation with a known analytical solution. A number of three-dimensional laminar flow simulations of increasing complexity illustrate the ability of the method to handle flows across a range of Reynolds numbers and flow regimes. Laminar steady and unsteady flows past a sphere and the oblique vortex shedding from a circular cylinder mounted between two end walls demonstrate the accuracy, the efficiency and the smooth transition of scales and coherent structures across refinement levels. Large-eddy simulation (LES) past a miniature wind turbine rotor, parameterized using the actuator line approach, indicates the ability of the fully unstructured solver to simulate complex turbulent flows. Finally, a geometry resolving LES of turbulent flow past a complete hydrokinetic turbine illustrates the potential of the method to simulate turbulent flows past geometrically complex bodies on locally refined meshes. In all the cases, the results are found to be in very good agreement with published data and savings in computational resources are achieved.
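The fractional step logic underlying such solvers can be shown compactly. The periodic spectral sketch below illustrates only the projection step (predictor velocity, Poisson solve, gradient correction); it is an editorial illustration, not the paper's unstructured staggered-grid implementation.

```python
# Projection step of a fractional step method, on a periodic 2D box.
import numpy as np

n = 64
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                   # avoid 0/0 for the mean mode

rng = np.random.default_rng(3)
u, v = rng.standard_normal((2, n, n))            # predictor field (not div-free)

div_hat = 1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)
p_hat = -div_hat / k2                            # Poisson solve: lap p = div u*
u -= np.real(np.fft.ifft2(1j * kx * p_hat))      # correction: u = u* - grad p
v -= np.real(np.fft.ifft2(1j * ky * p_hat))

div = np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v))
print(np.abs(div).max())                         # divergence-free to round-off
```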
An Adaptive Mesh Algorithm: Mesh Structure and Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch, and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally sparse.
Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation
Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.
2000-01-01
In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG) methods, “entropy” variables, transformations, least-squares mixed methods, and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures, and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.
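The Scharfetter-Gummel approach mentioned above blends central differencing and upwinding through Bernoulli-function weights: the flux reduces to pure diffusion for a small potential drop and to upwinding for a large one. The sketch below works in nondimensional units; the function names and test values are illustrative, not the article's code.

```python
# Classical Scharfetter-Gummel flux between two adjacent nodes.
import numpy as np

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with the removable singularity at 0 handled."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-12
    safe = np.where(small, 1.0, x)           # avoid 0/0 in the masked branch
    return np.where(small, 1.0, safe / np.expm1(safe))

def sg_flux(n_left, n_right, peclet, D=1.0, h=1.0):
    """Particle flux; peclet = u*h/D measures drift vs. diffusion."""
    return D / h * (bernoulli(-peclet) * n_left - bernoulli(peclet) * n_right)

print(sg_flux(1.0, 2.0, 0.0))    # -> -1.0: pure diffusion, (n_l - n_r)/h
print(sg_flux(1.0, 2.0, 20.0))   # -> ~20: drift-dominated, upwinded from left
```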
Hawkins, Jemma L; Oliver, Emily J; Wyatt-Williams, Jeannie; Scale, Elaine; van Woerden, Hugo C
2014-10-01
Exercise referral schemes are established within community-based health care; however, they have been criticized for failing to evidence long-term behavior change relative to usual care. As such, recent reviews have called for refinement of their delivery with a focus on embedded strategies targeting client motivation. This research letter presents findings from an initial pilot trial conducted within Wales' National Exercise Referral Scheme (NERS), examining the feasibility of using validated physical activity monitoring devices and an accompanying online platform within standard scheme delivery. 30 individuals referred to generic or cardiovascular pathways were offered the system; of these 17 agreed to participate. Common reasons for declining were clustered into lack of technology literacy or access, condition severity, or fear of costs associated with losing the device. Analysis of follow-up interviews after 4 weeks of use indicated that while participants found the monitoring devices practical and informative, only a minority (n = 4) were using the system in full. Crucially, the system element most aligned with contemporary theories of motivation (the online portal) was not used as expected. In addition, feedback from exercise referral professionals indicated that there were demands for support from clients, which might be mitigated by more effective independent system use. Recommendations for larger scale trials using similar systems include consideration of targeted patient groups, equity of access, and providing adequate technological support that is currently beyond the capacity of the NERS system. © The Author(s) 2014.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucaud, Ph.; De Soto, F.; Rodriguez-Quintero, J.
2017-06-14
This article reports on the detailed study of the three-gluon vertex in four-dimensional $SU(3)$ Yang-Mills theory employing lattice simulations with large physical volumes and high statistics. A meticulous scrutiny of the so-called symmetric and asymmetric kinematical configurations is performed and it is shown that the associated form-factor changes sign at a given range of momenta. Here, the lattice results are compared to the model independent predictions of Schwinger-Dyson equations and a very good agreement among the two is found.
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a scheme capable of solving complex nonlinear systems of equations very quickly and robustly is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (an 18-fold acceleration of the solution) using one fourth of the volumes of a global grid, with the same solution accuracy, for two test cases.
Vortex breakdown simulation - A circumspect study of the steady, laminar, axisymmetric model
NASA Technical Reports Server (NTRS)
Salas, M. D.; Kuruvila, G.
1989-01-01
The incompressible axisymmetric steady Navier-Stokes equations are written using the streamfunction-vorticity formulation. The resulting equations are discretized using a second-order central-difference scheme. The discretized equations are linearized and then solved using an exact LU decomposition, Gaussian elimination, and Newton iteration. Solutions are presented for Reynolds numbers (based on vortex core radius) of 100-1800 and swirl parameters of 0.9-1.1. The effects of inflow boundary conditions, the location of farfield and outflow boundaries, and mesh refinement are examined. Finally, the stability of the steady solutions is investigated by solving the time-dependent equations.
A geometry-adaptive IB-LBM for FSI problems at moderate and high Reynolds numbers
NASA Astrophysics Data System (ADS)
Tian, Fangbao; Xu, Lincheng; Young, John; Lai, Joseph C. S.
2017-11-01
A fluid-structure interaction (FSI) framework combining the lattice Boltzmann method (LBM) and an improved immersed boundary method (IBM) is introduced for FSI problems at moderate and high Reynolds numbers. In this framework, the fluid dynamics is obtained by the LBM. The FSI boundary conditions are handled by an improved IBM based on the feedback scheme, where the feedback coefficient is mathematically derived and explicitly approximated. The Lagrangian force is divided into two parts: one caused by the mismatch between the flow velocity and the boundary velocity at the previous time step, and the other caused by the boundary acceleration. This treatment significantly enhances the numerical stability. A geometry-adaptive refinement is applied to provide fine resolution around the immersed geometries. The overlapping grids between two adjacent refinements consist of two layers. The movement of fluid-structure interfaces only causes grids to be added or removed at the boundaries of refinements. Finally, the classic Smagorinsky large eddy simulation model is incorporated into the framework to model turbulent flows at relatively high Reynolds numbers. Several validation cases are conducted to verify the accuracy and fidelity of the present solver over a range of Reynolds numbers. Mr L. Xu acknowledges the support of the University International Postgraduate Award by University of New South Wales. Dr. F.-B. Tian is the recipient of an Australian Research Council Discovery Early Career Researcher Award (Project Number DE160101098).
Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction
NASA Astrophysics Data System (ADS)
Uh, Jinsoo; Ayaz Khan, M.; Hua, Chiaho
2016-11-01
This study aimed to develop a practical and accurate 4-dimensional (4D) magnetic resonance imaging (MRI) method using a non-navigator, image-based internal respiratory surrogate derived by dimensionality reduction (DR). The use of DR has been previously suggested but not implemented for reconstructing 4D MRI, despite its practical advantages. We compared multiple image-acquisition schemes and refined a retrospective-sorting process to optimally implement a DR-derived surrogate. The comparison included an unconventional scheme that acquires paired slices alternately to mitigate the internal surrogate’s dependency on a specific slice location. We introduced ‘target-oriented sorting’, as opposed to conventional binning, to quantify the coherence in retrospectively sorted images, thereby determining the minimal scan time needed for sufficient coherence. This study focused on evaluating the proposed method using digital phantoms, which provided an unequivocal gold standard. The evaluation indicated that the DR-based respiratory surrogate is highly accurate: the error in amplitude percentile of the surrogate signal was less than 5% with the optimal scheme. Acquiring alternating paired slices was superior to the conventional scheme of acquiring individual slices; the advantage of the unconventional scheme was more pronounced when a substantial phase shift occurred across slice locations. The analysis of coherence across sorted images confirmed the advantage of higher sampling efficiencies in non-navigator respiratory surrogates. We determined that a scan time of 20 s per imaging slice was sufficient to achieve a mean coherence error of less than 1% for the tested respiratory patterns. The clinical applicability of the proposed 4D MRI has been demonstrated with volunteers and patients. The diaphragm motion in 4D MRI was consistent with that in dynamic 2D imaging, which was regarded as the gold standard (difference within 1.8 mm on average).
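A minimal illustration of how a DR-derived surrogate can work follows. This is an editorial sketch with synthetic frames standing in for MRI slices; the paper's acquisition schemes and sorting are more involved. The leading principal component of the frame series recovers the breathing phase without a navigator.

```python
# Respiratory surrogate from dimensionality reduction (PCA via SVD).
import numpy as np

t = np.linspace(0.0, 30.0, 300)
phase = np.sin(2 * np.pi * t / 5.0)                   # true breathing, 5 s period
y = np.linspace(-1.0, 1.0, 64)

# synthetic frames: an "organ edge" whose position shifts with respiration
frames = 1.0 / (1.0 + np.exp(-(y[None, :] - 0.3 * phase[:, None]) / 0.05))

X = frames - frames.mean(axis=0)                      # center over time
_, _, Vt = np.linalg.svd(X, full_matrices=False)
surrogate = X @ Vt[0]                                 # first PC score per frame

print(abs(np.corrcoef(surrogate, phase)[0, 1]))       # ~1: tracks breathing
```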
Addressing Common Cloud-Radiation Errors from 4-hour to 4-week Model Prediction
NASA Astrophysics Data System (ADS)
Benjamin, S.; Sun, S.; Grell, G. A.; Green, B.; Olson, J.; Kenyon, J.; James, E.; Smirnova, T. G.; Brown, J. M.
2017-12-01
Cloud-radiation representation of subgrid-scale clouds is a known gap in models ranging from subseasonal-to-seasonal systems down to storm-scale models applied for forecast durations of only a few hours. NOAA/ESRL has been applying common physical parameterizations for scale-aware deep/shallow convection and boundary-layer mixing over this wide range of time and spatial scales, with some progress to be reported in this presentation. The Grell-Freitas scheme (2014, Atmos. Chem. Phys.) and the MYNN boundary-layer EDMF scheme (Olson / Benjamin et al. 2016, Mon. Wea. Rev.) have been applied and tested extensively for the NOAA hourly updated 3-km High-Resolution Rapid Refresh (HRRR) and 13-km Rapid Refresh (RAP) model/assimilation systems over the United States and North America, targeting improvements to boundary-layer evolution and cloud-radiation representation in all seasons. This representation is critical both for warm-season severe convective storm forecasting and for winter-storm prediction of snow and mixed precipitation. At the same time, the Grell-Freitas scheme has been applied as an option for subseasonal forecasting toward improved US week 3-4 prediction with the FIM-HYCOM coupled model (Green et al. 2017, MWR). Cloud/radiation evaluation using CERES satellite-based estimates has been applied to both 12-h RAP (13 km) forecasts and weeks 1-4 of 32-day FIM-HYCOM (60 km) forecasts. Initial results reveal that improved cloud representation is needed at both resolutions and is now guiding further refinement of cloud representation, including in the Grell-Freitas scheme and the updated MYNN-EDMF scheme (both now also in global testing as well as in the 3-km HRRR and 13-km RAP models).
Gait development on Minitaur, a direct drive quadrupedal robot
NASA Astrophysics Data System (ADS)
Blackman, Daniel J.; Nicholson, John V.; Ordonez, Camilo; Miller, Bruce D.; Clark, Jonathan E.
2016-05-01
This paper describes the development of a dynamic, quadrupedal robot designed for rapid traversal and interaction in human environments. We explore improvements to both physical and control methods to a legged robot (Minitaur) in order to improve the speed and stability of its gaits and increase the range of obstacles that it can overcome, with an eye toward negotiating man-made terrains such as stairs. These modifications include an analysis of physical compliance, an investigation of foot and leg design, and the implementation of ground and obstacle contact sensing for inclusion in the control schemes. Structural and mechanical improvements were made to reduce undesired compliance for more consistent agreement with dynamic models, which necessitated refinement of foot design for greater durability. Contact sensing was implemented into the control scheme for identifying obstacles and deviations in surface level for negotiation of varying terrain. Overall the incorporation of these features greatly enhances the mobility of the dynamic quadrupedal robot and helps to establish a basis for overcoming obstacles.
Analytic energy gradient of projected Hartree-Fock within projection after variation
NASA Astrophysics Data System (ADS)
Uejima, Motoyuki; Ten-no, Seiichiro
2017-03-01
We develop a geometrical optimization technique for the projection-after-variation (PAV) scheme of the recently refined projected Hartree-Fock (PHF) as a fast alternative to the variation-after-projection (VAP) approach for optimizing the structures of molecules/clusters in symmetry-adapted electronic states at the mean-field computational cost. PHF handles the nondynamic correlation effects by restoring the symmetry of a broken-symmetry single reference wavefunction and moreover enables a black-box treatment of orbital selections. Using HF orbitals instead of PHF orbitals, our approach saves the computational cost for the orbital optimization, avoiding the convergence problem that sometimes emerges in the VAP scheme. We show that PAV-PHF provides geometries comparable to those of the complete active space self-consistent field and VAP-PHF for the tested systems, namely, CH2, O3, and the [Cu2O2 ] 2 + core, where nondynamic correlation is abundant. The proposed approach is useful for large systems mainly dominated by nondynamic correlation to find stable structures in many symmetry-adapted states.
Cultivation of students' engineering designing ability based on optoelectronic system course project
NASA Astrophysics Data System (ADS)
Cao, Danhua; Wu, Yubin; Li, Jingping
2017-08-01
We describe teaching based on an optoelectronics-related course group, aimed at junior students majoring in Optoelectronic Information Science and Engineering. The "Optoelectronic System Course Project" is product-design-oriented and lasts for a whole semester. It gives students a chance to experience the whole process of product design and to improve their abilities to search the literature, evaluate candidate schemes, and design and implement their own solutions. In the teaching process, each project topic is carefully selected and repeatedly refined to ensure the projects offer knowledge integrity, engineering relevance, and enjoyment. Moreover, we set up a team of professional and experienced teachers and build a learning community. Communication between students and teachers, as well as interaction among students, is taken seriously in order to improve students' teamwork ability and communication skills. Students thereby not only review the knowledge hierarchy of optics, electronics, and computer science, but also develop their engineering mindset and innovation consciousness.
NASA Astrophysics Data System (ADS)
Derigs, Dominik; Winters, Andrew R.; Gassner, Gregor J.; Walch, Stefanie; Bohm, Marvin
2018-07-01
The paper presents two contributions in the context of the numerical simulation of magnetized fluid dynamics. First, we show how to extend the ideal magnetohydrodynamics (MHD) equations with an inbuilt magnetic field divergence cleaning mechanism in such a way that the resulting model is consistent with the second law of thermodynamics. As a byproduct of these derivations, we show that not all of the commonly used divergence cleaning extensions of the ideal MHD equations are thermodynamically consistent. Secondly, we present a numerical scheme obtained by constructing a specific finite volume discretization that is consistent with the discrete thermodynamic entropy. It includes a mechanism to control the discrete divergence error of the magnetic field by construction and is Galilean invariant. We implement the new high-order MHD solver in the adaptive mesh refinement code FLASH where we compare the divergence cleaning efficiency to the constrained transport solver available in FLASH (unsplit staggered mesh scheme).
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
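The LTS idea can be caricatured in a few lines. The sketch below is editorial and deliberately simplified: it ignores the flux coupling at region interfaces that a real LTS-Newmark scheme must handle. It shows fine cells taking p substeps of dt/p while coarse cells take one step of dt, so the global rate is no longer dictated by the smallest element.

```python
# Caricature of local time stepping: fine cells substep, coarse cells do not.
import numpy as np

def advance(u, rhs, dt, fine, p=4):
    """One global step dt; cells flagged 'fine' take p substeps of dt/p."""
    coarse = ~fine
    u = u.copy()
    u[coarse] += dt * rhs(u)[coarse]          # one coarse step
    for _ in range(p):                        # p fine substeps
        u[fine] += (dt / p) * rhs(u)[fine]
    return u

rhs = lambda u: -u                            # toy decay problem
u = np.ones(10)
fine = np.arange(10) < 3                      # first three cells are "refined"
print(advance(u, rhs, 0.1, fine))
```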
Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G
2014-09-05
A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.
Photorefraction of eyes: history and future prospects.
Howland, Howard C
2009-06-01
A brief history of photorefraction, i.e., the refraction of eyes by photography or computer image capture, is given. The method of photorefraction originated from an optical scheme for secret communication across the Berlin Wall. This scheme used a lens whose focus about infinity was modulated by a movable reflecting surface. From this device, it was recognized that the vertebrate eye was such a reflector and that its double-pass point-spread function could be used to compute its degree of defocus. Subsequently, a second, totally independent invention, more accurately termed "photoretinoscopy," used an eccentric light source and obtained retinoscope-like images of the reflex in the pupil of the subject's eyes. Photoretinoscopy has become the preferred method of photorefraction and has been instantiated in a wide variety of devices used in vision screening and research. This has been greatly helped by the parallel development of computer and digital camera technology. It seems likely that photorefractive methods will continue to be refined and may eventually become ubiquitous in clinical practice.
A High Order Finite Difference Scheme with Sharp Shock Resolution for the Euler Equations
NASA Technical Reports Server (NTRS)
Gerritsen, Margot; Olsson, Pelle
1996-01-01
We derive a high-order finite difference scheme for the Euler equations that satisfies a semi-discrete energy estimate, and present an efficient strategy for the treatment of discontinuities that leads to sharp shock resolution. The formulation of the semi-discrete energy estimate is based on a symmetrization of the Euler equations that preserves the homogeneity of the flux vector, a canonical splitting of the flux derivative vector, and the use of difference operators that satisfy a discrete analogue to the integration by parts procedure used in the continuous energy estimate. Around discontinuities or sharp gradients, refined grids are created on which the discrete equations are solved after adding a newly constructed artificial viscosity. The positioning of the sub-grids and computation of the viscosity are aided by a detection algorithm which is based on a multi-scale wavelet analysis of the pressure grid function. The wavelet theory provides easy to implement mathematical criteria to detect discontinuities, sharp gradients and spurious oscillations quickly and efficiently.
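A sketch of the wavelet-based detection idea follows; it is an editorial illustration using undecimated level-1 Haar details, and the 10x-median threshold is an invented choice, not the paper's criterion. Fine-scale detail coefficients of the pressure grid function spike at discontinuities, giving a cheap flag for where to place refined sub-grids.

```python
# Flagging a shock via fine-scale (undecimated Haar) detail coefficients.
import numpy as np

x = np.linspace(0.0, 1.0, 256)
p = np.where(x < 0.5, 1.0, 0.1) + 0.01 * np.sin(40 * np.pi * x)  # shock + waves

detail = (p[1:] - p[:-1]) / np.sqrt(2.0)      # shift-invariant level-1 detail
flag = np.abs(detail) > 10.0 * np.median(np.abs(detail))
print(np.nonzero(flag)[0])                    # only indices at the jump near x = 0.5
```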
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
NASA Technical Reports Server (NTRS)
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error, and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is a key to improving querying. Among the related techniques, relevance feedback has become a hot research topic because it uses information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multi-level image content analysis, the images marked relevant by the user are automatically analyzed at different levels, and the query is modified according to the analysis results. Secondly, to make the process more convenient for the user, the feedback procedure can be run with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results gained by our self-adaptive relevance feedback are given.
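The abstract does not spell out its query-update rule; the classical Rocchio refinement below illustrates the generic feedback mechanism of moving the query vector toward relevant examples and away from non-relevant ones. The weights alpha/beta/gamma are conventional defaults, not the paper's values.

```python
# Rocchio relevance feedback in a feature space (illustrative, not the paper's rule).
import numpy as np

def rocchio(q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    q_new = alpha * q
    if len(relevant):
        q_new += beta * np.mean(relevant, axis=0)    # pull toward relevant set
    if len(nonrelevant):
        q_new -= gamma * np.mean(nonrelevant, axis=0)  # push from non-relevant
    return q_new

q = np.array([0.2, 0.8, 0.0])                        # initial feature-space query
rel = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])
nonrel = np.array([[0.0, 0.1, 0.9]])
print(rocchio(q, rel, nonrel))                       # shifted toward relevant cluster
```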
Direct numerical simulation of the sea flows around blunt bodies
NASA Astrophysics Data System (ADS)
Matyushin, Pavel V.; Gushchin, Valentin A.
2015-11-01
The aim of the present paper is to demonstrate the capabilities of mathematical modeling of separated sea-water flows around blunt bodies on the basis of the Navier-Stokes equations (NSE) in the Boussinesq approximation. The 3D density-stratified incompressible viscous fluid flows around a sphere have been investigated by means of direct numerical simulation (DNS) on supercomputers and visualization of the 3D vortex structures in the wake. For solving the NSE, the Splitting on physical factors Method for Incompressible Fluid flows (SMIF), with a hybrid explicit finite difference scheme (second-order accuracy in space, minimal scheme viscosity and dispersion, monotonous, and workable over a wide range of Reynolds (Re) and internal Froude (Fr) numbers), has been developed and successfully applied. The different transitions in sphere wakes with increasing Re (10 < Re < 500) and decreasing Fr (0.005 < Fr < 100) have been investigated in detail. The classification of the viscous fluid flow regimes around a sphere has thus been refined.
Modeling atmospheric mineral aerosol chemistry to predict heterogeneous photooxidation of SO2
NASA Astrophysics Data System (ADS)
Yu, Zechen; Jang, Myoseon; Park, Jiyeon
2017-08-01
The photocatalytic ability of airborne mineral dust particles is known to promote SO2 oxidation heterogeneously, but this phenomenon is not fully taken into account by current models. In this study, the Atmospheric Mineral Aerosol Reaction (AMAR) model was developed to capture the influence of air-suspended mineral dust particles on sulfate formation in various environments. In the model, SO2 oxidation proceeds in three phases: the gas phase, the inorganic-salted aqueous phase (non-dust phase), and the dust phase. Dust chemistry is described by the absorption-desorption kinetics of SO2 and NOx (partitioning between the gas phase and the multilayer-coated dust). The reaction of absorbed SO2 on dust particles occurs via two major paths: autoxidation of SO2 in open air and photocatalytic mechanisms under UV light. The kinetic mechanism of autoxidation was first calibrated using controlled indoor chamber data in the presence of Arizona Test Dust (ATD) particles without UV light, and then extended to photochemistry. With UV light, SO2 photooxidation was promoted by surface oxidants (OH radicals) that are generated via the photocatalysis of semiconducting metal oxides (electron-hole theory) of ATD particles. The photocatalytic rate constant was derived by integrating, over the full range of wavelengths of the light source, the product of the dust absorbance spectrum and the wavelength-dependent actinic flux. The predicted concentrations of sulfate and nitrate using the AMAR model agreed well with outdoor chamber data produced under natural sunlight. For seven consecutive hours of photooxidation of SO2 in an outdoor chamber, dust chemistry at the low-NOx level accounted for 55% of total sulfate (56 ppb SO2, 290 µg m-3 ATD, and NOx below 5 ppb). At high NOx (> 50 ppb of NOx with low hydrocarbons), sulfate formation was also greatly promoted by dust chemistry, but it was suppressed by the competition between NO2 and SO2, which both consume the dust-surface oxidants (OH radicals or ozone).
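The wavelength integration that yields the photocatalytic rate constant can be sketched numerically. In the sketch below, the absorbance spectrum, actinic flux, and overall scaling are placeholder assumptions, not the AMAR model's actual inputs.

```python
import numpy as np

# Hypothetical tabulated spectra over the lamp's wavelength range (nm)
wavelengths = np.linspace(290.0, 420.0, 131)            # nm
absorbance = np.exp(-(wavelengths - 350.0)**2 / 800.0)  # placeholder dust absorbance A(lambda)
actinic_flux = 1e14 * np.ones_like(wavelengths)         # placeholder F(lambda), photons cm^-2 s^-1 nm^-1

# Photocatalytic rate constant ~ integral of A(lambda) * F(lambda) d(lambda)
k_photo = np.trapz(absorbance * actinic_flux, wavelengths)
print(f"relative photocatalytic rate constant: {k_photo:.3e}")
```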
NASA Astrophysics Data System (ADS)
Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.
2005-08-01
Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
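Once the knots and the data parameterization are fixed, the remaining subproblem is linear, which is why it can be solved directly by SVD-based least squares. A minimal sketch of that inner step follows, using SciPy's BSpline.design_matrix (SciPy >= 1.8); the data, degree, and knot vector are illustrative, and the firefly-driven parameterization search is not reproduced.

```python
import numpy as np
from scipy.interpolate import BSpline

# Noisy data points with a fixed parameterization u in [0, 1]
u = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * u) + 0.05 * np.random.randn(u.size)

k = 3                                                  # cubic B-splines
interior = np.linspace(0.2, 0.8, 4)                    # fixed (precomputed) interior knots
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]  # clamped knot vector

B = BSpline.design_matrix(u, t, k).toarray()           # collocation matrix
coef, *_ = np.linalg.lstsq(B, y, rcond=None)           # SVD-based linear least squares
spline = BSpline(t, coef, k)
print("rms fitting error:", np.sqrt(np.mean((spline(u) - y) ** 2)))
```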
NASA Astrophysics Data System (ADS)
Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.
2016-12-01
A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in very high resolution (spatial grid of the order of 10 m) simulations of cloud evolution, especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds appears to cause local heating/cooling in the atmosphere on a fine spatial scale. It is, however, usually difficult to estimate these 3D effects, because 3D radiative transfer demands large computational resources compared to a plane-parallel approximation. This study incorporates a solution scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because this scheme is advantageous for calculating a time-evolution sequence (the scene at one time step differs little from that at the previous one). The scheme is also appropriate for calculating radiation with strong absorption, such as in the infrared regions. For efficient computation, it utilizes several techniques, e.g., the multigrid method for the iterative solution and a correlated-k distribution method refined for efficient approximation of the wavelength integration. As a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation results elucidate that the horizontal divergences and convergences of the infrared radiation flux are not negligible, especially at cloud boundaries and within optically thin clouds, and that radiative cooling at the lateral boundaries of clouds may reduce infrared radiative heating within clouds. In future work, these 3D effects on radiative heating/cooling can be included in atmospheric numerical models.
NASA Astrophysics Data System (ADS)
Donmez, Orhan
We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive Mesh Refinement (AMR) and model an accretion disk around a black hole. To do this, the GRH equations are written in conservative form to exploit their hyperbolic character. The numerical solution of the GRH equations is obtained with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D, and 3D. Second, we solve the GRH equations and use general relativistic test problems to compare the numerical solutions with analytic ones. To do so, we couple the flux part of the GRH equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply this code to accretion disk problems around a black hole, using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, they create shock waves which destroy the accretion disk.
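The Strang splitting mentioned above advances the flux (transport) and source parts of the equations in alternation while keeping second-order accuracy in time. A minimal sketch of one split step is shown below; the sub-integrators advance_flux and advance_source are hypothetical placeholders for the paper's HRSC/Marquina flux update and its source integrator.

```python
def strang_step(u, dt, advance_flux, advance_source):
    """One Strang-split step: half source, full flux, half source.

    advance_flux(u, dt):   integrates du/dt = -div F(u) over a time dt
    advance_source(u, dt): integrates du/dt = S(u) over a time dt
    If both sub-steps are at least second-order accurate, the
    symmetric composition is second-order accurate in dt.
    """
    u = advance_source(u, 0.5 * dt)
    u = advance_flux(u, dt)
    u = advance_source(u, 0.5 * dt)
    return u
```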
Flash Updates of GSC projects (GSC8 Meeting)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glockner, Frank Oliver; Markowitz, Victor; Kyrpides, Nikos
2009-09-09
The Genomic Standards Consortium was formed in September 2005. It is an international, open-membership working body which promotes standardization in the description of genomes and the exchange and integration of genomic data. The 2009 meeting was an activity of a five-year Research Coordination Network grant from the National Science Foundation and was organized and held at the DOE Joint Genome Institute, with organizational support provided by the JGI and by the University of California - San Diego. In quick succession, Frank Oliver Glockner (MPI-Bremen), Victor Markowitz (LBNL), Nikos Kyrpides (JGI), Folker Meyer (ANL), Linda Amaral-Zettler (Marine Biology Lab), and James Cole (Michigan State University) provide updates on a number of topics related to GSC projects at the Genomic Standards Consortium 8th meeting at the DOE JGI in Walnut Creek, CA on Sept. 9, 2009.
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1991-01-01
A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity, and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator of the frequency and its derivatives, yielding higher rms estimation errors but a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall phase estimate along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.
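The two-stage structure, a coarse open-loop estimate followed by a Kalman-type refinement of the residual, can be pictured in a few lines. The linear phase/frequency state model, the noise variances, and the absence of data modulation below are simplifying assumptions; the patent's second stage is an extended Kalman filter over a richer state including Doppler rate.

```python
import numpy as np

def kalman_phase_tracker(residual_phase, dt, q=1e-4, r=1e-2):
    """Minimal linear Kalman filter refining phase/frequency estimates.

    Tracks the state x = [phase, frequency] driving the residual phase
    left over after a coarse first-stage (DLS-like) estimate.
    q, r: process and measurement noise variances (illustrative values).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-frequency dynamics
    H = np.array([[1.0, 0.0]])                # we observe phase only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2)
    estimates = []
    for z in residual_phase:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ y)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```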
NASA Astrophysics Data System (ADS)
Salvalaglio, Marco; Backofen, Rainer; Voigt, Axel; Elder, Ken R.
2017-08-01
One of the major difficulties in employing phase-field crystal (PFC) modeling and the associated amplitude (APFC) formulation is the ability to tune model parameters to match experimental quantities. In this work, we address the problem of tuning the defect core and interface energies in the APFC formulation. We show that the addition of a single term to the free-energy functional can be used to increase the solid-liquid interface and defect energies in a well-controlled fashion, without any major change to other features. The influence of the newly added term is explored in two-dimensional triangular and honeycomb structures as well as bcc and fcc lattices in three dimensions. In addition, a finite-element method (FEM) is developed for the model that incorporates a mesh refinement scheme. The combination of the FEM and mesh refinement to simulate amplitude expansion with a new energy term provides a method of controlling microscopic features such as defect and interface energies while simultaneously delivering a coarse-grained examination of the system.
Fast-kick-off monotonically convergent algorithm for searching optimal control fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel
2011-09-15
This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap between the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refined control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also attains good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.
Appearance-based representative samples refining method for palmprint recognition
NASA Astrophysics Data System (ADS)
Wen, Jiajun; Chen, Yan
2012-07-01
Sparse representation can deal with the lack-of-samples problem because it utilizes all the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to their contributions. We then further refine the training samples by an iterative procedure that excludes, at each step, the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and the two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the principles governing the usage of the key parameters in the proposed algorithm, which facilitates obtaining high recognition accuracy.
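The selection-and-exclusion loop can be illustrated with a short least-squares sketch. The contribution measure used here (the norm of a sample's term in the reconstruction of the test sample) is one plausible reading, not necessarily the authors' exact criterion.

```python
import numpy as np

def refine_samples(X, y, n_keep):
    """Iteratively drop the training sample contributing least to
    the linear representation of test sample y.

    X: (d, n) matrix whose columns are training samples
    y: (d,) test sample
    """
    idx = list(range(X.shape[1]))
    while len(idx) > n_keep:
        A = X[:, idx]
        w, *_ = np.linalg.lstsq(A, y, rcond=None)    # y ~ A @ w
        contrib = [np.linalg.norm(w[j] * A[:, j]) for j in range(len(idx))]
        idx.pop(int(np.argmin(contrib)))             # exclude the weakest contributor
    return idx

# toy usage: keep the 5 most representative of 20 training samples
X = np.random.rand(30, 20)
y = np.random.rand(30)
print(refine_samples(X, y, n_keep=5))
```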
Mathematical modeling of polymer flooding using the unstructured Voronoi grid
NASA Astrophysics Data System (ADS)
Kireev, T. F.; Bulgakova, G. T.; Khatmullin, I. F.
2017-12-01
Effective recovery of unconventional oil reserves necessitates the development of enhanced oil recovery techniques such as polymer flooding. This study investigated a model of polymer flooding with the effects of adsorption and water salinity. The model takes into account six components that extend the classic black oil model: polymer, salt, water, dead oil, dry gas, and dissolved gas. The solution of the problem is obtained by the finite volume method on an unstructured Voronoi grid using a fully implicit scheme and Newton's method. To compare several different grid configurations, numerical simulations of polymer flooding were performed. The oil rates obtained on a hexagonal, locally refined Voronoi grid are shown to be more accurate than those obtained on a rectangular grid with the same number of cells; this effect is caused by high solution accuracy near the wells due to the local grid refinement. Minimization of the grid orientation effect by the hexagonal pattern is also demonstrated. However, in inter-well regions with large Voronoi cells, the flood front tends to flatten and the water breakthrough moment is smoothed.
PF-WFS Shell Inspection Update December 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil, Anthony Eugene; Ledoux, Reina Rebecca; Gonzales, Antonio R.
Since the last project update in FY16:Q2, PF-WFS personnel have advanced their understanding of shell inspection on Coordinate Measuring Machines (CMMs) and refined the PF-WFS process to the point that it was decided to convert shell inspection from the Sheffield #1 gage to Leitz CMMs. As part of introspection on the quality of this process, many sets of data have been reviewed and analyzed. This analysis included Sheffield-to-CMM comparisons, CMM inspection repeatability, fixturing differences, quality check development, and probing approach changes. This update report touches on the improvements that have built confidence in this process to mainstream it for inspecting shells. In addition to the CMM programming advancements, the continued refinement of inputs and outputs for the CMM program has created an archiving scheme, input spline files, an output metafile, and an inspection report package. This project will continue to mature. Part designs may require program modifications to accommodate "new to this process" part designs. Technology limitations tied to security and performance are requiring possible changes to computer configurations to support an automated process.
Numerical Study of Richtmyer-Meshkov Instability with Re-Shock
NASA Astrophysics Data System (ADS)
Wong, Man Long; Livescu, Daniel; Lele, Sanjiva
2017-11-01
The interaction of a Mach 1.45 shock wave with a perturbed planar interface between two gases with an Atwood number of 0.68 is studied through 2D and 3D shock-capturing adaptive mesh refinement (AMR) simulations with physical diffusive and viscous terms. The simulations have initial conditions similar to those in the experiment conducted by Poggi et al. [1998]. The development of the flow and the evolution of mixing due to the interactions with the first shock and the re-shock are studied, together with the sensitivity of various global parameters to the properties of the initial perturbation. Grid resolutions needed for fully resolved 2D and 3D simulations are also evaluated. Simulations are conducted with an in-house AMR solver, HAMeRS, built on the SAMRAI library. The code utilizes the high-order localized-dissipation weighted compact nonlinear scheme [Wong and Lele, 2017] for shock-capturing and different sensors, including the wavelet sensor [Wong and Lele, 2016], to identify regions for grid refinement. The first and third authors acknowledge the project sponsor, LANL.
Galaxy clusters in simulations of the local Universe: a matter of constraints
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.; Tempel, Elmo
2018-06-01
To study the full formation and evolution history of galaxy clusters and their population, high-resolution simulations of the latter are flourishing. However, comparing observed clusters to simulated ones on a one-to-one basis, to refine the models and theories down to the details, is non-trivial. The large variety of clusters limits comparisons between observed and numerical clusters. Simulations resembling the local Universe down to the cluster scales push this limit further. Simulated and observed clusters can be matched on a one-to-one basis for direct comparisons, provided that clusters are well reproduced besides being in the proper large-scale environment. Comparing random and local Universe-like simulations obtained with differently grouped observational catalogues of peculiar velocities, this paper shows that the grouping scheme used to remove non-linear motions in the catalogues that constrain the simulations affects the quality of the numerical clusters. With a less aggressive grouping scheme - galaxies still falling on to clusters are preserved - combined with a bias minimization scheme, the mass of the dark matter haloes, simulacra for five local clusters - Virgo, Centaurus, Coma, Hydra, and Perseus - is increased by 39 per cent, closing the gap with observational mass estimates. Simulacra are found on average in 89 per cent of the simulations, an increase of 5 per cent with respect to the previous grouping scheme. The only exception is Perseus. Since the Perseus-Pisces region is not well covered by the peculiar velocity catalogue used here, its latest release lets us foresee a better simulacrum for Perseus in the near future.
NASA Astrophysics Data System (ADS)
Martín–Moruno, Prado; Visser, Matt
2017-11-01
The (generalized) Rainich conditions are algebraic conditions which are polynomial in the (mixed-component) stress-energy tensor. As such they are logically distinct from the usual classical energy conditions (NEC, WEC, SEC, DEC), and logically distinct from the usual Hawking-Ellis (Segré-Plebański) classification of stress-energy tensors (type I, type II, type III, type IV). There will of course be significant inter-connections between these classification schemes, which we explore in the current article. Overall, we shall argue that it is best to view the (generalized) Rainich conditions as a refinement of the classical energy conditions and the usual Hawking-Ellis classification.
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
NASA Technical Reports Server (NTRS)
Crane, R. K.; Blood, D. W.
1979-01-01
A single model is proposed as a standard of comparison for other models when dealing with rain attenuation problems in system design and experimentation. Refinements to the Global Rain Production Model are incorporated. Path loss and noise estimation procedures are provided as the basic input to systems design for earth-to-space microwave links operating at frequencies from 1 to 300 GHz. Topics covered include gaseous absorption, attenuation by rain, ionospheric and tropospheric scintillation, low elevation angle effects, radome attenuation, diversity schemes, link calculation, and receiver noise emission by atmospheric gases, rain, and antenna contributions.
An adaptive interpolation scheme for molecular potential energy surfaces
NASA Astrophysics Data System (ADS)
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition-of-unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
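The refinement idea (evaluate a local error estimate, add a node where the surrogate is worst, rebuild) can be sketched with SciPy's polyharmonic (thin-plate spline) RBF interpolant. The sketch below omits the partition-of-unity localization, and the target function and tolerances are illustrative; the error estimate is computed against the exact function here for clarity, whereas in practice it would be a local estimate.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):  # stand-in for an expensive electronic-structure evaluation
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

rng = np.random.default_rng(0)
nodes = rng.uniform(-1, 1, (20, 2))     # initial coarse sampling
probe = rng.uniform(-1, 1, (500, 2))    # candidate points for the error estimate

for _ in range(10):
    spline = RBFInterpolator(nodes, f(nodes), kernel="thin_plate_spline")
    err = np.abs(spline(probe) - f(probe))   # local error estimate
    if err.max() < 1e-3:
        break
    worst = probe[np.argmax(err)]            # refine where the error is largest
    nodes = np.vstack([nodes, worst])

print(f"{len(nodes)} nodes, max probe error {err.max():.2e}")
```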
Asynchronous variational integration using continuous assumed gradient elements.
Wolff, Sebastian; Bucher, Christian
2013-03-01
Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length with each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2), Liu (2009) [1], exemplified by continuous assumed gradient elements, Wolff and Bucher (2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes, and provides a recipe for estimating the critical time step.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
Mapping ocean tides with satellites - A computer simulation
NASA Technical Reports Server (NTRS)
Won, I. J.; Kuo, J. T.; Jachens, R. C.
1978-01-01
As a preliminary study for the future worldwide direct mapping of the open-ocean tide with satellites equipped with precision altimeters, we conducted a simulation study using sets of artificially generated altimeter data constructed from a realistic geoid and four pairs of major tides in the northeastern Pacific Ocean. Recovery of the original geoid and eight tidal maps is accomplished by a space-time, least squares harmonic analysis scheme. The resultant maps appear fairly satisfactory even when random noise up to ±100 cm is added to altimeter data of sufficient space-time density. The method also produces a refined geoid which is rigorously corrected for the dynamic tides.
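At each location, the harmonic recovery reduces to a linear least-squares fit of sine/cosine pairs at known tidal frequencies. A single-point sketch follows; the constituent periods (M2, S2, K1, O1) are the only physical input, and the synthetic record and noise level are illustrative.

```python
import numpy as np

# Angular frequencies (rad/hour) of four major tidal constituents
periods = {"M2": 12.4206, "S2": 12.0000, "K1": 23.9345, "O1": 25.8193}
omega = {k: 2 * np.pi / p for k, p in periods.items()}

t = np.linspace(0, 30 * 24, 2000)              # 30 days of samples (hours)
true = 50 * np.cos(omega["M2"] * t + 0.3) + 20 * np.cos(omega["K1"] * t - 1.0)
obs = true + np.random.normal(0, 100, t.size)  # +/-100 cm scale noise, as in the study

# Design matrix: mean level plus a (cos, sin) pair per constituent
cols = [np.ones_like(t)]
for w in omega.values():
    cols += [np.cos(w * t), np.sin(w * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)

for i, name in enumerate(omega):
    a, b = coef[1 + 2 * i], coef[2 + 2 * i]
    print(f"{name}: recovered amplitude {np.hypot(a, b):6.1f} cm")
```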
Defining Clonal Color in Fluorescent Multi-Clonal Tracking
Wu, Juwell W.; Turcotte, Raphaël; Alt, Clemens; Runnels, Judith M.; Tsao, Hensin; Lin, Charles P.
2016-01-01
Clonal heterogeneity and selection underpin many biological processes including development and tumor progression. Combinatorial fluorescent protein expression in germline cells has proven its utility for tracking the formation and regeneration of different organ systems. Such cell populations encoded by combinatorial fluorescent proteins are also attractive tools for understanding clonal expansion and clonal competition in cancer. However, the assignment of clonal identity requires an analytical framework in which clonal markings can be parameterized and validated. Here we present a systematic and quantitative method for RGB analysis of fluorescent melanoma cancer clones. We then demonstrate refined clonal trackability of melanoma cells using this scheme. PMID:27073117
Refining of plant oils to chemicals by olefin metathesis.
Chikkali, Samir; Mecking, Stefan
2012-06-11
Plant oils are attractive substrates for the chemical industry. Their scope for the production of chemicals can be expanded by sophisticated catalytic conversions. Olefin metathesis is an example, which also illustrates generic issues of "biorefining" to chemicals. Utilization on a large scale requires high catalyst activities, which influences the choice of the metathesis reaction. The mixture of different fatty acids composing a technical-grade plant oil substrate gives rise to a range of products. This decisively determines possible process schemes, and potentially provides novel chemicals and intermediates not employed to date. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Investigation of supersonic jet plumes using an improved two-equation turbulence model
NASA Technical Reports Server (NTRS)
Lakshmanan, B.; Abdol-Hamid, Khaled S.
1994-01-01
Supersonic jet plumes were studied using a two-equation turbulence model employing corrections for compressible dissipation and pressure-dilatation. A space-marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that two-equation models employing corrections for compressible dissipation and pressure-dilatation yield improved agreement with the experimental data. In addition, the numerical study demonstrates that the computed results are sensitive to the effect of grid refinement and insensitive to the type of velocity profiles used at the inflow boundary for the cases considered in the present study.
NASA Astrophysics Data System (ADS)
Jha, Pradeep Kumar
Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 mechanism, which is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial differential equations using a second-order accurate, fully coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two- and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretization. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame, a laminar co-flow diffusion flame, and a Sydney bluff-body turbulent reacting flow. Comparisons are made between the predictions of the present FPI scheme and the Steady Laminar Flamelet Model (SLFM) approach for diffusion flames. The effects of grid resolution on the predicted overall flame solutions are also assessed. Other non-reacting flows have also been considered to further validate other aspects of the numerical scheme. The present schemes predict results which are in good agreement with published experimental results and significantly reduce the computational cost of modelling turbulent diffusion flames, both in terms of storage and processing time.
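The presumed beta-PDF closure evaluates a mean quantity by convolving the tabulated laminar value against a beta distribution in mixture fraction, parameterized by the resolved mean and variance. A minimal sketch follows, with an illustrative profile phi standing in for an FPI table lookup.

```python
import numpy as np
from scipy import stats

def pcm_mean(phi, z_mean, z_var, n=401):
    """Presumed beta-PDF closure:
    mean(phi) = integral of phi(Z) * P(Z; z_mean, z_var) dZ."""
    gamma = z_mean * (1 - z_mean) / z_var - 1.0   # standard beta shape scaling
    a, b = z_mean * gamma, (1 - z_mean) * gamma
    z = np.linspace(1e-6, 1 - 1e-6, n)
    pdf = stats.beta.pdf(z, a, b)
    return np.trapz(phi(z) * pdf, z) / np.trapz(pdf, z)

# Illustrative 'flamelet' profile standing in for an FPI table lookup
phi = lambda z: np.exp(-((z - 0.35) / 0.1) ** 2)  # e.g. a source-term shape
print(pcm_mean(phi, z_mean=0.35, z_var=0.01))
```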
Evolution of optimal Lévy-flight strategies in human mental searches
NASA Astrophysics Data System (ADS)
Radicchi, Filippo; Baronchelli, Andrea
2012-06-01
Recent analysis of empirical data [Radicchi, Baronchelli, and Amaral, PLoS ONE 7, e29910 (2012), doi:10.1371/journal.pone.0029910] showed that humans adopt Lévy-flight strategies when exploring the bid space in online auctions. A game theoretical model proved that the observed Lévy exponents are nearly optimal, being close to the exponent value that guarantees the maximal economical return to players. Here, we rationalize these findings by adopting an evolutionary perspective. We show that a simple evolutionary process is able to account for the empirical measurements with the only assumption that the reproductive fitness of the players is proportional to their search ability. Contrary to previous modeling, our approach describes the emergence of the observed exponent without resorting to any strong assumptions on the initial searching strategies. Our results generalize earlier research, and open novel questions in cognitive, behavioral, and evolutionary sciences.
Relaxation of the resistive superconducting state in boron-doped diamond films
NASA Astrophysics Data System (ADS)
Kardakova, A.; Shishkin, A.; Semenov, A.; Goltsman, G. N.; Ryabchun, S.; Klapwijk, T. M.; Bousquet, J.; Eon, D.; Sacépé, B.; Klein, Th.; Bustarret, E.
2016-02-01
We report a study of the relaxation time of the restoration of the resistive superconducting state in single-crystalline boron-doped diamond using amplitude-modulated absorption of (sub-)THz radiation (AMAR). The films, grown on an insulating diamond substrate, have a low carrier density of about 2.5 × 10^21 cm^-3 and a critical temperature of about 2 K. By changing the modulation frequency we find a high-frequency rolloff which we associate with the characteristic time of energy relaxation between the electron and phonon systems, or the relaxation time for nonequilibrium superconductivity. Our main result is that the electron-phonon scattering time varies clearly as T^-2 over the accessible temperature range of 1.7 to 2.2 K. In addition, we find, upon approaching the critical temperature Tc, evidence for an increasing relaxation time on both sides of Tc.
NASA Astrophysics Data System (ADS)
La Porta, Caterina A. M.; Zapperi, Stefano
2016-07-01
The process of inflammation tries to protect the body after an injury due to biological causes such as the presence of pathogens or chemicals, or to physical processes such as burns or cuts. The biological rationale for this process has the main goal of eliminating the cause of the injury and then repairing the damaged tissues. We can distinguish two kinds of inflammations: acute and chronic. In acute inflammation, a series of events involving the local vascular systems, the immune system and various cells within the injured tissue work together to eradicate the harmful stimuli. If the inflammation does not resolve the problem, it can evolve into a chronic inflammation, where the type of cells involved changes and there is a simultaneous destruction and healing of the tissue from the inflammation process.
A portable foot-parameter-extracting system
NASA Astrophysics Data System (ADS)
Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan
2016-03-01
To solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry, and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. First, a new projector calibration algorithm based on a plane model is put forward to get the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two clouds is achieved by performing a first alignment using the Sampled Consensus - Initial Alignment (SAC-IA) algorithm and refining the alignment using the Iterative Closest Point (ICP) algorithm. Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter-extracting experiment shows the feasibility of the extracting algorithm. Compared with the traditional measurement method, the system is more portable, accurate, and robust.
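The ICP refinement stage can be sketched compactly: find nearest-neighbour correspondences, solve for the best rigid transform by SVD (the Kabsch step), and iterate. The sketch below assumes roughly pre-aligned clouds (SAC-IA's job) and omits outlier rejection and convergence checks.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iter=30):
    """Minimal point-to-point ICP aligning src onto dst (both (n, 3) arrays)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, nn = tree.query(cur)               # nearest-neighbour correspondences
        d = dst[nn]
        mu_s, mu_d = cur.mean(axis=0), d.mean(axis=0)
        H = (cur - mu_s).T @ (d - mu_d)       # cross-covariance of centred clouds
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ S @ U.T               # optimal rotation (Kabsch)
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step         # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```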
NASA Astrophysics Data System (ADS)
Bulovich, S. V.; Smirnov, E. M.
2018-05-01
The paper covers the application of the artificial viscosity technique to the numerical simulation of unsteady one-dimensional multiphase compressible flows on the basis of the multi-fluid approach. The system of governing equations is written under the assumption of pressure equilibrium between the "fluids" (phases); no interfacial exchange is taken into account. A model for evaluating the artificial viscosity coefficient has been suggested that (i) assumes this coefficient to be identical for all interpenetrating phases and (ii) uses the multiphase-mixture Wood equation to evaluate a characteristic speed of sound. The performance of the artificial viscosity technique has been evaluated via numerical solution of a model problem of pressure discontinuity breakdown in a three-fluid medium. It has been shown that a relatively simple numerical scheme, explicit and first-order, combined with the suggested artificial viscosity model, predicts a physically correct behavior of the moving shock and expansion waves, and that subsequent refinement of the computational grid results in monotonic approach to an asymptotic time-dependent solution, without non-physical oscillations.
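For reference, the Wood (mixture) speed of sound is given by the standard textbook relation below, with alpha_i the volume fractions, rho_i and c_i the phase densities and sound speeds, and rho_m the mixture density; it is quoted here for convenience, not taken from the paper.

```latex
\frac{1}{\rho_m c_m^2} = \sum_i \frac{\alpha_i}{\rho_i c_i^2},
\qquad
\rho_m = \sum_i \alpha_i \rho_i .
```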
Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.
Electromagnetic mixing laws: A supersymmetric approach
NASA Astrophysics Data System (ADS)
Niez, J. J.
2010-02-01
In this article we address the old problem of finding the effective dielectric constant of materials described either by a local random dielectric constant or by a set of non-overlapping spherical inclusions randomly dispersed in a host. We use a unified theoretical framework in which all of the most important Electromagnetic Mixing Laws (EML) can be recovered as the first iterative step of a family of results, thus opening the way to future improvements through refinements of the approximation schemes. When the material is described by a set of immersed inclusions characterized by their spatial correlation functions, we exhibit an EML which, being based on a minimal approximation scheme, does not come from the multiple scattering paradigm. It consists of the pure Hori-Yonezawa formula corrected by a power series in the inclusion density. The coefficients of the latter, given as sums of standard diagrams, are recast into electromagnetic quantities whose numerical calculation is amenable thanks to codes available on the web. The methods used and developed in this work are generic and can be applied in a large variety of areas ranging from mechanics to thermodynamics.
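Among the classical EMLs recovered by such frameworks, the Maxwell Garnett rule for spherical inclusions of permittivity ε_i at volume fraction f in a host of permittivity ε_h is the simplest reference point; it is quoted below for orientation only, since the Hori-Yonezawa formula and the diagrammatic corrections of the paper go beyond it.

```latex
\varepsilon_{\mathrm{eff}}
= \varepsilon_h\,
\frac{\varepsilon_i + 2\varepsilon_h + 2f(\varepsilon_i - \varepsilon_h)}
     {\varepsilon_i + 2\varepsilon_h - f(\varepsilon_i - \varepsilon_h)} .
```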
Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng
2017-09-01
An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this scheme, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) model are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of predictions within a factor of 2 and a factor of 5 of the observations exceeds 55% and 82%, respectively, in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
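The "fraction within a factor of N" statistic quoted above is straightforward to compute; a sketch of FAC2/FAC5 as commonly defined for paired predictions and observations follows, with made-up numbers.

```python
import numpy as np

def fac_n(pred, obs, n):
    """Fraction of predictions within a factor of n of the observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ratio = pred / obs
    return np.mean((ratio >= 1.0 / n) & (ratio <= n))

obs = np.array([1.0, 2.0, 5.0, 10.0])
pred = np.array([1.5, 0.8, 12.0, 9.0])
print(f"FAC2 = {fac_n(pred, obs, 2):.2f}, FAC5 = {fac_n(pred, obs, 5):.2f}")
```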
Integrated testing strategy (ITS) for bioaccumulation assessment under REACH.
Lombardo, Anna; Roncaglioni, Alessandra; Benfenati, Emilio; Nendza, Monika; Segner, Helmut; Fernández, Alberto; Kühne, Ralph; Franco, Antonio; Pauné, Eduard; Schüürmann, Gerrit
2014-08-01
REACH (registration, evaluation, authorisation and restriction of chemicals) regulation requires that all the chemicals produced or imported in Europe above 1 tonne/year are registered. To register a chemical, physicochemical, toxicological and ecotoxicological information needs to be reported in a dossier. REACH promotes the use of alternative methods to replace, refine and reduce the use of animal (eco)toxicity testing. Within the EU OSIRIS project, integrated testing strategies (ITSs) have been developed for the rational use of non-animal testing approaches in chemical hazard assessment. Here we present an ITS for evaluating the bioaccumulation potential of organic chemicals. The scheme includes the use of all available data (also the non-optimal ones), waiving schemes, analysis of physicochemical properties related to the end point and alternative methods (both in silico and in vitro). In vivo methods are used only as last resort. Using the ITS, in vivo testing could be waived for about 67% of the examined compounds, but bioaccumulation potential could be estimated on the basis of non-animal methods. The presented ITS is freely available through a web tool. Copyright © 2014 Elsevier Ltd. All rights reserved.
A scheme for racquet sports video analysis with the combination of audio-visual information
NASA Astrophysics Data System (ADS)
Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua
2005-07-01
As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols, including impacts (ball hits), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Second, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic tags, including rally scenes and break scenes. Third, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.
CREST v2.1 Refined by a Distributed Linear Reservoir Routing Scheme
NASA Astrophysics Data System (ADS)
Shen, X.; Hong, Y.; Zhang, K.; Hao, Z.; Wang, D.
2014-12-01
Hydrologic modeling is important in water resources management and in flood disaster warning and management. The routing scheme is among the most important components of a hydrologic model. In this study, we replace the lumped LRR (linear reservoir routing) scheme used in previous versions of the distributed hydrological model CREST (coupled routing and excess storage) with a newly proposed distributed LRR method, which is theoretically more suitable for distributed hydrological models. Consequently, we have effectively solved the problems of: 1) low values of channel flow in daily simulation, 2) discontinuous flow values along the river network during flood events, and 3) irrational model parameters. The CREST model equipped with both routing schemes has been tested in the Gan River basin. The distributed LRR scheme has been confirmed to outperform its lumped counterpart by two comparisons, hydrograph validation and visual inspection of the continuity of stream flow along the river: 1) CREST v2.1 (version 2.1), with the implementation of the distributed LRR, achieved an excellent result of [NSCE (Nash-Sutcliffe coefficient of efficiency), CC (correlation coefficient), bias] = [0.897, 0.947, -1.57%], while the original CREST v2.0 produced only negative NSCE, near-zero CC, and large bias. 2) CREST v2.1 produced a more naturally smooth river flow pattern along the river network, while v2.0 simulated bumpy and discontinuous discharge along the mainstream. Moreover, we further observe that by using the distributed LRR method, 1) all model parameters fell within their reasonable regions after automatic optimization, and 2) CREST forced by satellite-based precipitation and PET products produces reasonably good results, i.e., (NSCE, CC, bias) = (0.756, 0.871, -0.669%) in the case study, although there is still room for improvement given the products' low spatial resolution and underestimation of heavy rainfall events.
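The linear-reservoir idea behind the routing, Q = S/k with dS/dt = I - Q, discretizes per cell into a simple recursive update. A toy 1D cascade stand-in for a distributed river-network version is sketched below; the per-cell storage coefficients and the forcing are assumed, not CREST's actual configuration.

```python
import numpy as np

def route_cascade(inflow, k, dt=1.0):
    """Linear reservoir cascade: each cell obeys dS/dt = I - Q, Q = S/k.

    inflow: (t,) lateral inflow to the first cell
    k:      (cells,) storage coefficients, one per grid cell (distributed)
    Returns the outflow hydrograph of the last cell.
    """
    n = len(k)
    S = np.zeros(n)
    out = []
    for q_in in inflow:
        upstream = q_in
        for i in range(n):
            Q = S[i] / k[i]                 # linear reservoir outflow
            S[i] += dt * (upstream - Q)     # explicit storage update
            upstream = Q                    # feeds the next cell downstream
        out.append(upstream)
    return np.array(out)

# toy event: a pulse routed through 5 cells with varying storage coefficients
hydro = route_cascade(np.r_[np.ones(10), np.zeros(40)], k=np.linspace(2, 4, 5))
print(hydro.max())
```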
NASA Astrophysics Data System (ADS)
Gao, Tian; Qiu, Ling; Hammer, Mårten; Gunnarsson, Allan
2012-02-01
Temporal and spatial vegetation structure has an impact on biodiversity qualities, yet current schemes of biotope mapping incorporate these factors only to a limited extent. The purpose of this study is to evaluate the application of a modified biotope mapping scheme that includes temporal and spatial vegetation structure. A refined scheme was developed based on a biotope classification and applied to a green structure system in Helsingborg city in southern Sweden. It includes four parameters of vegetation structure: continuity of forest cover, age of dominant trees, horizontal structure, and vertical structure. The major green structure sites were determined by interpretation of panchromatic aerial photographs assisted by a field survey. A set of biotope maps was constructed on the basis of each level of the modified classification. The evaluation of the scheme covered two aspects in particular: comparison of species richness between long-continuity and short-continuity forests, based on identification of woodland continuity using ancient woodland indicator (AWI) species and related historical documents, and the spatial distribution of animals in the green space in relation to vegetation structure. The results indicate that (1) regarding forest continuity, the richness of AWI species was higher in long-continuity forests according to verification against historical documents; Simpson's diversity differed significantly between long- and short-continuity forests, and total species richness and Shannon's diversity were much higher in long-continuity forests, showing a very significant difference; and (2) the spatial vegetation structure and age of stands influence the richness and abundance of the avian fauna and rabbits, and distance to the nearest tree or shrub was a strong determinant of presence for these animal groups. It is concluded that continuity of forest cover, age of dominant trees, and horizontal and vertical structures of vegetation should now be included in urban biotope classifications.
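The diversity indices used in the evaluation are standard; a short sketch follows, assuming hypothetical species abundance counts per site.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum p_i ln p_i."""
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def simpson(counts):
    """Simpson diversity 1 - sum p_i^2 (probability two random draws differ)."""
    p = np.asarray(counts, float) / np.sum(counts)
    return 1.0 - np.sum(p ** 2)

long_continuity = [12, 8, 5, 5, 3, 2, 1, 1]   # hypothetical abundances
short_continuity = [20, 9, 2, 1]
print(shannon(long_continuity), simpson(long_continuity))
print(shannon(short_continuity), simpson(short_continuity))
```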
Jegstrup, I; Thon, R; Hansen, A K; Hoitinga, M Ritskes
2003-01-01
A thorough welfare evaluation performed as part of a general phenotype characterization for both transgenic and traditional mouse strains could not only contribute to the improvement of the welfare of laboratory animals, but could also benefit scientists, laboratory veterinarians and the inspecting authorities. A literature review was performed to identify and critically evaluate existing protocols for phenotype and welfare characterization. Several relevant schemes are available, among others the SHIRPA method, the modified score sheet of Morton and Griffiths, the FRIMORFO phenotype characterization scheme, and the behavioural phenotype schemes described by Crawley. These protocols were evaluated against four goals: their ability (1) to reveal any special needs or problems of a transgenic strain, (2) to cover the informational needs of the purchaser/user of the strain, (3) to refine the welfare of the transgenic animal model by identifying relevant humane endpoints, and (4) to prevent the duplication of animal models that have already been developed. The protocols described are useful for characterizing the phenotype and judging welfare disturbances; however, the total amount of information and the degree of detail vary considerably from one scheme to another. We present a proposal regarding the practical application of the various schemes that will secure proper treatment and the identification of humane endpoints. It is advocated that every purchase of a particular strain should be accompanied by an instruction document. This document needs to give detailed descriptions of the typical characteristics of the strain, as well as necessary actions concerning relevant treatment and humane endpoints. At the moment no such documents are required. The introduction of these types of documents will contribute to improvements in animal welfare as well as in the experimental results of laboratory animal experimentation.
Discontinuous Galerkin method for multicomponent chemically reacting flows and combustion
NASA Astrophysics Data System (ADS)
Lv, Yu; Ihme, Matthias
2014-08-01
This paper presents the development of a discontinuous Galerkin (DG) method for application to chemically reacting flows in subsonic and supersonic regimes under the consideration of variable thermo-viscous-diffusive transport properties, detailed and stiff reaction chemistry, and shock capturing. A hybrid-flux formulation is developed for treatment of the convective fluxes, combining a conservative Riemann-solver and an extended double-flux scheme. A computationally efficient splitting scheme is proposed, in which advection and diffusion operators are solved in the weak form, and the chemically stiff substep is advanced in the strong form using a time-implicit scheme. The discretization of the viscous-diffusive transport terms follows the second form of Bassi and Rebay, and the WENO-based limiter due to Zhong and Shu is extended to multicomponent systems. Boundary conditions are developed for subsonic and supersonic flow conditions, and the algorithm is coupled to thermochemical libraries to account for detailed reaction chemistry and complex transport. The resulting DG method is applied to a series of test cases of increasing physico-chemical complexity. Beginning with one- and two-dimensional multispecies advection and shock-fluid interaction problems, computational efficiency, convergence, and conservation properties are demonstrated. This study is followed by considering a series of detonation and supersonic combustion problems to investigate the convergence-rate and the shock-capturing capability in the presence of one- and multistep reaction chemistry. The DG algorithm is then applied to diffusion-controlled deflagration problems. By examining convergence properties for polynomial order and spatial resolution, and comparing these with second-order finite-volume solutions, it is shown that optimal convergence is achieved and that polynomial refinement provides advantages in better resolving the localized flame structure and complex flow-field features associated with multidimensional and hydrodynamic/thermo-diffusive instabilities in deflagration and detonation systems. Comparisons with standard third- and fifth-order WENO schemes are presented to illustrate the benefit of the DG scheme for application to detonation and multispecies flow/shock-interaction problems.
Improvement of the 2D/1D Method in MPACT Using the Sub-Plane Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Aaron M; Collins, Benjamin S; Downar, Thomas
Oak Ridge National Laboratory and the University of Michigan are jointly developing the MPACT code to be the primary neutron transport code for the Virtual Environment for Reactor Applications (VERA). To solve the transport equation, MPACT uses the 2D/1D method, which decomposes the problem into a stack of 2D planes that are then coupled with a 1D axial calculation. MPACT uses the Method of Characteristics for the 2D transport calculations and P3 for the 1D axial calculations, then accelerates the solution using the 3D Coarse Mesh Finite Difference (CMFD) method. Increasing the number of 2D MOC planes will increase the accuracy of the calculation, but will increase the computational burden of the calculations and can cause slow convergence or instability. To prevent these problems while maintaining accuracy, the sub-plane scheme has been implemented in MPACT. This method sub-divides the MOC planes into sub-planes, refining the 1D P3 and 3D CMFD calculations without increasing the number of 2D MOC planes. To test the sub-plane scheme, three of the VERA Progression Problems were selected: Problem 3, a single assembly problem; Problem 4, a 3x3 assembly problem with control rods and pyrex burnable poisons; and Problem 5, a quarter core problem. These three problems demonstrated that the sub-plane scheme can accurately produce intra-plane axial flux profiles that preserve the accuracy of the fine mesh solution. The eigenvalue differences are negligibly small, and differences in 3D power distributions are less than 0.1% for realistic axial meshes. Furthermore, the convergence behavior with the sub-plane scheme compares favorably with the conventional 2D/1D method, and the computational expense is decreased for all calculations due to the reduction in expensive MOC calculations.
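As a rough illustration of the sub-plane bookkeeping described above, the Python sketch below (entirely hypothetical numbers, not MPACT code) shows how a coarse-plane quantity can be prolonged onto axial sub-planes, modified by a stand-in 1D axial solve, and averaged back without changing the number of MOC planes.

    import numpy as np

    moc_planes = np.array([10.0, 12.0, 9.0])       # coarse-plane source guess
    subs_per_plane = 4
    fine = np.repeat(moc_planes, subs_per_plane)    # prolong onto sub-planes
    fine *= np.linspace(0.9, 1.1, fine.size)        # stand-in for the 1D axial solve
    collapsed = fine.reshape(-1, subs_per_plane).mean(axis=1)   # restrict back
    print(collapsed)   # intra-plane axial shape resolved; MOC plane count unchanged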
NASA Astrophysics Data System (ADS)
Ivan, L.; De Sterck, H.; Susanto, A.; Groth, C. P. T.
2015-02-01
A fourth-order accurate finite-volume scheme for hyperbolic conservation laws on three-dimensional (3D) cubed-sphere grids is described. The approach is based on a central essentially non-oscillatory (CENO) finite-volume method that was recently introduced for two-dimensional compressible flows and is extended to 3D geometries with structured hexahedral grids. Cubed-sphere grids feature hexahedral cells with nonplanar cell surfaces, which are handled with high-order accuracy using trilinear geometry representations in the proposed approach. Varying stencil sizes and slope discontinuities in grid lines occur at the boundaries and corners of the six sectors of the cubed-sphere grid where the grid topology is unstructured, and these difficulties are handled naturally with high-order accuracy by the multidimensional least-squares based 3D CENO reconstruction with overdetermined stencils. A rotation-based mechanism is introduced to automatically select appropriate smaller stencils at degenerate block boundaries, where fewer ghost cells are available and the grid topology changes, requiring stencils to be modified. Combining these building blocks results in a finite-volume discretization for conservation laws on 3D cubed-sphere grids that is uniformly high-order accurate in all three grid directions. While solution-adaptivity is natural in the multi-block setting of our code, high-order accurate adaptive refinement on cubed-sphere grids is not pursued in this paper. The 3D CENO scheme is an accurate and robust solution method for hyperbolic conservation laws on general hexahedral grids that is attractive because it is inherently multidimensional by employing a K-exact overdetermined reconstruction scheme, and it avoids the complexity of considering multiple non-central stencil configurations that characterizes traditional ENO schemes. Extensive numerical tests demonstrate fourth-order convergence for stationary and time-dependent Euler and magnetohydrodynamic flows on cubed-sphere grids, and robustness against spurious oscillations at 3D shocks. Performance tests illustrate efficiency gains that can be potentially achieved using fourth-order schemes as compared to second-order methods for the same error level. Applications on extended cubed-sphere grids incorporating a seventh root block that discretizes the interior of the inner sphere demonstrate the versatility of the spatial discretization method.
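The overdetermined least-squares reconstruction at the heart of a CENO-type scheme can be illustrated with a minimal Python sketch: a quadratic is fitted to more stencil values than it has coefficients. A 1D point stencil stands in for the 3D hexahedral cells of the paper; everything here is illustrative.

    import numpy as np

    def k_exact_reconstruct(x_stencil, u_stencil, x0, degree=2):
        """Least-squares fit of a degree-`degree` polynomial about x0."""
        dx = x_stencil - x0
        # Vandermonde matrix: more stencil rows than polynomial unknowns.
        A = np.vander(dx, degree + 1, increasing=True)
        coeffs, *_ = np.linalg.lstsq(A, u_stencil, rcond=None)
        return coeffs  # coeffs[0] is the reconstructed value at x0

    x = np.linspace(0.0, 1.0, 7)          # 7 stencil points, 3 unknowns
    u = np.sin(2 * np.pi * x)
    print(k_exact_reconstruct(x, u, x0=0.5))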
Ganguly, Parthasarathi; Jehan, Kate; de Costa, Ayesha; Mavalankar, Dileep; Smith, Helen
2014-11-05
In India a lack of access to emergency obstetric care contributes to maternal deaths. In 2005 Gujarat state launched a public-private partnership (PPP) programme, Chiranjeevi Yojana (CY), under which the state pays accredited private obstetricians a fixed fee for providing free intrapartum care to poor and tribal women. A million women have delivered under CY so far. The participation of private obstetricians in the partnership is central to the programme's effectiveness. We explored with private obstetricians the reasons and experiences that influenced their decisions to participate in the CY programme. In this qualitative study we interviewed 24 purposefully selected private obstetricians in Gujarat. We explored their views on the scheme, the reasons and experiences leading up to decisions to participate, not participate or withdraw from the CY, as well as their opinions about the scheme's impact. We analysed data using the Framework approach. Participants expressed a tension between doing public good and making a profit. Bureaucratic procedures and perceptions of programme misuse seemed to influence providers to withdraw from the programme or not participate at all. Providers feared that participating in CY would lower the status of their practices and some were deterred by the likelihood of more clinically difficult cases among eligible CY beneficiaries. Some providers resented taking on what they saw as a state responsibility to provide safe maternity services to poor women. Younger obstetricians in the process of establishing private practices, and those in more remote, 'less competitive' areas, were more willing to participate in CY. Some doctors had reservations over the quality of care that doctors could provide given the financial constraints of the scheme. While some private obstetricians willingly participate in CY and are satisfied with its functioning, a larger number shared concerns about participation. Operational difficulties and a trust deficit between the public and private health sectors affect retention of private providers in the scheme. Further refinement of the scheme, in consultation with private partners, and trust building initiatives could strengthen the programme. These findings offer lessons to those developing public-private partnerships to widen access to health services for underprivileged groups.
Love, Seth; Chalmers, Katy; Ince, Paul; Esiri, Margaret; Attems, Johannes; Kalaria, Raj; Jellinger, Kurt; Yamada, Masahito; McCarron, Mark; Minett, Thais; Matthews, Fiona; Greenberg, Steven; Mann, David; Kehoe, Patrick Gavin
2015-01-01
In a collaboration involving 11 groups with research interests in cerebral amyloid angiopathy (CAA), we used a two-stage process to develop and in turn validate a new consensus protocol and scoring scheme for the assessment of CAA and associated vasculopathic abnormalities in post-mortem brain tissue. Stage one used an iterative Delphi-style survey to develop the consensus protocol. The resultant scoring scheme was tested on a series of digital images and paraffin sections that were circulated blind to a number of scorers. The scoring scheme and choice of staining methods were refined by open-forum discussion. The agreed protocol scored parenchymal and meningeal CAA on a 0-3 scale, capillary CAA as present/absent and vasculopathy on a 0-2 scale, in the four cortical lobes, which were scored separately. A further assessment involving three centres was then undertaken. Neuropathologists in three centres (Bristol, Oxford and Sheffield) independently scored sections from 75 cases (25 from each centre) and high inter-rater reliability was demonstrated. Stage two used the results of the three-centre assessment to validate the protocol by investigating previously described associations between APOE genotype (previously determined), and both CAA and vasculopathy. The association of capillary CAA, with or without arteriolar CAA, with APOE ε4 was confirmed. However, APOE ε2 was also found to be a strong risk factor for the development of CAA, not only in AD but also in elderly non-demented controls. Further validation of this protocol and scoring scheme is encouraged, to aid its wider adoption to facilitate collaborative and replication studies of CAA. [This corrects the article on p. 19 in vol. 3, PMID: 24754000.]
Geological mapping of the Rainbow Massif, Mid-Atlantic Ridge, 36°14'N
NASA Astrophysics Data System (ADS)
Ildefonse, B.; Fouquet, Y.; Hoisé, E.; Dyment, J.; Gente, P.; Thibaud, R.; Bissessur, D.; Yatheesh, V.; MOMARDREAM 2008 Scientific Party*
2008-12-01
The Rainbow hydrothermal field at 36°14'N on the Mid-Atlantic Ridge is one of the few known sites hosted in ultramafic basement. The Rainbow Massif is located along the non-transform offset between the AMAR and South AMAR second-order ridge segments, and presents the characteristic dome morphology of oceanic core complexes, although no corrugated surface has been observed so far. One of the objectives of Cruises MOMARDREAM (July 2007, R/V Pourquoi Pas ?; Aug-Sept 2008, R/V Atalante) was to study the petrological and structural context of the hydrothermal system at the scale of the Rainbow Massif. Our geological sampling complements earlier sampling carried out during Cruises FLORES (1997) and IRIS (2001), and consisted of dredge hauls and dives by the manned submersible Nautile and the ROV Victor. The tectonics of the Rainbow Massif is dominated by a N-S trending fault pattern on the western flank of the massif, and a series of SW-NW ridges on its northeastern side. The active hydrothermal site is located in the area where these two systems intersect. The most abundant recovered rock type is peridotite (harzburgite and dunite), which presents a variety of serpentinization styles and intensities, and a variety of deformation styles (commonly undeformed, sometimes displaying ductile or brittle foliations). The serpentinites are frequently oxidized. Some peridotite samples show melt impregnation textures. Massive chromitite was recovered in one dredge haul. Variously evolved gabbroic rocks were collected as discrete samples or as centimeter- to decimeter-thick dikes in peridotites. Basalts and fresh basaltic glass were also sampled in talus and sediments on the southwestern and northeastern flanks of the massif. Our sampling is consistent with the lithological variability encountered in oceanic core complexes along the Mid-Atlantic Ridge and Southwest Indian Ridge. The stockwork of the hydrothermal system has been sampled on the western side of the present-day hydrothermal field, along N-S trending normal fault scarps, and within the talus underneath. It is made of massive sulfides, strongly altered serpentinites, and breccias containing clasts of iron sulfide/oxide-impregnated serpentinites. * K. Bukas, V. Cueff Gauchard, L. Durand, F. Gaill, C. Konn, F. Lartaud, N. Le Bris, G. Musset, A. Nunes, J. Renard, V. Riou, A. Tasiemski, P. Torres, I. Vojdani, M. Zbinden
Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning.
Jeong, Han-You; Nguyen, Hoa-Hung; Bhawiyuga, Adhitya
2018-04-04
Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighboring vehicles via vehicle-to-everything communications. Given both estimates of the vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of the MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of the matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning.
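A minimal Python sketch of the greedy matching step is given below. The Euclidean cost used here is only a stand-in for the paper's spatiotemporal dissimilarity metric, and all names and values are hypothetical.

    import numpy as np

    def greedy_match(local_states, remote_states):
        """Greedy minimum-weight matching between two sets of (x, y) states."""
        costs = np.linalg.norm(
            local_states[:, None, :] - remote_states[None, :, :], axis=-1)
        matches = []
        while np.isfinite(costs).any():
            i, j = np.unravel_index(np.argmin(costs), costs.shape)
            matches.append((i, j, costs[i, j]))
            costs[i, :] = np.inf   # each track is matched at most once
            costs[:, j] = np.inf
        return matches

    local = np.array([[0.0, 0.0], [5.0, 1.0]])
    remote = np.array([[4.8, 1.1], [0.2, -0.1], [9.0, 9.0]])
    print(greedy_match(local, remote))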
Review of the development of laser fluorosensors for oil spill application.
Brown, Carl E; Fingas, Mervin F
2003-01-01
As laser fluorosensors provide their own source of excitation, they are known as active sensors. Being active sensors, laser fluorosensors can be employed around the clock, in daylight or in total darkness. Certain compounds, such as aromatic hydrocarbons, present in petroleum oils absorb ultraviolet laser light and become electronically excited. This excitation is quickly removed by the process of fluorescence emission, primarily in the visible region of the spectrum. By careful choice of the excitation laser wavelength and range-gated detection at selected emission wavelengths, petroleum oils can be detected and classified into three broad categories: light refined, crude or heavy refined. This paper will review the development of laser fluorosensors for oil spill application, with emphasis on system components such as excitation laser source, and detection schemes that allow these unique sensors to be employed for the detection and classification of petroleum oils. There have been a number of laser fluorosensors developed in recent years, many of which are strictly research and development tools. Certain of these fluorosensors have been ship-borne instruments that have been mounted in aircraft for the occasional airborne mission. Other systems are mounted permanently on aircraft for use in either surveillance or spill response roles.
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique. We construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion, even with poor a priori knowledge of the system dynamics. In particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neal, C.R.; Davidson, J.P.
The Malaitan alnöite contains a rich and varied megacryst suite of unprecedented compositional range. The authors have undertaken trace element and isotope modeling in order to formulate a petrogenetic scheme which links the host alnöite to its entrained megacrysts. This requires that a proto-alnöite magma is the product of zone refining initiated by diapiric upwelling (where the initial melt passes through 200 times its volume of mantle). Isotopic evidence indicates the source of the proto-alnöite contains a time-integrated LREE-depleted signature. Impingement upon the rigid lithosphere halts or dramatically slows the upward progress of the mantle diapir. At this point, the magma cools and megacryst fractionation begins, with augites crystallizing first, followed by subcalcic diopsides and finally phlogopites. Garnet probably crystallizes over the entire range of clinopyroxene fractionation. Estimated proportions of fractionating phases are 30% augite, 24.5% subcalcic diopside, 27% garnet, 12.9% phlogopite, 5% bronzite, 0.5% ilmenite, and 0.1% zircon. As this proto-alnöite magma crystallizes, it assimilates a subducted component of seawater-altered basalt which underplates the Ontong Java Plateau. This is witnessed in the isotopic composition of the megacrysts and alnöite.
Recent advances in high-order WENO finite volume methods for compressible multiphase flows
NASA Astrophysics Data System (ADS)
Dumbser, Michael
2013-10-01
We present two new families of better than second-order accurate Godunov-type finite volume methods for the solution of nonlinear hyperbolic partial differential equations with nonconservative products. One family is based on a high order Arbitrary-Lagrangian-Eulerian (ALE) formulation on moving meshes, which makes it possible to resolve the material contact wave very sharply when the mesh is moved at the speed of the material interface. The other family of methods is based on a high order Adaptive Mesh Refinement (AMR) strategy, where the mesh can be strongly refined in the vicinity of the material interface. Both classes of schemes have several building blocks in common, in particular: a high order WENO reconstruction operator to obtain high order of accuracy in space; the use of an element-local space-time Galerkin predictor step which evolves the reconstruction polynomials in time and allows high order of accuracy in time to be reached in a single step; and the use of a path-conservative approach to treat the nonconservative terms of the PDE. We show applications of both methods to the Baer-Nunziato model for compressible multiphase flows.
NASA Astrophysics Data System (ADS)
Nagai, Haruyasu; Terada, Hiroaki; Tsuduki, Katsunori; Katata, Genki; Ota, Masakazu; Furuno, Akiko; Akari, Shusaku
2017-09-01
In order to assess the radiological dose to the public resulting from the Fukushima Daiichi Nuclear Power Station (FDNPS) accident in Japan, especially for the early phase of the accident when no measured data are available for that purpose, the spatial and temporal distribution of radioactive materials in the environment is reconstructed by computer simulations. In this study, by refining the source term of radioactive materials discharged into the atmosphere and modifying the atmospheric transport, dispersion and deposition model (ATDM), the atmospheric dispersion simulation of radioactive materials is improved. A database of the spatiotemporal distribution of radioactive materials in the air and on the ground surface is then developed from the output of the simulation. This database is used in other studies for dose assessment by coupling it with the behavioral patterns of evacuees from the FDNPS accident. By improving the ATDM simulation to use a new meteorological model and a sophisticated deposition scheme, the simulations reproduced the 137Cs and 131I deposition patterns well. To further improve the reproducibility of the dispersion processes, the source term was further refined by optimizing it against the improved ATDM simulation using new monitoring data.
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the wavelet-threshold and GrabCut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
An adaptive interpolation scheme for molecular potential energy surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kowalewski, Markus, E-mail: mkowalew@uci.edu; Larsson, Elisabeth; Heryudono, Alfa
The calculation of potential energy surfaces for quantum dynamics can be a time consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
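The adaptive-node idea can be sketched with SciPy's RBFInterpolator (thin-plate, i.e. polyharmonic, splines): interpolate, estimate the error at candidate points, and add samples only where a tolerance is exceeded. Here, for simplicity, the error is measured against a cheap model function, whereas the paper uses a local error estimate; the partition-of-unity machinery is omitted, and all parameters are illustrative.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def f(x):                      # stand-in "potential energy surface"
        return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

    rng = np.random.default_rng(0)
    nodes = rng.uniform(-1, 1, (25, 2))
    for _ in range(5):             # adaptive refinement sweeps
        interp = RBFInterpolator(nodes, f(nodes), kernel="thin_plate_spline")
        cand = rng.uniform(-1, 1, (200, 2))
        err = np.abs(interp(cand) - f(cand))   # error estimate at candidates
        new = cand[err > 1e-2]                 # refine only where needed
        if new.size == 0:
            break
        nodes = np.vstack([nodes, new[:20]])   # cap nodes added per sweep
    print(len(nodes), "sample points used")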
System identification of the JPL micro-precision interferometer truss - Test-analysis reconciliation
NASA Technical Reports Server (NTRS)
Red-Horse, J. R.; Marek, E. L.; Levine-West, M.
1993-01-01
The JPL Micro-Precision Interferometer (MPI) is a testbed for studying the use of control-structure interaction technology in the design of space-based interferometers. A layered control architecture will be employed to regulate the interferometer optical system to tolerances in the nanometer range. An important aspect of designing and implementing the control schemes for such a system is the need for high fidelity, test-verified analytical structural models. This paper focuses on one aspect of the effort to produce such a model for the MPI structure, test-analysis model reconciliation. Pretest analysis, modal testing, and model refinement results are summarized for a series of tests at both the component and full system levels.
Numerical analysis of flow about a total temperature sensor
NASA Technical Reports Server (NTRS)
Von Lavante, Ernst; Bruns, Russell L., Jr.; Sanetrik, Mark D.; Lam, Tim
1989-01-01
The unsteady flowfield about an airfoil-shaped inlet temperature sensor has been investigated using the thin-layer and full Navier-Stokes equations. A finite-volume formulation of the governing equations was used in conjunction with a Runge-Kutta time stepping scheme to analyze the flow about the sensor. Flow characteristics for this configuration were established at Mach numbers of 0.5 and 0.8 for different Reynolds numbers. The results were obtained for configurations of increasing complexity; important physical phenomena such as shock formation, boundary-layer separation, and unsteady wake formation were noted. Based on the computational results, recommendations for further study and refinement of the inlet temperature sensor were made.
NASA Technical Reports Server (NTRS)
Venuturmilli, Rajasekhar; Zhang, Yong; Chen, Lea-Der
2003-01-01
Enclosed flames are found in many industrial applications such as power plants, gas-turbine combustors and jet engine afterburners. A better understanding of the burner stability limits can lead to the development of combustion systems that extend the lean and rich limits of combustor operations. This paper reports a fundamental study of the stability limits of co-flow laminar jet diffusion flames. A numerical study was conducted that used an adaptive mesh refinement scheme in the calculation. Experiments were conducted in two test rigs with two different fuels diluted with three inert species. The numerical stability limits were compared with microgravity experimental data. Additional normal-gravity experimental results were also presented.
Multigrid techniques for the solution of the passive scalar advection-diffusion equation
NASA Technical Reports Server (NTRS)
Phillips, R. E.; Schmidt, F. W.
1985-01-01
The solution of elliptic passive scalar advection-diffusion equations is required in the analysis of many turbulent flow and convective heat transfer problems. The accuracy of the solution may be affected by the presence of regions containing large gradients of the dependent variables. The multigrid concept of local grid refinement is a method for improving the accuracy of the calculations in these problems. In combination with the multilevel acceleration techniques, an accurate and efficient computational procedure is developed. In addition, a robust implementation of the QUICK finite-difference scheme is described. Calculations of a test problem are presented to quantitatively demonstrate the advantages of the multilevel-multigrid method.
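For reference, the QUICK face value mentioned above has a simple closed form on a uniform grid. A short Python sketch for flow in the positive direction, with purely illustrative data:

    import numpy as np

    def quick_face(phi, i):
        """QUICK face value between cells i and i+1 for flow in the +x direction."""
        # phi[i-1] = far upstream (U), phi[i] = upstream (C), phi[i+1] = downstream (D)
        return 0.75 * phi[i] + 0.375 * phi[i + 1] - 0.125 * phi[i - 1]

    phi = np.array([1.0, 2.0, 4.0, 8.0])
    print(quick_face(phi, i=2))   # face between cells 2 and 3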
Electronic structure probed with positronium: Theoretical viewpoint
NASA Astrophysics Data System (ADS)
Kuriplach, Jan; Barbiellini, Bernardo
2018-05-01
We carefully examine how positronium can be used to study the electronic structure of materials. A recent combined experimental and computational study [A.C.L. Jones et al., Phys. Rev. Lett. 117, 216402 (2016)] has shown that the positronium affinity can be used to benchmark exchange-correlation approximations in copper. Here we investigate whether an improvement can be achieved by increasing the numerical precision of the calculations and by employing the strongly constrained and appropriately normed (SCAN) scheme, and we extend the study to other selected systems such as aluminum and high-entropy alloys. From the methodological viewpoint, the computations of the positronium affinity are further refined, and an alternative way of determining the electron chemical potential using charged supercells is examined.
NASA Astrophysics Data System (ADS)
Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.
2015-12-01
The full resolution of the shallow-water equations for modeling flash floods may have a high computational cost, so the majority of flood simulation software used for flood forecasting relies on simplifications of this model: 1D approximations, diffusive or kinematic wave approximations, or exotic models using non-physical free parameters. These kinds of approximations save a great deal of computational time, but at an unquantified cost in simulation precision. To drastically reduce the cost of full 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk [1], which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme, which is second-order accurate in time and space, and treats the friction and rain terms implicitly in a finite-volume approach. We demonstrate the validity of our simulation on the Tewkesbury (UK) flood of July 2007, as shown in Fig. 1. For this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed, and the criterion with the best computational-time/precision ratio is proposed. Finally, we present the power law relating computational time to maximum resolution, and we show that, thanks to the fractal dimension of the topography, this law for our 2D simulation is close to that of a 1D simulation. [1] http://basilisk.fr/
Dynamic fisheye grids for binary black hole simulations
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Noble, Scott C.
2014-03-01
We present a new warped gridding scheme adapted to simulating gas dynamics in binary black hole spacetimes. The grid concentrates grid points in the vicinity of each black hole to resolve the smaller scale structures there, and rarefies grid points away from each black hole to keep the overall problem size at a practical level. In this respect, our system can be thought of as a ‘double’ version of the fisheye coordinate system, used before in numerical relativity codes for evolving binary black holes. The gridding scheme is constructed as a mapping between a uniform coordinate system—in which the equations of motion are solved—to the distorted system representing the spatial locations of our grid points. Since we are motivated to eventually use this system for circumbinary disc calculations, we demonstrate how the distorted system can be constructed to asymptote to the typical spherical polar coordinate system, amenable to efficiently simulating orbiting gas flows about central objects with little numerical diffusion. We discuss its implementation in the Harm3d code, tailored to evolve the magnetohydrodynamics equations in curved spacetimes. We evaluate the performance of the system’s implementation in Harm3d with a series of tests, such as the advected magnetic field loop test, magnetized Bondi accretion, and evolutions of hydrodynamic discs about a single black hole and about a binary black hole. Like we have done with Harm3d, this gridding scheme can be implemented in other unigrid codes as a (possibly) simpler alternative to adaptive mesh refinement.
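The warping idea can be illustrated with a toy 1D "double fisheye" map in Python: uniform computational coordinates are compressed near two centres so the physical spacing is finest there. The tanh profile and all parameters below are illustrative, not the Harm3d construction.

    import numpy as np

    def double_fisheye(xi, centers=(-0.3, 0.3), width=0.05, squeeze=4.0):
        """Map uniform xi in [-1, 1] to warped x; spacing is ~squeeze times finer at the centres."""
        x = xi.astype(float).copy()
        for c in centers:
            # subtracting a smooth tanh bump compresses spacing near each centre
            x = x - width * (1.0 - 1.0 / squeeze) * np.tanh((xi - c) / width)
        return x

    xi = np.linspace(-1.0, 1.0, 201)
    dx = np.diff(double_fisheye(xi))
    print(dx.min(), dx.max())   # finest spacing sits at the two centres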
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
Multiresolution strategies for the numerical solution of optimal control problems
NASA Astrophysics Data System (ADS)
Jain, Sachin
Many numerical techniques exist for solving optimal control problems, but less work has been done on making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources, both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution with fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed, which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples demonstrate the stability and robustness of the proposed algorithm, which adapts dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to regions where the solution exhibits sharp features and fewer points to regions where the solution is smooth. Thereby, the computational time and memory usage are reduced significantly, while maintaining an accuracy equivalent to that obtained on a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable, nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease, and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method for balancing several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform grid implementations are demonstrated with the help of several nontrivial examples. Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed. For such problems, high accuracy is desirable only in the immediate future, yet the ultimate mission objectives should be accommodated as well. Intelligent trajectory generation for such situations is thus enabled by introducing the idea of multigrid temporal resolution, solving the associated trajectory optimization problem on a non-uniform grid across time that is adapted to: (i) the immediate future, and (ii) potential discontinuities in the state and control variables.
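The multiresolution compression test described above can be sketched in Python: a dyadic grid point is kept only when its value is poorly predicted by interpolation from the coarser level, so points accumulate around switchings. This is a schematic of the idea under assumed thresholds, not the thesis algorithm.

    import numpy as np

    def adapted_grid(f, levels=8, tol=1e-3):
        """Keep dyadic points on [0, 1] whose value is badly predicted by the coarser level."""
        kept = [0.0, 1.0]
        for lev in range(1, levels + 1):
            h = 0.5 ** lev
            for k in range(1, 2 ** lev, 2):        # points new at this level
                x = k * h
                pred = 0.5 * (f(x - h) + f(x + h))  # interpolation from parent points
                if abs(f(x) - pred) > tol:          # detail-coefficient test
                    kept.append(x)
        return np.sort(np.array(kept))

    switch = lambda t: np.tanh(50.0 * (t - 0.4))    # stand-in for a control switching
    g = adapted_grid(switch)
    print(len(g), "points, finest spacing", np.diff(g).min())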
New MHD feedback control schemes using the MARTe framework in RFX-mod
NASA Astrophysics Data System (ADS)
Piron, Chiara; Manduchi, Gabriele; Marrelli, Lionello; Piovesan, Paolo; Zanca, Paolo
2013-10-01
Real-time feedback control of MHD instabilities is a topic of major interest in magnetic thermonuclear fusion, since it makes it possible to optimize a device's performance even beyond its stability bounds. The stability properties of different magnetic configurations are important test benches for real-time control systems. RFX-mod, a Reversed Field Pinch experiment that can also operate as a tokamak, is a well-suited device for investigating this topic. It is equipped with a sophisticated magnetic feedback system that controls MHD instabilities and error fields by means of 192 active coils and a corresponding grid of sensors. In addition, the RFX-mod control system has recently gained new potential thanks to the introduction of the MARTe framework and of a new CPU architecture. These capabilities allow the study of new feedback algorithms relevant to both RFP and tokamak operation and contribute to the debate on the optimal feedback strategy. This work focuses on the design of new feedback schemes. For this purpose new magnetic sensors have been explored, together with new algorithms that refine the de-aliasing computation of the radial sideband harmonics. The comparison of the performance of different sensors and feedback strategies is described in both RFP and tokamak experiments.
Massive parallel 3D PIC simulation of negative ion extraction
NASA Astrophysics Data System (ADS)
Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu
2017-09-01
The 3D PIC-MCC code ONIX is dedicated to modeling Negative hydrogen/deuterium Ion (NI) extraction and the co-extraction of electrons from radio-frequency driven, low pressure plasma sources. It provides valuable insight into the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, leading to numerical electron heating. Important steps have been achieved in terms of computational performance and parallelization efficiency, allowing successful massively parallel calculations (4096 cores), which are imperative to resolve the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment, but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall-absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found to be well adapted to simulating the sheath in front of the plasma grid.
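The Debye-length constraint that motivated the refined mesh can be checked with a few lines of Python; the plasma parameters and cell size below are illustrative, not the ONIX source settings.

    import numpy as np

    EPS0, QE = 8.854e-12, 1.602e-19   # vacuum permittivity, elementary charge

    def debye_length(n_e, T_e_eV):
        """Electron Debye length [m] for density n_e [m^-3] and temperature in eV."""
        return np.sqrt(EPS0 * T_e_eV * QE / (n_e * QE**2))

    lam_d = debye_length(n_e=1e17, T_e_eV=2.0)
    dx = 5e-5   # hypothetical cell size [m]
    print(f"Debye length {lam_d:.2e} m; mesh resolves it: {dx < lam_d}")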
NASA Astrophysics Data System (ADS)
Li, Xianzhe; Jiang, Ping; Zhang, Yan; Ma, Weichun
2016-12-01
This study utilizes 521,631 activity data points from the 2007 Shanghai Pollution Source Census to compile a stationary carbon emission inventory for Shanghai. The inventory generated from our dataset shows that a large portion of Shanghai's total energy use consists of coal-oriented energy consumption. The electricity and heat production industries, iron and steel mills, and the petroleum refining industry are the main carbon emitters. In addition, most of these industries are located in Baoshan District, which is Shanghai's largest contributor of carbon emissions. Policy makers can use the enterprise-level carbon emission inventory and the method designed in this study to construct sound carbon emission reduction policies. The carbon trading scheme to be established in Shanghai based on the developed carbon inventory is also introduced in this paper, with the aim of promoting the monitoring, reporting and verification of carbon trading. Moreover, we believe that it might be useful to consider the participation of industries such as food processing, beverage, and tobacco in Shanghai's carbon trading scheme. Based on the results contained herein, we recommend establishing a comprehensive carbon emission inventory by inputting data from the pollution source census used in this study.
NASA Astrophysics Data System (ADS)
Venkatachari, Balaji Shankar; Chang, Chau-Lyan
2016-11-01
The focus of this study is scale-resolving simulations of the canonical normal-shock/isotropic-turbulence interaction using unstructured tetrahedral meshes and the space-time conservation element solution element (CESE) method. Despite decades of development in unstructured mesh methods and their potential benefits of easing mesh generation around complex geometries and mesh adaptation, direct numerical and large-eddy simulations of turbulent flows are predominantly carried out using structured hexahedral meshes. This is due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for unstructured meshes that can resolve multiple physical scales and flow discontinuities simultaneously. The CESE method, owing to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space, has the potential to accurately simulate turbulent flows using tetrahedral meshes. As part of the study, various regimes of the shock-turbulence interaction (wrinkled and broken shock regimes) will be investigated, along with a study of how adaptive refinement of tetrahedral meshes benefits this problem. The research funding for this paper has been provided by the Revolutionary Computational Aerosciences (RCA) subproject under the NASA Transformative Aeronautics Concepts Program (TACP).
Gradient Echo Quantum Memory in Warm Atomic Vapor
Pinel, Olivier; Hosseini, Mahdi; Sparkes, Ben M.; Everett, Jesse L.; Higginbottom, Daniel; Campbell, Geoff T.; Lam, Ping Koy; Buchler, Ben C.
2013-01-01
Gradient echo memory (GEM) is a protocol for storing optical quantum states of light in atomic ensembles. The primary motivation for such a technology is that quantum key distribution (QKD), which uses Heisenberg uncertainty to guarantee security of cryptographic keys, is limited in transmission distance. The development of a quantum repeater is a possible path to extend QKD range, but a repeater will need a quantum memory. In our experiments we use a gas of rubidium 87 vapor that is contained in a warm gas cell. This makes the scheme particularly simple. It is also a highly versatile scheme that enables in-memory refinement of the stored state, such as frequency shifting and bandwidth manipulation. The basis of the GEM protocol is to absorb the light into an ensemble of atoms that has been prepared in a magnetic field gradient. The reversal of this gradient leads to rephasing of the atomic polarization and thus recall of the stored optical state. We will outline how we prepare the atoms and this gradient and also describe some of the pitfalls that need to be avoided, in particular four-wave mixing, which can give rise to optical gain. PMID:24300586
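The gradient-reversal rephasing at the heart of GEM can be illustrated with a toy Python calculation: atoms at position z accumulate phase at a rate proportional to z, and flipping the gradient at t_flip unwinds that phase, so the collective amplitude revives at 2*t_flip. All numbers are arbitrary; this is only a cartoon of the protocol, not a model of the experiment.

    import numpy as np

    eta, t_flip = 2 * np.pi * 5.0, 1.0           # gradient strength, flip time
    z = np.linspace(-1.0, 1.0, 400)              # atom positions in the cell

    def coherence(t):
        """Collective dipole amplitude |mean over atoms of exp(i*phase)|."""
        phase = eta * z * (t if t <= t_flip else 2 * t_flip - t)
        return abs(np.exp(1j * phase).mean())

    for t in (0.0, 1.0, 1.9, 2.0):               # echo revives at t = 2*t_flip
        print(f"t={t:.1f}  |S(t)|={coherence(t):.3f}")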
A weakly-compressible Cartesian grid approach for hydrodynamic flows
NASA Astrophysics Data System (ADS)
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article proposes an original strategy for solving hydrodynamic flows. In the introduction, the motivations for this strategy are developed: it aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, which are usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity. This characteristic allows easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The proposed method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of the WCCH method are presented and validated in this article.
NASA Astrophysics Data System (ADS)
Benaskeur, Abder R.; Roy, Jean
2001-08-01
Sensor Management (SM) has to do with how best to manage, coordinate and organize the use of sensing resources in a manner that synergistically improves the process of data fusion. Based on contextual information, SM develops options for collecting further information, allocates and directs the sensors towards the achievement of the mission goals and/or tunes the parameters for the real-time improvement of the effectiveness of the sensing process. Conscious of the important role that SM has to play in modern data fusion systems, we are currently studying advanced SM concepts that would help increase the survivability of the current Halifax and Iroquois Class ships, as well as their possible future upgrades. For this purpose, a hierarchical scheme has been proposed for data fusion and resource management adaptation, based on control theory and within the process refinement paradigm of the JDL data fusion model, taking into account the multi-agent model put forward by the SASS Group for the situation analysis process. The novelty of this work lies in the unified framework that has been defined for tackling the adaptation of both the fusion process and sensor/weapon management.
Electromagnetic mixing laws: A supersymmetric approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niez, J.J.
2010-02-15
In this article we address the old problem of finding the effective dielectric constant of materials described either by a local random dielectric constant, or by a set of non-overlapping spherical inclusions randomly dispersed in a host. We use a unified theoretical framework, such that all the most important Electromagnetic Mixing Laws (EML) can be recovered as the first iterative step of a family of results, thus opening the way to future improvements through refinements of the approximation schemes. When the material is described by a set of immersed inclusions characterized by their spatial correlation functions, we exhibit an EML which, being based on a minimal approximation scheme, does not come from the multiple-scattering paradigm. It consists of a pure Hori-Yonezawa formula corrected by a power series in the inclusion density. The coefficients of the latter, which are given as sums of standard diagrams, are recast into electromagnetic quantities whose calculation is numerically tractable thanks to codes available on the web. The methods used and developed in this work are generic and can be used in a large variety of areas ranging from mechanics to thermodynamics.
Yang, Yi Isaac; Parrinello, Michele
2018-06-12
Collective variables are often used in many enhanced sampling methods, and their choice is a crucial factor in determining sampling efficiency. However, at times, searching for good collective variables can be challenging. In a recent paper, we combined time-lagged independent component analysis with well-tempered metadynamics in order to obtain improved collective variables from metadynamics runs that use lower quality collective variables [McCarty, J.; Parrinello, M. J. Chem. Phys. 2017, 147, 204109]. In this work, we extend these ideas to variationally enhanced sampling. This leads to an efficient scheme that is able to make use of the many advantages of the variational approach. We apply the method to alanine-3 in water. From an alanine-3 variationally enhanced sampling trajectory in which all six dihedral angles are biased, we extract much better collective variables, able to describe in exquisite detail the peptide's complex free energy surface in a low-dimensional representation. The success of this investigation is helped by a more accurate way of calculating the correlation functions needed in the time-lagged independent component analysis and by the introduction of a new basis set to describe the dihedral angle arrangement.
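A bare-bones Python sketch of the time-lagged independent component analysis step referred to above: solve the generalized eigenproblem C(tau) v = lambda C(0) v built from a feature trajectory. The reweighting needed for biased trajectories and the paper's improved correlation-function estimator are omitted; the trajectory below is synthetic.

    import numpy as np
    from scipy.linalg import eigh

    def tica(traj, lag):
        """traj: (n_frames, n_features) array of e.g. dihedral features."""
        x = traj - traj.mean(axis=0)
        c0 = x.T @ x / len(x)                        # instantaneous covariance
        ct = x[:-lag].T @ x[lag:] / (len(x) - lag)   # time-lagged covariance
        ct = 0.5 * (ct + ct.T)                       # symmetrize
        eigvals, eigvecs = eigh(ct, c0)              # generalized eigenproblem
        order = np.argsort(eigvals)[::-1]            # slowest modes first
        return eigvals[order], eigvecs[:, order]

    rng = np.random.default_rng(1)
    traj = np.cumsum(rng.normal(size=(5000, 6)), axis=0)  # toy feature series
    lam, modes = tica(traj, lag=50)
    print("slowest-mode eigenvalue:", lam[0])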
Camera-pose estimation via projective Newton optimization on the manifold.
Sarkis, Michel; Diepold, Klaus
2012-04-01
Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.
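The update step described above, computing an increment in the tangent space and mapping it back to the manifold, can be sketched as follows in Python. This uses a first-order SE(3) retraction with SciPy's rotation-vector exponential map on SO(3); it is a schematic of the idea, not the authors' projective Newton algorithm, and the increment is arbitrary.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def apply_increment(R, t, xi):
        """Update pose (R, t) by tangent vector xi = (omega, v) in se(3)."""
        omega, v = xi[:3], xi[3:]
        dR = Rotation.from_rotvec(omega).as_matrix()  # exponential map on SO(3)
        # First-order retraction back onto SE(3); the exact exp map would also
        # couple omega into the translation update.
        return dR @ R, dR @ t + v

    R, t = np.eye(3), np.zeros(3)
    xi = np.array([0.0, 0.0, 0.1, 0.5, 0.0, 0.0])     # small twist increment
    R, t = apply_increment(R, t, xi)
    print(np.round(R, 3), t)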
Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis
NASA Technical Reports Server (NTRS)
Hadenfeldt, Andrew C.
1991-01-01
The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation, and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds application in image database browsing, teleconferencing, medical imaging, and other areas. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color-mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color-mapped PT scheme is developed to evaluate its performance. Results of simulations using several test images are presented.
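A toy Python version of the progressive-transmission loop for a colour-mapped image: send a heavily subsampled index array first, then progressively finer ones, with the receiver upsampling each stage for display. Real PT coders transmit refinements rather than resending full grids; this only illustrates the coarse-to-fine idea, and all data are synthetic.

    import numpy as np

    def pt_stages(index_img, factors=(8, 4, 2, 1)):
        """Yield (coarse_data, reconstruction) pairs, coarse to fine."""
        h, w = index_img.shape
        for f in factors:
            coarse = index_img[::f, ::f]                    # subsample indices
            recon = np.repeat(np.repeat(coarse, f, axis=0), f, axis=1)[:h, :w]
            yield coarse, recon

    img = (np.arange(64 * 64) % 17).reshape(64, 64).astype(np.uint8)
    for coarse, recon in pt_stages(img):
        err = (recon != img).mean()
        print(f"sent {coarse.size:5d} indices, pixel mismatch {err:.2%}")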
Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.
2012-01-01
Objective To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123
NASA Astrophysics Data System (ADS)
El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel
This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.
Image-adaptive and robust digital wavelet-domain watermarking for images
NASA Astrophysics Data System (ADS)
Zhao, Yi; Zhang, Liping
2018-03-01
We propose a new frequency-domain wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image, and odd-even quantization for embedding and extracting the watermark. Because many complementary watermarks need to be hidden, the watermark image is designed to be image-adaptive. The meaningful and complementary watermark images were embedded into the original (host) image by odd-even quantization of selected detail wavelet coefficients of the original image, namely those whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against well-known attacks such as noise addition, image compression, median filtering and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
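A sketch of the odd-even quantization idea on Haar detail coefficients, using PyWavelets; the fixed threshold below is a stand-in for the per-coefficient JND model, and the step size delta is an arbitrary assumption.

```python
import numpy as np
import pywt

def embed_bit(c, bit, delta=8.0):
    """Quantize c to an even multiple of delta to encode 0, odd for 1."""
    q = int(np.round(c / delta))
    if q % 2 != bit:
        q += 1 if c >= q * delta else -1  # move toward c to limit distortion
    return q * delta

def extract_bit(c, delta=8.0):
    return int(np.round(c / delta)) % 2

rng = np.random.default_rng(2)
img = rng.normal(128, 40, size=(64, 64))          # stand-in host image
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

jnd = 20.0                                        # stand-in JND threshold
sites = np.argwhere(np.abs(cH) > jnd)[:16]        # large detail coefficients
bits = rng.integers(0, 2, size=len(sites))
for (r, c), b in zip(sites, bits):
    cH[r, c] = embed_bit(cH[r, c], int(b))
marked = pywt.idwt2((cA, (cH, cV, cD)), 'haar')

_, (cH2, _, _) = pywt.dwt2(marked, 'haar')        # extraction side
recovered = [extract_bit(cH2[r, c]) for r, c in sites]
print("watermark recovered:", all(int(b) == rb for b, rb in zip(bits, recovered)))
```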
Toward a Conceptual Knowledge Management Framework in Health
Lau, Francis
2004-01-01
This paper describes a conceptual organizing scheme for managing knowledge within the health setting. First, a brief review of the notions of knowledge and knowledge management is provided. This is followed by a detailed depiction of our proposed knowledge management framework, which focuses on the concepts of production, use, and refinement of three specific knowledge sources: policy, evidence, and experience. These concepts are operationalized through a set of knowledge management methods and tools tailored for the health setting. We include two case studies around knowledge translation on parent-child relations and virtual networks in community health research to illustrate how this knowledge management framework can be operationalized within specific contexts and the issues involved. We conclude with the lessons learned and implications. PMID:18066388
Source term evaluation for combustion modeling
NASA Technical Reports Server (NTRS)
Sussman, Myles A.
1993-01-01
A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.
Designing Adaptive Low Dissipative High Order Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.; Parks, John W. (Technical Monitor)
2002-01-01
Proper control of the numerical dissipation/filter to accurately resolve all relevant multiscales of complex flow problems while still maintaining nonlinear stability and efficiency for long-time numerical integrations poses a great challenge to the design of numerical methods. The required type and amount of numerical dissipation/filter are not only physical-problem dependent, but also vary from one flow region to another. This is particularly true for unsteady high-speed shock/shear/boundary-layer/turbulence/acoustics interactions and/or combustion problems, since the dynamics of the nonlinear effects in these flows are not well understood. Even with extensive grid refinement, it is of paramount importance to have proper control of the type and amount of numerical dissipation/filter in regions where it is needed.
Ulibarri, Monica D; Roesch, Scott; Rangel, M Gudelia; Staines, Hugo; Amaro, Hortensia; Strathdee, Steffanie A
2015-01-01
A significant body of research among female sex workers (FSWs) has focused on individual-level HIV risk factors. Comparatively little is known about their non-commercial, steady partners who may heavily influence their behavior and HIV risk. This cross-sectional study of 214 FSWs who use drugs and their male steady partners aged ≥18 in two Mexico-U.S. border cities utilized a path-analytic model for dyadic data based upon the Actor-Partner Interdependence Model to examine relationships between sexual relationship power, intimate partner violence (IPV), depression symptoms, and unprotected sex. FSWs' relationship power, IPV perpetration and victimization were significantly associated with unprotected sex within the relationship. Male partners' depression symptoms were significantly associated with unprotected sex within the relationship. Future HIV prevention interventions for FSWs and their male partners should address issues of sexual relationship power, IPV, and mental health both individually and in the context of their relationship.
NASA Astrophysics Data System (ADS)
Sivasundaram, Seenith
2016-07-01
The review paper [1] is devoted to a survey of the different structures that have been developed for the modeling and analysis of various types of fibrosis. Biomathematics, bioinformatics, biomechanics and biophysics modeling are treated by means of a brief description of the different models developed. The review is impressive and clearly written, addressed to a reader interested not only in the theoretical modeling but also in the biological description. The models are described without resorting to technical statements or mathematical equations, thus allowing the non-specialist reader to understand which framework is more suitable at a certain observation scale. The review [1] concludes with the possibility of developing a multiscale approach, considering also the definition of a therapeutic strategy for pathological fibrosis. In particular, the control and optimization of therapeutic action is an important issue, and this article aims at commenting on this topic.
Clinical experience of PDT in Brazil: a 300 patient overview
NASA Astrophysics Data System (ADS)
Kurachi, Cristina; Ferreira, Juliana; Marcassa, Luis G.; Cestari Filho, Guilherme A.; Souza, Cacilda S.; Bagnato, Vanderlei S.
2005-04-01
Clinical application of Photodynamic Therapy (PDT) in Brazil is the result of pioneering work in a collaborative program involving the Physics Institute and the Medical School of the University of Sao Paulo and the Amaral Carvalho Cancer Hospital in the city of Jau, Sao Paulo. This work began in 1997, with the first patient treated in 1999. Up to the end of 2003 this program had treated over 300 patients, and those with correct follow-up had their lesions included in this report. The majority of the lesions were non-melanoma skin cancers located on the head and neck region, but the group has also treated esophageal, bladder and gynecological lesions, chest wall recurrence of breast cancer, and others. The results have shown to be compatible with internationally reported data, and we have modified some application procedures toward a better benefit for the patient and an optimization of the results. We present the overall results observed after 5 years of experimental clinical treatment.
Cross-sectional mapping for refined beam elements with applications to shell-like structures
NASA Astrophysics Data System (ADS)
Pagani, A.; de Miguel, A. G.; Carrera, E.
2017-06-01
This paper discusses the use of higher-order mapping functions for enhancing the physical representation of refined beam theories. Based on the Carrera unified formulation (CUF), advanced one-dimensional models are formulated by expressing the displacement field as a generic expansion of the generalized unknowns. According to CUF, a novel physically/geometrically consistent model is devised by employing Legendre-like polynomial sets to approximate the generalized unknowns at the cross-sectional level, whereas a local mapping technique based on the blending functions method is used to describe the exact physical boundaries of the cross-section domain. Classical and innovative finite element methods, including hierarchical p-elements and locking-free integration schemes, are utilized to solve the governing equations of the unified beam theory. Several numerical applications accounting for small displacements/rotations and strains are discussed, including beam structures with cross-sectional curved edges, cylindrical shells, and thin-walled aeronautical wing structures with reinforcements. The results from the proposed methodology are extensively assessed by comparison with solutions from the literature and commercial finite element software tools. Attention is focused on the high computational efficiency and the marked capabilities of the present beam model, which can deal with a broad spectrum of structural problems with high accuracy in the geometrical representation of the domain boundaries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chipman, V D
Two-dimensional axisymmetric hydrodynamic models were developed using GEODYN to simulate the propagation of air blasts resulting from a series of high explosive detonations conducted at Kirtland Air Force Base in August and September of 2007. Dubbed Humble Redwood I (HR-1), these near-surface chemical high explosive detonations consisted of seven shots of varying height or depth of burst. Each shot was simulated numerically using GEODYN. An adaptive mesh refinement scheme based on air pressure gradients was employed such that the mesh refinement tracked the advancing shock front where sharp discontinuities existed in the state variables, but allowed the mesh to sufficiently relax behind the shock front for runtime efficiency. Comparisons of overpressure, sound speed, and positive phase impulse from the GEODYN simulations were made to the recorded data taken from each HR-1 shot. Where the detonations occurred above ground or were shallowly buried (no deeper than 1 m), the GEODYN model was able to simulate the sound speeds, peak overpressures, and positive phase impulses to within approximately 1%, 23%, and 6%, respectively, of the actual recorded data, supporting the use of numerical simulation of the air blast as a forensic tool in determining the yield of an otherwise unknown explosion.
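The gradient-based flagging logic can be sketched in one dimension (GEODYN's actual criterion is not given in this abstract, so the threshold and blast profile below are assumptions): cells spanning a steep relative pressure jump are flagged for refinement, and the smooth region behind the front is left to relax.

```python
import numpy as np

def refine_flags(p, rel_tol=0.05):
    """Flag cells for refinement where the normalized pressure jump between
    neighbors exceeds rel_tol; unflagged cells behind the front may coarsen."""
    jump = np.abs(np.diff(p)) / np.minimum(p[:-1], p[1:])
    flags = np.zeros_like(p, dtype=bool)
    flags[:-1] |= jump > rel_tol
    flags[1:] |= jump > rel_tol
    return flags

# idealized blast profile: sharp front at x = 0.6, relaxed flow behind it
x = np.linspace(0.0, 1.0, 201)
p = np.where(x < 0.6, 1.0 + 4.0 * np.exp(-(0.6 - x) / 0.1), 1.0)
flags = refine_flags(p)
print(f"{flags.sum()} of {flags.size} cells flagged near the front")
```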
The regulatory acceptance of alternatives in the European Union.
Warbrick, E Vicky; Evans, Peter F
2004-06-01
Recently, progress has been made toward the regulatory acceptance of replacements in the European Union (EU), particularly with the introduction of in vitro methods for the prediction of skin corrosivity, dermal penetration, phototoxicity and embryotoxicity. In vitro genotoxicity tests are well established, and testing for this endpoint can be completed without animals, provided that clear negative outcomes are obtained. Tiered approaches including in vitro tests can also be used to address skin and eye irritation endpoints. Reductions and/or refinements in animal use are being achieved following the replacement of the oral LD50 test with alternative methods and the adoption of reduced test packages for materials, such as closed-system intermediates and certain polymers. Furthermore, the use of a "read-across" approach has reduced animal testing. Substantial gains in refinement will also be made with the recent acceptance of the local lymph node assay for skin sensitisation and the development of an acute inhalation toxicity method that avoids lethality as the endpoint. For the future, under the proposed EU Registration, Evaluation and Authorisation of Chemicals (REACH) scheme, it is envisaged that, where suitable in vitro methods exist, these should be used to support registration of substances produced at up to ten tonnes per annum. This proposal can only accelerate the further development, validation and regulatory acceptance of such alternative methods.
Cryo-EM image alignment based on nonuniform fast Fourier transform.
Yang, Zhengfan; Penczek, Pawel A
2008-08-01
In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.
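The refinement stage described here can be sketched with standard tools: maximize normalized cross-correlation over (rotation, shift) with a derivative-free optimizer. scipy.ndimage resampling stands in for the paper's gridding interpolator, and in practice the search would be seeded by a coarse alignment rather than zeros.

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import minimize

def neg_correlation(params, ref, img):
    """Negative normalized correlation after rotating/shifting img."""
    angle, dy, dx = params
    moved = ndimage.shift(ndimage.rotate(img, angle, reshape=False), (dy, dx))
    a = (ref - ref.mean()).ravel()
    b = (moved - moved.mean()).ravel()
    return -np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(3)
ref = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 3)   # smooth "projection"
img = ndimage.shift(ndimage.rotate(ref, -7.0, reshape=False), (2.5, -1.0))

# in a real pipeline x0 would come from a coarse exhaustive search
res = minimize(neg_correlation, x0=[0.0, 0.0, 0.0], args=(ref, img),
               method='Nelder-Mead')
print("estimated (angle, dy, dx):", np.round(res.x, 2))
```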
Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning
Bhawiyuga, Adhitya
2018-01-01
Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning. PMID:29617341
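A sketch of the greedy pairing step under an assumed state layout (x, y, vx, vy) and illustrative weights; the actual ST-LRSF dissimilarity metric and gating are more elaborate.

```python
import numpy as np

def dissimilarity(a, b, w_pos=1.0, w_vel=0.5):
    """Spatiotemporal dissimilarity between two vehicle states (x, y, vx, vy);
    the weights are illustrative assumptions, not the paper's metric."""
    return (w_pos * np.hypot(a[0] - b[0], a[1] - b[1])
            + w_vel * np.hypot(a[2] - b[2], a[3] - b[3]))

def greedy_mwm(local, remote, gate=5.0):
    """Greedily pair local and remote estimates in order of increasing
    dissimilarity, skipping pairs above the gating threshold."""
    pairs = sorted((dissimilarity(l, r), i, j)
                   for i, l in enumerate(local)
                   for j, r in enumerate(remote))
    used_l, used_r, match = set(), set(), []
    for d, i, j in pairs:
        if d <= gate and i not in used_l and j not in used_r:
            match.append((i, j, d))
            used_l.add(i)
            used_r.add(j)
    return match

local = [(0.0, 0.0, 10.0, 0.0), (20.0, 3.0, 9.0, 0.5)]    # on-board estimates
remote = [(19.2, 3.4, 9.1, 0.4), (0.6, -0.3, 10.2, 0.1)]  # V2X-reported states
print(greedy_mwm(local, remote))
```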
Glioma CpG island methylator phenotype (G-CIMP): biological and clinical implications.
Malta, Tathiane M; de Souza, Camila F; Sabedot, Thais S; Silva, Tiago C; Mosella, Maritza S; Kalkanis, Steven N; Snyder, James; Castro, Ana Valeria B; Noushmehr, Houtan
2018-04-09
Gliomas are a heterogeneous group of brain tumors with distinct biological and clinical properties. Despite advances in surgical techniques and clinical regimens, treatment of high-grade glioma remains challenging and carries dismal rates of therapeutic success and overall survival. Challenges include the molecular complexity of gliomas, as well as inconsistencies in histopathological grading, resulting in an inaccurate prediction of disease progression and failure in the use of standard therapy. The updated 2016 World Health Organization (WHO) classification of tumors of the central nervous system reflects a refinement of tumor diagnostics by integrating the genotypic and phenotypic features, thereby narrowing the defined subgroups. The new classification recommends molecular diagnosis of isocitrate dehydrogenase (IDH) mutational status in gliomas. IDH-mutant gliomas manifest the cytosine-phosphate-guanine (CpG) island methylator phenotype (G-CIMP). Notably, the recent identification of clinically relevant subsets of G-CIMP tumors (G-CIMP-high and G-CIMP-low) provides a further refinement in glioma classification that is independent of grade and histology. This scheme may be useful for predicting patient outcome and may be translated into effective therapeutic strategies tailored to each patient. In this review, we highlight the evolution of our understanding of the G-CIMP subsets and how recent advances in characterizing the genome and epigenome of gliomas may influence future basic and translational research.
Puliti, R; Mattia, C A; Paduano, L
1998-08-01
The crystallographic study of a new hydrated form of alpha-cyclodextrin (cyclohexaamylose) is reported. C36H60O30 . 11H2O; space group P2(1)2(1)2(1) with cell constants a = 13.839(3), b = 15.398(3), c = 24.209(7) A; final discrepancy index R = 0.057 for the 5182 observed reflections and 632 refined parameters. Besides four ordered water molecules placed outside the alpha-cyclodextrins, the structure shows regions of severely disordered solvent mainly confined in the oligosaccharide cavities. The contribution of the observed disorder has been computed via Fourier inversions of the residual electron density and incorporated into the structure factors in further refinements of the ordered part. The alpha-cyclodextrin molecule assumes a relaxed round shape stabilised by a ring sequence of all six possible O2 ... O3 intramolecular hydrogen bonds. The four ordered water molecules take part in an extensive network of hydrogen bonds (infinite chains and loops) without modifying the scheme of intramolecular H-bonds or the (-)gauche conformation of the O-6-H hydroxyl groups. The structure shows a new molecular arrangement for an "empty" hydrated alpha-cyclodextrin, similar to the "brick-type" arrangement observed for alpha-CD in the iodoanilide trihydrate complex, which crystallises in an isomorphous cell.
Li, Haiou; Lu, Liyao; Chen, Rong; Quan, Lijun; Xia, Xiaoyan; Lü, Qiang
2014-01-01
Structural information on protein-peptide complexes can be very useful for novel drug discovery and design. Computational docking of a protein and a peptide can supplement the structural information available on protein-peptide interactions explored by experimental means. The protein-peptide docking considered in this paper can be described as three processes that occur in parallel: ab initio peptide folding, docking of the peptide with its receptor, and refinement of some flexible areas of the receptor as the peptide approaches. Several existing methods have been used to sample the degrees of freedom in the three processes, which are usually triggered in an organized sequential scheme. In this paper, we propose a parallel approach that combines all three processes during the docking of a folding peptide with a flexible receptor. This approach mimics the actual protein-peptide docking process in a parallel way, and is expected to deliver better performance than sequential approaches. We used 22 unbound protein-peptide docking examples to evaluate our method. Our analysis of the results showed that the explicit refinement of the flexible areas of the receptor facilitated more accurate modeling of the interfaces of the complexes, while combining all of the moves in parallel helped construct energy funnels for the predictions.
Clocks to Computers: A Machine-Based “Big Picture” of the History of Modern Science.
van Lunteren, Frans
2016-12-01
Over the last few decades there have been several calls for a “big picture” of the history of science. There is a general need for a concise overview of the rise of modern science, with a clear structure allowing for a rough division into periods. This essay proposes such a scheme, one that is both elementary and comprehensive. It focuses on four machines, which can be seen to have mediated between science and society during successive periods of time: the clock, the balance, the steam engine, and the computer. Following an extended developmental phase, each of these machines came to play a highly visible role in Western societies, both socially and economically. Each of these machines, moreover, was used as a powerful resource for the understanding of both inorganic and organic nature. More specifically, their metaphorical use helped to construe and refine some key concepts that would play a prominent role in such understanding. In each case the key concept would at some point be considered to represent the ultimate building block of reality. Finally, in a refined form, each of these machines would eventually make its entry in scientific research, thereby strengthening the ties between these machines and nature.
Morgan, Fiona; Battersby, Alysia; Weightman, Alison L; Searchfield, Lydia; Turley, Ruth; Morgan, Helen; Jagroo, James; Ellis, Simon
2016-03-05
Physical inactivity levels are rising worldwide, with major implications for the health of the population and the prevalence of non-communicable diseases. Exercise referral schemes (ERS) continue to be a popular intervention utilised by healthcare practitioners to increase physical activity. We undertook a systematic review of views studies in order to inform guidance from the UK National Institute for Health and Care Excellence (NICE) on exercise referral schemes to promote physical activity. This paper reports on the participant views identified, to inform those seeking to refine schemes to increase attendance and adherence. Fifteen databases and a wide range of websites and grey literature sources were searched systematically for publications from 1995 to June 2013. In addition, a range of supplementary methods, including a call for evidence by NICE, contacting authors, reference list checking and citation tracking, were utilised to identify additional research. Studies were included where they detailed schemes for adults aged 19 years or older who were 'inactive' (i.e. not currently meeting UK physical activity guidelines). Study selection was conducted independently in duplicate. Quality assessment was undertaken by one reviewer and checked by a second, with 20% of papers being considered independently in duplicate. Papers were coded in the qualitative data analysis software Atlas.ti. This review was reported in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. Evidence from 33 UK-relevant studies identified that support from providers, other attendees and family was an important facilitator of adherence and of 'making exercise a habit' post-programme, as was the variety and personalised nature of the sessions offered. Barriers to attendance included the inconvenient timing of sessions, their cost and location. An intimidating gym atmosphere, a dislike of the music and TV, and a lack of confidence in operating gym equipment were frequently reported. These findings provide valuable insights that commissioners and providers should consider. The main themes were consistent across a large number of studies, and further research should concentrate on programmes that reflect these findings.
A Godunov-like point-centered essentially Lagrangian hydrodynamic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Nathaniel R.; Waltz, Jacob I.; Burton, Donald E.
2014-10-28
We present an essentially Lagrangian hydrodynamic scheme suitable for modeling complex compressible flows on tetrahedron meshes. The scheme reduces to a purely Lagrangian approach when the flow is linear or if the mesh size is equal to zero; as a result, we use the term essentially Lagrangian for the proposed approach. The motivation for developing a hydrodynamic method for tetrahedron meshes is that tetrahedron meshes have some advantages over other mesh topologies. Notable advantages include reduced complexity in generating conformal meshes, reduced complexity in mesh reconnection, and preserving tetrahedron cells with automatic mesh refinement. A challenge, however, is that tetrahedron meshes do not correctly deform with a lower order (i.e. piecewise constant) staggered-grid hydrodynamic scheme (SGH) or with a cell-centered hydrodynamic (CCH) scheme. The SGH and CCH approaches calculate the strain via the tetrahedron, which can cause artificial stiffness on large deformation problems. To resolve the stiffness problem, we adopt the point-centered hydrodynamic approach (PCH) and calculate the evolution of the flow via an integration path around the node. The PCH approach stores the conserved variables (mass, momentum, and total energy) at the node. The evolution equations for momentum and total energy are discretized using an edge-based finite element (FE) approach with linear basis functions. A multidirectional Riemann-like problem is introduced at the center of the tetrahedron to account for discontinuities in the flow such as a shock. Conservation is enforced at each tetrahedron center. The multidimensional Riemann-like problem used here is based on Lagrangian CCH work [8, 19, 37, 38, 44] and recent Lagrangian SGH work [33-35, 39, 45]. In addition, an approximate 1D Riemann problem is solved on each face of the nodal control volume to advect mass, momentum, and total energy. The 1D Riemann problem produces fluxes [18] that remove a volume error in the PCH discretization. A 2-stage Runge–Kutta method is used to evolve the solution in time. The details of the new hydrodynamic scheme are discussed; likewise, results from numerical test problems are presented.
A morphing-based scheme for large deformation analysis with stereo-DIC
NASA Astrophysics Data System (ADS)
Genovese, Katia; Sorgente, Donato
2018-05-01
A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization may prove very challenging, and may fail, when dealing with pairs of largely deformed images such as those obtained from two angled views of non-flat objects or from the temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a proper image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step by step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out by taking as a benchmark a standard reference-updating approach. Finally, the morphing scheme is extended to the most general case of the correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.
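The intermediate-image idea can be sketched as follows, assuming the dense displacement field has already been obtained (in the paper, from feature matching followed by NURBS interpolation): the field is scaled by t in (0, 1] and the image resampled at each fraction, so a subset tracker can take several small steps instead of one large one.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def intermediate_images(img, disp, n_steps=4):
    """Warp img through fractions t = 1/n, 2/n, ..., 1 of the displacement
    field disp (shape (2, H, W)), yielding a morphing sequence of
    intermediate synthetic images."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    for k in range(1, n_steps + 1):
        t = k / n_steps
        coords = np.array([yy - t * disp[0], xx - t * disp[1]])
        yield t, map_coordinates(img, coords, order=3, mode='nearest')

rng = np.random.default_rng(4)
img = gaussian_filter(rng.normal(size=(96, 96)), 2)        # speckle-like texture
disp = np.stack([6.0 * np.ones((96, 96)),                  # large uniform motion
                 gaussian_filter(rng.normal(size=(96, 96)), 15) * 40])
for t, frame in intermediate_images(img, disp):
    print(f"t = {t:.2f}, frame mean = {frame.mean():.3f}")
```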
An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1994-01-01
This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than in many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The plan of this work is to first and primarily focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and to then briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.
NASA Astrophysics Data System (ADS)
Bilyeu, David
This dissertation presents an extension of the Conservation Element Solution Element (CESE) method from second- to higher-order accuracy. The new method retains the favorable characteristics of the original second-order CESE scheme, including (i) the use of the space-time integral equation for conservation laws, (ii) a compact mesh stencil, (iii) stability up to a CFL number of unity, (iv) a fully explicit, time-marching integration scheme, (v) true multidimensionality without using directional splitting, and (vi) the ability to handle two- and three-dimensional geometries by using unstructured meshes. This algorithm has been thoroughly tested in one, two and three spatial dimensions and has been shown to obtain the desired order of accuracy for solving both linear and non-linear hyperbolic partial differential equations. The scheme has also shown its ability to accurately resolve discontinuities in the solutions. Higher-order unstructured methods such as the Discontinuous Galerkin (DG) method and the Spectral Volume (SV) method have been developed for one-, two- and three-dimensional applications. Although these schemes have seen extensive development and use, certain drawbacks of these methods have been well documented. For example, the explicit versions of these two methods have very stringent stability criteria, requiring that the time step be reduced as the order of the solver increases, for a given simulation on a given mesh. The research presented in this dissertation builds upon the work of Chang, who developed a fourth-order CESE scheme to solve a scalar one-dimensional hyperbolic partial differential equation. The completed research has resulted in two key deliverables. The first is a detailed derivation of high-order CESE methods on unstructured meshes for solving the conservation laws in two- and three-dimensional spaces. The second is the implementation of these numerical methods in a computer code. For code development, a one-dimensional solver for the Euler equations was developed, extending Chang's work on the fourth-order CESE method for a one-dimensional scalar convection equation. A generic formulation for the nth-order CESE method, where n ≥ 4, was derived. Indeed, numerical implementation of the scheme confirmed that the order of convergence was consistent with the order of the scheme. For the two- and three-dimensional solvers, SOLVCON was used as the basic framework for code implementation. A new solver kernel for the fourth-order CESE method has been developed and integrated into the framework provided by SOLVCON. The main part of SOLVCON, which deals with unstructured meshes and parallel computing, remains intact, and the SOLVCON code handles data transmission between computer nodes for high-performance computing (HPC). To validate and verify the newly developed high-order CESE algorithms, several one-, two- and three-dimensional simulations were conducted. For the arbitrary-order, one-dimensional CESE solver, three sets of governing equations were selected for simulation: (i) the linear convection equation, (ii) the linear acoustic equations, and (iii) the nonlinear Euler equations. All three systems of equations were used to verify the order of convergence through mesh refinement. In addition, the Euler equations were used to solve the Shu-Osher and Blastwave problems. These two simulations demonstrated that the new high-order CESE methods can accurately resolve discontinuities in the flow field. For the two-dimensional, fourth-order CESE solver, the Euler equations were employed in four different test cases. The first case was used to verify the order of convergence through mesh refinement. The next three cases demonstrated the ability of the new solver to accurately resolve discontinuities in the flows: (i) the interaction between acoustic waves and an entropy pulse, (ii) supersonic flow over a circular blunt body, and (iii) supersonic flow over a guttered wedge. To validate and verify the three-dimensional, fourth-order CESE solver, two different simulations were selected. The first used the linear convection equations to demonstrate fourth-order convergence. The second used the Euler equations to simulate supersonic flow over a spherical body to demonstrate the scheme's ability to accurately resolve shocks. All test cases are well-known benchmark problems, and as such, multiple sources are available to validate the numerical results. Furthermore, the simulations showed that the high-order CESE solver was stable at a CFL number near unity.
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect nonsmooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical-problem dependent. To minimize the tuning of parameters and physical-problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), obtained by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these sensors are scheme independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.
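A crude 1-D sketch of the sensing idea (not the B-spline or Harten-multiresolution wavelets of the paper): estimate a local regularity exponent from detail responses at two dyadic scales; small exponents flag a discontinuity, and the extra dissipation is switched off elsewhere. The switching threshold is an assumption.

```python
import numpy as np

def lipschitz_indicator(u, eps=1e-12):
    """Crude local regularity estimate from differences at two dyadic scales:
    alpha ~ log2(|d2| / |d1|). Near a jump the ratio drops, giving a small
    (or negative) alpha; in smooth regions alpha stays near or above 1."""
    d1 = np.abs(np.roll(u, -1) - u)                   # scale-1 detail
    d2 = np.abs(np.roll(u, -2) - np.roll(u, 2)) / 2   # scale-2 detail
    return np.log2((d2 + eps) / (d1 + eps))

x = np.linspace(0, 1, 400, endpoint=False)
u = np.sin(2 * np.pi * x) + (x > 0.5)      # smooth wave plus a jump
alpha = lipschitz_indicator(u)
dissipation_on = alpha < 0.5               # assumed switching threshold
print(f"extra dissipation active in {dissipation_on.sum()} of {u.size} cells")
```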
Recent assimilation developments of FOAM the Met Office ocean forecast system
NASA Astrophysics Data System (ADS)
Lea, Daniel; Martin, Matthew; Waters, Jennifer; Mirouze, Isabelle; While, James; King, Robert
2015-04-01
FOAM is the Met Office's operational ocean forecasting system. This system comprises a range of models, from a 1/4-degree-resolution global model to 1/12-degree-resolution regional models and shelf-seas models at 7 km resolution. The system is made up of the ocean model NEMO (Nucleus for European Modelling of the Ocean), the Los Alamos sea ice model CICE and the NEMOVAR assimilation run in 3D-VAR FGAT mode. Work is ongoing to transition both to a higher-resolution global ocean model at 1/12 degrees and to running FOAM in coupled models. The FOAM system generally performs well. One area of concern, however, is the performance in the tropics, where spurious oscillations and excessive vertical velocity gradients are found after assimilation. NEMOVAR includes a balance operator which, in the extra-tropics, uses geostrophic balance to produce velocity increments which balance the density increments applied. In the tropics, however, the main balance is between the pressure gradients produced by the density gradient and the applied wind stress. A scheme is presented which aims to maintain this balance when increments are applied. Another issue in FOAM is that there are sometimes persistent temperature and salinity errors which are not effectively corrected by the assimilation. The standard NEMOVAR has a single correlation length scale based on the local Rossby radius. This means that observations in the extra-tropics influence the model only on short length scales. In order to maximise the information extracted from the observations and to correct large-scale model biases, a multiple correlation length-scale scheme has been developed. This includes a larger length scale which spreads observation information further. Various refinements of the scheme are also explored, including reducing the longer length-scale component at the edge of the sea ice and in areas with high potential vorticity gradients. A related scheme which varies the correlation length scale in the shelf seas is also described.
A numerical study of hypersonic stagnation heat transfer predictions at a coordinate singularity
NASA Technical Reports Server (NTRS)
Grasso, Francesco; Gnoffo, Peter A.
1990-01-01
The problem of grid induced errors associated with a coordinate singularity on heating predictions in the stagnation region of a three-dimensional body in hypersonic flow is examined. The test problem is for Mach 10 flow over an Aeroassist Flight Experiment configuration. This configuration is composed of an elliptic nose, a raked elliptic cone, and a circular shoulder. Irregularities in the heating predictions in the vicinity of the coordinate singularity, located at the axis of the elliptic nose near the stagnation point, are examined with respect to grid refinement and grid restructuring. The algorithm is derived using a finite-volume formulation. An upwind-biased total-variation diminishing scheme is employed for the inviscid flux contribution, and central differences are used for the viscous terms.
NASA Technical Reports Server (NTRS)
Morehead, R. L.; Atwell, M. J.; Melcher, J. C.; Hurlbert, E. A.
2016-01-01
A prototype cold helium active pressurization system was incorporated into an existing liquid oxygen (LOX) / liquid methane (LCH4) prototype planetary lander and hot-fire tested to collect vehicle-level performance data. Results from this hot-fire test series were used to validate integrated models of the vehicle helium and propulsion systems and demonstrate system effectiveness for a throttling lander. Pressurization systems vary greatly in complexity and efficiency between vehicles, so a pressurization performance metric was also developed as a means to compare different active pressurization schemes. This implementation of an active repressurization system is an initial sizing draft. Refined implementations will be tested in the future, improving the general knowledge base for a cryogenic lander-based cold helium system.
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1987-01-01
Performance analysis was begun on the Ada implementations. The goal is to supply the system designer with tools that will allow a rational decision to be made, early in the design cycle, about whether a particular implementation can support a given application. Primary activities were: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; preparation and presentation of a paper at the 1987 Washington DC Ada Symposium; development of a refined approach to recovery that is presently being applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.
Self-learning fuzzy controllers based on temporal back propagation
NASA Technical Reports Server (NTRS)
Jang, Jyh-Shing R.
1992-01-01
This paper presents a generalized control strategy that enhances fuzzy controllers with self-learning capability for achieving prescribed control objectives in a near-optimal manner. This methodology, termed temporal back propagation, is model-insensitive in the sense that it can deal with plants that can be represented in a piecewise-differentiable format, such as difference equations, neural networks, GMDH structures, and fuzzy models. Regardless of the numbers of inputs and outputs of the plants under consideration, the proposed approach can either refine the fuzzy if-then rules obtained from human experts or automatically derive the fuzzy if-then rules if human experts are not available. The inverted pendulum system is employed as a test-bed to demonstrate the effectiveness of the proposed control scheme and the robustness of the acquired fuzzy controller.
Measuring test productivity - The elusive dream
NASA Astrophysics Data System (ADS)
Ward, D. T.; Cross, E. J., Jr.
1983-11-01
The paper summarizes definitions and terminology relating to the measurement of Test and Evaluation productivity before settling on the appropriate criteria for such a measurement model. A productivity measurement scheme suited for use by Test and Evaluation organizations is suggested. This mathematical model is a simplified version of one proposed by the American Productivity Center and applied to an aircraft maintenance facility by Fletcher. It includes only four primary variables: safety, schedule, cost, and deficiencies reported, with varying degrees of objectivity and subjectivity involved in quantifying them. A hypothetical example of a fighter aircraft flight test program is used to illustrate the application of the productivity measurement model. The proposed model is intended to serve as a first-iteration procedure and should be tested against real test programs to verify and refine it.
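A minimal sketch of such a four-variable index, with purely illustrative weights and normalizations (the model's actual coefficients are not given in this abstract):

```python
# Hypothetical weighted productivity index over the four primary variables.
# Weights and the achieved/planned normalization are illustrative assumptions.
def productivity_index(safety, schedule, cost, deficiencies,
                       weights=(0.3, 0.3, 0.2, 0.2)):
    """Each argument is a ratio of achieved to planned performance
    (1.0 = on target, > 1.0 = better than planned)."""
    factors = (safety, schedule, cost, deficiencies)
    return sum(w * f for w, f in zip(weights, factors))

# A hypothetical fighter flight-test month: no safety incidents, slightly
# behind schedule, under cost, more deficiency reports resolved than planned.
print(round(productivity_index(1.0, 0.9, 1.1, 1.2), 3))
```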
Dictionary Indexing of Electron Channeling Patterns.
Singh, Saransh; De Graef, Marc
2017-02-01
The dictionary-based approach to the indexing of diffraction patterns is applied to electron channeling patterns (ECPs). The main ingredients of the dictionary method are introduced, including the generalized forward projector (GFP), the relevant detector model, and a scheme to uniformly sample orientation space using the "cubochoric" representation. The GFP is used to compute an ECP "master" pattern. Derivative free optimization algorithms, including the Nelder-Mead simplex and the bound optimization by quadratic approximation are used to determine the correct detector parameters and to refine the orientation obtained from the dictionary approach. The indexing method is applied to poly-silicon and shows excellent agreement with the calibrated values. Finally, it is shown that the method results in a mean disorientation error of 1.0° with 0.5° SD for a range of detector parameters.
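The two-stage flow can be sketched with a toy one-parameter "pattern" in place of the ECP master-pattern projection: a dot-product search over a uniformly sampled dictionary, followed by derivative-free (Nelder-Mead) refinement, mirroring the paper's use of simplex-type optimizers.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_pattern(angle, n=256):
    """Toy stand-in for the forward-projected channeling pattern."""
    x = np.linspace(0, 4 * np.pi, n)
    return np.cos(x + angle) + 0.3 * np.cos(3 * x - 2 * angle)

def normalized_dot(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# stage 1: dictionary search over a uniform sampling of orientation space
angles = np.linspace(0, 2 * np.pi, 180, endpoint=False)
dictionary = np.array([simulate_pattern(a) for a in angles])

truth = 1.2345
exp_pattern = simulate_pattern(truth) + np.random.default_rng(5).normal(0, 0.2, 256)
best = angles[np.argmax(dictionary @ exp_pattern)]

# stage 2: derivative-free refinement around the dictionary hit
res = minimize(lambda a: -normalized_dot(simulate_pattern(a[0]), exp_pattern),
               x0=[best], method='Nelder-Mead')
print(f"dictionary: {best:.3f} rad, refined: {res.x[0]:.4f} rad (true {truth})")
```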
Odor Recognition vs. Classification in Artificial Olfaction
NASA Astrophysics Data System (ADS)
Raman, Baranidharan; Hertz, Joshua; Benkstein, Kurt; Semancik, Steve
2011-09-01
Most studies in chemical sensing have focused on the problem of precise identification of chemical species that were exposed during the training phase (the recognition problem). However, generalization of training to predict the chemical composition of untrained gases based on their similarity with analytes in the training set (the classification problem) has received very limited attention. These two analytical tasks pose conflicting constraints on the system. While correct recognition requires detection of molecular features that are unique to an analyte, generalization to untrained chemicals requires detection of features that are common across a desired class of analytes. A simple solution that addresses both issues simultaneously can be obtained from biological olfaction, where the odor class and identity information are decoupled and extracted individually over time. Mimicking this approach, we proposed a hierarchical scheme that allowed initial discrimination between broad chemical classes (e.g. contains oxygen) followed by finer refinements using additional data into sub-classes (e.g. ketones vs. alcohols) and, eventually, specific compositions (e.g. ethanol vs. methanol) [1]. We validated this approach using an array of temperature-controlled chemiresistors. We demonstrated that a small set of training analytes is sufficient to allow generalization to novel chemicals and that the scheme provides robust categorization despite aging. Here, we provide further characterization of this approach.
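A sketch of the hierarchical decision with synthetic four-sensor responses and scikit-learn classifiers (the original work used temperature-controlled chemiresistor data, not this toy set): a coarse class model runs first, then a finer model within the predicted class, so an untrained analyte still receives a meaningful class-level label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

def sample(center, n=60):
    return np.asarray(center) + 0.5 * rng.normal(size=(n, 4))

# synthetic sensor-array responses: two broad classes, two analytes each
X = np.vstack([sample([2, 0, 0, 0]), sample([2.5, 0.5, 0, 0]),   # class 0
               sample([0, 0, 2, 0]), sample([0, 0, 2.5, 0.5])])  # class 1
coarse_y = np.repeat([0, 0, 1, 1], 60)   # broad chemical class
fine_y = np.repeat([0, 1, 2, 3], 60)     # specific analyte

coarse = LogisticRegression(max_iter=1000).fit(X, coarse_y)
fine = {c: LogisticRegression(max_iter=1000).fit(X[coarse_y == c],
                                                 fine_y[coarse_y == c])
        for c in (0, 1)}

def classify(x):
    """Hierarchical decision: broad class first, then analyte within it."""
    c = int(coarse.predict(x.reshape(1, -1))[0])
    return c, int(fine[c].predict(x.reshape(1, -1))[0])

novel = np.array([2.2, 0.2, 0.1, 0.0])   # untrained, but class-0-like
print(classify(novel))
```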
Unified Alignment of Protein-Protein Interaction Networks.
Malod-Dognin, Noël; Ban, Kristina; Pržulj, Nataša
2017-04-19
Paralleling the increasing availability of protein-protein interaction (PPI) network data, several network alignment methods have been proposed. Network alignments have been used to uncover functionally conserved network parts and to transfer annotations. However, due to the computational intractability of the network alignment problem, aligners are heuristics providing divergent solutions, and no consensus exists on a gold standard or on which scoring scheme should be used to evaluate them. We comprehensively evaluate the alignment scoring schemes and global network aligners on large-scale PPI data and observe that three methods, HUBALIGN, L-GRAAL and NATALIE, regularly produce the most topologically and biologically coherent alignments. We study the collective behaviour of network aligners and observe that PPI networks are almost entirely aligned with a handful of aligners that we unify into a new tool, Ulign. Ulign enables complete alignment of two networks, which traditional global and local aligners fail to do. Also, multiple mappings of Ulign define biologically relevant soft clusterings of proteins in PPI networks, which may be used for refining the transfer of annotations across networks. Hence, PPI networks are already well investigated by current aligners, so to gain additional biological insights, a paradigm shift is needed. We propose that such a shift come from aligning all available data types collectively rather than any particular data type in isolation from others.
Two-dimensional imaging in a lightweight portable MRI scanner without gradient coils.
Cooley, Clarissa Zimmerman; Stockmann, Jason P; Armstrong, Brandon D; Sarracanie, Mathieu; Lev, Michael H; Rosen, Matthew S; Wald, Lawrence L
2015-02-01
As the premiere modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as intensive care units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. We construct and validate a truly portable (<100 kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating spatial encoding magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed two-dimensional (2D) image. Multiple receive channels are used to disambiguate the nonbijective encoding field. The system is validated with experimental images of 2D test phantoms. Similar to other nonlinear field encoding schemes, the spatial resolution is position dependent with blurring in the center, but is shown to be likely sufficient for many medical applications. The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof of concept images that are expected to be further improved with refinement of the calibration and methodology.
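The encoding and iterative reconstruction can be caricatured on a tiny grid (the field shape, rotation count and timings below are all assumptions): each magnet rotation and evolution time contributes one linear measurement of the image through the known inhomogeneous field, and a Kaczmarz (ART) loop inverts the stack. Residual error reflects the nonbijective single-channel encoding that multiple receive channels resolve in the real scanner.

```python
import numpy as np

n = 12                                    # tiny n x n image
yy, xx = np.mgrid[0:n, 0:n] / n - 0.5

def encoding_field(px, py):
    """Assumed inhomogeneous field pattern (linear plus quadratic terms)."""
    return px + 0.7 * px**2 - 0.5 * py**2

rows = []
for theta in np.linspace(0, np.pi, 24, endpoint=False):   # magnet rotations
    c, s = np.cos(theta), np.sin(theta)
    b = encoding_field(c * xx + s * yy, -s * xx + c * yy)  # rotated field
    for t in np.linspace(0.5, 4.0, 10):                    # evolution times
        rows.append(np.exp(-1j * t * 40.0 * b).ravel())    # one "projection"
A = np.array(rows)

m_true = np.zeros((n, n)); m_true[3:9, 4:8] = 1.0          # simple phantom
s = A @ m_true.ravel()

# Kaczmarz (ART): project the estimate onto each measurement hyperplane
m = np.zeros(n * n, dtype=complex)
for sweep in range(30):
    for a, sk in zip(A, s):
        m += a.conj() * (sk - a @ m) / (np.linalg.norm(a) ** 2)
err = np.linalg.norm(m.real.reshape(n, n) - m_true) / np.linalg.norm(m_true)
print(f"relative reconstruction error: {err:.3f}")
```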
Dynamic coupling of subsurface and seepage flows solved within a regularized partition formulation
NASA Astrophysics Data System (ADS)
Marçais, J.; de Dreuzy, J.-R.; Erhel, J.
2017-11-01
Hillslope response to precipitation is characterized by sharp transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Locally, the transition between these two regimes is triggered by soil saturation. Here we develop an integrative approach to simultaneously solve the subsurface flow, locate the potential fully saturated areas and deduce the generated saturation excess overland flow. This approach combines the different dynamics and transitions in a single partition formulation using discontinuous functions. We propose to regularize the system of partial differential equations and to use classic spatial and temporal discretization schemes. We illustrate our methodology on the 1D hillslope storage Boussinesq equations (Troch et al., 2003). We first validate the numerical scheme on previous numerical experiments without saturation excess overland flow. Then we apply our model to a test case with dynamic transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Our results show that the discretization respects mass balance both locally and globally and converges when the mesh or time step is refined. Moreover, the regularization parameter can be taken small enough to ensure accuracy without suffering from numerical artefacts. Applied to several hundred realistic hillslope cases taken from the western side of France (Brittany), the developed method appears to be robust and efficient.
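The regularization idea can be sketched with a smoothed saturation indicator (one common choice; the paper's exact regularization may differ): the discontinuous switch between storage and overland routing is replaced by a ramp of width eps that standard time integrators can traverse.

```python
import numpy as np

def smooth_indicator(h, h_max, eps=1e-3):
    """Regularized Heaviside: 0 well below saturation, 1 at saturation,
    with a linear ramp of width eps in between (an assumed regularization)."""
    return np.clip((h - (h_max - eps)) / eps, 0.0, 1.0)

def partition(recharge, h, h_max, eps=1e-3):
    """Partitioned flux: below saturation all recharge goes to storage;
    at saturation the excess is routed to overland flow."""
    f = smooth_indicator(h, h_max, eps)
    return (1.0 - f) * recharge, f * recharge   # (to storage, overland)

for h in np.linspace(0.998, 1.002, 5):          # water table near saturation
    to_storage, overland = partition(1.0, h, h_max=1.0)
    print(f"h = {h:.4f}: storage {to_storage:.2f}, overland {overland:.2f}")
```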
APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS
NASA Astrophysics Data System (ADS)
Mehran, Babak; Nakamura, Hideki
Evaluation of the impacts of congestion improvement schemes on travel time reliability is very significant for road authorities, since travel time reliability represents the operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes, based on modeling travel time variation as a function of demand, capacity, weather conditions and road accidents. For the subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying a Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis
NASA Astrophysics Data System (ADS)
JEONG, TAESEOK; SINGH, RAJENDRA
2000-06-01
This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system when excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. A linear time-invariant system assumption is made, in addition to a proportionally damped system. Only rigid-body modes of the powertrain are considered and the chassis elements are assumed to be rigid. Several simplified physical systems are considered and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions, along with improved design strategies for torque roll axis decoupling.
Evaporation rate of nucleating clusters.
Zapadinsky, Evgeni
2011-11-21
The Becker-Döring kinetic scheme is the most frequently used approach to vapor-liquid nucleation. In the present study it has been extended so that master equations for all cluster configurations are included in the consideration. In the Becker-Döring kinetic scheme the nucleation rate is calculated by comparing the balanced steady state and unbalanced steady state solutions of the set of kinetic equations. It is usually assumed that the balanced steady state produces the equilibrium cluster distribution, and that the evaporation rates are identical in the balanced and unbalanced steady state cases. In the present study we have shown that the evaporation rates are not identical in the equilibrium and unbalanced steady state cases. The evaporation rate depends on the number of clusters at the limit of the cluster definition. We have shown that the ratio of the number of n-clusters at the limit of the cluster definition to the total number of n-clusters differs between the equilibrium and unbalanced steady state cases. This causes a difference in evaporation rates for these cases and results in a correction factor to the nucleation rate. A rough estimate puts this factor at the order of 10^-1, and it can be lower if the carrier gas effectively equilibrates the clusters. The developed approach allows one to refine the correction factor with Monte Carlo and molecular dynamics simulations.
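For orientation, the quantity such a correction factor multiplies is the classical Becker-Döring steady-state nucleation rate, J = [Σ_n 1/(β_n N_n^eq)]^(-1). The sketch below illustrates only that textbook summation, not the paper's extended master-equation treatment; the barrier parameters, attachment rates, and size cutoff are hypothetical placeholders.

```python
import numpy as np

def steady_state_rate(beta, n_eq):
    """Classical Becker-Doring steady-state nucleation rate:
    J = [ sum_n 1 / (beta_n * N_n^eq) ]^(-1),
    where beta_n is the monomer attachment rate to an n-cluster
    and N_n^eq is the equilibrium n-cluster concentration."""
    return 1.0 / np.sum(1.0 / (beta * n_eq))

# Illustrative placeholders: a capillarity-like free-energy barrier
n = np.arange(1, 200)                        # cluster sizes
theta = 8.0                                  # dimensionless surface-energy parameter
S = 3.0                                      # supersaturation ratio
dG = theta * n**(2.0 / 3.0) - n * np.log(S)  # Delta G / kT
n_eq = np.exp(-dG)                           # equilibrium cluster distribution
beta = n**(2.0 / 3.0)                        # attachment rate ~ cluster surface area

print(f"relative steady-state rate J = {steady_state_rate(beta, n_eq):.3e}")
```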
Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela
2016-01-01
In this study, bio-inspired computing is exploited for solving systems of nonlinear equations, using variants of genetic algorithms (GAs) as a tool for global search, hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for the system of nonlinear equations in the mean-square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the SQP algorithm. Twelve versions of the memetic approach GA-SQP are designed by taking different sets of reproduction routines in the optimization process. Performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion and chemical equilibrium. Comparative studies of the results in terms of accuracy, convergence and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. Accuracy and convergence of the memetic computing GA-SQP are found to be better in each case of the simulation study, and the effectiveness of the scheme is further established through statistics based on different performance indices for accuracy and complexity.
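The global-search-plus-local-refinement pattern described above can be sketched with off-the-shelf tools. The snippet below uses SciPy's differential evolution as a stand-in for the GA stage and SLSQP for the SQP stage; the two-equation toy system is illustrative and is not one of the paper's six benchmark problems.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy nonlinear system F(x) = 0; the fitness is the mean-squared
# residual, mirroring the paper's mean-square error formulation.
def residuals(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,      # circle
                     np.exp(x[0]) + x[1] - 1.0])   # exponential curve

def fitness(x):
    r = residuals(x)
    return np.mean(r**2)

# Global search (stand-in for the GA) ...
bounds = [(-3.0, 3.0), (-3.0, 3.0)]
coarse = differential_evolution(fitness, bounds, seed=0)

# ... followed by SQP-style local refinement (SLSQP).
refined = minimize(fitness, coarse.x, method="SLSQP")

print("global stage :", coarse.x, coarse.fun)
print("refined stage:", refined.x, refined.fun)
```

The design point is that the global stage only needs to land in the right basin of attraction; the gradient-based stage then converges rapidly to a root.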
LIBRA-WA: a web application for ligand binding site detection and protein function recognition.
Toti, Daniele; Viet Hung, Le; Tortosa, Valentina; Brandi, Valentina; Polticelli, Fabio
2018-03-01
Recently, LIBRA, a tool for active/ligand binding site prediction, was described. LIBRA's effectiveness was comparable to that of similar state-of-the-art tools; however, its scoring scheme, output presentation, dependence on local resources and overall convenience were amenable to improvements. To solve these issues, LIBRA-WA, a web application based on an improved LIBRA engine, has been developed, featuring a novel scoring scheme that consistently improves LIBRA's performance, and a refined algorithm that can identify binding sites hosted at the interface between different subunits. LIBRA-WA also sports additional functionalities such as ligand clustering and a completely redesigned interface for easier analysis of the output. Extensive tests on 373 apoprotein structures indicate that LIBRA-WA is able to identify the biologically relevant ligand/ligand binding site in 357 cases (∼96%), with the correct prediction ranking first in 349 cases (∼98% of the latter, ∼94% of the total). The earlier stand-alone tool has also been updated and dubbed LIBRA+, by integrating LIBRA-WA's improved engine for cross-compatibility purposes. LIBRA-WA and LIBRA+ are available at: http://www.computationalbiology.it/software.html. © The Author (2017). Published by Oxford University Press. All rights reserved.
A taxonomy for mechanical ventilation: 10 fundamental maxims.
Chatburn, Robert L; El-Khatib, Mohamad; Mireles-Cabodevila, Eduardo
2014-11-01
The American Association for Respiratory Care has declared a benchmark for competency in mechanical ventilation that includes the ability to "apply to practice all ventilation modes currently available on all invasive and noninvasive mechanical ventilators." This level of competency presupposes the ability to identify, classify, compare, and contrast all modes of ventilation. Unfortunately, current educational paradigms do not supply the tools to achieve such goals. To fill this gap, we expand and refine a previously described taxonomy for classifying modes of ventilation and explain how it can be understood in terms of 10 fundamental constructs of ventilator technology: (1) defining a breath, (2) defining an assisted breath, (3) specifying the means of assisting breaths based on control variables specified by the equation of motion, (4) classifying breaths in terms of how inspiration is started and stopped, (5) identifying ventilator-initiated versus patient-initiated start and stop events, (6) defining spontaneous and mandatory breaths, (7) defining breath sequences, (8) combining control variables and breath sequences into ventilatory patterns, (9) describing targeting schemes, and (10) constructing a formal taxonomy for modes of ventilation composed of control variable, breath sequence, and targeting schemes. Having established the theoretical basis of the taxonomy, we demonstrate a step-by-step procedure to classify any mode on any mechanical ventilator. Copyright © 2014 by Daedalus Enterprises.
Structure factor of liquid alkali metals using a classical-plasma reference system
NASA Astrophysics Data System (ADS)
Pastore, G.; Tosi, M. P.
1984-06-01
This paper presents calculations of the liquid structure factor of the alkali metals near freezing, starting from the classical plasma of bare ions as the reference liquid. The indirect ion-ion interaction arising from electronic screening is treated by an optimized random phase approximation (ORPA), imposing physical requirements as in the original ORPA scheme developed by Weeks, Chandler and Andersen for liquids with strongly repulsive core potentials. A comparison of the results with computer simulation data for a model of liquid rubidium shows that the present approach overcomes the well-known difficulties met in applying to these metals the standard ORPA based on a reference liquid of neutral hard spheres. The optimization scheme is also shown to be equivalent to a reduction of the range of the indirect interaction in momentum space, as proposed empirically in an earlier work. Comparison with experiment for the other alkalis shows that a good overall representation of the data can be obtained for sodium, potassium and cesium, but not for lithium, when one uses a very simple form of the electron-ion potential adjusted to the liquid compressibility. The small-angle scattering region is finally examined more carefully in the light of recent data from Waseda, with a view to possible refinements of the pseudopotential model.
Estimation of distributed Fermat-point location for wireless sensor networking.
Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien
2011-01-01
This work presents a localization scheme for use in wireless sensor networks (WSNs) based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location from the triangle formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point minimizing the total distance to the three vertices of the triangle, and the estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, first, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
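The Fermat point itself is straightforward to compute numerically. Below is a minimal sketch using Weiszfeld's iteration for the geometric median, which coincides with the Fermat point when every triangle angle is below 120 degrees; the vertex coordinates are hypothetical stand-ins for beacon intersections.

```python
import numpy as np

def fermat_point(a, b, c, iters=200, eps=1e-12):
    """Approximate the Fermat point of triangle abc by Weiszfeld's
    iteration: the point minimizing the summed distance to the three
    vertices (valid when every vertex angle is below 120 degrees)."""
    pts = np.array([a, b, c], dtype=float)
    p = pts.mean(axis=0)                 # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - p, axis=1)
        if np.any(d < eps):              # landed on a vertex; stop
            break
        w = 1.0 / d                      # inverse-distance weights
        p_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(p_new - p) < eps:
            p = p_new
            break
        p = p_new
    return p

# Three hypothetical beacon-intersection vertices
print(fermat_point((0.0, 0.0), (4.0, 0.0), (1.0, 3.0)))
```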
Modeling of Turbulent Natural Convection in Enclosed Tall Cavities
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.
2017-12-01
It was shown in our previous work (J. Appl. Mech. Tech. Phys. 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two- and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 10^14. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1:4, 1:10, and 1:28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1:28.6), DNS results (aspect ratio of 1:4), and an empirical formula (aspect ratio of 1:10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for aspect ratios of up to 1:10. For higher aspect ratios, the number of grid cells required to achieve the prescribed accuracy grows significantly.
Designing effective animations for computer science instruction
NASA Astrophysics Data System (ADS)
Grillmeyer, Oliver
This study investigated the potential for animations of Scheme functions to help novice computer science students understand difficult programming concepts. These animations used an instructional framework inspired by theories of constructivism and knowledge integration. The framework had students make predictions, reflect, and specify examples to animate to promote autonomous learning and result in more integrated knowledge. The framework used animated pivotal cases to help integrate disconnected ideas and restructure students' incomplete ideas by illustrating weaknesses in their existing models. The animations scaffolded learners, making the thought processes of experts more visible by modeling complex and tacit information. The animation design was guided by prior research and a methodology of design and refinement. Analysis of pilot studies led to the development of four design concerns to aid animation designers: clearly illustrate the mapping between objects in animations with the actual objects they represent, show causal connections between elements, draw attention to the salient features of the modeled system, and create animations that reduce complexity. Refined animations based on these design concerns were compared to computer-based tools, text-based instruction, and simpler animations that do not embody the design concerns. Four studies comprised this dissertation work. Two sets of animated presentations of list creation functions were compared to control groups. No significant differences were found in support of animations. Three different animated models of traces of recursive functions ranging from concrete to abstract representations were compared. No differences in learning gains were found between the three models in test performance. Three models of animations of applicative operators were compared with students using the replacement modeler and the Scheme interpreter. Significant differences were found favoring animations that addressed causality and salience in their design. Lastly, two binary tree search algorithm animations designed to reduce complexity were compared with hand-tracing of calls. Students made fewer mistakes in predicting the tree traversal when guided by the animations. However, the posttest findings were inconsistent. In summary, animations designed based on the design concerns did not consistently add value to instruction in the form investigated in this research.
Developing a Procedure for Segmenting Meshed Heat Networks of Heat Supply Systems without Outflows
NASA Astrophysics Data System (ADS)
Tokarev, V. V.
2018-06-01
The heat supply systems of cities have, as a rule, a ring structure with the possibility of redistributing the flows. Although a ring structure is more reliable than a radial one, the operators of heat networks prefer to run them in normal modes according to a scheme without overflows of the heat carrier between the heat mains. With such a scheme, it is easier to adjust the networks and to detect and locate faults in them. The article proposes a formulation of the heat network segmenting problem. The problem is set in terms of optimization, with the heat supply system's excess hydraulic power used as the optimization criterion. The heat supply system computer model has a hierarchically interconnected multilevel structure. Since iterative calculations are only carried out for the level of trunk heat networks, decomposing the entire system into levels allows the dimensionality of the solved subproblems to be reduced by an order of magnitude. Solving the problem by fully enumerating possible segmentation versions is not feasible for systems of realistic size. The article therefore suggests a procedure for finding a rational segmentation of heat supply networks that limits the search to versions dividing the system into segments near the flow convergence nodes, with subsequent refinement of the solution. The refinement is performed in two stages according to the total excess hydraulic power criterion. At the first stage, the loads are redistributed among the sources. After that, the heat networks are divided into independent fragments, and the possibility of increasing the excess hydraulic power in the obtained fragments is checked by shifting the division places inside a fragment. The proposed procedure has been tested on a municipal heat supply system involving six heat mains fed from a common source, 24 loops within the plane of the feeding mains, and more than 5000 consumers. Application of the proposed segmentation procedure made it possible to find a version requiring 3% less hydraulic power in the heat supply system than the one found using the simultaneous segmentation method.
NASA Astrophysics Data System (ADS)
Fambri, Francesco; Dumbser, Michael; Zanotti, Olindo
2017-11-01
This paper presents an arbitrary high-order accurate ADER Discontinuous Galerkin (DG) method on space-time adaptive meshes (AMR) for the solution of two important families of non-linear time-dependent partial differential equations for compressible dissipative flows: the compressible Navier-Stokes equations and the equations of viscous and resistive magnetohydrodynamics in two and three space dimensions. The work continues a recent series of papers concerning the development and application of a proper a posteriori subcell finite volume limiting procedure suitable for discontinuous Galerkin methods (Dumbser et al., 2014, Zanotti et al., 2015 [40,41]). It is a well-known fact that a major weakness of high-order DG methods lies in the difficulty of limiting discontinuous solutions, which generate spurious oscillations, the so-called 'Gibbs phenomenon'. In the present work, a nonlinear stabilization of the scheme is sequentially and locally introduced only for troubled cells on the basis of a novel a posteriori detection criterion, i.e. the MOOD approach. The main benefits of the MOOD paradigm, i.e. computational robustness even in the presence of strong shocks, are preserved, and the numerical diffusion is considerably reduced also for the limited cells by resorting to a proper sub-grid. In practice the method first produces a so-called candidate solution by using a high-order accurate unlimited DG scheme. Then, a set of numerical and physical detection criteria is applied to the candidate solution, namely: positivity of pressure and density, absence of floating point errors, and satisfaction of a discrete maximum principle in the sense of polynomials. In those cells where at least one of these criteria is violated, the candidate solution is flagged as troubled and is locally rejected. Subsequently, a more reliable numerical solution is recomputed a posteriori by employing a more robust but still very accurate ADER-WENO finite volume scheme on the subgrid averages within that troubled cell. Finally, a high-order DG polynomial is reconstructed back from the evolved subcell averages. We apply the whole approach for the first time to the equations of compressible gas dynamics and magnetohydrodynamics in the presence of viscosity, thermal conductivity and magnetic resistivity, therefore extending our family of adaptive ADER-DG schemes to cases for which the numerical fluxes also depend on the gradient of the state vector. The distinguished high-resolution properties of the presented numerical scheme stand out across a wide range of non-trivial test cases, both for the compressible Navier-Stokes equations and for the viscous and resistive magnetohydrodynamics equations. The present results show clearly that the shock-capturing capability of the new schemes is significantly enhanced within a cell-by-cell Adaptive Mesh Refinement (AMR) implementation together with time-accurate local time stepping (LTS).
Wavelet-enabled progressive data Access and Storage Protocol (WASP)
NASA Astrophysics Data System (ADS)
Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.
2015-12-01
Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency among neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, and streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF-funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data access, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyberinfrastructure for the efficient analysis of very large data sets.
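The progressive-refinement idea can be illustrated with a single level of the 1D Haar transform: reconstructing with the detail coefficients zeroed gives the coarse "far away" view, and adding them back restores the field exactly. This is a generic sketch of the principle, not WASP's NetCDF-compatible format; the sine-wave field is a placeholder for simulation output.

```python
import numpy as np

def haar_decompose(x):
    """One level of the orthonormal 1D Haar transform:
    returns (approximation, detail) at half resolution."""
    x = x.reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one Haar level; passing zeros for `detail` yields the
    coarse preview, passing the stored detail refines it exactly."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    return np.stack([even, odd], axis=1).ravel()

field = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))    # stand-in grid data
a, d = haar_decompose(field)

coarse_view = haar_reconstruct(a, np.zeros_like(d))  # progressive preview
full_view = haar_reconstruct(a, d)                   # after refinement

print("preview error :", np.max(np.abs(coarse_view - field)))
print("refined error :", np.max(np.abs(full_view - field)))  # ~0
```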
NASA Technical Reports Server (NTRS)
Combi, M. R.; Kabin, K.; Gombosi, T. I.; DeZeeuw, D. L.; Powell, K. G.
1998-01-01
The first results of applying a three-dimensional ideal MHD model to the mass-loaded flow of Jupiter's corotating magnetospheric plasma past Io are presented. The model is able to consider simultaneously physically realistic conditions for ion mass loading, ion-neutral drag, and an intrinsic magnetic field in a full global calculation without imposing artificial dissipation. Io is modeled with an extended neutral atmosphere which loads the corotating plasma torus flow with mass, momentum, and energy. The governing equations are solved with adaptive mesh refinement on an unstructured Cartesian grid using an upwind scheme. For the work described in this paper we explored a range of models without an intrinsic magnetic field for Io. We compare our results with particle and field measurements made during the December 7, 1995, flyby of Io, as published by the Galileo Orbiter experiment teams. For two extreme cases of lower boundary conditions at Io, our model can quantitatively explain the variation of density along the spacecraft trajectory and can reproduce the general appearance of the variations of magnetic field and ion pressure and temperature. The net fresh ion mass-loading rates are in the range of approximately 300-650 kg/s, and the equivalent charge exchange mass-loading rates are in the range of approximately 540-1150 kg/s in the vicinity of Io.
NASA Astrophysics Data System (ADS)
Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.
2016-10-01
Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code to the research community, refined in terms of coding structure, accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axisymmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.
Design and Performance of McRas in SCMs and GEOS I/II GCMs
NASA Technical Reports Server (NTRS)
Sud, Yogesh C.; Einaudi, Franco (Technical Monitor)
2000-01-01
The design of a prognostic cloud scheme named McRAS (Microphysics of clouds with the Relaxed Arakawa-Schubert Scheme) for general circulation models (GCMs) will be discussed. McRAS distinguishes three types of clouds: (1) convective, (2) stratiform, and (3) boundary-layer types. The convective clouds transform and merge into stratiform clouds on an hourly time-scale, while the boundary-layer clouds merge into the stratiform clouds instantly. The cloud condensate converts into precipitation following the auto-conversion equations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice crystal growth and collection of cloud condensate by precipitation. All clouds convect, advect, and diffuse both horizontally and vertically with fully interactive cloud microphysics throughout the life cycle of the cloud, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single column model (SCM) with the GATE Phase III and ARM CART datasets has shown that, together with the rest of the model physics, McRAS can simulate the observed temperature, humidity, and precipitation without many systematic errors. The time history and time-mean in-cloud water and ice distribution, fractional cloudiness, cloud optical thickness, origin of precipitation in the convective anvils and towers, and the convective updraft and downdraft velocities and mass fluxes all show realistic behavior. Performance of McRAS in the GEOS II GCM shows several satisfactory features, but some of the remaining deficiencies suggest the need for additional research involving convective triggers and inhibitors, provision for a continuously detraining updraft, a realistic scheme for cumulus gravity wave drag, and refinements to the physical conditions for ascertaining the cloud detrainment level.
Effectiveness of vegetation-based biodiversity offset metrics as surrogates for ants.
Hanford, Jayne K; Crowther, Mathew S; Hochuli, Dieter F
2017-02-01
Biodiversity offset schemes are globally popular policy tools for balancing the competing demands of conservation and development. Trading currencies for losses and gains in biodiversity value at development and credit sites are usually based on several vegetation attributes combined to yield a simple score (multimetric), but the score is rarely validated prior to implementation. Inaccurate biodiversity trading currencies are likely to accelerate global biodiversity loss through unrepresentative trades of losses and gains. We tested a model vegetation multimetric (i.e., vegetation structural and compositional attributes) typical of offset trading currencies to determine whether it represented measurable components of compositional and functional biodiversity. Study sites were located in remnant patches of a critically endangered ecological community in western Sydney, Australia, an area representative of global conflicts between conservation and expanding urban development. We sampled ant fauna composition with pitfall traps and enumerated removal by ants of native plant seeds from artificial seed containers (seed depots). Ants are an excellent model taxon because they are strongly associated with habitat complexity, respond rapidly to environmental change, and are functionally important at many trophic levels. The vegetation multimetric did not predict differences in ant community composition or seed removal, despite underlying assumptions that biodiversity trading currencies used in offset schemes represent all components of a site's biodiversity value. This suggests that vegetation multimetrics are inadequate surrogates for total biodiversity value. These findings highlight the urgent need to refine existing offsetting multimetrics to ensure they meet underlying assumptions of surrogacy. Despite the best intentions, offset schemes will never achieve their goal of no net loss of biodiversity values if trades are based on metrics unrepresentative of total biodiversity. © 2016 Society for Conservation Biology.
Toward an optimisation technique for dynamically monitored environment
NASA Astrophysics Data System (ADS)
Shurrab, Orabi M.
2016-10-01
The data fusion community has introduced multiple procedures for situational assessment to facilitate timely responses to emerging situations. In particular, the process refinement level of the Joint Directors of Laboratories (JDL) model is a meta-process that assesses and improves the data fusion task during real-time operation; in other words, it is an optimisation technique to verify overall data fusion performance and steer it toward the top-level goals of the decision-making resources. This paper discusses the theoretical concept of prioritisation, where the analyst team is required to stay up to date with a dynamically changing environment spanning domains such as air, sea, land, space and cyberspace. It demonstrates, with an illustrative example, how various tracking activities are ranked simultaneously into a predetermined order. Specifically, it presents a modelling scheme for a case-study scenario in which the real-time system reports different classes of prioritised events, followed by performance metrics for evaluating the prioritisation process in the situational awareness (SWA) domain. The proposed performance metrics have been designed and evaluated using an analytical approach. The modelling scheme represents the situational awareness system outputs mathematically as a list of activities; this allowed the evaluation to conduct a rigorous analysis of the prioritisation process regardless of constraints related to any domain-specific configuration. After three levels of assessment over three separate scenarios, the Prioritisation Capability Score (PCS) provided an appropriate scoring scheme for different ranking instances. From the data fusion perspective, the proposed metric assessed real-time system performance adequately, and it supports a verification process that directs the operator's attention to any issue concerning the prioritisation capability of the situational awareness domain.
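The abstract does not specify how the PCS is computed, so the sketch below substitutes a simple stand-in: agreement between a produced ranking and a reference ranking via a normalized Spearman footrule distance. The event labels are hypothetical; this illustrates only the kind of comparison a ranking-based capability score performs, not the paper's actual metric.

```python
def ranking_agreement(produced, reference):
    """Illustrative stand-in for a prioritisation-capability score:
    1 - (normalized Spearman footrule distance) between a system's
    produced ranking and the reference (ground-truth) ranking."""
    pos = {event: i for i, event in enumerate(reference)}
    displacement = sum(abs(i - pos[e]) for i, e in enumerate(produced))
    n = len(reference)
    worst = n * n // 2          # maximum possible footrule distance
    return 1.0 - displacement / worst

# Hypothetical prioritised event classes
reference = ["missile", "aircraft", "vessel", "vehicle"]
produced = ["aircraft", "missile", "vessel", "vehicle"]
print(f"agreement = {ranking_agreement(produced, reference):.2f}")  # 0.75
```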
NASA Astrophysics Data System (ADS)
Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; MacBean, Natasha; Alexander, M. Ross; Dye, Alex; Bishop, Daniel A.; Trouet, Valerie; Babst, Flurin; Hessl, Amy E.; Pederson, Neil; Blanken, Peter D.; Bohrer, Gil; Gough, Christopher M.; Litvak, Marcy E.; Novick, Kimberly A.; Phillips, Richard P.; Wood, Jeffrey D.; Moore, David J. P.
2017-09-01
How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass, and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. a dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, in which C allocated to the stem and leaves varies in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.-iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen forests (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m⁻²) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m⁻²) for both evergreen and deciduous sites, due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C-LAI relationship in the model did not match the observed leaf C-LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic Cstem/Cleaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses, and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.
NASA Astrophysics Data System (ADS)
Bower, Keith; Choularton, Tom; Latham, John; Sahraei, Jalil; Salter, Stephen
2006-11-01
A simplified version of the model of marine stratocumulus clouds developed by Bower, Jones and Choularton [Bower, K.N., Jones, A., and Choularton, T.W., 1999. A modeling study of aerosol processing by stratocumulus clouds and its impact on GCM parameterisations of cloud and aerosol. Atmospheric Research, Vol. 50, Nos. 3-4, The Great Dun Fell Experiment, 1995-special issue, 317-344.] was used to examine the sensitivity of the albedo-enhancement global warming mitigation scheme proposed by Latham [Latham, J., 1990. Control of global warming? Nature 347, 339-340; Latham, J., 2002. Amelioration of global warming by controlled enhancement of the albedo and longevity of low-level maritime clouds. Atmos. Sci. Letters (doi:10.1006/Asle.2002.0048).] to the cloud and environmental aerosol characteristics, as well as to those of the seawater aerosol of salt mass ms and number concentration ΔN which, under the scheme, are deliberately introduced into the clouds. Values of the albedo change ΔA and droplet number concentration Nd were calculated for a wide range of values of ms, ΔN, updraught speed W, cloud thickness ΔZ and cloud-base temperature TB, for three measured aerosol spectra corresponding to ambient air with negligible, moderate and high levels of pollution. Our choices of parameter value ranges were determined by the extent of their applicability to the mitigation scheme, whose current formulation is still somewhat preliminary, rendering the use of refinements incorporated into other stratocumulus models unwarranted in this study. In agreement with earlier studies: (1) ΔA was found to be very sensitive to ΔN and (within certain constraints) insensitive to changes in ms, W, ΔZ and TB; (2) ΔA was greatest for clouds formed in pure air and least for highly polluted air. In many situations considered to be within the ambit of the mitigation scheme, the calculated ΔA values exceeded those estimated by earlier workers as being necessary to produce a cooling sufficient to compensate, globally, for the warming resulting from a doubling of the atmospheric carbon dioxide concentration. Our calculations provide quantitative support for the physical viability of the mitigation scheme and offer new insights into its technological requirements.
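The leverage that ΔN exerts on ΔA can be previewed with the standard Twomey susceptibility relation, ΔA ≈ A(1−A)/3 · Δln(Nd), which holds for fixed liquid water content. This is a textbook approximation, not the Bower-Jones-Choularton model used in the paper, and the numbers below are illustrative.

```python
import numpy as np

def albedo_change(A, N_old, N_new):
    """Twomey susceptibility: for fixed liquid water content, cloud
    optical depth scales roughly as N^(1/3), giving
        dA / d(ln N) ~= A (1 - A) / 3,
    so  Delta A ~= A (1 - A) / 3 * ln(N_new / N_old)."""
    return A * (1.0 - A) / 3.0 * np.log(N_new / N_old)

# Hypothetical clean marine cloud: albedo 0.5, droplet number
# concentration raised from 60 to 150 per cm^3 by seeding.
print(f"Delta A ~= {albedo_change(0.5, 60.0, 150.0):.3f}")
```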
A cell-vertex multigrid method for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Radespiel, R.
1989-01-01
A cell-vertex scheme for the Navier-Stokes equations, based on central difference approximations and Runge-Kutta time stepping, is described. Using local time stepping, implicit residual smoothing, a multigrid method, and carefully controlled artificial dissipative terms, very good convergence rates are obtained for a wide range of two- and three-dimensional flows over airfoils and wings. The accuracy of the code is examined by grid refinement studies and comparison with experimental data. For accurate prediction of turbulent flows with strong separations, a modified version of the nonequilibrium turbulence model of Johnson and King is introduced, which is well suited for implementation in three-dimensional Navier-Stokes codes. It is shown that the solutions for three-dimensional flows with strong separations can be dramatically improved when a nonequilibrium turbulence model is used.
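A minimal sketch of the central-difference-plus-Runge-Kutta machinery follows, reduced to 1D linear advection with a second-difference artificial dissipation term and the classic four-stage coefficient set (1/4, 1/3, 1/2, 1). Residual smoothing and multigrid are omitted, and on this uniform periodic grid local time stepping degenerates to a single global step; all parameters are illustrative.

```python
import numpy as np

def residual(u, a, dx, eps):
    """Central-difference flux residual plus a second-difference
    artificial dissipation term."""
    du = -a * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    diss = eps * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx
    return du + diss

def rk4_step(u, dt, a, dx, eps):
    """Low-storage four-stage Runge-Kutta stepping with the
    coefficients (1/4, 1/3, 1/2, 1) commonly paired with
    central-difference schemes."""
    u0 = u.copy()
    for alpha in (0.25, 1.0 / 3.0, 0.5, 1.0):
        u = u0 + alpha * dt * residual(u, a, dx, eps)
    return u

n, a, eps = 200, 1.0, 0.05
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-100.0 * (x - 0.5) ** 2)   # Gaussian pulse initial condition
dt = 0.5 * dx / a                     # CFL-limited time step
for _ in range(200):
    u = rk4_step(u, dt, a, dx, eps)
print("pulse peak after advection:", u.max())
```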
Elliptic generation of composite three-dimensional grids about realistic aircraft
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1986-01-01
An elliptic method for generating composite grids about realistic aircraft is presented. A body-conforming grid is first generated about the entire aircraft by the solution of Poisson's differential equation. This grid has relatively coarse spacing, and it covers the entire physical domain. At boundary surfaces, cell size is controlled and cell skewness is nearly eliminated by inhomogeneous terms, which are found automatically by the program. Certain regions of the grid in which high gradients are expected, and which map into rectangular solids in the computational domain, are then designated for zonal refinement. Spacing in the zonal grids is reduced by adding points with a simple, algebraic scheme. Details of the grid generation method are presented along with results of the present application, a wing-body configuration based on the F-16 fighter aircraft.
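A stripped-down cousin of the elliptic generator can be sketched by relaxing the grid coordinates with a homogeneous Laplace system in computational space; the inhomogeneous Poisson control terms that enforce spacing and near-orthogonality at boundaries are omitted here, and the bump geometry is a hypothetical stand-in for an aircraft surface.

```python
import numpy as np

def laplace_grid(x, y, iters=2000, tol=1e-10):
    """Jacobi relaxation of x_xixi + x_etaeta = 0 (same for y):
    boundary nodes stay fixed while each interior node moves toward
    the average of its four neighbors, smoothing the grid."""
    for _ in range(iters):
        x_new = 0.25 * (x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2] + x[1:-1, 2:])
        y_new = 0.25 * (y[:-2, 1:-1] + y[2:, 1:-1] + y[1:-1, :-2] + y[1:-1, 2:])
        change = np.max(np.abs(x_new - x[1:-1, 1:-1])) + \
                 np.max(np.abs(y_new - y[1:-1, 1:-1]))
        x[1:-1, 1:-1], y[1:-1, 1:-1] = x_new, y_new
        if change < tol:
            break
    return x, y

# Body-conforming grid between an inner bump and an outer line
ni, nj = 41, 21
xi = np.linspace(0.0, 1.0, ni)
x = np.repeat(xi[:, None], nj, axis=1)             # streamwise coordinate
y = np.zeros((ni, nj))
y[:, 0] = 0.1 * np.exp(-50.0 * (xi - 0.5) ** 2)    # body surface (bump)
y[:, -1] = 1.0                                     # outer boundary
for j in range(1, nj - 1):                         # initial algebraic grid
    y[:, j] = y[:, 0] + (y[:, -1] - y[:, 0]) * j / (nj - 1)

x, y = laplace_grid(x, y)
print("grid extent:", x.min(), x.max(), y.min(), y.max())
```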
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner, and a local path refinement algorithm to compute collision-free paths in tight spaces and to satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
Vogelmeier, Claus F; Criner, Gerard J; Martínez, Fernando J; Anzueto, Antonio; Barnes, Peter J; Bourbeau, Jean; Celli, Bartolome R; Chen, Rongchang; Decramer, Marc; Fabbri, Leonardo M; Frith, Peter; Halpin, David M G; López Varela, M Victorina; Nishimura, Masaharu; Roche, Nicolás; Rodríguez-Roisin, Roberto; Sin, Don D; Singh, Dave; Stockley, Robert; Vestbo, Jørgen; Wedzicha, Jadwiga A; Agustí, Alvar
2017-03-01
This Executive Summary of the Global Strategy for the Diagnosis, Management, and Prevention of COPD (GOLD) 2017 Report focuses primarily on the revised and novel parts of the document. The most significant changes include: 1) the assessment of COPD has been refined to separate the spirometric assessment from symptom evaluation, and ABCD groups are now proposed to be derived exclusively from patient symptoms and their history of exacerbations; 2) for each of the groups A to D, escalation strategies for pharmacological treatments are proposed; 3) the concept of de-escalation of therapy is introduced in the treatment assessment scheme; 4) nonpharmacologic therapies are comprehensively presented; and 5) the importance of comorbid conditions in managing COPD is reviewed. Copyright © 2017 SEPAR. Published by Elsevier España, S.L.U. All rights reserved.
3D Indoor Positioning of UAVs with Spread Spectrum Ultrasound and Time-of-Flight Cameras
Aguilera, Teodoro
2017-01-01
This work proposes the use of a hybrid acoustic and optical indoor positioning system for the accurate 3D positioning of Unmanned Aerial Vehicles (UAVs). The acoustic module of this system is based on a Time-Code Division Multiple Access (T-CDMA) scheme, where the sequential emission of five spread spectrum ultrasonic codes is performed to compute the horizontal vehicle position following a 2D multilateration procedure. The optical module is based on a Time-Of-Flight (TOF) camera that provides an initial estimation for the vehicle height. A recursive algorithm programmed on an external computer is then proposed to refine the estimated position. Experimental results show that the proposed system can increase the accuracy of a solely acoustic system by 70–80% in terms of positioning mean square error. PMID:29301211
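The 2D multilateration step admits a compact linear least-squares formulation: subtracting the first range equation from the others cancels the quadratic terms. Below is a minimal sketch with hypothetical beacon positions and synthetic noisy ranges; it illustrates only the generic multilateration solve, not the paper's T-CDMA processing chain.

```python
import numpy as np

def multilaterate(beacons, ranges):
    """Linear least-squares 2D multilateration. Subtracting the first
    range equation (x-xi)^2 + (y-yi)^2 = ri^2 from the rest removes
    the quadratic terms, leaving A p = b with p = (x, y)."""
    b0 = beacons[0]
    A = 2.0 * (beacons[1:] - b0)
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(b0 ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical beacon layout and a synthetic noisy range set
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
ranges += np.random.default_rng(0).normal(0.0, 0.05, ranges.size)  # noise
print(multilaterate(beacons, ranges))   # ~ (3, 4)
```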
The implementation and data analysis of an interferometer for intense short pulse laser experiments
Park, Jaebum; Baldis, Hector A.; Chen, Hui
2016-08-03
We present an interferometry setup and the detailed fringe analysis method for intense short pulse (SP) laser experiments. The interferometry scheme was refined through multiple campaigns to investigate the effects of pre-plasmas on energetic electrons at the Jupiter Laser Facility at Lawrence Livermore National Laboratory. The interferometer used a frequency-doubled (λ = 0.527 μm), 0.5 ps long optical probe beam to measure the pre-plasma density, an invaluable parameter for better understanding how varying pre-plasma conditions affect the characteristics of the energetic electrons. The hardware of the diagnostic, the data analysis, and example data are presented. The diagnostic setup and the analysis procedure can be employed for other SP laser experiments and interferograms, respectively.
Disentangling Complexity in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gheorghe; Adam, Sanda
2018-02-01
The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. A detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early assessment of the problem complexity, which enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic, involving time- and resource-consuming Bayesian inference that results in a radical reformulation of the problem to be solved; (iii) optimistic, asking exclusively for subrange subdivision by bisection; and (3) use of the weaker of the two possible accuracy targets (the input accuracy specifications and the intrinsic integrand properties, respectively), which results in the maximum possible solution accuracy under the minimum possible computing time.
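The "optimistic" path, subrange subdivision by bisection, is the classic automatic adaptive quadrature loop. Below is a minimal sketch using recursive adaptive Simpson integration with a Richardson-style local error estimate; it illustrates the generic bisection mechanism only, not BAAQ's Bayesian machinery.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Automatic adaptive quadrature by bisection: each subrange is
    accepted when its local Simpson error estimate meets the accuracy
    target, and is otherwise split in half and refined recursively."""
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m, h = 0.5 * (a + b), b - a
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, 0.5 * h)
        right = simpson(fm, frm, fb, 0.5 * h)
        if abs(left + right - whole) <= 15.0 * tol:   # Richardson estimate
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol) +
                recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol)

print(adaptive_simpson(math.sin, 0.0, math.pi))   # ~2.0
```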
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2017-07-01
When designing a numerical scheme for the resolution of conservation laws, the selection of a particular source term discretization (STD) may seem irrelevant whenever it ensures convergence with mesh refinement, but it has a decisive impact on the solution. In the framework of the Shallow Water Equations (SWE), well-balanced STDs based on quiescent equilibrium are unable to converge to physically based solutions, which can be constructed by considering energy arguments. Energy-based discretizations can be designed assuming dissipation or conservation, but in any case the STD procedure should not be merely based on ad hoc approximations. The STD proposed in this work is derived from the Generalized Hugoniot Locus obtained from the Generalized Rankine-Hugoniot conditions and the Integral Curve across the contact wave associated with the bed step. In any case, the STD must allow energy-dissipative solutions: steady and unsteady hydraulic jumps, for which some numerical anomalies have been documented in the literature. These anomalies are the incorrect positioning of steady jumps and the presence of a spurious spike of discharge inside the cell containing the jump. The former issue can be addressed by a modification of the energy-conservative STD that ensures a correct dissipation rate across the hydraulic jump, whereas the latter is of greater complexity and cannot be fixed by simply choosing a suitable STD, as more variables are involved. The spike of discharge, also known as the slowly-moving shock anomaly, is a well-known problem in the scientific community; it is produced by a nonlinearity of the Hugoniot locus connecting the states at both sides of the jump. However, this issue seems to be more a feature than a problem when considering steady solutions of the SWE containing hydraulic jumps: the presence of the spurious spike in the discharge has been taken for granted and has become a feature of the solution. Even though it does not disturb the rest of the solution in steady cases, in transient cases it sheds very undesirable spurious oscillations downstream that should be circumvented. Based on spike-reducing techniques (originally designed for the homogeneous Euler equations) that construct interpolated fluxes in the untrustworthy regions, we design a novel Roe-type scheme for the SWE with discontinuous topography that reduces the presence of the aforementioned spurious spike. The resulting spike-reducing method, in combination with the proposed STD, ensures accurate positioning of steady jumps and provides convergence with mesh refinement, which was not possible for previous methods that cannot avoid the spike.
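For reference, the Roe-type building block that such schemes extend is the standard flux for the homogeneous 1D SWE. The sketch below implements only that textbook flux; the paper's energy-based source-term discretization and spike-reducing flux interpolation are not included, and the interface states are illustrative.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def roe_flux_swe(hL, huL, hR, huR):
    """Standard Roe numerical flux for the homogeneous 1D shallow
    water equations with state U = (h, hu); topography-aware schemes
    augment this with a source-term discretization at bed steps."""
    uL, uR = huL / hL, huR / hR
    fL = np.array([huL, huL * uL + 0.5 * G * hL**2])
    fR = np.array([huR, huR * uR + 0.5 * G * hR**2])

    # Roe-averaged velocity and celerity
    sL, sR = np.sqrt(hL), np.sqrt(hR)
    u = (sL * uL + sR * uR) / (sL + sR)
    c = np.sqrt(0.5 * G * (hL + hR))

    # Wave strengths and right eigenvectors for eigenvalues u -+ c
    dh, dhu = hR - hL, huR - huL
    a1 = ((u + c) * dh - dhu) / (2.0 * c)
    a2 = (dhu - (u - c) * dh) / (2.0 * c)
    r1 = np.array([1.0, u - c])
    r2 = np.array([1.0, u + c])

    return 0.5 * (fL + fR) - 0.5 * (a1 * abs(u - c) * r1 +
                                    a2 * abs(u + c) * r2)

# Dam-break-like interface states (illustrative)
print(roe_flux_swe(hL=2.0, huL=0.0, hR=1.0, huR=0.0))
```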
Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models
NASA Astrophysics Data System (ADS)
Mitchell, S. J.; Landau, D. P.
2006-03-01
Using high resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior. Supported by the NSF. [1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005). [2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).
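As a concrete reminder of the dynamics being simulated, the sketch below implements one Monte Carlo sweep of Kawasaki (spin-exchange) Metropolis dynamics for the rigid 2-D Ising model of Ref. [2]; the compressible, continuous-position variants are beyond a snippet, and the lattice size and temperature here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def neighbour_sum(s, i, j, exclude):
    """Sum of nearest-neighbour spins of site (i, j), skipping `exclude`."""
    L = s.shape[0]
    total = 0
    for di, dj in DIRS:
        ni, nj = (i + di) % L, (j + dj) % L
        if (ni, nj) != exclude:
            total += s[ni, nj]
    return total

def kawasaki_sweep(s, T, J=1.0):
    """One sweep of spin-exchange dynamics on a rigid periodic 2-D Ising
    lattice; magnetization is conserved (Model B type growth)."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        di, dj = DIRS[rng.integers(4)]
        k, l = (i + di) % L, (j + dj) % L
        if s[i, j] == s[k, l]:
            continue  # exchanging equal spins changes nothing
        na = neighbour_sum(s, i, j, exclude=(k, l))
        nb = neighbour_sum(s, k, l, exclude=(i, j))
        dE = -J * (s[k, l] - s[i, j]) * (na - nb)  # energy change of the swap
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j], s[k, l] = s[k, l], s[i, j]

# quenched 50:50 mixture; domains coarsen under repeated sweeps
spins = rng.choice([-1, 1], size=(64, 64))
for _ in range(10):
    kawasaki_sweep(spins, T=1.0)
```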
Thermographic diagnostics to discriminate skin lesions: a clinical study
NASA Astrophysics Data System (ADS)
Stringasci, Mirian Denise; Moriyama, Lilian Tan; Salvio, Ana Gabriela; Bagnato, Vanderlei Salvador; Kurachi, Cristina
2015-06-01
Cancer is responsible for about 13% of all causes of death in the world; over 7 million people die annually of this disease. In most cases, survival rates are greater when the disease is diagnosed in its early stages. It is known that tumor lesions present a different temperature compared with normal tissues, and some studies have attempted to establish new diagnostic methods targeting this temperature difference. In this study, we investigate the use of a handheld thermographic camera to discriminate skin lesions. Patients presenting with Basal Cell Carcinoma, Squamous Cell Carcinoma, Actinic Keratosis, Pigmented Seborrheic Keratosis, Melanoma or Intradermal Nevus lesions were investigated at the Skin Department of Amaral Carvalho Hospital. Patients are selected by a dermatologist, and the lesion images are recorded using an infrared camera. The images are evaluated taking into account the temperature level and the differences within lesion areas, at lesion borders, and between altered and normal skin. The present results show that thermography may be an important tool for aiding the clinical diagnosis of superficial skin lesions.
Ulibarri, Monica D.; Roesch, Scott; Rangel, M. Gudelia; Staines, Hugo; Amaro, Hortensia; Strathdee, Steffanie A.
2014-01-01
A significant body of research among female sex workers (FSWs) has focused on individual-level HIV risk factors. Comparatively little is known about their non-commercial, steady partners who may heavily influence their behavior and HIV risk. This cross-sectional study of 214 FSWs who use drugs and their male steady partners aged ≥18 in two Mexico-U.S. border cities utilized a path-analytic model for dyadic data based upon the Actor-Partner Interdependence Model to examine relationships between sexual relationship power, intimate partner violence (IPV), depression symptoms, and unprotected sex. FSWs’ relationship power, IPV perpetration and victimization were significantly associated with unprotected sex within the relationship. Male partners’ depression symptoms were significantly associated with unprotected sex within the relationship. Future HIV prevention interventions for FSWs and their male partners should address issues of sexual relationship power, IPV, and mental health both individually and in the context of their relationship. PMID:24743959
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevelhimer, Mark S.; Colby, Jonathan; Adonizio, Mary Ann
2016-07-31
One of the most important biological questions facing the marine and hydrokinetic (MHK) energy industry is whether fish and marine mammals that encounter MHK devices are likely to be struck by moving components. For hydrokinetic (HK) devices, i.e., those that generate energy from flowing water, this concern is greatest for large organisms because their increased length increases the probability that they will be struck as they pass through the area of blade sweep and because their increased mass means that the force absorbed if struck is greater and potentially more damaging (Amaral et al. 2015). Key to answering this question is understanding whether aquatic organisms change their swimming behavior as they encounter a device in a way that decreases their likelihood of being struck and possibly injured by the device. Whether near-field or far-field behavior results in general avoidance of or attraction to HK devices is a significant factor in the possible risk of physical contact with rotating turbine blades (Cada and Bevelhimer 2011).
NASA Technical Reports Server (NTRS)
Marvin, Margaret R.; Wolfe, Glenn M.; Salawitch, Ross J.; Canty, Timothy P.; Roberts, Sandra J.; Travis, Katherine R.; Aiken, Kenneth C.; de Gouw, Joost A.; Graus, Martin; Hanisco, Thomas F.;
2017-01-01
Isoprene oxidation schemes vary greatly among gas-phase chemical mechanisms, with potentially significant ramifications for air quality modeling and interpretation of satellite observations in biogenic-rich regions. In this study, in situ observations from the 2013 SENEX mission are combined with a constrained 0-D photochemical box model to evaluate isoprene chemistry among five commonly used gas-phase chemical mechanisms: CB05, CB6r2, MCMv3.2, MCMv3.3.1, and a recent version of GEOS-Chem. Mechanisms are evaluated and inter-compared with respect to formaldehyde (HCHO), a high-yield product of isoprene oxidation. Though underestimated by all considered mechanisms, observed HCHO mixing ratios are best reproduced by MCMv3.3.1 (normalized mean bias = -15%), followed by GEOS-Chem (-17%), MCMv3.2 (-25%), CB6r2 (-32%) and CB05 (-33%). Inter-comparison of HCHO production rates reveals that major restructuring of the isoprene oxidation scheme in the Carbon Bond mechanism increases HCHO production by only approx. 5% in CB6r2 relative to CB05, while further refinement of the complex isoprene scheme in the Master Chemical Mechanism increases HCHO production by approx. 16% in MCMv3.3.1 relative to MCMv3.2. The GEOS-Chem mechanism provides a good approximation of the explicit isoprene chemistry in MCMv3.3.1 and generally reproduces the magnitude and source distribution of HCHO production rates. We analytically derive improvements to the isoprene scheme in CB6r2 and incorporate these changes into a new mechanism called CB6r2-UMD, which is designed to preserve computational efficiency. The CB6r2-UMD mechanism mimics production of HCHO in MCMv3.3.1 and demonstrates good agreement with observed mixing ratios from SENEX (-14%). Improved simulation of HCHO also impacts modeled ozone: at approx. 0.3 ppb NO, the ozone production rate increases approx. 3% between CB6r2 and CB6r2-UMD, and rises another approx. 4% when HCHO is constrained to match observations.
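The comparison statistic quoted above is simple enough to pin down in code; a minimal sketch (the HCHO values in the demo are invented for illustration):

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs), the statistic quoted above
    (e.g. -15% for MCMv3.3.1 versus observed HCHO)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return (model - obs).sum() / obs.sum()

# hypothetical HCHO mixing ratios (ppb), purely illustrative
print(100 * normalized_mean_bias([1.7, 2.1, 2.6], [2.0, 2.5, 3.0]))  # -14.7%
```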
An Instrumented Glove to Assess Manual Dexterity in Simulation-Based Neurosurgical Education
Lemos, Juan Diego; Hernandez, Alher Mauricio; Soto-Romero, Georges
2017-01-01
The traditional neurosurgical apprenticeship scheme includes the assessment of trainees' manual skills by experienced surgeons. However, the introduction of surgical simulation technology presents a new paradigm in which residents can refine surgical techniques on a simulator before putting them into practice in real patients. Unfortunately, in this new scheme, an experienced surgeon will not always be available to evaluate a trainee's performance. For this reason, it is necessary to develop automatic mechanisms to estimate metrics for assessing manual dexterity in a quantitative way. Authors have proposed hardware-software approaches to evaluate manual dexterity on surgical simulators. This paper presents IGlove, a wearable device that uses inertial sensors embedded in an elastic glove to capture hand movements. Metrics to assess manual dexterity are estimated from the sensor signals using data processing and information analysis algorithms. It has been designed to be used with a neurosurgical simulator called Daubara NS Trainer, but it can be easily adapted to other benchtop- and manikin-based medical simulators. The system was tested with a sample of 14 volunteers who performed a test designed to simultaneously evaluate their fine motor skills and IGlove's functionalities. The metrics obtained by each of the participants are presented as results in this work; it is also shown how these metrics are used to automatically evaluate each volunteer's level of manual dexterity. PMID:28468268
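As an example of the kind of quantitative metric such a device can report, the sketch below computes the log dimensionless jerk, a standard movement-smoothness measure derivable from inertial-sensor velocity estimates. It is a generic metric chosen for illustration, not necessarily the one IGlove implements, and the demo signals are synthetic.

```python
import numpy as np

def log_dimensionless_jerk(vel, dt):
    """Log dimensionless jerk (LDLJ): values nearer zero indicate smoother
    motion. Uses a Riemann sum for the squared-jerk integral."""
    jerk = np.gradient(np.gradient(vel, dt), dt)   # second derivative of velocity
    duration = dt * (len(vel) - 1)
    dlj = duration**3 / np.max(np.abs(vel))**2 * np.sum(jerk**2) * dt
    return -np.log(dlj)

t = np.arange(0.0, 2.0, 0.01)
smooth = np.sin(np.pi * t / 2) ** 2          # bell-like velocity profile
shaky = smooth + 0.05 * np.sin(40 * t)       # tremulous variant of the same reach
print(log_dimensionless_jerk(smooth, 0.01))  # closer to zero
print(log_dimensionless_jerk(shaky, 0.01))   # markedly more negative
```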
NASA Astrophysics Data System (ADS)
Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John
2016-07-01
In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory scheme was implemented to capture any steep gradients in the flow created by the geometries and a third-order Runge-Kutta method is used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a globally fourth-order scheme in space and third order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.
Developing clinical indicators for the secondary health system in India.
Thakur, Harshad; Chavhan, S; Jotkar, Raju; Mukherjee, Kanchan
2008-08-01
One of the prime goals of any health system is to deliver good and competent quality of healthcare. Through the World Bank-assisted Maharashtra Health Systems Development Project, the Government of Maharashtra in India developed and implemented clinical indicators to improve quality. During this process, clinical areas eligible for monitoring quality of care and the roles of health staff working at various levels were identified. Brainstorming discussion sessions were conducted to refine the list of potential clinical indicators and to identify implementation problems. The scheme was implemented in four stages. (a) A self-explanatory recording tool, standard operating procedures and a training manual were prepared during the tool-preparation stage. (b) Pilot implementation was carried out to monitor the usefulness of the indicators, document the experiences and standardize the system accordingly. (c) The final selection of indicators was made taking into consideration factors such as data reliability and indicator usefulness. For final implementation, 15 indicators for district hospitals and 6 indicators for rural hospitals were selected. (d) Transfer of skills was achieved through training of various hospital functionaries. Selection and prioritization of clinical indicators is the most crucial part. Active participation of local employees is essential for the sustainability of the scheme. It is also important to ensure that the data recorded/reported are both reliable and valid, to conduct monthly reviews of the scheme at various levels and to link it with the quality improvement programme.
Saucedo-Espinosa, Mario A.; Lapizco-Encinas, Blanca H.
2016-01-01
Current monitoring is a well-established technique for the characterization of electroosmotic (EO) flow in microfluidic devices. This method relies on monitoring the time response of the electric current when a test buffer solution is displaced by an auxiliary solution using EO flow. In this scheme, each solution has a different ionic concentration (and electric conductivity). The difference in the ionic concentration of the two solutions defines the dynamic time response of the electric current and, hence, the current signal to be measured: larger concentration differences result in larger measurable signals. A small concentration difference is needed, however, to avoid dispersion at the interface between the two solutions, which can result in undesired pressure-driven flow that conflicts with the EO flow. Additional challenges arise as the conductivity of the test solution decreases, leading to a reduced electric current signal that may be masked by noise during the measuring process, making accurate estimation of the EO mobility difficult. This contribution presents a new scheme for current monitoring that employs multiple channels arranged in parallel, producing an increase in the signal-to-noise ratio of the electric current to be measured and increasing the estimation accuracy. This parallel approach is particularly useful for estimating the EO mobility in systems where low-conductivity media are required, such as insulator-based dielectrophoresis devices. PMID:27375813
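A hedged sketch of the parallel-channel idea: average the current traces of N channels (boosting SNR roughly as sqrt(N)), locate the displacement time, and convert it to an EO mobility via mu_eo = L^2/(V t). The 95% threshold, names, and demo numbers are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def eo_mobility_from_traces(traces, dt, L, V):
    """Estimate EO mobility from current traces of N parallel channels
    (rows of `traces`); the fill time is taken where the ensemble-averaged
    current completes 95% of its total change."""
    mean_i = traces.mean(axis=0)                          # ensemble average
    frac = (mean_i - mean_i[0]) / (mean_i[-1] - mean_i[0])
    t_fill = dt * np.argmax(frac >= 0.95)                 # displacement time (s)
    return L ** 2 / (V * t_fill)                          # mu_eo = L^2 / (V t)

# synthetic demo: 8 noisy channels, ~30 s fill, 1 cm channel, 500 V applied
rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.1)
clean = 1.0 - 0.2 * np.clip(t / 30.0, 0.0, 1.0)   # current ramps down, then plateaus
traces = clean + 0.02 * rng.standard_normal((8, t.size))
print(eo_mobility_from_traces(traces, 0.1, L=0.01, V=500.0))  # ~7e-9 m^2/(V s)
```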
Simulating hydrodynamics and ice cover in Lake Erie using an unstructured grid model
NASA Astrophysics Data System (ADS)
Fujisaki-Manome, A.; Wang, J.
2016-02-01
An unstructured grid Finite-Volume Coastal Ocean Model (FVCOM) is applied to Lake Erie to simulate seasonal ice cover. The model is coupled with an unstructured-grid, finite-volume version of the Los Alamos Sea Ice Model (UG-CICE). We replaced the original two-time-step forward Euler time-integration scheme with the central-difference (i.e., leapfrog) scheme to ensure neutral inertial stability. The modified version of FVCOM coupled with the ice model is applied to this shallow freshwater lake using unstructured grids, which represent the complicated coastline of the Laurentian Great Lakes and allow the spatial resolution to be refined locally. We conducted multi-year simulations of Lake Erie from 2002 to 2013 and compared the results with the observed ice extent, water surface temperature, ice thickness, currents, and water temperature profiles. Seasonal and interannual variations of ice extent and water temperature were captured reasonably well, while the modeled thermocline was somewhat diffusive. The modeled ice thickness tends to be systematically thinner than the observed values. The modeled lake currents compared well with measurements obtained from an Acoustic Doppler Current Profiler located in the deep part of the lake, whereas the simulated currents deviated from measurements near the surface, possibly due to the model's inability to reproduce the sharp summer thermocline and the lack of detailed representation of offshore wind fields in the interpolated meteorological forcing.
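The stability point behind the time-stepping swap is easy to demonstrate on the simplest rotational problem. The sketch below integrates a pure inertial oscillation dw/dt = -i f w (complex velocity w = u + iv) with forward Euler, whose amplitude grows every step, and with leapfrog, which is neutrally stable for |f dt| < 1; the parameter values are illustrative.

```python
import numpy as np

f, dt, n = 1e-4, 360.0, 2000   # Coriolis parameter (1/s), time step (s), steps

w_euler = np.empty(n, dtype=complex)
w_euler[0] = 1.0
for k in range(n - 1):
    w_euler[k + 1] = w_euler[k] + dt * (-1j * f) * w_euler[k]   # forward Euler

w_leap = np.empty(n, dtype=complex)
w_leap[0] = 1.0
w_leap[1] = w_leap[0] + dt * (-1j * f) * w_leap[0]              # Euler start-up
for k in range(1, n - 1):
    w_leap[k + 1] = w_leap[k - 1] + 2 * dt * (-1j * f) * w_leap[k]  # leapfrog

print(abs(w_euler[-1]))  # ~3.7: amplitude grows by sqrt(1 + (f dt)^2) each step
print(abs(w_leap[-1]))   # ~1.0: leapfrog is neutrally stable for this mode
```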
MYCIN II: design and implementation of a therapy reference with complex content-based indexing.
Kim, D. K.; Fagan, L. M.; Jones, K. T.; Berrios, D. C.; Yu, V. L.
1998-01-01
We describe the construction of MYCIN II, a prototype system that provides for content-based markup and search of a forthcoming clinical therapeutics textbook, Antimicrobial Therapy and Vaccines. Existing commercial search technology for digital references utilizes generic tools such as textword-based searches with geographical or statistical refinements. We suggest that the drawbacks of such systems significantly restrict their use in everyday clinical practice. This is in spite of the fact that there is a great need for the information contained within these same references. The system we describe is intended to supplement keyword searching so that certain important questions can be asked easily and can be answered reliably (in terms of precision and recall). Our method attacks this problem in a restricted domain of knowledge-clinical infectious disease. For example, we would like to be able to answer the class of questions exemplified by the following query: "What antimicrobial agents can be used to treat endocarditis caused by Eikenella corrodens?" We have compiled and analyzed a list of such questions to develop a concept-based markup scheme. This scheme was then applied within an HTML markup to electronically "highlight" passages from three textbook chapters. We constructed a functioning web-based search interface. Our system also provides semi-automated querying of PubMed using our concept markup and the user's actions as a guide. PMID:9929205
Comparisons of linear and nonlinear pyramid schemes for signal and image processing
NASA Astrophysics Data System (ADS)
Morales, Aldo W.; Ko, Sung-Jea
1997-04-01
Linear filter banks are used extensively in image and video applications, and new research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, nonlinear filter banks are also used regularly in image pyramid algorithms. There are some inherent advantages to using nonlinear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper, a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and nonlinear filters. The conversion from the weights of order statistics (OS) filters to the coefficients of the impulse response is obtained through these probabilities. However, the reverse problem, the conversion from the coefficients of the impulse response to the weights of OS filters, is not yet fully understood. One of the reasons for this difficulty is the highly nonlinear nature of the partitions and generating function used. In the present paper, the problem is posed as an integer linear programming optimization subject to constraints obtained directly from the coefficients of the impulse response. Although the technique to be presented is not completely refined, it certainly appears promising. Some results will be shown.
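The basic linear-versus-OS contrast the paper quantifies can be seen with two smoothers applied to an edge corrupted by impulsive (non-Gaussian) noise; this sketch uses a moving average and a median filter, not the sample-selection-probability machinery itself, and all sizes are illustrative.

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(1)
x = np.repeat([0.0, 1.0], 100)                 # ideal step edge
noisy = x + 0.02 * rng.standard_normal(x.size)
noisy[rng.integers(0, x.size, 10)] += 2.0      # sparse positive impulses

k = 9
mean_f = np.convolve(noisy, np.ones(k) / k, mode="same")  # linear smoother
med_f = medfilt(noisy, kernel_size=k)                     # order-statistics smoother

print(np.abs(mean_f - x).max())  # impulses leak through and the edge is smeared
print(np.abs(med_f - x).max())   # impulses rejected, edge largely preserved
```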
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions; however, implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but its derivation for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90-degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized with a parallel efficiency of 80-90% on the problems tested.
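For readers unfamiliar with NKMs, here is a minimal Jacobian-free Newton-Krylov example using SciPy's newton_krylov on a small nonlinear boundary-value problem. Note that this relies on finite-difference matrix-vector products rather than the analytical Jacobian the abstract advocates; the problem and tolerance are illustrative.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Solve u'' = exp(u) on (0, 1) with u(0) = u(1) = 0, second-order differences.
n = 100
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))  # Dirichlet boundary values
    return (upad[2:] - 2 * upad[1:-1] + upad[:-2]) / h**2 - np.exp(u)

u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
print(np.abs(residual(u)).max())  # residual at convergence
```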
Using Facet Clusters to Map Learner Modes of Reasoning
NASA Astrophysics Data System (ADS)
Vokos, Stamatis; DeWater, L. S.; Seeley, L.; Kraus, P.
2006-12-01
The Department of Physics and the School of Education at Seattle Pacific University, together with FACET Innovations, LLC, are beginning the second year of a five-year NSF TPC project, Improving the Effectiveness of Teacher Diagnostic Skills and Tools. We are working in partnership with school districts in Washington State to use formative assessment as a means of helping teachers and precollege students deepen their understanding of foundational topics in physical science. We utilize a theoretical framework of knowledge-in-pieces to identify and categorize widespread productive and unproductive modes of reasoning in the topical areas of Properties of Matter, Heat and Temperature, and Physical and Chemical Changes. In this talk, we describe the development and iterative refinement of certain facet clusters of student ideas, as well as the usefulness and limitations of such a mapping scheme. * Supported in part by NSF grant #ESI-0455796, The Boeing Corporation, and the SPU Science Initiative.
GPCRdb: an information system for G protein-coupled receptors
Isberg, Vignir; Mordalski, Stefan; Munk, Christian; Rataj, Krzysztof; Harpsøe, Kasper; Hauser, Alexander S.; Vroling, Bas; Bojarski, Andrzej J.; Vriend, Gert; Gloriam, David E.
2016-01-01
Recent developments in G protein-coupled receptor (GPCR) structural biology and pharmacology have greatly enhanced our knowledge of receptor structure-function relations, and have helped improve the scientific foundation for drug design studies. The GPCR database, GPCRdb, serves a dual role in disseminating and enabling new scientific developments by providing reference data, analysis tools and interactive diagrams. This paper highlights new features in the fifth major GPCRdb release: (i) GPCR crystal structure browsing, superposition and display of ligand interactions; (ii) direct deposition by users of point mutations and their effects on ligand binding; (iii) refined looks for the snake and helix box residue diagrams; and (iv) phylogenetic trees with receptor classification colour schemes. Under the hood, the entire GPCRdb front- and back-ends have been re-coded within one infrastructure, ensuring a smooth browsing experience and development. GPCRdb is available at http://www.gpcrdb.org/ and its open-source code at https://bitbucket.org/gpcr/protwis. PMID:26582914
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reduce maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on the adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into the binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
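MHPE builds on ordinary permutation entropy; a minimal sketch of that core ingredient follows (the multi-scale/hierarchical layers and the AMMF front end are omitted, and the demo signals are synthetic):

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal: the Shannon entropy of
    ordinal patterns, scaled to [0, 1] by log(order!)."""
    patterns = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay]))
        for i in range(len(x) - (order - 1) * delay)
    )
    total = sum(patterns.values())
    p = np.array([c / total for c in patterns.values()])
    return -np.sum(p * np.log(p)) / math.log(math.factorial(order))

rng = np.random.default_rng(0)
print(permutation_entropy(rng.random(5000)))               # ~1: fully irregular
print(permutation_entropy(np.sin(np.arange(5000) * 0.1)))  # << 1: regular signal
```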
Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species
NASA Astrophysics Data System (ADS)
Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar
2018-02-01
The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.
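For orientation, the two-species model of Shigesada, Kawasaki, and Teramoto that the linear-transition-rate case extends can be written, for n competing species, in the standard form below (quoted for context; the paper's coefficient assumptions are more general):

```latex
\partial_t u_i \;-\; \Delta\!\left[ u_i \Big( a_{i0} + \sum_{j=1}^{n} a_{ij}\, u_j \Big) \right]
  \;=\; u_i \Big( b_{i0} - \sum_{j=1}^{n} b_{ij}\, u_j \Big),
  \qquad i = 1,\dots,n,
```

posed in a bounded domain with homogeneous Neumann boundary conditions; n = 2 with suitable coefficients recovers the original SKT system.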
Advances in land modeling of KIAPS based on the Noah Land Surface Model
NASA Astrophysics Data System (ADS)
Koo, Myung-Seo; Baek, Sunghye; Seol, Kyung-Hee; Cho, Kyoungmi
2017-08-01
As of 2013, the Noah Land Surface Model (LSM) version 2.7.1 was implemented in a new global model being developed at the Korea Institute of Atmospheric Prediction Systems (KIAPS). This land surface scheme has been further refined in two respects: by adding new physical processes and by updating surface input parameters. Glacier land, sea ice, and snow cover are now treated more realistically, and inconsistencies in the amount of solar flux absorbed at ground level by the land surface and radiative processes have been rectified. In addition, new parameters are available through the use of 1-km land cover data, which previously had not been possible at the global scale. Land surface albedo/emissivity climatology is newly created using Moderate-Resolution Imaging Spectroradiometer (MODIS) satellite-based data and adjusted parameterization. These updates have been applied to the KIAPS-developed model and generally have a positive impact on near-surface weather forecasting.
[Disease management programs from a health insurer's point of view].
Szymkowiak, Christof; Walkenhorst, Karen; Straub, Christoph
2003-06-01
Disease management programmes represent a great challenge to the German statutory health insurance system. According to politicians, disease management programmes are an appropriate tool for significantly increasing the level of care for chronically ill patients while at the same time slowing down the cost explosion in health care. The statutory health insurers' point of view yields a more refined picture of the chances and risks involved. The chance is that medical guideline-based, evidence-based, cooperative care of the chronically ill could be established. But there are also risks: misuse of disease management programmes, and misallocation of funds due to the ill-advised linkage with the so-called risk compensation scheme (RSA), which balances the sickness funds' structural deficits through redistribution. The nationwide introduction of disease management programmes appears to be a gigantic experiment whose aim is to change the care of chronically ill patients and whose outcome is unpredictable.
Lattice Boltzmann and Navier-Stokes Cartesian CFD Approaches for Airframe Noise Predictions
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Kocheemoolayil, Joseph G.; Kiris, Cetin C.
2017-01-01
Computational fluid dynamics (CFD) approaches based on the lattice Boltzmann (LB) and compressible Navier-Stokes (NS) equations are compared for simulating airframe noise. Both the LB and NS approaches are implemented within the Launch Ascent and Vehicle Aerodynamics (LAVA) framework, and both schemes utilize the same underlying Cartesian structured mesh paradigm with provision for local adaptive grid refinement and sub-cycling in time. We choose a prototypical massively separated, wake-dominated flow ideally suited for Cartesian-grid-based approaches: the partially-dressed, cavity-closed nose landing gear (PDCC-NLG) noise problem from AIAA's Benchmark problems for Airframe Noise Computations (BANC) series of workshops. The relative accuracy and computational efficiency of the two approaches are systematically compared. Detailed comments are made on the potential of LB to significantly reduce time-to-solution for a desired level of accuracy within the context of modeling airframe noise from first principles.
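To make the LB side concrete, here is a textbook D2Q9 BGK collide-and-stream step on a periodic lattice, in Python with NumPy. It illustrates the scheme family only; it bears no relation to the scale or machinery of the LAVA implementation, and all parameters are illustrative.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium populations."""
    cu = np.einsum("qd,xyd->xyq", c, u)        # c_i . u at every node
    usq = np.einsum("xyd,xyd->xy", u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def lbm_step(f, tau=0.6):
    """One BGK collide-and-stream update on a periodic D2Q9 lattice."""
    rho = f.sum(axis=-1)
    u = np.einsum("xyq,qd->xyd", f, c) / rho[..., None]
    f += (equilibrium(rho, u) - f) / tau       # BGK collision
    for q in range(9):                         # streaming: shift each population
        f[..., q] = np.roll(f[..., q], shift=tuple(c[q]), axis=(0, 1))
    return f

# uniform fluid with a small density bump; the bump radiates acoustically
f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))
f[32, 32] *= 1.01
for _ in range(100):
    f = lbm_step(f)
```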
NASA Technical Reports Server (NTRS)
Fritsch, J. Michael; Kain, John S.
1997-01-01
Research efforts during the second year have centered on improving the manner in which convective stabilization is achieved in the Penn State/NCAR mesoscale model MM5. Ways of improving this stabilization have been investigated by (1) refining the partitioning between the Kain-Fritsch convective parameterization scheme and the grid scale by introducing a form of moist convective adjustment; (2) using radar data to define locations of subgrid-scale convection during a dynamic initialization period; and (3) parameterizing deep-convective feedbacks as subgrid-scale sources and sinks of mass. These investigations were conducted by simulating a long-lived convectively-generated mesoscale vortex that occurred during 14-18 Jul. 1982 and the 10-11 Jun. 1985 squall line that occurred over the Kansas-Oklahoma region during the PRE-STORM experiment. The long-lived vortex tracked across the central Plains states and was responsible for multiple convective outbreaks during its lifetime.
Computer simulations of phase field drops on super-hydrophobic surfaces
NASA Astrophysics Data System (ADS)
Fedeli, Livio
2017-09-01
We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Discretization performs through cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code achieves three-dimensional, realistic computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.
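The continuation idea, stripped to its essence, is to warm-start each nonlinear solve from the previous solution as a parameter is ramped. A toy Python sketch, using SciPy's fsolve in place of the paper's quasi-Newton solver and an invented scalar problem:

```python
import numpy as np
from scipy.optimize import fsolve

def F(x, lam):
    """Toy nonlinear residual with a fold; stands in for the discretized
    phase field equations parameterized by a continuation variable."""
    return x**3 - x + lam

x = np.array([1.0])                    # known solution at lam = 0
for lam in np.linspace(0.0, 0.3, 16):  # ramp the parameter
    x = fsolve(F, x, args=(lam,))      # previous solution = initial guess
print(x)
```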
Toxicity testing of polymer materials for dialysis equipment: reconsidering in vivo testing.
Sauer, U G; Liebsch, M; Kolar, R
2000-01-01
In fulfilment of the aims of the European Union Biocidal Directive (Directive 98/8/EC), Technical Guidance Documents are currently being compiled. Part I of these Technical Guidance Documents covers data requirements for active substances and biocidal products. The Three Rs principle has been applied in certain parts of the toxicity and ecotoxicity testing scheme for pesticides, such as testing for acute oral toxicity, skin and eye irritation, skin sensitisation, and dermal absorption. Further recommendations on how to proceed with regard to the continuing replacement, reduction and refinement of animal experiments in this field of regulatory testing are included for consideration. In this context, besides stressing the necessity to validate and accept further alternatives, emphasis is placed on providing the possibility of waiving unnecessary tests and on the continuous evaluation of whether certain tests are needed at all.
Automating the expert consensus paradigm for robust lung tissue classification
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.
2012-03-01
Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single elusive disease-differentiating metric; this undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of probability density function (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns: 5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts, and the aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.
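The workflow's shape can be sketched with a single pdf-based metric and one hierarchical clustering; the actual study pools 15 metrics and aggregates the resulting partitions with a cluster-ensemble step. The histograms below are random stand-ins for VOI feature distributions, and all names are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def bhattacharyya_distance(p, q):
    """One of many pdf-based dissimilarity metrics between two normalized
    histograms; small epsilon guards against log(0)."""
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

rng = np.random.default_rng(0)
hists = rng.dirichlet(np.ones(32), size=40)   # toy VOI histograms (rows sum to 1)

n = len(hists)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = bhattacharyya_distance(hists[i], hists[j])

# average-linkage clustering into 5 pattern groups
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=5, criterion="maxclust")
print(np.bincount(labels)[1:])   # cluster sizes
```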
Goal-based h-adaptivity of the 1-D diamond difference discrete ordinate method
NASA Astrophysics Data System (ADS)
Jeffers, R. S.; Kópházi, J.; Eaton, M. D.; Févotte, F.; Hülsemann, F.; Ragusa, J.
2017-04-01
The quantity of interest (QoI) associated with a solution of a partial differential equation (PDE) is not, in general, the solution itself, but a functional of the solution. Dual weighted residual (DWR) error estimators are one way of providing an estimate of the error in the QoI resulting from the discretisation of the PDE. This paper aims to provide an estimate of the error in the QoI due to the spatial discretisation, where the discretisation scheme being used is the diamond difference (DD) method in space and the discrete ordinate (SN) method in angle. The QoI are reaction rates in detectors and the value of the eigenvalue (Keff) for 1-D fixed source and eigenvalue (Keff criticality) neutron transport problems, respectively. Local values of the DWR over individual cells are used as error indicators for goal-based mesh refinement, which aims to give an optimal mesh for a given QoI.
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
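The building block shared by all these variants is the replica-swap test. A minimal sketch of the standard temperature replica-exchange acceptance criterion follows; the Hamiltonian dimension and well-tempered reweighting of WTE-H-REMC are omitted, and the ladder and energies are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def try_swap(betas, energies, i, j):
    """Metropolis criterion for exchanging replicas i and j:
    accept with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    return delta >= 0 or rng.random() < np.exp(delta)

betas = 1.0 / np.array([1.0, 1.3, 1.7, 2.2])          # inverse-temperature ladder
energies = np.array([-105.2, -101.8, -96.4, -90.1])   # current replica energies
for i in range(len(betas) - 1):                       # sweep neighbouring pairs
    if try_swap(betas, energies, i, i + 1):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
```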
NASA Technical Reports Server (NTRS)
Chen, H. C.; Yu, N. Y.
1991-01-01
An Euler flow solver was developed for predicting the airframe/propulsion integration effects for an aft-mounted turboprop transport. This solver employs a highly efficient multigrid scheme, with a successive mesh-refinement procedure to accelerate the convergence of the solution. A new dissipation model was also implemented to render solutions that are grid insensitive. The propeller power effects are simulated by the actuator disk concept. An embedded flow solution method was developed for predicting the detailed flow characteristics in the local vicinity of an aft-mounted propfan engine in the presence of a flow field induced by a complete aircraft. Results from test case analysis are presented. A user's guide for execution of computer programs, including format of various input files, sample job decks, and sample input files, is provided in an accompanying volume.
Experimental and Theoretical Study of Propeller Spinner/Shank Interference. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cornell, C. C.
1986-01-01
A fundamental experimental and theoretical investigation into the aerodynamic interference associated with propeller spinner and shank regions was conducted. The research program involved a theoretical assessment of solutions previously proposed, followed by a systematic experimental study to supplement the existing data base. As a result, a refined computational procedure was established for prediction of interference effects in terms of interference drag, resolved into propeller thrust and torque components. These quantities were examined with attention to engineering parameters such as two spinner fineness ratios, three blade shank forms, and two/three/four/six/eight blades. Consideration of the physics of the phenomena aided in the logical deduction of two individual interference quantities (cascade effects and spinner/shank juncture interference). These interference effects were semi-empirically modeled using existing theories and placed into a form compatible with an existing propeller performance scheme, which provided the basis for examples of application.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu
2017-07-01
Health condition identification of planetary gearboxes is crucial to reduce the downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearbox. MMSDE is proposed to quantify the regularity of time series, which can assess the dynamical characteristics over a range of scales. MMSDE has obvious advantages in the detection of dynamical changes and computation efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained new features are fed into the least square support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
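The mRMR refinement step admits a compact sketch: greedily add the feature with the best relevance-minus-redundancy score, both terms measured by mutual information. This is a generic mRMR implementation under stated assumptions, not the authors' exact code, and the demo data are synthetic.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, k):
    """Greedy minimum-redundancy maximum-relevance feature selection."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]        # start from most relevant
    remaining = set(range(X.shape[1])) - set(selected)
    while len(selected) < k:
        scores = {}
        for f in remaining:
            # mean MI between candidate f and already-selected features
            red = np.mean([mutual_info_regression(X[:, [s]], X[:, f],
                                                  random_state=0)[0]
                           for s in selected])
            scores[f] = relevance[f] - red
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.discard(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[:, 7] = X[:, 0] + 0.01 * rng.normal(size=200)  # redundant twin of feature 0
y = (X[:, 0] + X[:, 3] > 0).astype(int)
print(mrmr(X, y, k=3))  # picks 0 (or 7) first, then avoids the redundant twin
```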
Reeve, Belinda
2011-09-01
This article examines whether responsive regulation has potential to improve the regulatory framework which controls free-to-air television advertising to children, so that the regulatory scheme can be used more effectively as a tool for obesity prevention. It presents two apparently conflicting arguments, the first being that responsive regulation, particularly monitoring and enforcement measures, can be used to refine the regulation of children's food advertising. The second argument is that there are limits to the improvements that responsive regulation can achieve, since it is trying to achieve the wrong goal, namely placing controls on misleading or deceptive advertising techniques rather than diminishing the sheer volume of advertisements to which children are exposed. These two positions reflect a conflict between public health experts and governments regarding the role of industry in chronic disease prevention, as well as a broader debate about how best to regulate industry.