Sample records for FVM-BEM method based

  1. BioFVM: an efficient, parallelized diffusive transport solver for 3-D biological simulations

    PubMed Central

    Ghaffarizadeh, Ahmadreza; Friedman, Samuel H.; Macklin, Paul

    2016-01-01

    Motivation: Computational models of multicellular systems require solving systems of PDEs for release, uptake, decay and diffusion of multiple substrates in 3D, particularly when incorporating the impact of drugs, growth substrates and signaling factors on cell receptors and subcellular systems biology. Results: We introduce BioFVM, a diffusive transport solver tailored to biological problems. BioFVM can simulate release and uptake of many substrates by cell and bulk sources, and diffusion and decay in large 3D domains. It has been parallelized with OpenMP, allowing efficient simulations on desktop workstations or single supercomputer nodes. The code is stable even for large time steps, with linear computational cost scaling. Solutions are first-order accurate in time and second-order accurate in space. The code can be run by itself or as part of a larger simulator. Availability and implementation: BioFVM is written in C++ with parallelization in OpenMP. It is maintained and available for download at http://BioFVM.MathCancer.org and http://BioFVM.sf.net under the Apache License (v2.0). Contact: paul.macklin@usc.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26656933
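
    The solver properties quoted above (implicit, first-order in time, second-order in space, stable for large steps) are the signature of a backward-Euler finite-volume discretization with tridiagonal solves. Below is a minimal Python sketch of that idea in one dimension for a single diffusing, decaying substrate; it is an illustration under those assumptions, not BioFVM's actual C++/OpenMP implementation, and all names and values are made up.

      import numpy as np

      def thomas(a, b, c, d):
          # Solve a tridiagonal system: sub-diagonal a, diagonal b,
          # super-diagonal c, right-hand side d (all length-n arrays).
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      def diffuse_decay_step(rho, D, lam, dx, dt):
          # One backward-Euler step of d(rho)/dt = D*d2(rho)/dx2 - lam*rho
          # with zero-flux boundaries: first order in time, second order
          # in space, unconditionally stable (hence the large time steps).
          n = len(rho)
          r = D * dt / dx**2
          a = np.full(n, -r)
          c = np.full(n, -r)
          b = np.full(n, 1.0 + 2.0 * r + lam * dt)
          b[0] = b[-1] = 1.0 + r + lam * dt   # boundary cells see one face
          a[0] = c[-1] = 0.0
          return thomas(a, b, c, rho)

      rho = np.zeros(100)
      rho[50] = 1.0                            # point release of substrate
      for _ in range(100):
          rho = diffuse_decay_step(rho, D=1.0, lam=0.1, dx=0.1, dt=0.01)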

  2. User's Manual for FEM-BEM Method, Version 1.0

    NASA Technical Reports Server (NTRS)

    Butler, Theresa; Deshpande, M. D. (Technical Monitor)

    2002-01-01

    This user's manual describes a FORTRAN code that performs electromagnetic analysis of arbitrarily shaped material cylinders using a hybrid method combining the finite element method (FEM) and the boundary element method (BEM). In this method, the material cylinder is enclosed by a fictitious boundary, and Maxwell's equations are solved by FEM inside the boundary and by BEM outside it. As examples, the electromagnetic scattering from several arbitrarily shaped material cylinders is computed using this FORTRAN code.

  3. An adaptive multi-moment FVM approach for incompressible flows

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Hu, Changhong

    2018-04-01

    In this study, a multi-moment finite volume method (FVM) based on block-structured adaptive Cartesian mesh is proposed for simulating incompressible flows. A conservative interpolation scheme following the idea of the constrained interpolation profile (CIP) method is proposed for the prolongation operation of the newly created mesh. A sharp immersed boundary (IB) method is used to model the immersed rigid body. A moving least squares (MLS) interpolation approach is applied for reconstruction of the velocity field around the solid surface. An efficient method for discretization of Laplacian operators on adaptive meshes is proposed. Numerical simulations on several test cases are carried out for validation of the proposed method. For the case of viscous flow past an impulsively started cylinder (Re = 3000, 9500), the computed surface vorticity coincides with the result of the body-fitted method. For the case of a fast pitching NACA 0015 airfoil at moderate Reynolds numbers (Re = 10000, 45000), the predicted drag coefficient (CD) and lift coefficient (CL) agree well with other numerical or experimental results. For 2D and 3D simulations of viscous flow past a pitching plate with prescribed motions (Re = 5000, 40000), the predicted CD, CL and CM (moment coefficient) are in good agreement with those obtained by other numerical methods.

  4. Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Wilton, D. R.; Dobbins, J. A.

    2002-01-01

    In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown in an EFIE formulation, applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 Lambda(sub 0), a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required

  5. OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux

    NASA Astrophysics Data System (ADS)

    Gunawan, P. H.; Pahlevi, M. R.

    2018-03-01

    The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with a finite volume method (FVM) using the HLLE flux. OpenACC is a parallel computing platform based on GPU cores; in this research it is used to minimize the computational time of the numerical scheme. The results show that using OpenACC, the computational time is reduced. For the dry and wet radial dambreak simulations using 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results show the success of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
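
    For reference, the HLLE interface flux used here has a compact closed form. The sketch below is an illustrative serial 1-D reduction (the paper's case is 2-D radial and GPU-parallel via OpenACC): a first-order finite-volume update of the shallow water equations with HLLE fluxes, with made-up grid and dam states.

      import numpy as np

      g = 9.81  # gravitational acceleration

      def swe_flux(U):
          # Physical flux of the 1-D shallow water equations, U = [h, hu].
          h, hu = U
          u = hu / h if h > 1e-12 else 0.0
          return np.array([hu, hu * u + 0.5 * g * h * h])

      def hlle_flux(UL, UR):
          # HLLE approximate Riemann flux between left and right states.
          uL = UL[1] / UL[0] if UL[0] > 1e-12 else 0.0
          uR = UR[1] / UR[0] if UR[0] > 1e-12 else 0.0
          cL, cR = np.sqrt(g * UL[0]), np.sqrt(g * UR[0])
          SL = min(uL - cL, uR - cR, 0.0)       # leftmost signal speed
          SR = max(uL + cL, uR + cR, 0.0)       # rightmost signal speed
          FL, FR = swe_flux(UL), swe_flux(UR)
          return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

      # dam-break initial condition: deep water left, shallow right
      n, dx, dt = 200, 0.05, 0.005
      U = np.zeros((n, 2))
      U[:, 0] = np.where(np.arange(n) < n // 2, 2.0, 1.0)
      for _ in range(100):
          F = np.array([hlle_flux(U[i], U[i + 1]) for i in range(n - 1)])
          U[1:-1] -= dt / dx * (F[1:] - F[:-1])  # conservative update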

  6. BEM-based simulation of lung respiratory deformation for CT-guided biopsy.

    PubMed

    Chen, Dong; Chen, Weisheng; Huang, Lipeng; Feng, Xuegang; Peters, Terry; Gu, Lixu

    2017-09-01

    Accurate and real-time prediction of lung and lung-tumor deformation during respiration is an important consideration when performing a peripheral biopsy procedure. However, most existing work has focused on offline whole-lung simulation using 4D image data, which is not applicable to real-time image-guided biopsy with limited image resources. In this paper, we propose a patient-specific biomechanical model based on the boundary element method (BEM), computed from CT images, to estimate the respiratory motion of the local target lesion region, vessel tree and lung surface for real-time biopsy guidance. This approach pre-computes various BEM parameters to meet the requirements of real-time lung motion simulation. The boundary condition at the end-inspiratory phase is obtained using a nonparametric discrete registration with convex optimization, and the simulation of the internal tissue is achieved by applying a tetrahedron-based interpolation method depending on expert-determined feature points on the vessel tree model. A reference needle is tracked to update the simulated lung motion during biopsy guidance. We evaluate the model by applying it to respiratory motion estimation for ten patients. The average symmetric surface distance (ASSD) and the mean target registration error (TRE) are employed to evaluate the proposed model. Results reveal that it is possible to predict the lung motion with an ASSD of [Formula: see text] mm and a mean TRE of [Formula: see text] mm at largest over the entire respiratory cycle. In the CT-/electromagnetic-guided biopsy experiment, the whole process was assisted by our BEM model, and the final puncture errors in two studies were 3.1 and 2.0 mm, respectively. The experimental results reveal that both the simulation accuracy and the real-time performance meet the demands of clinical biopsy guidance.

  7. Solving transient conduction and radiation heat transfer problems using the lattice Boltzmann method and the finite volume method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Subhash C.; Roy, Hillol K.

    2007-04-10

    The lattice Boltzmann method (LBM) was used to solve the energy equation of a transient conduction-radiation heat transfer problem. The finite volume method (FVM) was used to compute the radiative information. To study the compatibility of the LBM for the energy equation and the FVM for the radiative transfer equation, transient conduction and radiation heat transfer problems in 1-D planar and 2-D rectangular geometries were considered. In order to establish the suitability of the LBM, the energy equations of the two problems were also solved using the FVM of computational fluid dynamics. The FVM used in the radiative heat transfer was employed to compute the radiative information required for the solution of the energy equation using the LBM or the FVM (of the CFD). To study the compatibility and suitability of the LBM for the solution of the energy equation and the FVM for the radiative information, results were analyzed for the effects of various parameters such as the scattering albedo, the conduction-radiation parameter and the boundary emissivity. The results of the LBM-FVM combination were found to be in excellent agreement with the FVM-FVM combination. The number of iterations and CPU times of the two combinations were found comparable.
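
    The LBM side of such a coupling can be sketched compactly. Below is an illustrative D1Q2 lattice Boltzmann solver for 1-D transient conduction, assuming the standard D1Q2 relation between relaxation time and thermal diffusivity; in the coupled problem above, the FVM-computed radiative source term would simply be added to the temperature update each step. All values are made up.

      import numpy as np

      # D1Q2 lattice Boltzmann for dT/dt = alpha * d2T/dx2
      n, dx, dt, alpha = 100, 0.01, 0.05, 1e-3
      tau = 0.5 + alpha * dt / dx**2      # D1Q2: alpha = (dx^2/dt)*(tau - 1/2)
      T = np.zeros(n)
      T[0] = 1.0                          # hot left wall, cold right wall
      f = np.vstack([0.5 * T, 0.5 * T])   # right-/left-moving populations

      for _ in range(1000):
          feq = 0.5 * T                   # equilibrium: w_i * T, w_i = 1/2
          f -= (f - feq) / tau            # BGK collision
          f[0, 1:] = f[0, :-1].copy()     # stream right-movers
          f[1, :-1] = f[1, 1:].copy()     # stream left-movers
          T = f[0] + f[1]                 # temperature = zeroth moment
          T[0], T[-1] = 1.0, 0.0          # Dirichlet walls: re-equilibrate
          f[:, 0], f[:, -1] = 0.5 * T[0], 0.5 * T[-1]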

  8. Two dimensional fully nonlinear numerical wave tank based on the BEM

    NASA Astrophysics Data System (ADS)

    Sun, Zhe; Pang, Yongjie; Li, Hongwei

    2012-12-01

    The development of a two-dimensional numerical wave tank (NWT) with a rocker- or piston-type wavemaker, based on the high-order boundary element method (BEM) and a mixed Eulerian-Lagrangian (MEL) approach, is examined. The Cauchy principal value (CPV) integral is calculated by a special Gauss-type quadrature and a change of variable. In addition, the explicit truncated Taylor expansion formula is employed in the time-stepping process. A modified double-node method is adopted to tackle the corner problem, and the damping-zone technique is used to absorb the propagating free-surface wave at the end of the tank. A variety of waves are generated by the NWT, for example monochromatic, solitary and irregular waves. The results confirm that the NWT model is efficient and stable.

  9. Hybrid fully nonlinear BEM-LBM numerical wave tank with applications in naval hydrodynamics

    NASA Astrophysics Data System (ADS)

    Mivehchi, Amin; Grilli, Stephan T.; Dahl, Jason M.; O'Reilly, Chris M.; Harris, Jeffrey C.; Kuznetsov, Konstantin; Janssen, Christian F.

    2017-11-01

    Simulation of the complex dynamic response of ships in waves is typically modeled by nonlinear potential flow theory, usually solved with a higher order BEM. In some cases, the viscous/turbulent effects around a structure and in its wake need to be accurately modeled to capture the salient physics of the problem. Here, we present a fully 3D model based on a hybrid perturbation method. In this method, the velocity and pressure are decomposed as the sum of an inviscid flow and a viscous perturbation. The inviscid part is solved over the whole domain using a BEM based on cubic spline elements. These inviscid results are then used to force a near-field perturbation solution on a smaller domain, which is solved with a NS model based on LBM-LES and implemented on GPUs. The BEM solution for large grids is greatly accelerated by using a parallelized FMM, which is efficiently implemented on large and small clusters, yielding an almost linear scaling with the number of unknowns. A new representation of corners and edges is implemented, which improves the global accuracy of the BEM solver, particularly for moving boundaries. We present model results and the recent improvements of the BEM, alongside results of the hybrid model, for applications to naval hydrodynamics problems. Office of Naval Research Grants N000141310687 and N000141612970.

  10. Application of different variants of the BEM in numerical modeling of bioheat transfer problems.

    PubMed

    Majchrzak, Ewa

    2013-09-01

    Heat transfer processes proceeding in living organisms are described by different mathematical models. In particular, the typical continuous model of bioheat transfer is based on the popular Pennes equation, but the Cattaneo-Vernotte equation and the dual phase lag equation are also used. Vascular models are examined in parallel; in these, the energy equations are formulated separately for the large blood vessels and the tissue domain. In the paper, the different variants of the boundary element method as a tool for the numerical solution of bioheat transfer problems are discussed. For steady-state problems and the vascular models, the classical BEM algorithm and the multiple reciprocity BEM are presented. For transient problems connected with the heating of tissue, the various tissue models are considered, for which the 1st scheme of the BEM, the BEM using discretization in time, and the general BEM are applied. Examples of computations illustrate the possibilities of practical application of the boundary element method in the scope of bioheat transfer problems.
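
    For reference, the continuous models named above have standard forms; the snippet below records the Pennes equation and the lagged heat-flux laws behind the Cattaneo-Vernotte and dual-phase-lag variants (textbook forms, not taken from this paper).

      % Pennes bioheat equation for perfused tissue:
      \rho c \frac{\partial T}{\partial t}
        = \nabla \cdot (\lambda \nabla T) + c_b w_b (T_a - T) + Q_{met}
      % Cattaneo-Vernotte: q + \tau_q \frac{\partial q}{\partial t} = -\lambda \nabla T
      % Dual phase lag:    q + \tau_q \frac{\partial q}{\partial t}
      %   = -\lambda \left( \nabla T + \tau_T \frac{\partial (\nabla T)}{\partial t} \right)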

  11. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, Gary F.; Banerjee, Prasanta K.; Honkala, Keith A.

    1988-01-01

    In the present work, the boundary element method (BEM) is chosen as the basic analysis tool, principally because temperature, flux, displacement and traction are defined very precisely in a boundary-based discretization scheme. One fundamental difficulty is, of course, that a BEM formulation requires a considerable amount of analytical work, which is not needed in the other numerical methods. Progress made toward the development of a boundary element formulation for the study of hot fluid-structure interaction in Earth-to-Orbit engine hot section components is reported. The primary thrust of the program to date has been directed quite naturally toward the examination of fluid flow, since boundary element methods for fluids are at a much less developed state.

  12. Sandra Lipsitz Bem (1944-2014).

    PubMed

    Golden, Carla; McHugh, Maureen

    2015-04-01

    This article memorializes Sandra Lipsitz Bem (1944-2014). Bem was a feminist psychologist whose incisive writing and research transformed the psychology of gender and contributed significantly to our understanding of sex-typing, psychological androgyny, gender schema theory, and sexual inequality. Bem and her husband, Daryl Bem, were active in the feminist community in Pittsburgh, and worked with the National Organization for Women to challenge gender-segregated job advertisements in a lawsuit against the Pittsburgh Press in 1969. The Bems co-wrote an influential article, "Case Study of a Nonconscious Ideology: Training the Woman to Know Her Place" (1970), using the word "sexism" when it was not widely known. She created the Bem Sex-Role Inventory (BSRI) and conducted research showing that conventional gender typing was not necessarily correlated with psychological adjustment. Her publications won her enduring recognition and awards, including the American Psychological Association Distinguished Scientific Award for Early Career Contribution (1976), Distinguished Publication Awards from the Association for Women in Psychology (AWP; 1977, 1994), the Young Scholar Award from the American Association of University Women (1980), and, posthumously, the Distinguished Career Award (AWP, 2014). (c) 2015 APA, all rights reserved.

  13. Transforming BIM to BEM: Generation of Building Geometry for the NASA Ames Sustainability Base BIM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Donnell, James T.; Maile, Tobias; Rose, Cody

    Typical processes of whole Building Energy simulation Model (BEM) generation are subjective, labor intensive, time intensive and error prone. Essentially, these typical processes reproduce already existing data, i.e. building models already created by the architect. Accordingly, Lawrence Berkeley National Laboratory (LBNL) developed a semi-automated process that enables reproducible conversions of Building Information Model (BIM) representations of building geometry into a format required by building energy modeling (BEM) tools. This is a generic process that may be applied to all building energy modeling tools but to date has only been used for EnergyPlus. This report describes and demonstrates each stage in the semi-automated process for building geometry using the recently constructed NASA Ames Sustainability Base throughout. This example uses ArchiCAD (Graphisoft, 2012) as the originating CAD tool and EnergyPlus as the concluding whole building energy simulation tool. It is important to note that the process is also applicable for professionals that use other CAD tools such as Revit (“Revit Architecture,” 2012) and DProfiler (Beck Technology, 2012) and can be extended to provide geometry definitions for BEM tools other than EnergyPlus. The Geometry Simplification Tool (GST) was used during the NASA Ames project and was the enabling software that facilitated semi-automated data transformations. GST has now been superseded by the Space Boundary Tool (SBT-1) and will be referred to as SBT-1 throughout this report. The benefits of this semi-automated process are fourfold: 1) reduce the amount of time and cost required to develop a whole building energy simulation model, 2) enable rapid generation of design alternatives, 3) improve the accuracy of BEMs and 4) result in significantly better performing buildings with significantly lower energy consumption than those created using the traditional design process, especially if the simulation model was used as a

  14. The effect of implementation strength of basic emergency obstetric and newborn care (BEmONC) on facility deliveries and the met need for BEmONC at the primary health care level in Ethiopia.

    PubMed

    Tiruneh, Gizachew Tadele; Karim, Ali Mehryar; Avan, Bilal Iqbal; Zemichael, Nebreed Fesseha; Wereta, Tewabech Gebrekiristos; Wickremasinghe, Deepthi; Keweti, Zinar Nebi; Kebede, Zewditu; Betemariam, Wuleta Aklilu

    2018-05-02

    Basic emergency obstetric and newborn care (BEmONC) is a primary health care level initiative promoted in low- and middle-income countries to reduce maternal and newborn mortality. Tailored support, including BEmONC training to providers, mentoring and monitoring through supportive supervision, provision of equipment and supplies, strengthening referral linkages, and improving infection-prevention practice, was provided in a package of interventions to 134 health centers, covering 91 rural districts of Ethiopia to ensure timely BEmONC care. In recent years, there has been a growing interest in measuring program implementation strength to evaluate public health gains. To assess the effectiveness of the BEmONC initiative, this study measures its implementation strength and examines the effect of its variability across intervention health centers on the rate of facility deliveries and the met need for BEmONC. Before and after data from 134 intervention health centers were collected in April 2013 and July 2015. A BEmONC implementation strength index was constructed from seven input and five process indicators measured through observation, record review, and provider interview; while facility delivery rate and the met need for expected obstetric complications were measured from service statistics and patient records. We estimated the dose-response relationships between outcome and explanatory variables of interest using regression methods. The BEmONC implementation strength index score, which ranged between zero and 10, increased statistically significantly from 4.3 at baseline to 6.7 at follow-up (p < .05). Correspondingly, the health center delivery rate significantly increased from 24% to 56% (p < .05). There was a dose-response relationship between the explanatory and outcome variables. For every unit increase in BEmONC implementation strength score there was a corresponding average of 4.5 percentage points (95% confidence interval: 2.1-6.9) increase in

  15. Interactions between the bud emergence proteins Bem1p and Bem2p and Rho-type GTPases in yeast.

    PubMed

    Peterson, J; Zheng, Y; Bender, L; Myers, A; Cerione, R; Bender, A

    1994-12-01

    The SH3 domain-containing protein Bem1p is needed for normal bud emergence and mating projection formation, two processes that require asymmetric reorganizations of the cortical cytoskeleton in Saccharomyces cerevisiae. To identify proteins that functionally and/or physically interact with Bem1p, we screened for mutations that display synthetic lethality with a mutant allele of the BEM1 gene and for genes whose products display two-hybrid interactions with the Bem1 protein. CDC24, which is required for bud emergence and encodes a GEF (guanine-nucleotide exchange factor) for the essential Rho-type GTPase Cdc42p, was identified during both screens. The COOH-terminal 75 amino acids of Cdc24p, outside of the GEF domain, can interact with a portion of Bem1p that lacks both SH3 domains. Bacterially expressed Cdc24p and Bem1p bind to each other in vitro, indicating that no other yeast proteins are required for this interaction. The most frequently identified gene that arose from the bem1 synthetic-lethal screen was the bud-emergence gene BEM2 (Bender and Pringle. 1991. Mol. Cell Biol. 11:1295-1395), which is allelic with IPL2 (increase in ploidy; Chan and Botstein, 1993. Genetics. 135:677-691). Here we show that Bem2p contains a GAP (GTPase-activating protein) domain for Rho-type GTPases, and that this portion of Bem2p can stimulate in vitro the GTPase activity of Rho1p, a second essential yeast Rho-type GTPase. Cells deleted for BEM2 become large and multinucleate. These and other genetic, two-hybrid, biochemical, and phenotypic data suggest that multiple Rho-type GTPases control the reorganization of the cortical cytoskeleton in yeast and that the functions of these GTPases are tightly coupled. Also, these findings raise the possibility that Bem1p may regulate or be a target of action of one or more of these GTPases.

  16. Phosphorylation of Bem2p and Bem3p may contribute to local activation of Cdc42p at bud emergence

    PubMed Central

    Knaus, Michèle; Pelli-Gulli, Marie-Pierre; van Drogen, Frank; Springer, Sander; Jaquenoud, Malika; Peter, Matthias

    2007-01-01

    Site-specific activation of the Rho-type GTPase Cdc42p is critical for the establishment of cell polarity. Here we investigated the role and regulation of the GTPase-activating enzymes (GAPs) Bem2p and Bem3p for Cdc42p activation and actin polarization at bud emergence in Saccharomyces cerevisiae. Bem2p and Bem3p are localized throughout the cytoplasm and the cell cortex in unbudded G1 cells, but accumulate at sites of polarization after bud emergence. Inactivation of Bem2p results in hyperactivation of Cdc42p and polarization toward multiple sites. Bem2p and Bem3p are hyperphosphorylated at bud emergence most likely by the Cdc28p-Cln2p kinase. This phosphorylation appears to inhibit their GAP activity in vivo, as non-phosphorylatable Bem3p mutants are hyperactive and interfere with Cdc42p activation. Taken together, our results indicate that Bem2p and Bem3p may function as global inhibitors of Cdc42p activation during G1, and their inactivation by the Cdc28p/Cln kinase contributes to site-specific activation of Cdc42p at bud emergence. PMID:17914457

  17. Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Khayat, M. A.; Wilton, D. R.

    2005-01-01

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly singular integrals. Recently, the authors have introduced the transformation u(x′) = sinh⁻¹(x′/√(y′² + z²)) for integrating functions of the form I

  18. Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM

    NASA Technical Reports Server (NTRS)

    Martin, Thomas J.; Dulikravich, George S.

    1993-01-01

    A time- and space-accurate, computationally efficient, fully three-dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses a boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three-dimensional temperature and heat flux fields in a realistically shaped, internally cooled turbine blade is also discussed.

  19. Accurate load prediction by BEM with airfoil data from 3D RANS simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Marc S.; Nitzsche, Jens; Hennings, Holger

    2016-09-01

    In this paper, two methods for the extraction of airfoil coefficients from 3D CFD simulations of a wind turbine rotor are investigated, and these coefficients are used to improve the load prediction of a BEM code. The coefficients are extracted from a number of steady RANS simulations, using either averaging of velocities in annular sections, or an inverse BEM approach for determination of the induction factors in the rotor plane. It is shown that these 3D rotor polars are able to capture the rotational augmentation at the inner part of the blade as well as the load reduction by 3D effects close to the blade tip. They are used as input to a simple BEM code and the results of this BEM with 3D rotor polars are compared to the predictions of BEM with 2D airfoil coefficients plus common empirical corrections for stall delay and tip loss. While BEM with 2D airfoil coefficients produces a very different radial distribution of loads than the RANS simulation, the BEM with 3D rotor polars manages to reproduce the loads from RANS very accurately for a variety of load cases, as long as the blade pitch angle is not too different from the cases from which the polars were extracted.
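
    Note that BEM in this record means blade-element momentum, not boundary-element. A minimal sketch of the per-annulus induction iteration such a code performs is given below; polar(alpha) stands in for the airfoil table (a toy 2-D polar here, where the paper would substitute its 3-D rotor polars), and all numbers are illustrative.

      import numpy as np

      def bem_annulus(r, R, B, chord, twist, Uinf, omega, polar, tol=1e-8):
          # Classic blade-element/momentum fixed-point iteration for the
          # axial (a) and tangential (ap) induction factors at radius r.
          sigma = B * chord / (2 * np.pi * r)          # local solidity
          a, ap = 0.0, 0.0
          for _ in range(500):
              phi = np.arctan2(Uinf * (1 - a), omega * r * (1 + ap))
              alpha = phi - twist                      # local angle of attack
              Cl, Cd = polar(alpha)
              Cn = Cl * np.cos(phi) + Cd * np.sin(phi)
              Ct = Cl * np.sin(phi) - Cd * np.cos(phi)
              # Prandtl tip-loss factor (one of the empirical corrections)
              F = (2 / np.pi) * np.arccos(
                  np.exp(-B * (R - r) / (2 * r * np.sin(abs(phi)))))
              a_new = 1.0 / (4 * F * np.sin(phi) ** 2 / (sigma * Cn) + 1.0)
              ap_new = 1.0 / (4 * F * np.sin(phi) * np.cos(phi) / (sigma * Ct) - 1.0)
              if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
                  break
              a, ap = a_new, ap_new
          return a, ap, phi, alpha

      polar = lambda alpha: (2 * np.pi * alpha, 0.01)  # toy thin-airfoil polar
      print(bem_annulus(r=30.0, R=50.0, B=3, chord=2.0, twist=np.radians(5),
                        Uinf=8.0, omega=1.0, polar=polar))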

  1. Edge gradients evaluation for 2D hybrid finite volume method model

    USDA-ARS?s Scientific Manuscript database

    In this study, a two-dimensional depth-integrated hydrodynamic model was developed using FVM on a hybrid unstructured collocated mesh system. To alleviate the negative effects of mesh irregularity and non-uniformity, a conservative evaluation method for edge gradients based on the second-order Tayl...

  2. An efficient data structure for three-dimensional vertex-based finite volume method

    NASA Astrophysics Data System (ADS)

    Akkurt, Semih; Sahin, Mehmet

    2017-11-01

    A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the given algorithm is similar to ones that exist in the literature. However, the data structures are redesigned and simplified in order to fit the requirements of the vertex-based finite volume method. In order to increase cache efficiency, the data access patterns for the vertex-based finite volume method are investigated, and the data are packed/allocated in a way that they are close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the present data structure also supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested on classical benchmark problems by comparing CPU times with those of open-source algorithms.
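
    The core of an edge-based structure like this can be shown in a few lines: each edge stores its two vertex indices and an associated dual-face area vector, and a single sweep over the edge list scatters equal-and-opposite fluxes into the two endpoint residuals, so interior fluxes cancel exactly. The sketch below is illustrative, not the authors' structure; reordering the edge array (e.g. along a space-filling curve) so that consecutive edges touch nearby vertices is one way to obtain the cache-friendly packing described.

      import numpy as np

      n_vert = 5
      # each row: (vertex i, vertex j); works for any polyhedral mesh,
      # since only edges and their dual faces are stored, not cell shapes
      edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0], [1, 3]])
      face = np.random.rand(len(edges), 3)     # dual-face area vectors
      u = np.random.rand(n_vert)               # vertex-stored unknown

      def edge_residual(u, edges, face):
          # One sweep over edges accumulates the flux residual at every
          # vertex: +flux to one endpoint, -flux to the other (conservative).
          res = np.zeros((len(u), 3))
          for e, (i, j) in enumerate(edges):
              flux = 0.5 * (u[i] + u[j]) * face[e]   # central edge flux
              res[i] += flux
              res[j] -= flux
          return res

      print(edge_residual(u, edges, face))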

  3. Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.

    2017-10-01

    The development of 3D boundary element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The boundary element method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step is applied for calculating the quadrature coefficients, using the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic cantilever was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by the modified scheme shows that computational efficiency is better with the combined formulas.

  4. Development of an integrated BEM approach for hot fluid structure interaction: BEST-FSI: Boundary Element Solution Technique for Fluid Structure Interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.; Shi, Y.

    1992-01-01

    As part of the continuing effort at NASA LeRC to improve both the durability and reliability of hot section Earth-to-orbit engine components, significant enhancements must be made in existing finite element and finite difference methods, and advanced techniques, such as the boundary element method (BEM), must be explored. The BEM was chosen as the basic analysis tool because the critical variables (temperature, flux, displacement, and traction) can be very precisely determined with a boundary-based discretization scheme. Additionally, model preparation is considerably simplified compared to the more familiar domain-based methods. Furthermore, the hyperbolic character of high speed flow is captured through the use of an analytical fundamental solution, eliminating the dependence of the solution on the discretization pattern. The price that must be paid in order to realize these advantages is that any BEM formulation requires a considerable amount of analytical work, which is typically absent in the other numerical methods. All of the research accomplishments of a multi-year program aimed toward the development of a boundary element formulation for the study of hot fluid-structure interaction in Earth-to-orbit engine hot section components are detailed. Most of the effort was directed toward the examination of fluid flow, since BEMs for fluids are at a much less developed state. However, significant strides were made, not only in the analysis of thermoviscous fluids, but also in the solution of the fluid-structure interaction problem.

  5. Extrusion Process by Finite Volume Method Using OpenFoam Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matos Martins, Marcelo; Tonini Button, Sergio; Divo Bressan, Jose

    Computational codes are very important tools for solving engineering problems. The analysis of metal forming processes such as extrusion is no different, because computational codes allow the process to be analyzed at reduced cost. Traditionally, the Finite Element Method is used to solve solid mechanics problems; however, the Finite Volume Method (FVM) has been gaining ground in this field of application. This paper presents velocity field and friction coefficient variation results obtained by numerical simulation using the OpenFOAM software and the FVM to solve an aluminum direct cold extrusion process.

  6. Glenn-HT/BEM Conjugate Heat Transfer Solver for Large-scale Turbomachinery Models

    NASA Technical Reports Server (NTRS)

    Divo, E.; Steinthorsson, E.; Rodriquez, F.; Kassab, A. J.; Kapat, J. S.; Heidmann, James D. (Technical Monitor)

    2003-01-01

    A coupled Boundary Element/Finite Volume Method temperature-forward/flux-back algorithm is developed for conjugate heat transfer (CHT) applications. A loosely coupled strategy is adopted, with each field solution providing boundary conditions for the other in an iteration seeking continuity of temperature and heat flux at the fluid-solid interface. The NASA Glenn Navier-Stokes code Glenn-HT is coupled to a 3-D BEM steady-state heat conduction code developed at the University of Central Florida. Results from a CHT simulation of a 3-D film-cooled blade section are presented and compared with those computed by a two-temperature approach. Also presented are current developments of an iterative domain decomposition strategy accommodating large numbers of unknowns in the BEM. The blade is artificially sub-sectioned in the span-wise direction, 3-D BEM solutions are obtained in the subdomains, and interface temperatures are averaged symmetrically when the flux is updated, while the fluxes are averaged anti-symmetrically to maintain continuity of heat flux when the temperatures are updated. An initial guess for interface temperatures uses a physically-based 1-D conduction argument to provide an effective starting point and significantly reduce iteration. 2-D and 3-D results show the process converges efficiently and offers substantial computational and storage savings. Future developments include a parallel multi-grid implementation of the approach under MPI for computation on PC clusters.
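
    The temperature-forward/flux-back strategy is a fixed-point iteration on the interface state. The sketch below illustrates it on a 1-D two-slab toy problem, with one conduction resistance standing in for the fluid solver and one for the BEM conduction solver; the under-relaxation matters because the bare iteration diverges when the fluid-side conductance dominates. All values are made up.

      # temperature-forward / flux-back coupling on two 1-D slabs
      k1, L1 = 10.0, 0.1        # "fluid-side" conductivity, thickness
      k2, L2 = 1.0, 0.02        # "solid-side" conductivity, thickness
      T_hot, T_cold = 1000.0, 300.0
      Ti, relax = 600.0, 0.5    # interface temperature guess, relaxation

      for it in range(200):
          q = k1 * (T_hot - Ti) / L1       # "fluid" solve: flux for given Ti
          Ti_new = T_cold + q * L2 / k2    # "solid" solve: Ti for given flux
          if abs(Ti_new - Ti) < 1e-9:
              break
          Ti += relax * (Ti_new - Ti)      # relaxed temperature update

      # exact series-resistance answer for comparison:
      q_exact = (T_hot - T_cold) / (L1 / k1 + L2 / k2)
      print(it, Ti, q, q_exact)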

  7. Generalized source Finite Volume Method for radiative transfer equation in participating media

    NASA Astrophysics Data System (ADS)

    Zhang, Biao; Xu, Chuan-Long; Wang, Shi-Min

    2017-03-01

    Temperature monitoring is very important in a combustion system. In recent years, non-intrusive temperature reconstruction has been explored intensively on the basis of calculating arbitrary directional radiative intensities. In this paper, a new method named the Generalized Source Finite Volume Method (GSFVM) is proposed, based on the radiative transfer equation and the Finite Volume Method (FVM). This method can be used to calculate arbitrary directional radiative intensities and is shown to be accurate and efficient. To verify the performance of this method, six test cases of 1D, 2D, and 3D radiative transfer problems were investigated. The numerical results show that the efficiency of this method is close to that of the radial basis function interpolation method, but its accuracy and stability are higher. The accuracy of the GSFVM is similar to that of the Backward Monte Carlo (BMC) algorithm, while the time required by the GSFVM is much shorter. Therefore, the GSFVM can be used in temperature reconstruction and to improve the accuracy of the FVM.

  8. Higher Order Bases in a 2D Hybrid BEM/FEM Formulation

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.

    2002-01-01

    The advantages of using higher order, interpolatory basis functions are examined in the analysis of transverse electric (TE) plane wave scattering by homogeneous, dielectric cylinders. A boundary-element/finite-element (BEM/FEM) hybrid formulation is employed in which the interior dielectric region is modeled with the vector Helmholtz equation, and a radiation boundary condition is supplied by an Electric Field Integral Equation (EFIE). An efficient method of handling the singular self-term arising in the EFIE is presented. The iterative solution of the partially dense system of equations is obtained using the Quasi-Minimal Residual (QMR) algorithm with an Incomplete LU Threshold (ILUT) preconditioner. Numerical results are shown for the case of an incident wave impinging upon a square dielectric cylinder. The convergence of the solution is shown versus the number of unknowns as a function of the completeness order of the basis functions.

  9. Development of an integrated BEM for hot fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Banerjee, P. K.; Dargush, G. F.

    1989-01-01

    The Boundary Element Method (BEM) is chosen as a basic analysis tool principally because the definition of quantities like fluxes, temperature, displacements, and velocities is very precise in a boundary-based discretization scheme. One fundamental difficulty is, of course, that the entire analysis requires a very considerable amount of analytical work which is not present in other numerical methods. During the last 18 months all of this analytical work was completed and a two-dimensional, general purpose code was written. Some of the early results are described. It is anticipated that within the next two to three months almost all two-dimensional idealizations will be examined. It should be noted that the analytical work for the three-dimensional case has also been done and numerical implementation will begin next year.

  10. Lattice Boltzmann Method of Different BGA Orientations on I-Type Dispensing Method

    PubMed Central

    Gan, Z. L.; Ishak, M. H. H.; Abdullah, M. Z.; Khor, Soon Fuat

    2016-01-01

    This paper studies the three-dimensional (3D) simulation of fluid flow through the ball grid array (BGA) to replicate the real underfill encapsulation process. The effect of different solder bump arrangements of the BGA on the flow front, pressure and velocity of the fluid is investigated. The flow front, pressure and velocity for different time intervals are determined and analyzed for potential problems relating to solder bump damage. The simulation results from the Lattice Boltzmann Method (LBM) code are validated against experimental findings as well as a conventional Finite Volume Method (FVM) code to ensure a highly accurate simulation setup. Based on the findings, good agreement can be seen between the LBM and FVM simulations as well as the experimental observations. It was shown that only the LBM is capable of capturing micro-void formation. This study also shows an increasing trend in fluid filling time for BGA with perimeter, middle-empty and full orientations. The perimeter orientation has higher fluid pressure at the middle region of the BGA surface compared to the middle-empty and full orientations. This research sheds new light on highly accurate simulation of the encapsulation process using the LBM and helps to further increase the reliability of the packages produced. PMID:27454872

  11. Applications of FEM and BEM in two-dimensional fracture mechanics problems

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Steeve, B. E.; Swanson, G. R.

    1992-01-01

    A comparison of the finite element method (FEM) and boundary element method (BEM) for the solution of two-dimensional plane strain problems in fracture mechanics is presented in this paper. Stress intensity factors (SIFs) were calculated using both methods for elastic plates with either a single-edge crack or an inclined-edge crack. In particular, two currently available programs, ANSYS for finite element analysis and BEASY for boundary element analysis, were used.

  12. 1RM prediction: a novel methodology based on the force-velocity and load-velocity relationships.

    PubMed

    Picerno, Pietro; Iannetta, Danilo; Comotto, Stefania; Donati, Marco; Pecoraro, Fabrizio; Zok, Mounir; Tollis, Giorgio; Figura, Marco; Varalda, Carlo; Di Muzio, Davide; Patrizio, Federica; Piacentini, Maria Francesca

    2016-10-01

    This study aimed to evaluate the accuracy of a novel approach for predicting the one-repetition maximum (1RM). The prediction is based on the force-velocity and load-velocity relationships determined from measured force and velocity data collected during resistance-training exercises with incremental submaximal loads. 1RM was determined as the load corresponding to the intersection of these two curves, where the gravitational force exceeds the force that the subject can exert. The proposed force-velocity-based method (FVM) was tested on 37 participants (23.9 ± 3.1 years; BMI 23.44 ± 2.45) with no specific resistance-training experience, and the predicted 1RM was compared to that achieved using a direct method (DM) in chest-press (CP) and leg-press (LP) exercises. The mean 1RM in CP was 99.5 kg (±27.0) for DM and 100.8 kg (±27.2) for FVM (SEE = 1.2 kg), whereas the mean 1RM in LP was 249.3 kg (±60.2) for DM and 251.1 kg (±60.3) for FVM (SEE = 2.1 kg). A high correlation was found between the two methods for both CP and LP exercises (0.999, p < 0.001). Good agreement between the two methods emerged from the Bland and Altman plot analysis. These findings suggest the use of the proposed methodology as a valid alternative to other indirect approaches for 1RM prediction. The mathematical construct is simply based on the definition of the 1RM, and it is fed with the subject's muscle strength capacities measured during a specific exercise. Its reliability is thus expected not to be affected by the factors that typically jeopardize regression-based approaches.
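
    The construct is easy to restate: fit a load-velocity line and a force-velocity line to the submaximal trials, then read the 1RM off their intersection, beyond which the gravitational load exceeds the force the subject can produce. A sketch with invented numbers (not the study's data):

      import numpy as np

      # mean velocity, load lifted, and force exerted per submaximal set
      vel = np.array([1.10, 0.85, 0.62, 0.40])    # m/s
      load = np.array([40.0, 55.0, 70.0, 85.0])   # kg
      force = np.array([55.0, 67.0, 79.0, 90.0])  # kg-equivalent force

      g_slope, g_icpt = np.polyfit(vel, load, 1)  # load-velocity line  L(v)
      f_slope, f_icpt = np.polyfit(vel, force, 1) # force-velocity line F(v)

      # intersection: the velocity at which F(v) = L(v); past it the subject
      # can no longer out-pull gravity, so that load is the predicted 1RM
      v_star = (g_icpt - f_icpt) / (f_slope - g_slope)
      one_rm = g_slope * v_star + g_icpt
      print(round(one_rm, 1), "kg at", round(v_star, 3), "m/s")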

  13. A stabilized element-based finite volume method for poroelastic problems

    NASA Astrophysics Data System (ADS)

    Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo

    2018-07-01

    The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used in three dimensional unstructured grids composed of elements of different types. One of the difficulties for solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that considers also the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as three-dimensional realistic cases are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.
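
    For reference, one common statement of the quasi-static Biot system discretized here (standard notation, not copied from the paper): effective stress σ′ of the skeleton, Biot coefficient α, storage coefficient S_ε, and a Darcy flux driven by the pressure gradient.

      % linear momentum balance (total stress = effective stress - alpha p I):
      \nabla \cdot \boldsymbol{\sigma}'(\mathbf{u}) - \alpha \nabla p + \mathbf{b} = \mathbf{0}
      % fluid mass balance with Darcy flux q = -(k/\mu) \nabla p:
      S_\varepsilon \frac{\partial p}{\partial t}
        + \alpha \frac{\partial}{\partial t}(\nabla \cdot \mathbf{u})
        - \nabla \cdot \left( \frac{k}{\mu} \nabla p \right) = f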

  14. Diffraction of seismic waves from 3-D canyons and alluvial basins modeled using the Fast Multipole-accelerated BEM

    NASA Astrophysics Data System (ADS)

    Chaillat, S.; Bonnet, M.; Semblat, J.

    2007-12-01

    Seismic wave propagation and amplification in complex media is a major issue in the field of seismology. To compute seismic wave propagation in complex geological structures such as alluvial basins, various numerical methods have been proposed. The main advantage of the Boundary Element Method (BEM) is that only the domain boundaries (and possibly interfaces) are discretized, leading to a reduction of the number of degrees of freedom. The main drawback of the standard BEM is that the governing matrix is full and non-symmetric, which gives rise to high computational and memory costs. In other areas where the BEM is used (electromagnetism, acoustics), considerable speedup of solution time and decrease of memory requirements have been achieved through the development, over the last decade, of the Fast Multipole Method (FMM). The goal of the FMM is to speed up the matrix-vector product computation needed at each iteration of the GMRES iterative solver. Moreover, the governing matrix is never explicitly formed, which leads to a storage requirement well below the memory necessary for holding the complete matrix. The FMM-accelerated BEM therefore achieves substantial savings in both CPU time and memory. In this work, the FMM is extended to 3-D frequency-domain elastodynamics and applied to the computation of seismic wave propagation in 3-D. The efficiency of the present FMM-BEM is demonstrated on seismology-oriented examples. First, the diffraction of a plane wave or a point source by a 3-D canyon is studied. The influence of the size of the meshed part of the free surface is studied, and computations are performed for non-dimensional frequencies higher than those considered in other studies (thanks to the use of the FM-BEM), with which comparisons are made whenever possible. The method is also applied to analyze the diffraction of a plane wave or a point source by a 3-D alluvial basin. A parametric study is performed on the effect of the shape of the basin

  15. A critical analysis of some popular methods for the discretisation of the gradient operator in finite volume methods

    NASA Astrophysics Data System (ADS)

    Syrakos, Alexandros; Varchanis, Stylianos; Dimakopoulos, Yannis; Goulas, Apostolos; Tsamopoulos, John

    2017-12-01

    Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the gradient operator has received less attention despite its fundamental importance with regards to the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or Green-Gauss) scheme and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively. This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes are then used within a FVM to solve a simple diffusion equation on unstructured grids generated by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement. On the other hand, use of the LS gradient leads to second-order accurate results, as does the use of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the common DT gradient consistent at almost no extra cost. The numerical tests are performed using both an in-house code and the popular public domain partial differential equation solver OpenFOAM.
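
    The inconsistency claim is easy to reproduce numerically. In the sketch below (illustrative, not the paper's code), the common divergence-theorem gradient with simple face averaging recovers the gradient of a linear field exactly when neighbor centroids sit symmetrically across the faces, but not when they are skewed, while a least-squares gradient stays exact in both cases.

      import numpy as np

      phi = lambda x, y: 2.0 * x - 3.0 * y     # linear field, exact grad (2, -3)

      c = np.array([0.5, 0.5])                 # unit-square cell centroid
      S = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)  # face areas
      V = 1.0

      def green_gauss(nbrs):
          # common DT variant: face value = average of the two centroids
          g = np.zeros(2)
          for Sf, nb in zip(S, nbrs):
              g += 0.5 * (phi(*c) + phi(*nb)) * Sf
          return g / V

      def least_squares(nbrs):
          dr = np.array([nb - c for nb in nbrs])
          dphi = np.array([phi(*nb) - phi(*c) for nb in nbrs])
          return np.linalg.lstsq(dr, dphi, rcond=None)[0]

      uniform = [np.array(p) for p in ([1.5, .5], [-.5, .5], [.5, 1.5], [.5, -.5])]
      skewed = [np.array(p) for p in ([1.5, .9], [-.5, .2], [.8, 1.5], [.1, -.5])]
      print(green_gauss(uniform))   # (2, -3): consistent on the regular stencil
      print(green_gauss(skewed))    # biased even for a linear field
      print(least_squares(skewed))  # (2, -3): LS exact for linear fields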

  16. Multilevel fast multipole method based on a potential formulation for 3D electromagnetic scattering problems.

    PubMed

    Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome

    2013-06-01

    A combination of the multilevel fast multipole method (MLFMM) and boundary element method (BEM) can solve large scale photonics problems of arbitrary geometry. Here, MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie's theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a nanoparticle gold-silver heterodimer calculated with MLFMM is compared with unmodified BEM calculations.

  17. Differential Effectiveness of Two Classification Procedures on the Bem Sex Role Inventory

    ERIC Educational Resources Information Center

    Orlofsky, Jacob L.; And Others

    1977-01-01

    A median split and a difference/median split method were used to classify college students into masculine, feminine, androgynous and undifferentiated sex role orientations using the Bem Sex Role Inventory. The difference/ median split procedure was more successful in discriminating between sex role groups and in predicting sex role ideology. (EVH)

  18. Androgyny Versus Gender Schema: A Comment on Bem's Gender Schema Theory.

    ERIC Educational Resources Information Center

    Spence, Janet T.; Helmreich, Robert L.

    1981-01-01

    A logical contradiction in Bem's (1981) theory is outlined. The Bem Sex Role Inventory cannot measure a unidimensional construct, gender schema, and two independent constructs--masculinity and femininity. Such instruments measure self-images of instrumental and expressive personality traits which show little relationship to the constructs…

  19. Notes on 'Bemächtigungstrieb' and Strachey's translation as 'instinct for mastery'.

    PubMed

    White, Kristin

    2010-08-01

    This short paper looks at Freud's use of the term 'Bemächtigungstrieb' and its translation by Strachey as 'instinct for mastery' when Freud was describing the motives behind his grandson's game with the wooden reel and string in Beyond the Pleasure Principle. The word 'Macht' [power], which is contained in the word 'Bemächtigung' points to Freud's difficult relationship with Alfred Adler, whose early theories on the aggressive drive and later theories on 'striving for power' were initially rejected by Freud. Looking at the changes in Freud's reception of Adlerian terms, some of which he later integrated into his own theory, throws light on his choice of the word 'Bemächtigungstrieb' in 1920, when he was just beginning to introduce his thoughts on the death instinct. A slightly different translation of the word 'Bemächtigungstrieb', one which takes these historical and theoretical aspects into account, could make these connections clearer for the English reader. Copyright © 2010 Institute of Psychoanalysis.

  20. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; He, Ya-Ling; Kang, Qinjun

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: • A coupled simulation strategy for simulating multi-scale phenomena is developed. • Finite volume method and lattice Boltzmann method are coupled. • A reconstruction operator is derived to transfer information at the sub-domain interface. • Coupled multi-scale multiple physicochemical processes in a micro reactor are simulated. • Techniques to save computational resources and improve the efficiency are discussed.
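
    A common minimal choice for such a reconstruction operator, for a scalar φ obeying a convection-diffusion equation with velocity u, is to rebuild the distributions from the local equilibrium (a generic form for hybrid FVM-LBM coupling, not necessarily the paper's exact RO):

      % equilibrium reconstruction of LBM distributions from the macro scalar:
      f_i(\mathbf{x}, t) \approx f_i^{eq}
        = w_i \, \phi(\mathbf{x}, t) \left( 1 + \frac{\mathbf{e}_i \cdot \mathbf{u}}{c_s^2} \right)
      % higher-order ROs add a non-equilibrium part proportional to the
      % local gradients of \phi (first-order Chapman-Enskog correction).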

  1. Comparison of three-dimensional Poisson solution methods for particle-based simulation and inhomogeneous dielectrics.

    PubMed

    Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio

    2012-07-01

    Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary is discretized with curved surface elements, the
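    As a minimal illustration of the collocation idea — point-matching a boundary integral equation on surface panels — the sketch below computes the capacitance of a unit sphere with centroid collocation and an equal-area-disc self-term. It is a generic collocation BEM toy problem, not the induced-charge formulation of Boda et al. or the qualocation scheme of Tausch et al.

```python
import numpy as np

def fibonacci_sphere(n, r=1.0):
    """Quasi-uniform points on a sphere of radius r (Fibonacci lattice)."""
    i = np.arange(n) + 0.5
    z = 1.0 - 2.0 * i / n
    theta = np.pi * (1.0 + 5**0.5) * i
    rho = np.sqrt(1.0 - z**2)
    return r * np.column_stack((rho*np.cos(theta), rho*np.sin(theta), z))

n = 1000
pts = fibonacci_sphere(n)
area = 4.0 * np.pi / n                  # equal-area panels on the unit sphere

# Collocation matrix of the single-layer potential in Gaussian units
# (G = 1/|x - y|): centroid rule off the diagonal, equal-area-disc
# approximation V_self = 2*sqrt(pi*A)*sigma on the diagonal.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
A = area / d
np.fill_diagonal(A, 2.0 * np.sqrt(np.pi * area))

sigma = np.linalg.solve(A, np.ones(n))  # panel charge densities for V = 1
C = sigma.sum() * area                  # capacitance; exact value is 1
print(f"C = {C:.4f}  (exact 1.0 for a unit sphere in Gaussian units)")
```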

  2. Control of cellular morphogenesis by the Ipl2/Bem2 GTPase-activating protein: possible role of protein phosphorylation

    PubMed Central

    1994-01-01

    The IPL2 gene is known to be required for normal polarized cell growth in the budding yeast Saccharomyces cerevisiae. We now show that IPL2 is identical to the previously identified BEM2 gene. bem2 mutants are defective in bud site selection at 26 degrees C and localized cell surface growth and organization of the actin cytoskeleton at 37 degrees C. BEM2 encodes a protein with a COOH-terminal domain homologous to sequences found in several GTPase-activating proteins, including human Bcr. The GTPase-activating protein domain from the Bem2 protein (Bem2p) or human Bcr can functionally substitute for Bem2p. The Rho1 and Rho2 GTPases are the likely in vivo targets of Bem2p because bem2 mutant phenotypes can be partially suppressed by increasing the gene dosage of RHO1 or RHO2. CDC55 encodes the putative regulatory B subunit of protein phosphatase 2A, and mutations in BEM2 have previously been identified as suppressors of the cdc55-1 mutation. We show here that mutations in the previously identified GRR1 gene can suppress bem2 mutations. grr1 and cdc55 mutants are both elongated in shape and cold-sensitive for growth, and cells lacking both GRR1 and CDC55 exhibit a synthetic lethal phenotype. bem2 mutant phenotypes also can be suppressed by the SSD1-v1 (also known as SRK1) mutation, which was shown previously to suppress mutations in the protein phosphatase-encoding SIT4 gene. Cells lacking both BEM2 and SIT4 exhibit a synthetic lethal phenotype even in the presence of the SSD1-v1 suppressor. These genetic interactions together suggest that protein phosphorylation and dephosphorylation play an important role in the BEM2-mediated process of polarized cell growth. PMID:7962097

  3. Control of cellular morphogenesis by the Ipl2/Bem2 GTPase-activating protein: possible role of protein phosphorylation.

    PubMed

    Kim, Y J; Francisco, L; Chen, G C; Marcotte, E; Chan, C S

    1994-12-01

    The IPL2 gene is known to be required for normal polarized cell growth in the budding yeast Saccharomyces cerevisiae. We now show that IPL2 is identical to the previously identified BEM2 gene. bem2 mutants are defective in bud site selection at 26 degrees C and localized cell surface growth and organization of the actin cytoskeleton at 37 degrees C. BEM2 encodes a protein with a COOH-terminal domain homologous to sequences found in several GTPase-activating proteins, including human Bcr. The GTPase-activating protein domain from the Bem2 protein (Bem2p) or human Bcr can functionally substitute for Bem2p. The Rho1 and Rho2 GTPases are the likely in vivo targets of Bem2p because bem2 mutant phenotypes can be partially suppressed by increasing the gene dosage of RHO1 or RHO2. CDC55 encodes the putative regulatory B subunit of protein phosphatase 2A, and mutations in BEM2 have previously been identified as suppressors of the cdc55-1 mutation. We show here that mutations in the previously identified GRR1 gene can suppress bem2 mutations. grr1 and cdc55 mutants are both elongated in shape and cold-sensitive for growth, and cells lacking both GRR1 and CDC55 exhibit a synthetic lethal phenotype. bem2 mutant phenotypes also can be suppressed by the SSD1-v1 (also known as SRK1) mutation, which was shown previously to suppress mutations in the protein phosphatase-encoding SIT4 gene. Cells lacking both BEM2 and SIT4 exhibit a synthetic lethal phenotype even in the presence of the SSD1-v1 suppressor. These genetic interactions together suggest that protein phosphorylation and dephosphorylation play an important role in the BEM2-mediated process of polarized cell growth.

  4. Application of a Three-Dimensional Poroelastic BEM to Modeling the Biphasic Mechanics of Cell-Matrix Interactions in Articular Cartilage (REVISION)

    PubMed Central

    Haider, Mansoor A.; Guilak, Farshid

    2009-01-01

    Articular cartilage exhibits viscoelasticity in response to mechanical loading that is well described using biphasic or poroelastic continuum models. To date, boundary element methods (BEMs) have not been employed in modeling biphasic tissue mechanics. A three-dimensional direct poroelastic BEM, formulated in the Laplace transform domain, is applied to modeling stress relaxation in cartilage. Macroscopic stress relaxation of a poroelastic cylinder in uni-axial confined compression is simulated and validated against a theoretical solution. Microscopic cell deformation due to poroelastic stress relaxation is also modeled. An extended Laplace inversion method is employed to accurately represent mechanical responses in the time domain. PMID:19851478
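    The time-domain step hinges on numerical inversion of the Laplace transform. As a hedged illustration of that general ingredient — using the classical Gaver-Stehfest scheme rather than the extended inversion method of the paper — the sketch below inverts a single-exponential relaxation transform and compares against the known answer.

```python
import math

def stehfest_weights(N=12):
    """Gaver-Stehfest coefficients V_k (N must be even)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1)//2, min(k, N//2) + 1):
            s += (j**(N//2) * math.factorial(2*j)
                  / (math.factorial(N//2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2*j - k)))
        V.append((-1)**(k + N//2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2t = math.log(2.0) / t
    V = stehfest_weights(N)
    return ln2t * sum(Vk * F((k + 1) * ln2t) for k, Vk in enumerate(V))

# Example: a single-exponential stress-relaxation-type transform,
# F(s) = 1/(s + 1)  <->  f(t) = exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0/(s + 1.0), t), math.exp(-t))
```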

  5. Application of a Three-Dimensional Poroelastic BEM to Modeling the Biphasic Mechanics of Cell-Matrix Interactions in Articular Cartilage (REVISION).

    PubMed

    Haider, Mansoor A; Guilak, Farshid

    2007-06-15

    Articular cartilage exhibits viscoelasticity in response to mechanical loading that is well described using biphasic or poroelastic continuum models. To date, boundary element methods (BEMs) have not been employed in modeling biphasic tissue mechanics. A three-dimensional direct poroelastic BEM, formulated in the Laplace transform domain, is applied to modeling stress relaxation in cartilage. Macroscopic stress relaxation of a poroelastic cylinder in uni-axial confined compression is simulated and validated against a theoretical solution. Microscopic cell deformation due to poroelastic stress relaxation is also modeled. An extended Laplace inversion method is employed to accurately represent mechanical responses in the time domain.

  6. The Bem Sex-Role Inventory: Continuing Theoretical Problems

    ERIC Educational Resources Information Center

    Choi, Namok; Fuqua, Dale R.; Newman, Jody L.

    2008-01-01

    Pedhazur and Tetenbaum speculated that factor structures from self-ratings of the Bem Sex-Role Inventory (BSRI) personality traits would be different from factor structures from desirability ratings of the same traits. To explore this hypothesis, both desirability ratings of BSRI traits (both "for a man" and "for a woman") and…

  7. A Further Validation of the Bem Sex Role Inventory (BSRI): A Multitrait-Multimethod Study.

    ERIC Educational Resources Information Center

    Wong, Frank Y.; And Others

    1990-01-01

    To test the validity of the Bem Sex Role Inventory, 72 same-sex pairs of previously acquainted undergraduates rated themselves and partners on the BSRI as well as the Marlowe-Crowne Social Desirability Scale. The results brought into question Bem's contention that masculinity and femininity are orthogonal constructs. (DM)

  8. A Bayes factor meta-analysis of Bem's ESP claim.

    PubMed

    Rouder, Jeffrey N; Morey, Richard D

    2011-08-01

    In recent years, statisticians and psychologists have provided the critique that p-values do not capture the evidence afforded by data and are, consequently, ill suited for analysis in scientific endeavors. The issue is particularly salient in the assessment of the recent evidence provided for ESP by Bem (2011) in the mainstream Journal of Personality and Social Psychology. Wagenmakers, Wetzels, Borsboom, and van der Maas (Journal of Personality and Social Psychology, 100, 426-432, 2011) have provided an alternative Bayes factor assessment of Bem's data, but their assessment was limited to examining each experiment in isolation. We show here that the variant of the Bayes factor employed by Wagenmakers et al. is inappropriate for making assessments across multiple experiments, and cannot be used to gain an accurate assessment of the total evidence in Bem's data. We develop a meta-analytic Bayes factor that describes how researchers should update their prior beliefs about the odds of hypotheses in light of data across several experiments. We find the evidence that people can feel the future with neutral and erotic stimuli to be slight, with Bayes factors of 3.23 and 1.57, respectively. There is some evidence, however, for the hypothesis that people can feel the future with emotionally valenced nonerotic stimuli, with a Bayes factor of about 40. Although this value is certainly noteworthy, we believe it is orders of magnitude lower than what is required to overcome appropriate skepticism of ESP.
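    The general shape of a meta-analytic Bayes factor — a common effect size with a prior, integrated against the joint likelihood of all experiments' t-statistics — can be sketched as follows. This is a minimal JZS-style illustration with hypothetical t and n values, not a reimplementation of Rouder and Morey's exact computation.

```python
import numpy as np
from scipy import stats, integrate

# Hypothetical one-sample t statistics and sample sizes from k studies.
ts = np.array([2.1, 1.4, 2.5, 0.3])
ns = np.array([100,  90, 110,  95])

def joint_likelihood_ratio(delta):
    """Product over studies of p(t_i | delta) / p(t_i | 0), using the
    noncentral-t sampling distribution of a one-sample t statistic."""
    nc = delta * np.sqrt(ns)                 # noncentrality per study
    num = stats.nct.pdf(ts, df=ns - 1, nc=nc)
    den = stats.t.pdf(ts, df=ns - 1)
    return np.prod(num / den)

# Meta-analytic Bayes factor: integrate the joint likelihood ratio
# against a Cauchy(0, r) prior on the common effect size delta.
r = 1.0
bf10, _ = integrate.quad(
    lambda d: joint_likelihood_ratio(d) * stats.cauchy.pdf(d, scale=r),
    -np.inf, np.inf)
print(f"meta-analytic BF10 = {bf10:.2f}")
```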

  9. A Factor Analysis of the Bem Sex Role Inventory and the Personal Attributes Questionnaire.

    ERIC Educational Resources Information Center

    Choi, Namok; Jenkins, Stephen J.

    This study investigated the dimensions of sex role orientation measured by the revised Bem Sex Role Inventory (BSRI; S. Bem, 1974) and the revised Personal Attributes Questionnaire (PAQ; J. Spence, R. Helmreich, and J. Strapp, 1975). Participants were 651 undergraduates in introductory psychology courses. The sample was approximately 50% male and…

  10. Integrated multidisciplinary CAD/CAE environment for micro-electro-mechanical systems (MEMS)

    NASA Astrophysics Data System (ADS)

    Przekwas, Andrzej J.

    1999-03-01

    Computational design of MEMS involves several strongly coupled physical disciplines, including fluid mechanics, heat transfer, stress/deformation dynamics, electronics, electro/magneto statics, calorics, biochemistry and others. CFDRC is developing a new generation of multi-disciplinary CAD systems for MEMS using high-fidelity field solvers on unstructured, solution-adaptive grids for a full range of disciplines. The software system, ACE + MEMS, includes all essential CAD tools: geometry/grid generation for multi-discipline, multi-equation solvers; a GUI; tightly coupled, configurable 3D field solvers for FVM, FEM and BEM; and a 3D visualization/animation tool. The flow/heat transfer/calorics/chemistry equations are solved with an unstructured adaptive FVM solver, stress/deformation is computed with a FEM STRESS solver, and a FAST BEM solver is used to solve linear heat transfer, electro/magnetostatics and elastostatics equations on adaptive polygonal surface grids. Tight multidisciplinary coupling and automatic interoperability between the tools was achieved by designing a comprehensive database structure and APIs for complete model definition. The virtual model definition is implemented in the data transfer facility, a publicly available tool described in this paper. The paper presents an overall description of the software architecture and the MEMS design flow in ACE + MEMS. It describes the current status, ongoing effort and future plans for the software. The paper also discusses new concepts of mixed-level and mixed-dimensionality capability in which 1D microfluidic networks are simulated concurrently with 3D high-fidelity models of discrete components.

  11. Three-Dimensional BEM and FEM Submodelling in a Cracked FML Full Scale Aeronautic Panel

    NASA Astrophysics Data System (ADS)

    Citarella, R.; Cricrì, G.

    2014-06-01

    This paper concerns the numerical characterization of the fatigue strength of a flat stiffened panel, designed as a fiber metal laminate (FML) and made of Aluminum alloy and Fiber Glass FRP. The panel is full scale and was tested (in a previous work) under biaxial fatigue loads, applied by means of a multi-axial fatigue machine: an initial through-the-thickness notch was created in the panel and the aforementioned biaxial fatigue load applied, causing crack initiation and propagation in the Aluminum layers. Moreover (still in a previous work), the fatigue test was simulated by the Dual Boundary Element Method (DBEM) in a two-dimensional approach. Now, in order to validate the assumptions made in the aforementioned DBEM approach concerning the delamination area size and the fiber integrity during crack propagation, three-dimensional BEM and FEM submodelling analyses are realized. Due to the lack of experimental data on the delamination area size (normally increasing as the crack propagates), such area is calculated by iterative three-dimensional BEM or FEM analyses, considering the inter-laminar stresses and a delamination criterion. Such three-dimensional analyses, and in particular the proposed FEM model, can also provide insights into the fiber rupture problem. These DBEM-BEM and DBEM-FEM approaches aim at providing a general-purpose evaluation tool for a better understanding of the fatigue resistance of FML panels, providing a deeper insight into the role of fiber stiffness and of delamination extension on the stress intensity factors.

  12. FloorspaceJS - A New, Open Source, Web-Based Geometry Editor for Building Energy Modeling (BEM): Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macumber, Daniel L; Horowitz, Scott G; Schott, Marjorie

    Across most industries, desktop applications are being rapidly migrated to web applications for a variety of reasons. Web applications are inherently cross platform, mobile, and easier to distribute than desktop applications. Fueling this trend are a wide range of free, open source libraries and frameworks that make it incredibly easy to develop powerful web applications. The building energy modeling community is just beginning to pick up on these larger trends, with a small but growing number of building energy modeling applications starting on or moving to the web. This paper presents a new, open source, web-based geometry editor for Building Energy Modeling (BEM). The editor is written completely in JavaScript and runs in a modern web browser. The editor works on a custom JSON file format and is designed to be integrated into a variety of web and desktop applications. The web-based editor is available to use as a standalone web application at: https://nrel.github.io/openstudio-geometry-editor/. An example integration is demonstrated with the OpenStudio desktop application. Finally, the editor can be easily integrated with a wide range of possible building energy modeling web applications.

  13. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge element BEM/FEM to compute the near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for the near-to-far-field transformation (NTFFT), which is a common step in RCS computations. At the end of the paper it is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method demonstrated accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.

  14. Radiative heat transfer in strongly forward scattering media using the discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Granate, Pedro; Coelho, Pedro J.; Roger, Maxime

    2016-03-01

    The discrete ordinates method (DOM) is widely used to solve the radiative transfer equation, often yielding satisfactory results. However, in the presence of strongly forward scattering media, this method does not generally conserve the scattering energy and the phase function asymmetry factor. Because of this, the normalization of the phase function has been proposed to guarantee that the scattering energy and the asymmetry factor are conserved. Various authors have used different normalization techniques. Three of these are compared in the present work, along with two other methods, one based on the finite volume method (FVM) and another one based on the spherical harmonics discrete ordinates method (SHDOM). In addition, the approximation of the Henyey-Greenstein phase function by a different one is investigated as an alternative to the phase function normalization. The approximate phase function is given by the sum of a Dirac delta function, which accounts for the forward scattering peak, and a smoother scaled phase function. In this study, these techniques are applied to three scalar radiative transfer test cases, namely a three-dimensional cubic domain with a purely scattering medium, an axisymmetric cylindrical enclosure containing an emitting-absorbing-scattering medium, and a three-dimensional transient problem with collimated irradiation. The present results show that accurate predictions are achieved for strongly forward scattering media when the phase function is normalized in such a way that both the scattered energy and the phase function asymmetry factor are conserved. The normalization of the phase function may be avoided using the FVM or the SHDOM to evaluate the in-scattering term of the radiative transfer equation. Both methods yield results whose accuracy is similar to that obtained using the DOM along with normalization of the phase function. Very satisfactory predictions were also achieved using the delta-M phase function, while the delta
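    The conservation problem described above is easy to reproduce numerically: discretizing the Henyey-Greenstein phase function on a finite direction set loses both the scattered energy and the asymmetry factor, and a simple multiplicative normalization restores only the former. The sketch below shows this for the azimuthally symmetric case, using Gauss-Legendre nodes as a stand-in for a discrete-ordinates direction set; the coupled normalizations compared in the paper are not reproduced here.

```python
import numpy as np

def henyey_greenstein(mu, g):
    """HG phase function of the scattering cosine mu, asymmetry factor g."""
    return (1.0 - g*g) / (1.0 + g*g - 2.0*g*mu)**1.5

# Gauss-Legendre quadrature in cos(theta) plays the role of the
# discrete direction set for this azimuthally symmetric illustration.
mu, w = np.polynomial.legendre.leggauss(16)    # nodes, weights on [-1, 1]

g = 0.95                                       # strongly forward scattering
phi = henyey_greenstein(mu, g)

energy = 0.5 * np.sum(w * phi)                 # should equal 1 if conserved
print("raw scattered energy :", energy)

phi_n = phi / energy                           # simple energy normalization
print("normalized energy    :", 0.5 * np.sum(w * phi_n))
print("asymmetry factor     :", 0.5 * np.sum(w * mu * phi_n), "target", g)
```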

  15. Bem Sex Role Inventory Undifferentiated Score: A Comparison of Sexual Dysfunction Patients with Sexual Offenders.

    ERIC Educational Resources Information Center

    Dwyer, Margretta; And Others

    1988-01-01

    Examined Bem Sex Role undifferentiated scores on 93 male sex offenders as compared with 50 male sexually dysfunctional patients. Chi-square analyses revealed significant difference: offenders obtained undifferentiated scores more often than did sexual dysfunctional population. Concluded that Bem Sex Role Inventory is useful in identifying sexual…

  16. A dynamic model of the piezoelectric traveling wave rotary ultrasonic motor stator with the finite volume method.

    PubMed

    Renteria Marquez, I A; Bolborici, V

    2017-05-01

    This manuscript presents a method to model in detail the piezoelectric traveling wave rotary ultrasonic motor (PTRUSM) stator response under the action of DC and AC voltages. The stator is modeled with a discrete two-dimensional system of equations using the finite volume method (FVM). In order to obtain accurate results, a model of the stator bridge is included in the stator model. The model of the stator under the action of DC voltage is presented first, and the results of the model are compared against a similar model using the commercial finite element software COMSOL Multiphysics. One can observe that there is a difference of less than 5% between the displacements of the stator using the proposed model and the one with COMSOL Multiphysics. After that, the model of the stator under the action of AC voltages is presented. The time domain analysis shows the generation of the traveling wave in the stator surface. One can use this model to accurately calculate the stator surface velocities, the elliptical motion of the stator surface, and the amplitude and shape of the stator traveling wave. A system of equations discretized with the finite volume method can easily be transformed into electrical circuits; because of that, FVM may be a better choice to develop a model-based control strategy for the PTRUSM. Copyright © 2017 Elsevier B.V. All rights reserved.
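    The circuit analogy mentioned in the abstract can be made concrete with a toy example: a hypothetical FVM semi-discretization of a 1D elastic rod (not the paper's 2D piezoelectric stator model), whose per-cell update equations have the same ladder structure as a lumped LC network (mass loosely corresponding to inductance, elastic compliance to capacitance).

```python
import numpy as np

# FVM semi-discretization of a 1D elastic rod: each control volume obeys
#   rho*A*dx * u_i'' = EA*(u_{i+1} - u_i)/dx - EA*(u_i - u_{i-1})/dx.
E, rho, A = 200e9, 7800.0, 1e-6        # steel-like rod, 1 mm^2 cross-section
L, n = 0.1, 200                        # 10 cm rod, 200 control volumes
dx = L / n
c = np.sqrt(E / rho)                   # elastic wave speed
dt = 0.5 * dx / c                      # CFL-stable time step

u = np.zeros(n)                        # cell displacements
v = np.zeros(n)                        # cell velocities
f = 50e3                               # 50 kHz harmonic end forcing
for step in range(4000):
    t = step * dt
    flux = E * A * np.diff(u) / dx     # internal elastic forces at faces
    force = np.zeros(n)
    force[:-1] += flux                 # pull from the right neighbor
    force[1:]  -= flux                 # reaction on the left neighbor
    force[0]   += 1.0 * np.sin(2*np.pi*f*t)   # driven end, 1 N amplitude
    v += dt * force / (rho * A * dx)   # symplectic (semi-implicit) update
    u += dt * v
print("tip displacement [m]:", u[-1])
```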

  17. The construct validity of the Bem Sex-Role Inventory for heterosexual and gay men.

    PubMed

    Chung, Y B

    1995-01-01

    This study examined the construct validity of the Bem Sex-Role Inventory (BSRI; Bem, 1978) for heterosexual and gay men. Sixty heterosexual and 63 gay male participants were recruited through networking and advertisements. These two groups were of equivalent age, socioeconomic background, race, student status, and educational level. They completed the Lifestyle Questionnaire assessing sexual orientation and the BSRI assessing sex-role orientation. The internal consistency and discriminant validity of the BSRI scales were examined by corrected item-total correlations, coefficient alphas, inter-scale correlations, and factor analysis. Results suggested that the BSRI was equally valid for heterosexual and gay men, and the psychometric data reported in the BSRI Manual (Bem, 1981) were essentially replicated. However, the short-form BSRI is recommended for use with male respondents because of the problematic non-short-form Femininity items.

  18. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE PAGES

    Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; ...

    2015-09-28

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for

  19. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.

    2016-09-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for

  20. Estimation of Lightning Levels on a Launcher Using a BEM-Compressed Model

    NASA Astrophysics Data System (ADS)

    Silly, J.; Chaigne, B.; Aspas-Puertolas, J.; Herlem, Y.

    2016-05-01

    As development cycles in the space industry are being considerably reduced, it seems mandatory to deploy in parallel fast analysis methods for engineering purposes, but without sacrificing accuracy. In this paper we present the application of such methods to early Phase A-B [1] evaluation of lightning constraints on a launch vehicle. A complete 3D parametric model of a launcher has thus been developed and simulated with a frequency-domain Boundary Element Method (BEM) solver (equipped with a low-frequency algorithm). The time-domain values of the observed currents and fields are obtained by post-treatment using an inverse discrete Fourier transform (IDFT). This model is used for lightning studies; in particular, the simulations are useful for analysing the influence of lightning injected currents on the resulting currents circulating on external cable raceways. The description of the model and some of those results are presented in this article.
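    The IDFT post-treatment step can be sketched in a few lines: form the spectrum of the injected stroke, multiply by a frequency-domain transfer function, and transform back. The transfer function H(f) below is hypothetical, standing in for the BEM-computed raceway coupling response.

```python
import numpy as np

fs = 100e6                         # sampling rate, 100 MHz
n = 2**16
t = np.arange(n) / fs

# Standard double-exponential approximation of a lightning stroke current.
i_inj = 30e3 * (np.exp(-t/50e-6) - np.exp(-t/0.5e-6))

freq = np.fft.rfftfreq(n, d=1/fs)
I_inj = np.fft.rfft(i_inj)

# Hypothetical raceway transfer function H(f) standing in for the
# BEM-computed one (first-order inductive coupling, corner at 1 MHz).
H = 0.05 * (1j*freq/1e6) / (1.0 + 1j*freq/1e6)

# Inverse DFT recovers the time-domain raceway current.
i_raceway = np.fft.irfft(I_inj * H, n)
print("peak raceway current [A]:", abs(i_raceway).max())
```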

  1. Anaerobic Mercury Methylation and Demethylation by Geobacter bemidjiensis Bem.

    PubMed

    Lu, Xia; Liu, Yurong; Johs, Alexander; Zhao, Linduo; Wang, Tieshan; Yang, Ziming; Lin, Hui; Elias, Dwayne A; Pierce, Eric M; Liang, Liyuan; Barkay, Tamar; Gu, Baohua

    2016-04-19

    Microbial methylation and demethylation are two competing processes controlling the net production and bioaccumulation of neurotoxic methylmercury (MeHg) in natural ecosystems. Although mercury (Hg) methylation by anaerobic microorganisms and demethylation by aerobic Hg-resistant bacteria have both been extensively studied, little attention has been given to MeHg degradation by anaerobic bacteria, particularly the iron-reducing bacterium Geobacter bemidjiensis Bem. Here we report, for the first time, that the strain G. bemidjiensis Bem can mediate a suite of Hg transformations, including Hg(II) reduction, Hg(0) oxidation, MeHg production and degradation under anoxic conditions. Results suggest that G. bemidjiensis utilizes a reductive demethylation pathway to degrade MeHg, with elemental Hg(0) as the major reaction product, possibly due to the presence of genes encoding homologues of an organomercurial lyase (MerB) and a mercuric reductase (MerA). In addition, the cells can strongly sorb Hg(II) and MeHg, reduce or oxidize Hg, resulting in both time- and concentration-dependent Hg species transformations. Moderate concentrations (10-500 μM) of Hg-binding ligands such as cysteine enhance Hg(II) methylation but inhibit MeHg degradation. These findings indicate a cycle of Hg methylation and demethylation among anaerobic bacteria, thereby influencing net MeHg production in anoxic water and sediments.

  2. Exploratory and Confirmatory Studies of the Structure of the Bem Sex Role Inventory Short Form with Two Divergent Samples

    ERIC Educational Resources Information Center

    Choi, Namok; Fuqua, Dale R.; Newman, Jody L.

    2009-01-01

    The short form of the Bem Sex Role Inventory (BSRI) contains half as many items as the long form and yet has often demonstrated better reliability and validity. This study uses exploratory and confirmatory factor analytic methods to examine the structure of the short form of the BSRI. A structure noted elsewhere also emerged here, consisting of…

  3. FEM/BEM impedance and power analysis for measured LGS SH-SAW devices.

    PubMed

    Kenny, Thomas D; Pollard, Thomas B; Berkenpas, Eric; da Cunha, Mauricio Pereira

    2006-02-01

    Pure shear horizontal piezoelectrically active surface and bulk acoustic waves (SH-SAW and SH-BAW) exist along rotated Y-cuts, Euler angles (0 degrees, theta, 90 degrees), of trigonal class 32 group crystals, which include the LGX family of crystals (langasite, langatate, and langanite). In this paper both SH-SAW and SH-BAW generated by finite-length interdigital transducers (IDTs) on langasite, Euler angles (0 degrees, 22 degrees, 90 degrees), are simulated using combined finite- and boundary-element methods (FEM/BEM). Aluminum and gold IDT electrodes ranging in thickness from 600 Å to 2000 Å have been simulated, fabricated, and tested, with both free and metalized surfaces outside the IDT regions considered. Around the device's operating frequency, the percent difference between the calculated IDT impedance magnitude using the FEM/BEM model and the measurements is less than 5% for the different metal layers and thicknesses considered. The proportioning of SH-SAW and SH-BAW power is analyzed as a function of the number of IDT electrodes; the type of electrode metal; and the relative thickness of the electrode film, h/wavelength, where wavelength is the SH-SAW wavelength. Simulation results show that moderate mechanical loading by gold electrodes increases the proportion of input power converted to SH-SAW. For example, with a split-electrode IDT comprising 238 electrodes with a relative thickness h/wavelength = 0.63% and surrounded by an infinitesimally thin conducting film, nearly 9% more input power is radiated as SH-SAW when gold instead of aluminum electrodes are used.

  4. Enhancing BEM simulations of a stalled wind turbine using a 3D correction model

    NASA Astrophysics Data System (ADS)

    Bangga, Galih; Hutomo, Go; Syawitri, Taurista; Kusumadewi, Tri; Oktavia, Winda; Sabila, Ahmad; Setiadi, Herlambang; Faisal, Muhamad; Hendranata, Yongki; Lastomo, Dwi; Putra, Louis; Kristiadi, Stefanus; Bumi, Ilmi

    2018-03-01

    Nowadays wind turbine rotors are usually equipped with pitch control mechanisms to avoid deep stall conditions. Despite that, wind turbines often operate under pitch fault situations, causing massive flow separation. Pure Blade Element Momentum (BEM) approaches are not designed for this situation, and inaccurate load predictions are to be expected. In the present studies, BEM predictions are improved through the inclusion of a stall delay model for a wind turbine rotor operating under a pitch fault of -2.3° towards stall. The accuracy of the stall delay model is assessed by comparing the results with available Computational Fluid Dynamics (CFD) simulation data.

  5. Numerical solution to the oblique derivative boundary value problem on non-uniform grids above the Earth topography

    NASA Astrophysics Data System (ADS)

    Medl'a, Matej; Mikula, Karol; Čunderlík, Róbert; Macák, Marek

    2018-01-01

    The paper presents a numerical solution of the oblique derivative boundary value problem on and above the Earth's topography using the finite volume method (FVM). It introduces a novel method for constructing non-uniform hexahedron 3D grids above the Earth's surface. It is based on an evolution of a surface, which approximates the Earth's topography, by mean curvature. To obtain optimal shapes of the non-uniform 3D grid, the proposed evolution is accompanied by a tangential redistribution of grid nodes. Afterwards, the Laplace equation is discretized using an FVM developed for such a non-uniform grid. The oblique derivative boundary condition is treated as a stationary advection equation, and we derive a new upwind-type discretization suitable for non-uniform 3D grids. The discretization of the Laplace equation together with the discretization of the oblique derivative boundary condition leads to a linear system of equations. The solution of this system gives the disturbing potential in the whole computational domain including the Earth's surface. Numerical experiments aim to show the properties and demonstrate the efficiency of the developed FVM approach. The first experiments study the experimental order of convergence of the method. Then, a reconstruction of the harmonic function on the Earth's topography, which is generated from the EGM2008 or EIGEN-6C4 global geopotential model, is presented. The obtained FVM solutions show that refining the computational grid leads to more precise results. The last experiment deals with local gravity field modelling in Slovakia using terrestrial gravity data. The GNSS-levelling test shows the accuracy of the obtained local quasigeoid model.

  6. Anaerobic Mercury Methylation and Demethylation by Geobacter bemidjiensis Bem

    DOE PAGES

    Lu, Xia; Liu, Yurong; Johs, Alexander; ...

    2016-03-28

    Two competing processes controlling the net production and bioaccumulation of neurotoxic methylmercury (MeHg) in natural ecosystems are microbial methylation and demethylation. Though mercury (Hg) methylation by anaerobic microorganisms and demethylation by aerobic Hg-resistant bacteria have both been extensively studied, little attention has been given to MeHg degradation by anaerobic bacteria, particularly the iron-reducing bacterium Geobacter bemidjiensis Bem. Here we report, for the first time, that the strain G. bemidjiensis Bem can methylate inorganic Hg and degrade MeHg concurrently under anoxic conditions. Our results suggest that G. bemidjiensis cells utilize a reductive demethylation pathway to degrade MeHg, with elemental Hg(0) as the major reaction product, possibly due to the presence of homologs encoding both organo-mercurial lyase (MerB) and mercuric reductase (MerA) in this organism. In addition, the cells can mediate multiple reactions including Hg/MeHg sorption, Hg reduction and oxidation, resulting in both time- and concentration-dependent Hg species transformations. Moderate concentrations (10-500 μM) of Hg-binding ligands such as cysteine enhance Hg(II) methylation but inhibit MeHg degradation. These findings indicate a cycle of methylation and demethylation among anaerobic bacteria and suggest that mer-mediated demethylation may play a role in the net balance of MeHg production in anoxic water and sediments.

  7. Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction

    NASA Astrophysics Data System (ADS)

    Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele

    2017-09-01

    Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models directly provide the Sound Pressure Level through the quadratic pressure term, treating the sources as uncorrelated. In this paper, an improvement of the Eldred standard model has been formulated. This new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate the source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach that allows an evaluation of the scattering effects. In the framework of the European Space Agency funded program VECEP (VEga Consolidation and Evolution Programme), these models have been applied to the prediction of the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.
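    The distinction the revised model draws — summing complex source amplitudes rather than squared pressures — reduces to a few lines. The amplitudes and phases below are hypothetical placeholders; for same-frequency sources, the mean-square pressure of a fully correlated set is the squared magnitude of the complex amplitude sum, whereas uncorrelated sources add quadratically.

```python
import numpy as np

# Mean-square pressure at a receiver from N monopole noise sources.
rng = np.random.default_rng(0)
N = 8
amp = rng.uniform(0.5, 1.0, N)            # Pa, per-source amplitudes
phase = rng.uniform(0, 2*np.pi, N)        # fixed phases (fully correlated)

p2_uncorr = np.sum(amp**2) / 2            # Eldred-style quadratic sum
p2_corr = np.abs(np.sum(amp * np.exp(1j*phase)))**2 / 2

p_ref = 20e-6                             # reference pressure, 20 uPa
for name, p2 in [("uncorrelated", p2_uncorr), ("correlated", p2_corr)]:
    print(name, "SPL =", 10*np.log10(p2 / p_ref**2), "dB")
```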

  8. A Coupling Strategy of FEM and BEM for the Solution of a 3D Industrial Crack Problem

    NASA Astrophysics Data System (ADS)

    Kouitat Njiwa, Richard; Taha Niane, Ngadia; Frey, Jeremy; Schwartz, Martin; Bristiel, Philippe

    2015-03-01

    Analyzing crack stability in an industrial context is challenging due to the geometry of the structure. The finite element method is effective for defect-free problems. The boundary element method is effective for problems in simple geometries with singularities. We present a strategy that takes advantage of both approaches. Within the iterative solution procedure, the FEM solves a defect-free problem over the structure while the BEM solves the crack problem over a fictitious domain with simple geometry. The effectiveness of the approach is demonstrated on some simple examples which allow comparison with literature results and on an industrial problem.

  9. Topology optimization analysis based on the direct coupling of the boundary element method and the level set method

    NASA Astrophysics Data System (ADS)

    Vitório, Paulo Cezar; Leonel, Edson Denner

    2017-12-01

    The structural design must ensure suitable working conditions by meeting safety and economy criteria. However, the optimal solution is not easily available, because these conditions depend on the bodies' dimensions, material strength and structural system configuration. In this regard, topology optimization aims to achieve the optimal structural geometry, i.e. the shape that leads to the minimum requirement of material, respecting constraints related to the stress state at each material point. The present study applies an evolutionary approach for determining the optimal geometry of 2D structures using the coupling of the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, a topology optimization approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM. Internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns the remeshing: because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, which is based on the direct coupling of such approaches, introduces internal cavities automatically during the optimization process, according to the intensity of the Von Mises stress. The developed optimization model was applied to two benchmarks available in the literature. Good agreement was observed among the results, which demonstrates its efficiency and accuracy.
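    The LSM ingredient — evolving a geometry as the zero level of a function phi under a normal speed F — can be sketched on a 2D grid as follows. Here F is a constant shrinking speed, whereas in the paper it would be driven by the Von Mises stress from the BEM solution; the grid sizes are arbitrary.

```python
import numpy as np

# Minimal level-set update: the geometry is {phi < 0}, its boundary the
# zero level of phi, advected in the normal direction with speed F.
n, h, dt = 128, 1.0/128, 2e-3
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X-0.5)**2 + (Y-0.5)**2) - 0.3   # signed distance to a circle

F = 1.0   # positive speed shrinks the interface
for _ in range(50):
    # Godunov upwind gradient magnitude (valid for F > 0)
    dxm = np.diff(phi, axis=0, prepend=phi[:1]) / h
    dxp = np.diff(phi, axis=0, append=phi[-1:]) / h
    dym = np.diff(phi, axis=1, prepend=phi[:, :1]) / h
    dyp = np.diff(phi, axis=1, append=phi[:, -1:]) / h
    grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                   + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    phi -= dt * F * grad

# The circle should have shrunk by F * (50 * dt) = 0.1 in radius.
measured = np.sqrt(((phi < 0).sum() * h * h) / np.pi)
print("expected radius:", 0.3 - 50*dt*F, " measured:", measured)
```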

  10. Simulation and Analysis of Mechanical Properties of Silica Aerogels: From Rationalization to Prediction

    PubMed Central

    Ma, Hao; Zheng, Xiaoyang; Luo, Xuan; Yang, Fan

    2018-01-01

    Silica aerogels are highly porous 3D nanostructures that exhibit excellent physico-chemical properties. Although silica aerogels have broad potential in many fields, their poor mechanical properties greatly limit further applications. In this study, we have applied the finite volume method (FVM) to calculate the mechanical properties of silica aerogels with different geometric properties such as particle size, pore size, ligament diameter, etc. The FVM simulation results show that a power-law correlation exists between the relative density and the mechanical properties (elastic modulus and yield stress) of silica aerogels, which is consistent with experimental and literature studies. In addition, depending on the relative densities, different strategies are proposed in order to synthesize silica aerogels with better mechanical performance by adjusting the distribution of pore size and ligament diameter of the aerogels. Finally, the results suggest that it is possible to synthesize silica aerogels with ultra-low density as well as high strength and stiffness as long as the textural features are well controlled. It is believed that the FVM simulation methodology could be a valuable tool to study the mechanical performance of silica aerogel based materials in the future. PMID:29385745
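    A power-law correlation of this kind, E = C * rho_rel^m, is typically extracted from simulation output with a log-log fit. The sketch below shows the fit; the data points are hypothetical stand-ins for FVM results, not values from the paper.

```python
import numpy as np

# Hypothetical (relative density, elastic modulus) pairs standing in
# for FVM simulation output.
rho_rel = np.array([0.05, 0.08, 0.12, 0.18, 0.25])
E_sim   = np.array([0.11, 0.42, 1.35, 4.30, 10.9])   # MPa, hypothetical

# Fit log(E) = m*log(rho_rel) + log(C) to recover the scaling exponent m.
m, logC = np.polyfit(np.log(rho_rel), np.log(E_sim), 1)
print(f"E ~ {np.exp(logC):.1f} * rho_rel^{m:.2f}  (MPa)")
```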

  11. Simulation and Analysis of Mechanical Properties of Silica Aerogels: From Rationalization to Prediction.

    PubMed

    Ma, Hao; Zheng, Xiaoyang; Luo, Xuan; Yi, Yong; Yang, Fan

    2018-01-30

    Silica aerogels are highly porous 3D nanostructures that exhibit excellent physico-chemical properties. Although silica aerogels have broad potential in many fields, their poor mechanical properties greatly limit further applications. In this study, we have applied the finite volume method (FVM) to calculate the mechanical properties of silica aerogels with different geometric properties such as particle size, pore size, ligament diameter, etc. The FVM simulation results show that a power-law correlation exists between the relative density and the mechanical properties (elastic modulus and yield stress) of silica aerogels, which is consistent with experimental and literature studies. In addition, depending on the relative densities, different strategies are proposed in order to synthesize silica aerogels with better mechanical performance by adjusting the distribution of pore size and ligament diameter of the aerogels. Finally, the results suggest that it is possible to synthesize silica aerogels with ultra-low density as well as high strength and stiffness as long as the textural features are well controlled. It is believed that the FVM simulation methodology could be a valuable tool to study the mechanical performance of silica aerogel based materials in the future.

  12. A new fast direct solver for the boundary element method

    NASA Astrophysics Data System (ADS)

    Huang, S.; Liu, Y. J.

    2017-09-01

    A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank matrix. The hierarchical off-diagonal low-rank matrix can be decomposed into the multiplication of several diagonal block matrices. The inverse of the hierarchical off-diagonal low-rank matrix can be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximate the coefficient matrix of the BEM with the hierarchical off-diagonal low-rank matrix is proposed. Compared to the current fast direct solver based on the hierarchical off-diagonal low-rank matrix, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with the total number of unknowns up to above 200,000 are presented. The results show that the new fast direct solver can be applied to solve large 3-D BEM models accurately and with better efficiency compared with the conventional BEM.
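    The core linear-algebra identity behind this class of fast direct solvers — the Sherman-Morrison-Woodbury formula applied to a block-diagonal-plus-low-rank matrix — can be sketched as follows. The diagonal matrix D stands in for the cheap diagonal blocks and the factors U, V for the off-diagonal low-rank part; this is an illustration of the identity, not the paper's hierarchical algorithm.

```python
import numpy as np

# If K = D + U C V with D cheap to invert and U, V of low rank k, then
#   K^-1 b = D^-1 b - D^-1 U (C^-1 + V D^-1 U)^-1 V D^-1 b,
# so only a small k x k dense system must be solved.
rng = np.random.default_rng(1)
n, k = 1000, 20
D = np.diag(rng.uniform(1.0, 2.0, n))          # stand-in for diagonal blocks
U, V = rng.normal(size=(n, k)), rng.normal(size=(k, n))
C = np.eye(k)
b = rng.normal(size=n)

Dinv_b = b / np.diag(D)                        # D^-1 applied cheaply
Dinv_U = U / np.diag(D)[:, None]
small = np.linalg.inv(C) + V @ Dinv_U          # k x k "capacitance" matrix
x = Dinv_b - Dinv_U @ np.linalg.solve(small, V @ Dinv_b)

x_ref = np.linalg.solve(D + U @ C @ V, b)      # dense reference solve
print("max error:", np.abs(x - x_ref).max())
```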

  13. Antibody Treatment of Ebola and Sudan Virus Infection via a Uniquely Exposed Epitope within the Glycoprotein Receptor Binding Site

    DTIC Science & Technology

    2016-06-14

    To characterize FVM04 binding to EBOV GP, FVM04 Fab in complex with GPΔMuc was analyzed by negative stain electron microscopy. The binding location of FVM04 revealed an epitope consistent with the crest region residues derived by mutagenesis studies. The class averages suggest that only one FVM04 Fab binds to each GP trimer (Figure 3A-B). It is likely that the binding orientation and proximity to the threefold axis precludes additional FVM04 Fabs.

  14. Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.

    2006-01-01

    In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the sub-triangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient; i.e., tangent to the plane containing

  15. Comparison of four methods of surface roughness assessment of corneal stromal bed after lamellar cutting

    PubMed Central

    Jumelle, Clotilde; Hamri, Alina; Egaud, Gregory; Mauclair, Cyril; Reynaud, Stephanie; Dumas, Virginie; Pereira, Sandrine; Garcin, Thibaud; Gain, Philippe; Thuret, Gilles

    2017-01-01

    Corneal lamellar cutting with a blade or femtosecond laser (FSL) is commonly used during refractive surgery and corneal grafts. Surface roughness of the cutting plane influences postoperative visual acuity but is difficult to assess reliably. For the first time, we compared chromatic confocal microscopy (CCM) with scanning electron microscopy, atomic force microscopy (AFM) and focus-variation microscopy (FVM) to characterize surfaces of variable roughness after FSL cutting. The small area allowed by AFM hinders conclusive roughness analysis, especially with irregular cuts. FVM does not always differentiate between smooth and rough surfaces. Finally, CCM allows analysis of large surfaces and differentiates between surface states. PMID:29188095

  16. Probabilistic boundary element method

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Raveendra, S. T.

    1989-01-01

    The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM), which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results to a set of random variables. As such, PBEM performs analogously to other structural analysis codes, such as finite elements, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible and therefore volume modeling is generally required.

  17. Numerical solution of the exterior oblique derivative BVP using the direct BEM formulation

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Špir, Róbert; Mikula, Karol

    2016-04-01

    The fixed gravimetric boundary value problem (FGBVP) represents an exterior oblique derivative problem for the Laplace equation. A direct formulation of the boundary element method (BEM) for the Laplace equation leads to a boundary integral equation (BIE) where a harmonic function is represented as a superposition of the single-layer and double-layer potentials. Such a potential representation is applied to obtain a numerical solution of the FGBVP. The oblique derivative problem is treated by a decomposition of the gradient of the unknown disturbing potential into its normal and tangential components. Our numerical scheme uses collocation with linear basis functions. It involves a triangulated discretization of the Earth's surface as our computational domain, considering its complicated topography. To achieve high-resolution numerical solutions, parallel implementations using MPI subroutines as well as an iterative elimination of far zones' contributions are performed. Numerical experiments present a reconstruction of a harmonic function above the Earth's topography given by the spherical harmonic approach, namely by the EGM2008 geopotential model up to degree 2160. The SRTM30 global topography model is used to approximate the Earth's surface by the triangulated discretization. The obtained BEM solution with a resolution of 0.05 deg (12,960,002 nodes) is compared with EGM2008. The standard deviation of the residuals, 5.6 cm, indicates good agreement. The largest residuals are, as expected, in high mountainous regions. They are negative, reaching up to -0.7 m in the Himalayas and about -0.3 m in the Andes and Rocky Mountains. A local refinement in the area of Slovakia confirms an improvement of the numerical solution in this mountainous region despite the fact that the Earth's topography is considered here in more detail.

  18. Bem Sex Role Inventory Validation in the International Mobility in Aging Study.

    PubMed

    Ahmed, Tamer; Vafaei, Afshin; Belanger, Emmanuelle; Phillips, Susan P; Zunzunegui, Maria-Victoria

    2016-09-01

    This study investigated the measurement structure of the Bem Sex Role Inventory (BSRI) with different factor analysis methods. Most previous studies on validity applied exploratory factor analysis (EFA) to examine the BSRI. We aimed to assess the psychometric properties and construct validity of the 12-item short-form BSRI administered to a sample of 1,995 older adults from wave 1 of the International Mobility in Aging Study (IMIAS). We used Cronbach's alpha to assess internal consistency reliability and confirmatory factor analysis (CFA) to assess psychometric properties. EFA revealed a three-factor model, further confirmed by CFA and compared with the original two-factor structure model. Results revealed that the two-factor solution (instrumentality-expressiveness) has satisfactory construct validity and superior fit to the data compared to the three-factor solution. The two-factor solution confirms expected gender differences in older adults. The 12-item BSRI provides a brief, psychometrically sound, and reliable instrument for international samples of older adults.

  19. Manufacturing challenge: An employee perception of the impact of BEM variables on motivation

    NASA Astrophysics Data System (ADS)

    Nyaude, Alaster

    The study examines the impact of Thomas F. Gilbert's Behavior Engineering Model (BEM) variables on employee perception of motivation at an aerospace equipment manufacturing plant in Georgia. The research process involved a literature review and the determination of an appropriate survey instrument for the study. The Hersey-Chevalier modified PROBE instrument (Appendix C) was used with Dr Roger Chevalier's validation. The participants' responses were further examined to determine the influence of the demographic control variables of age, gender, length of service with the company and education on employee perception of motivation. The results indicated that the top three motivating variables were knowledge and skills, capacity and resources. Knowledge and skills was perceived as the most motivating variable, capacity as the second, and resources as the third. Interestingly, the fourth was information, the fifth was motives and the sixth was incentives. The results also showed that the demographic control variables had no influence on employee perception of motivation. Further research may be required to understand to what extent these BEM variables impact employee perceptions of motivation.

  20. Design of horizontal-axis wind turbine using blade element momentum method

    NASA Astrophysics Data System (ADS)

    Bobonea, Andreea; Pricop, Mihai Victor

    2013-10-01

    The study of mathematical models applied to wind turbine design in recent years, principally in electrical energy generation, has become significant due to the increasing use of renewable energy sources with low environmental impact. Thus, this paper shows an alternative mathematical scheme for wind turbine design, based on Blade Element Momentum (BEM) theory. The results from the BEM method are greatly dependent on the precision of the lift and drag coefficients. The basic BEM method assumes that the blade can be analyzed as a number of independent elements in the spanwise direction. The induced velocity at each element is determined by performing the momentum balance for a control volume containing the blade element. The aerodynamic forces on the element are calculated using the lift and drag coefficients from empirical two-dimensional wind tunnel test data at the geometric angle of attack (AOA) of the blade element relative to the local flow velocity.
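    The element-level momentum balance described above is usually solved as a fixed-point iteration for the axial (a) and tangential (a') induction factors. The sketch below shows that core loop with idealized airfoil polars (thin-airfoil lift, constant drag) and hypothetical geometry; it is a generic illustration of the BEM scheme, not the paper's implementation.

```python
import numpy as np

B, R, r = 3, 40.0, 30.0        # blades, rotor radius, element radius [m]
chord, twist = 1.5, np.deg2rad(3.0)
V0, omega = 8.0, 2.0           # wind speed [m/s], rotor speed [rad/s]
sigma = B * chord / (2.0 * np.pi * r)          # local solidity

a, ap = 0.0, 0.0
for _ in range(200):
    # Inflow angle from the momentum-balance velocity triangle.
    phi = np.arctan2((1.0 - a) * V0, (1.0 + ap) * omega * r)
    alpha = phi - twist                        # local angle of attack
    cl, cd = 2.0 * np.pi * alpha, 0.01         # idealized polars
    cn = cl * np.cos(phi) + cd * np.sin(phi)   # normal (thrust) projection
    ct = cl * np.sin(phi) - cd * np.cos(phi)   # tangential projection
    # Standard BEM closure relations for the induction factors.
    a_new  = 1.0 / (4.0 * np.sin(phi)**2 / (sigma * cn) + 1.0)
    ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
    if abs(a_new - a) < 1e-8 and abs(ap_new - ap) < 1e-8:
        break
    a, ap = 0.5*(a + a_new), 0.5*(ap + ap_new)  # relaxed update

print(f"a = {a:.4f}, a' = {ap:.4f}, inflow angle = {np.degrees(phi):.2f} deg")
```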

  1. Effects of Aerobic Growth on the Fatty Acid and Hydrocarbon Compositions of Geobacter bemidjiensis BemT.

    PubMed

    Ueno, Akio; Shimizu, Satoru; Hashimoto, Mikako; Adachi, Takumi; Matsushita, Takako; Okuyama, Hidetoshi; Yoshida, Kiyohito

    2017-01-01

    Geobacter spp., regarded as strict anaerobes, have been reported to grow under aerobic conditions. To elucidate the role of fatty acids in the aerobiosis of Geobacter spp., we studied the effect of aerobiosis on fatty acid composition and turnover in G. bemidjiensis BemT. G. bemidjiensis BemT was grown under the following culture conditions: anaerobic culture for 4 days (type 1) and type 1 culture followed by 2-day anaerobic (type 2) or aerobic culture (anaerobic-to-aerobic shift; type 3). The mean cell weight of the type 3 culture was approximately 2.5-fold greater than that of the type 1 and 2 cultures. The fatty acid methyl ester and hydrocarbon fraction contained hexadecanoic (16:0), 9-cis-hexadecenoic [16:1(9c)], tetradecanoic (14:0), and tetradecenoic [14:1(7c)] acids, hentriacontanonaene, and hopanoids, but no long-chain polyunsaturated fatty acids. The type 3 culture contained higher levels of 14:0 and 14:1(7c) and lower levels of 16:0 and 16:1(9c) compared with the type 1 and 2 cultures. The weight ratio of extracted lipid per dry cell was lower in the type 3 culture than in the type 1 and 2 cultures. We concluded that the shift to aerobiosis enhanced growth, fatty acid turnover, and de novo fatty acid synthesis in anaerobically grown G. bemidjiensis BemT.

  2. Comparative Factor Analyses of the Personal Attributes Questionnaire and the Bem Sex-Role Inventory.

    ERIC Educational Resources Information Center

    Antill, John K.; Cunningham, John D.

    1982-01-01

    Compared the Personal Attributes Questionnaire (PAQ) and the Bem Sex Role Inventory (BSRI) as measures of androgyny. Results showed that femininity (Concern for Others) and masculinity (Dominance) accounted for most of the variance, but for the PAQ, clusters of male- and female-valued items (i.e., Extroversion and Insecurity) formed subsidiary factors.…

  3. Symmetrical or Non-Symmetrical Debonds at Fiber-Matrix Interfaces: A Study by BEM and Finite Fracture Mechanics on Elastic Interfaces

    NASA Astrophysics Data System (ADS)

    Muñoz-Reja, Mar; Távara, Luis; Mantič, Vladislav

    A recently proposed criterion is used to study the behavior of debonds produced at a fiber-matrix interface. The criterion is based on the Linear Elastic-(Perfectly) Brittle Interface Model (LEBIM) combined with a Finite Fracture Mechanics (FFM) approach, where the stress and energy criteria are suitably coupled. Special attention is given to the symmetry of debond onset and growth in an isolated single-fiber specimen under uniaxial transverse tension. A common composite material system, glass fiber-epoxy matrix, is considered. The present methodology uses a two-dimensional (2D) Boundary Element Method (BEM) code to carry out the analysis of interface failure. The results show that a non-symmetrical interface crack configuration (debonds at one side only) is produced by a lower critical remote load than the symmetrical case (debonds at both sides). Thus, the non-symmetrical solution is the preferred one, which agrees with the experimental evidence found in the literature.

  4. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated, and the optimal choice of the coupling parameter α = i/k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
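
    The paper's own formulation is not reproduced in the abstract, but the central trick, converting a nonlinear eigenproblem T(z)x = 0 into an ordinary one by contour integration, can be illustrated with Beyn's method, one standard contour-integral eigensolver. In the sketch below T(z) is a small dense toy matrix function standing in for the boundary element system; in the paper each solve would itself be accelerated by the fast multipole method.

```python
import numpy as np

def beyn_eigenvalues(T, center, radius, n_quad=64, ell=4, tol=1e-8):
    """Beyn's contour integral method: eigenvalues of the holomorphic
    matrix function T(z) lying inside a circular contour."""
    rng = np.random.default_rng(1)
    n = T(center).shape[0]
    ell = min(ell, n)                      # number of random probe columns
    V = rng.standard_normal((n, ell)) + 1j * rng.standard_normal((n, ell))
    A0 = np.zeros((n, ell), dtype=complex)
    A1 = np.zeros((n, ell), dtype=complex)
    for k in range(n_quad):                # trapezoid rule on the circle
        z = center + radius * np.exp(2j * np.pi * k / n_quad)
        dz = 1j * (z - center) * (2.0 * np.pi / n_quad)
        X = np.linalg.solve(T(z), V)       # one "BEM system" solve per node
        A0 += X * dz
        A1 += z * X * dz
    A0 /= 2j * np.pi
    A1 /= 2j * np.pi
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    r = int((s > tol * s[0]).sum())        # numerical rank = eigenvalue count
    B = U[:, :r].conj().T @ A1 @ Wh[:r, :].conj().T / s[:r]
    return np.linalg.eigvals(B)            # small linearized eigenproblem

# Sanity check on a linear pencil T(z) = A - z*I with spectrum {1, 2, 5}:
A = np.diag([1.0, 2.0, 5.0])
print(beyn_eigenvalues(lambda z: A - z * np.eye(3), center=1.5, radius=1.0))
```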

  5. Structure of the tandem PX-PH domains of Bem3 from Saccharomyces cerevisiae.

    PubMed

    Ali, Imtiaz; Eu, Sungmin; Koch, Daniel; Bleimling, Nathalie; Goody, Roger S; Müller, Matthias P

    2018-05-01

    The structure of the tandem lipid-binding PX and pleckstrin-homology (PH) domains of the Cdc42 GTPase-activating protein Bem3 from Saccharomyces cerevisiae (strain S288c) has been determined to a resolution of 2.2 Å (Rwork = 21.1%, Rfree = 23.4%). It shows that the domains adopt a relative orientation that enables them to simultaneously bind to a membrane and suggests possible cooperativity in membrane binding.

  6. Structure of the tandem PX-PH domains of Bem3 from Saccharomyces cerevisiae

    PubMed Central

    Ali, Imtiaz; Eu, Sungmin; Bleimling, Nathalie

    2018-01-01

    The structure of the tandem lipid-binding PX and pleckstrin-homology (PH) domains of the Cdc42 GTPase-activating protein Bem3 from Saccharomyces cerevisiae (strain S288c) has been determined to a resolution of 2.2 Å (Rwork = 21.1%, Rfree = 23.4%). It shows that the domains adopt a relative orientation that enables them to simultaneously bind to a membrane and suggests possible cooperativity in membrane binding. PMID:29718000

  7. Validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, X. F.; Oswald, Fred B.

    1992-01-01

    Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM model using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies that is due to limitations in the FEM model.

  8. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem

    PubMed Central

    2012-01-01

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al. PMID:22338640

  9. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem.

    PubMed

    Berti, Claudio; Gillespie, Dirk; Eisenberg, Robert S; Fiegna, Claudio

    2012-02-16

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al.

  10. Comparison of the lifting-line free vortex wake method and the blade-element-momentum theory regarding the simulated loads of multi-MW wind turbines

    NASA Astrophysics Data System (ADS)

    Hauptmann, S.; Bülk, M.; Schön, L.; Erbslöh, S.; Boorsma, K.; Grasso, F.; Kühn, M.; Cheng, P. W.

    2014-12-01

    Design load simulations for wind turbines are traditionally based on the blade-element-momentum theory (BEM). The BEM approach is derived from a simplified representation of the rotor aerodynamics and several semi-empirical correction models. A more sophisticated approach to account for the complex flow phenomena on wind turbine rotors can be found in the lifting-line free vortex wake method. This approach is based on a more physics-based representation, especially for global flow effects, and relies on empirical correction models only for the local flow effects associated with the boundary layer of the rotor blades. In this paper the lifting-line free vortex wake method is compared to a state-of-the-art BEM formulation with regard to aerodynamic and aeroelastic load simulations of the 5 MW UpWind reference wind turbine. Different aerodynamic load situations as well as standardised design load cases that are sensitive to the aeroelastic modelling are evaluated in detail. This benchmark makes use of the AeroModule developed by ECN, which has been coupled to the multibody simulation code SIMPACK.

  11. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    NASA Astrophysics Data System (ADS)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper provides some of the maneuverability characteristics of a Japanese bulk carrier model, the JBC, in calm water, using the computational fluid dynamics solvers SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the Finite Volume Method (FVM). The paper compares the numerical results of the calm water test for the JBC model with available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimal computational resources.

  12. Functional Analysis of BcBem1 and Its Interaction Partners in Botrytis cinerea: Impact on Differentiation and Virulence

    PubMed Central

    Schumacher, Julia; Kokkelink, Leonie; Tudzynski, Paul

    2014-01-01

    In phytopathogenic fungi the establishment and maintenance of polarity is essential not only for vegetative growth and differentiation, but also for penetration and colonization of host tissues. We investigated orthologs of members of the yeast polarity complex in the grey mould fungus Botrytis cinerea: the scaffold proteins Bem1 and Far1, the GEF (guanine nucleotide exchange factor) Cdc24, and the formin Bni1 (named Sep1 in B. cinerea). BcBem1 does not play an important role in regular hyphal growth, but has a significant impact on spore formation and germination, on the establishment of conidial anastomosis tubes (CATs), and on virulence. As in other fungi, BcBem1 interacts with the GEF BcCdc24 and the formin BcSep1, indicating that the apical complex in B. cinerea has a structure similar to that in yeast. A functional analysis of BcCdc24 suggests that it is essential for growth, since it was not possible to obtain homokaryotic deletion mutants. Heterokaryons of Δcdc24 (expected to exhibit reduced bccdc24 transcript levels) already show a strong phenotype: an inability to penetrate the host tissue, a significantly reduced growth rate, and malformation of conidia, which tend to burst, as observed for Δbcbem1. The formin BcSep1 also has a significant impact on hyphal growth and development, whereas the role of the putative ortholog of the yeast scaffold protein Far1 remains open: Δbcfar1 mutants have no obvious phenotypes. PMID:24797931

  13. A new method for true and spurious eigensolutions of arbitrary cavities using the combined Helmholtz exterior integral equation formulation method.

    PubMed

    Chen, I L; Chen, J T; Kuo, S R; Liang, M T

    2001-03-01

    Integral equation methods have been widely used to solve interior eigenproblems and exterior acoustic problems (radiation and scattering). It was recently found that the real-part boundary element method (BEM) for the interior problem results in spurious eigensolutions if the singular (UT) or the hypersingular (LM) equation is used alone. The real-part BEM produces spurious solutions for interior problems in much the same way that the singular integral equation (UT method) produces fictitious solutions for the exterior problem. To solve this problem, a Combined Helmholtz Exterior integral Equation Formulation (CHEEF) method is proposed. Based on the CHEEF method, the spurious solutions can be filtered out if additional constraints from the exterior points are chosen carefully. Finally, two examples for the eigensolutions of circular and rectangular cavities are considered. The optimum number and proper positions of the points selected in the exterior domain are studied analytically, and numerical experiments were designed to verify the analytical results. It is worth pointing out that the nodal line of a radiation mode of the circle can be rotated due to symmetry, while the nodal line of the rectangular cavity is at a fixed position.

  14. An adaptive grid scheme using the boundary element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munipalli, R.; Anderson, D.A.

    1996-09-01

    A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.

  15. Fuzzy scalar and vector median filters based on fuzzy distances.

    PubMed

    Chatzis, V; Pitas, I

    1999-01-01

    In this paper, the fuzzy scalar median (FSM) is proposed, defined by ordering fuzzy numbers with fuzzy minimum and maximum operations obtained from the extension principle. Alternatively, the FSM is defined through the minimization of a fuzzy distance measure, and the equivalence of the two definitions is proven. The fuzzy vector median (FVM) is then proposed as an extension of the vector median, based on a novel distance definition for fuzzy vectors that satisfies the property of angle decomposition. By properly defining the fuzziness of a value, the basic properties of the classical scalar and vector median (VM) filters can be combined with other desirable characteristics.
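
    For orientation, the crisp vector median that the FVM generalizes selects the window sample minimizing the summed distance to all other samples; the fuzzy variant replaces these crisp Euclidean distances with the paper's fuzzy distance measure. A minimal sketch of the crisp baseline, with hypothetical RGB data:

```python
import numpy as np

def vector_median(vectors: np.ndarray) -> np.ndarray:
    """Classical vector median: the sample in the filter window that
    minimizes the summed Euclidean distance to all other samples."""
    d = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    return vectors[d.sum(axis=1).argmin()]

# A 5-sample RGB window with one outlier pixel; the outlier cannot win
window = np.array([[255.0, 0.0, 0.0], [250.0, 5.0, 3.0], [10.0, 240.0, 12.0],
                   [252.0, 2.0, 1.0], [248.0, 6.0, 4.0]])
print(vector_median(window))
```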

  16. Using the Bem and Klein Grid Scores to Predict Health Services Usage by Men

    PubMed Central

    Reynolds, Grace L.; Fisher, Dennis G.; Dyo, Melissa; Huckabay, Loucine M.

    2016-01-01

    We examined the association between scores on the Bem Sex Roles Inventory (BSRI), Klein Sexual Orientation Grid (KSOG) and utilization of hospital inpatient services, emergency departments, and outpatient clinic visits in the past 12 months among 53 men (mean age 39 years). The femininity subscale score on the BSRI, ever having had gonorrhea and age were the three variables identified in a multivariate linear regression significantly predicting use of total health services. This supports the hypothesis that sex roles can assist our understanding of men’s use of health services. PMID:27337618

  17. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    NASA Astrophysics Data System (ADS)

    Simmons, Daniel; Cools, Kristof; Sewell, Phillip

    2016-11-01

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  18. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, Daniel, E-mail: daniel.simmons@nottingham.ac.uk; Cools, Kristof; Sewell, Phillip

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  19. An application of boundary element method calculations to hearing aid systems: The influence of the human head

    NASA Astrophysics Data System (ADS)

    Rasmussen, Karsten B.; Juhl, Peter

    2004-05-01

    Boundary element method (BEM) calculations are used for the purpose of predicting the acoustic influence of the human head in two cases. In the first case the sound source is the mouth and in the second case the sound is plane waves arriving from different directions in the horizontal plane. In both cases the sound field is studied in relation to two positions above the right ear being representative of hearing aid microphone positions. Both cases are relevant for hearing aid development. The calculations are based upon a direct BEM implementation in Matlab. The meshing is based on the original geometrical data files describing the B&K Head and Torso Simulator 4128 combined with a 3D scan of the pinna.

  20. Experimental validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, T. W.; Wu, X. F.

    1994-01-01

    This research report is presented in three parts. In the first part, acoustical analyses were performed on modes of vibration of the housing of a transmission of a gear test rig developed by NASA. The modes of vibration of the transmission housing were measured using experimental modal analysis. The boundary element method (BEM) was used to calculate the sound pressure and sound intensity on the surface of the housing and the radiation efficiency of each mode. The radiation efficiency of each of the transmission housing modes was then compared to theoretical results for a finite baffled plate. In the second part, analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise level radiated from the box. The FEM was used to predict the vibration, while the BEM was used to predict the sound intensity and total radiated sound power using surface vibration as the input data. Vibration predicted by the FEM model was validated by experimental modal analysis; noise predicted by the BEM was validated by measurements of sound intensity. Three types of results are presented for the total radiated sound power: sound power predicted by the BEM model using vibration data measured on the surface of the box; sound power predicted by the FEM/BEM model; and sound power measured by an acoustic intensity scan. In the third part, the structure used in part two was modified. A rib was attached to the top plate of the structure. The FEM and BEM were then used to predict structural vibration and radiated noise respectively. The predicted vibration and radiated noise were then validated through experimentation.

  1. Mining Building Energy Management System Data Using Fuzzy Anomaly Detection and Linguistic Descriptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Ondrej Linda; Milos Manic

    Building Energy Management Systems (BEMSs) are essential components of modern buildings that utilize digital control technologies to minimize energy consumption while maintaining high levels of occupant comfort. However, BEMSs can only achieve these energy savings when properly tuned and controlled. Since the indoor environment depends on uncertain criteria such as weather, occupancy, and thermal state, BEMS performance can be sub-optimal at times. Unfortunately, the complexity of the BEMS control mechanism, the large amount of data available, and the inter-relations between the data can make identifying these sub-optimal behaviors difficult. This paper proposes a novel Fuzzy Anomaly Detection and Linguistic Description (Fuzzy-ADLD) based method for improving the understandability of BEMS behavior for improved state-awareness. The presented method is composed of two main parts: 1) detection of anomalous BEMS behavior and 2) linguistic representation of BEMS behavior. The first part utilizes a modified nearest neighbor clustering algorithm and a fuzzy logic rule extraction technique to build a model of normal BEMS behavior. The second part computes the most relevant linguistic description of the identified anomalies. The presented Fuzzy-ADLD method was applied to a real-world BEMS and compared against a traditional alarm-based BEMS. In six different scenarios, the Fuzzy-ADLD method identified anomalous behavior as fast as or faster (by an hour or more) than the alarm-based BEMS. In addition, the Fuzzy-ADLD method identified cases that were missed by the alarm-based system, demonstrating potential for increased state-awareness of abnormal building behavior.
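
    The paper's pipeline couples clustering with fuzzy rule extraction and linguistic descriptions; the sketch below illustrates only the first, prototype-based detection idea (a greedy nearest-neighbor clustering of normal states, then distance-to-nearest-prototype as the anomaly test). The sensor channels, threshold, and data are all hypothetical.

```python
import numpy as np

def fit_prototypes(X, radius):
    """Greedy nearest-neighbor clustering: absorb each sample into the
    nearest existing prototype within `radius`, else spawn a new one."""
    protos = [X[0]]
    for x in X[1:]:
        if np.linalg.norm(np.asarray(protos) - x, axis=1).min() > radius:
            protos.append(x)
    return np.asarray(protos)

def is_anomalous(x, protos, radius):
    """Flag a state whose nearest normal prototype is farther than `radius`."""
    return np.linalg.norm(protos - x, axis=1).min() > radius

# Hypothetical BEMS readings: (indoor temperature [C], supply airflow [m3/h])
rng = np.random.default_rng(2)
normal = rng.normal([22.0, 300.0], [0.5, 15.0], size=(200, 2))
protos = fit_prototypes(normal, radius=40.0)
print(is_anomalous(np.array([22.3, 310.0]), protos, 40.0))  # typical -> False
print(is_anomalous(np.array([30.0, 50.0]), protos, 40.0))   # abnormal -> True
```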

  2. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies required for the calculation, which improves the conditioning of the system matrix.
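
    As a reference point for the discrete Fourier transform branch of this comparison, the exponential window inversion can be sketched in a few lines: sample the transform on a vertical line Re(s) = σ in the complex plane, invert with an IFFT, and undo the exp(-σt) damping. The window parameter below is a heuristic assumption, not a value from the paper.

```python
import numpy as np

def ilt_exponential_window(F, T, N, sigma_T=6.0):
    """Numerical inverse Laplace transform via FFT with an exponential
    window; sigma_T = sigma * T controls the damping (heuristic choice).
    Time aliasing is suppressed to roughly the exp(-sigma_T) level."""
    dt = T / N
    sigma = sigma_T / T
    omega = 2.0 * np.pi * np.fft.fftfreq(N, d=dt)   # +/- frequency ordering
    Fs = np.array([F(sigma + 1j * w) for w in omega])
    t = np.arange(N) * dt
    return t, np.real(np.fft.ifft(Fs)) / dt * np.exp(sigma * t)

# Check against a known pair: F(s) = 1/(s^2 + 1)  <->  f(t) = sin(t)
t, f = ilt_exponential_window(lambda s: 1.0 / (s * s + 1.0), T=20.0, N=4096)
print(np.abs(f - np.sin(t)).max())   # error near the exp(-6) aliasing level
```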

  3. Using the Hilbert uniqueness method in a reconstruction algorithm for electrical impedance tomography.

    PubMed

    Dai, W W; Marsili, P M; Martinez, E; Morucci, J P

    1994-05-01

    This paper presents a new version of the layer stripping algorithm in the sense that it works by repeatedly stripping away the outermost layer of the medium after having determined the conductivity value in that layer. In order to stabilize the ill-posed boundary value problem related to each layer, we base our algorithm on the Hilbert uniqueness method (HUM) and implement it with the boundary element method (BEM).

  4. Application of the Boundary Element Method to Fatigue Crack Growth Analysis

    DTIC Science & Technology

    1988-09-01

    The purpose of this study was to apply the boundary element method (BEM) to two-dimensional fracture mechanics problems, and to use the BEM to analyze the interference effects of holes on cracks through a parametric study of a two-hole tension strip. Correlation of the boundary element method and the modeling techniques employed in this study, including the Noetic PROBE code, was shown with the …

  5. A 2.5-dimensional method for the prediction of structure-borne low-frequency noise from concrete rail transit bridges.

    PubMed

    Li, Qi; Song, Xiaodong; Wu, Dingjun

    2014-05-01

    Predicting structure-borne noise from bridges subjected to moving trains using the three-dimensional (3D) boundary element method (BEM) is a time-consuming process. This paper presents a two-and-a-half-dimensional (2.5D) BEM-based procedure for simulating bridge-borne low-frequency noise with higher efficiency and no loss of accuracy. The two-dimensional (2D) BEM of a bridge with a constant cross section along the track direction is adopted to calculate the spatial modal acoustic transfer vectors (MATVs) of the bridge using the space-wavenumber transforms of its 3D modal shapes. The MATVs calculated using the 2.5D method are then validated against those computed using the 3D BEM. The bridge-borne noise is finally obtained through the MATVs and modal coordinate responses of the bridge, considering time-varying vehicle-track-bridge dynamic interaction. The presented procedure is applied to predict the sound pressure radiating from a U-shaped concrete bridge, and the computed results are compared with those obtained from field tests on Shanghai rail transit line 8. The numerical results match the measured results well in both time and frequency domains at near-field points. Nevertheless, the computed results are smaller than the measured ones at far-field points, mainly because the sound radiation from adjacent spans is neglected in the current model.

  6. A Galerkin Boundary Element Method for two-dimensional nonlinear magnetostatics

    NASA Astrophysics Data System (ADS)

    Brovont, Aaron D.

    The Boundary Element Method (BEM) is a numerical technique for solving partial differential equations that is used broadly among the engineering disciplines. The main advantage of this method is that one needs only to mesh the boundary of a solution domain. A key drawback is the myriad of integrals that must be evaluated to populate the full system matrix. To this day these integrals have been evaluated using numerical quadrature. In this research, a Galerkin formulation of the BEM is derived and implemented to solve two-dimensional magnetostatic problems with a focus on accurate, rapid computation. To this end, exact, closed-form solutions have been derived for all the integrals comprising the system matrix as well as those required to compute fields in post-processing; the need for numerical integration has been eliminated. It is shown that calculation of the system matrix elements using analytical solutions is 15-20 times faster than with numerical integration of similar accuracy. Furthermore, through the example analysis of a c-core inductor, it is demonstrated that the present BEM formulation is a competitive alternative to the Finite Element Method (FEM) for linear magnetostatic analysis. Finally, the BEM formulation is extended to analyze nonlinear magnetostatic problems via the Dual Reciprocity Method (DRBEM). It is shown that a coarse, meshless analysis using the DRBEM is able to achieve RMS error of 3-6% compared to a commercial FEM package in lightly saturated conditions.

  7. CFD-based design load analysis of 5MW offshore wind turbine

    NASA Astrophysics Data System (ADS)

    Tran, T. T.; Ryu, G. J.; Kim, Y. H.; Kim, D. H.

    2012-11-01

    The structural and aerodynamic loads acting on the NREL 5 MW reference wind turbine blade are calculated and analyzed based on advanced Computational Fluid Dynamics (CFD) and the unsteady Blade Element Momentum (BEM) method. A detailed examination of the six force components (three forces and three moments) has been carried out. Structural loads (gravity and inertia) and aerodynamic loads have been obtained by additional structural calculations coupled to the CFD or BEM results, respectively. In the CFD method, the Reynolds-averaged Navier-Stokes approach was applied to solve the continuity and momentum equations so that the complex flow around the wind turbine could be modeled. A User Defined Function (UDF) written in the C programming language, which defines the transient velocity profile according to the Extreme Operating Gust condition, was compiled into the commercial FLUENT package. Furthermore, the unsteady BEM with a 3D stall model was also adopted to investigate the load components on the wind turbine rotor. The present study provides a comparison between advanced CFD and unsteady BEM for determining loads on the wind turbine rotor; the results indicate good agreement between the two methods. It is shown that the six load components on the wind turbine rotor are significantly affected under the Extreme Operating Gust (EOG) condition. Using advanced CFD and additional structural calculations, this study constructs an accurate numerical methodology to estimate the total load of a wind turbine, composed of the aerodynamic load and the structural load.
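
    The UDF itself is not given in the abstract, but the transient EOG profile it encodes is commonly taken from the IEC 61400-1 standard: a "Mexican hat" dip-rise-dip superposed on the mean wind over a 10.5 s event. A sketch of that hub-height history follows; the gust magnitude v_gust is left as an input because its standard-dependent definition (involving turbulence intensity and length scale) is not quoted in the abstract.

```python
import numpy as np

def eog_wind_speed(t, v_hub, v_gust, T=10.5):
    """Hub-height wind speed during an Extreme Operating Gust of duration T,
    following the commonly used IEC 61400-1 shape function."""
    t = np.asarray(t, dtype=float)
    gust = -0.37 * v_gust * np.sin(3 * np.pi * t / T) * (1 - np.cos(2 * np.pi * t / T))
    return np.where((t >= 0) & (t <= T), v_hub + gust, v_hub)

# Example: 10 m/s mean wind, 7 m/s gust magnitude, sampled across the event
print(eog_wind_speed(np.linspace(0.0, 12.0, 7), v_hub=10.0, v_gust=7.0))
```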

  8. Design of a Double Anode Magnetron Injection Gun for Q-band Gyro-TWT Using Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Li, Zhiliang; Feng, Jinjun; Liu, Bentian

    2018-04-01

    This paper presents a novel design code for double anode magnetron injection guns (MIGs) in gyro-devices based on the boundary element method (BEM). The physical and mathematical models were constructed, and a code using the BEM for MIG calculation was developed. Using the code, a double anode MIG for a Q-band gyrotron traveling-wave tube (gyro-TWT) amplifier operating in the circular TE01 mode at the fundamental cyclotron harmonic was designed. In order to verify the reliability of this code, the velocity spread and guiding center radius of the MIG simulated by the BEM code were compared with those from the commonly used EGUN code, showing reasonable agreement. A Q-band gyro-TWT was then fabricated and tested. The testing results show that the device has achieved an average power of 5 kW and a peak power of at least 150 kW at a 3% duty cycle within a bandwidth of 2 GHz, and a maximum output peak power of 220 kW, with a corresponding saturated gain of 50.9 dB and an efficiency of 39.8%. This paper demonstrates that the BEM code can be used as an effective approach for the analysis of electron optics systems in gyro-devices.

  9. Boundary elements; Proceedings of the Fifth International Conference, Hiroshima, Japan, November 8-11, 1983

    NASA Astrophysics Data System (ADS)

    Brebbia, C. A.; Futagami, T.; Tanaka, M.

    The boundary-element method (BEM) in computational fluid and solid mechanics is examined in reviews and reports of theoretical studies and practical applications. Topics presented include the fundamental mathematical principles of BEMs, potential problems, EM-field problems, heat transfer, potential-wave problems, fluid flow, elasticity problems, fracture mechanics, plates and shells, inelastic problems, geomechanics, dynamics, industrial applications of BEMs, optimization methods based on the BEM, numerical techniques, and coupling.

  10. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the collocation implementation. The mean flow introduces terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flow and simple to implement numerically. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, the surface integrations in the integral equation are performed over sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flow, and the sound propagation over a hump of irregular shape in uniform flow. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which provide reference solutions for validating the BEM results.

  11. Multi-domain boundary element method for axi-symmetric layered linear acoustic systems

    NASA Astrophysics Data System (ADS)

    Reiter, Paul; Ziegelwanger, Harald

    2017-12-01

    Homogeneous porous materials like rock wool or synthetic foam are the main tool for acoustic absorption. The conventional absorbing structure for sound-proofing consists of one or multiple absorbers placed in front of a rigid wall, with or without air gaps in between. Various models exist to describe these so-called multi-layered acoustic systems mathematically for incoming plane waves. However, there is no efficient method to calculate the sound field in a half space above a multi-layered acoustic system for an incoming spherical wave. In this work, an axi-symmetric multi-domain boundary element method (BEM) for absorbing multi-layered acoustic systems and incoming spherical waves is introduced. In the proposed BEM formulation, a complex wavenumber is used to model absorbing materials as a fluid, and a coordinate transformation is introduced which simplifies the singular integrals of the conventional BEM to non-singular radial and angular integrals. The radial and angular parts are integrated analytically and numerically, respectively. The output of the method can be interpreted as a numerical half-space Green's function for grounds consisting of layered materials.

  12. Investigation of the current yaw engineering models for simulation of wind turbines in BEM and comparison with CFD and experiment

    NASA Astrophysics Data System (ADS)

    Rahimi, H.; Hartvelt, M.; Peinke, J.; Schepers, J. G.

    2016-09-01

    The aim of this work is to investigate the capabilities of current engineering tools based on Blade Element Momentum (BEM) and free vortex wake codes for the prediction of key aerodynamic parameters of wind turbines in yawed flow. The axial induction factor and aerodynamic loads of three wind turbines (NREL VI, AVATAR and INNWIND.EU) were investigated using wind tunnel measurements and numerical simulations at 0 and 30 degrees of yaw. Results indicated that for axial conditions there is good agreement between all codes in terms of mean values of aerodynamic parameters; in yawed flow, however, significant deviations were observed. This was due to unsteady phenomena such as the advancing and retreating blade effect and the skewed wake effect. These deviations were more visible in the variation of the aerodynamic parameters with rotor azimuth angle for the sections at the root and tip, where the skewed wake effect plays a major role.

  13. Finite volume multigrid method of the planar contraction flow of a viscoelastic fluid

    NASA Astrophysics Data System (ADS)

    Moatssime, H. Al; Esselaoui, D.; Hakim, A.; Raghay, S.

    2001-08-01

    This paper reports on a numerical algorithm for the steady flow of a viscoelastic fluid. The conservation and constitutive equations are solved using the finite volume method (FVM) with a hybrid scheme for the velocities and a first-order upwind approximation for the viscoelastic stress. A non-uniform staggered grid system is used. The iterative SIMPLE algorithm is employed to relax the coupled momentum and continuity equations, and the non-linear algebraic equations over the flow domain are solved iteratively by the symmetrical coupled Gauss-Seidel (SCGS) method. In both cases, the full approximation storage (FAS) multigrid algorithm is used. An Oldroyd-B fluid model was selected for the calculations. Results are reported for a planar 4:1 abrupt contraction at various Weissenberg numbers. The solutions are found to be stable and smooth, and they show that at high Weissenberg numbers the computational domain must be sufficiently long. The convergence of the method has been verified with grid refinement. All the calculations were performed on a PC equipped with a Pentium III processor at 550 MHz.

  14. A finite-volume module for all-scale Earth-system modelling at ECMWF

    NASA Astrophysics Data System (ADS)

    Kühnlein, Christian; Malardel, Sylvie; Smolarkiewicz, Piotr

    2017-04-01

    We highlight recent advancements in the development of the finite-volume module (FVM) (Smolarkiewicz et al., 2016) for the IFS at ECMWF. FVM represents an alternative dynamical core that complements the operational spectral dynamical core of the IFS with new capabilities. Most notably, these include a compact-stencil finite-volume discretisation, flexible meshes, conservative non-oscillatory transport and all-scale governing equations. As a default, FVM solves the compressible Euler equations in a geospherical framework (Szmelter and Smolarkiewicz, 2010). The formulation incorporates a generalised terrain-following vertical coordinate. A hybrid computational mesh, fully unstructured in the horizontal and structured in the vertical, enables efficient global atmospheric modelling. Moreover, a centred two-time-level semi-implicit integration scheme is employed with 3D implicit treatment of acoustic, buoyant, and rotational modes. The associated 3D elliptic Helmholtz problem is solved using a preconditioned Generalised Conjugate Residual approach. The solution procedure employs the non-oscillatory finite-volume MPDATA advection scheme that is bespoke for the compressible dynamics on the hybrid mesh (Kühnlein and Smolarkiewicz, 2017). The recent progress of FVM is illustrated with results of benchmark simulations of intermediate complexity, and comparison to the operational spectral dynamical core of the IFS. C. Kühnlein, P.K. Smolarkiewicz: An unstructured-mesh finite-volume MPDATA for compressible atmospheric dynamics, J. Comput. Phys. (2017), in press. P.K. Smolarkiewicz, W. Deconinck, M. Hamrud, C. Kühnlein, G. Mozdzynski, J. Szmelter, N.P. Wedi: A finite-volume module for simulating global all-scale atmospheric flows, J. Comput. Phys. 314 (2016) 287-304. J. Szmelter, P.K. Smolarkiewicz: An edge-based unstructured mesh discretisation in geospherical framework, J. Comput. Phys. 229 (2010) 4980-4995.

  15. Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method

    PubMed Central

    Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter

    2017-01-01

    An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques the resulting algorithm is well suited for large-scale problems. Furthermore the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated. PMID:28098851

  16. Solution of Grad-Shafranov equation by the method of fundamental solutions

    NASA Astrophysics Data System (ADS)

    Nath, D.; Kalra, M. S.

    2014-06-01

    In this paper we have used the Method of Fundamental Solutions (MFS) to solve the Grad-Shafranov (GS) equation for the axisymmetric equilibria of tokamak plasmas with monomial sources. These monomials are the individual terms appearing on the right-hand side of the GS equation if one expands the nonlinear terms into polynomials. Unlike the Boundary Element Method (BEM), the MFS does not involve any singular integrals and is a meshless boundary-alone method. Its basic idea is to create a fictitious boundary around the actual physical boundary of the computational domain. This automatically removes the involvement of singular integrals. The results obtained by the MFS match well with the earlier results obtained using the BEM. The method is also applied to Solov'ev profiles and it is found that the results are in good agreement with analytical results.
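
    The MFS construction described above, point sources on a fictitious boundary and no singular integrals, is easy to demonstrate on a toy problem. The sketch below applies it to the Laplace equation on the unit disk rather than the Grad-Shafranov equation; the fictitious-circle radius and the point counts are illustrative choices only.

```python
import numpy as np

def mfs_laplace_disk(g, n_col=60, n_src=60, r_src=2.0):
    """Method of fundamental solutions for Laplace's equation on the unit
    disk: free-space Green's functions centred on a fictitious circle of
    radius r_src, fitted to Dirichlet data g at boundary collocation points."""
    tc = 2 * np.pi * np.arange(n_col) / n_col
    ts = 2 * np.pi * np.arange(n_src) / n_src
    xc = np.column_stack([np.cos(tc), np.sin(tc)])          # collocation points
    ys = r_src * np.column_stack([np.cos(ts), np.sin(ts)])  # fictitious sources
    G = lambda x, y: -np.log(np.linalg.norm(x[:, None] - y[None, :], axis=-1)) / (2 * np.pi)
    c = np.linalg.lstsq(G(xc, ys), g(xc[:, 0], xc[:, 1]), rcond=None)[0]
    return lambda x, y: G(np.column_stack([np.atleast_1d(x), np.atleast_1d(y)]), ys) @ c

# Harmonic test: boundary data x^2 - y^2 should be reproduced inside the disk
u = mfs_laplace_disk(lambda x, y: x**2 - y**2)
print(u(0.3, 0.4)[0], 0.3**2 - 0.4**2)   # both approximately -0.07
```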

  17. An Euler-Lagrange method considering bubble radial dynamics for modeling sonochemical reactors.

    PubMed

    Jamshidi, Rashid; Brenner, Gunther

    2014-01-01

    Unsteady numerical computations are performed to investigate the flow field, wave propagation, and the structure of bubbles in sonochemical reactors. The turbulent flow field is simulated using a two-equation Reynolds-Averaged Navier-Stokes (RANS) model. The distribution of the acoustic pressure is solved based on the Helmholtz equation using a finite volume method (FVM). The radial dynamics of a single bubble are described by the Keller-Miksis equation, which accounts for the compressibility of the liquid to first order in the acoustic Mach number. To investigate the structure of bubbles, a one-way coupled Euler-Lagrange approach is used to simulate the bulk medium and the bubbles as the dispersed phase. Drag, gravity, buoyancy, added mass, volume change, and primary Bjerknes forces are considered and their orders of magnitude are compared. To verify the implemented numerical algorithms, results for one- and two-dimensional simplified test cases are compared with analytical solutions. The results show good agreement with experimental results for the relationship between the acoustic pressure amplitude and the volume fraction of the bubbles. The two-dimensional axi-symmetric results are in good agreement with the experimentally observed structure of bubbles close to the sonotrode.
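
    The radial-dynamics model named above can be written down directly. The sketch below integrates one common form of the Keller-Miksis equation (liquid compressibility retained to first order in the acoustic Mach number, with the viscous part of the pressure derivative moved to the left-hand side); the physical constants and driving parameters are illustrative, not the paper's reactor conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water-like constants and an illustrative ultrasonic driving
rho, c, S, mu = 998.0, 1500.0, 0.072, 1.0e-3  # density, sound speed, surface tension, viscosity
p0, kappa = 101325.0, 1.4                     # ambient pressure, polytropic exponent
R0, pa, f = 5.0e-6, 1.3e5, 26.5e3             # rest radius, drive amplitude, frequency
omega = 2 * np.pi * f
pg0 = p0 + 2 * S / R0                         # equilibrium gas pressure

def keller_miksis(t, y):
    """Keller-Miksis radial dynamics of a single acoustically driven bubble."""
    R, Rdot = y
    p_gas = pg0 * (R0 / R) ** (3 * kappa)
    p_L = p_gas - 2 * S / R - 4 * mu * Rdot / R - p0 - pa * np.sin(omega * t)
    # d(p_L)/dt with the Rddot part of the viscous term moved to the LHS
    dp_L = (-3 * kappa * p_gas * Rdot / R + 2 * S * Rdot / R**2
            + 4 * mu * Rdot**2 / R**2 - pa * omega * np.cos(omega * t))
    num = (1 + Rdot / c) * p_L / rho + R * dp_L / (rho * c) \
          - 1.5 * (1 - Rdot / (3 * c)) * Rdot**2
    den = (1 - Rdot / c) * R + 4 * mu / (rho * c)
    return [Rdot, num / den]

sol = solve_ivp(keller_miksis, (0.0, 5.0 / f), [R0, 0.0],
                rtol=1e-8, atol=1e-12, max_step=1e-8)
print(sol.y[0].max() / R0)   # expansion ratio over five driving cycles
```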

  18. Analysis of random structure-acoustic interaction problems using coupled boundary element and finite element methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Pates, Carl S., III

    1994-01-01

    A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior, two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simplistic model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended to random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.

  19. Coupled Finite Volume and Finite Element Method Analysis of a Complex Large-Span Roof Structure

    NASA Astrophysics Data System (ADS)

    Szafran, J.; Juszczyk, K.; Kamiński, M.

    2017-12-01

    The main goal of this paper is to present a coupled Computational Fluid Dynamics and structural analysis for the precise determination of the wind impact on internal forces and deformations of structural elements of a long-span roof structure. The Finite Volume Method (FVM) serves to solve the fluid flow problem and model the air flow around the structure; its results are applied in turn as boundary tractions in the Finite Element Method (FEM) structural solution for linear elastostatics with small deformations. The first part is carried out with the use of the ANSYS 15.0 computer system, whereas the FEM system Robot supports the stress analysis of particular roof members. A comparison of the wind pressure distribution over the roof surface shows some differences with respect to that available in engineering design codes like the Eurocode, which deserves separate further numerical study. The coupling of these two separate numerical techniques appears promising for future computational models of a stochastic nature in large-scale structural systems based on the stochastic perturbation method.

  20. Coupled discrete element and finite volume solution of two classical soil mechanics problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Feng; Drumm, Eric; Guiochon, Georges A

    One-dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results are compared with the analytical solutions. The two-phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.
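
    The particle-fluid interaction term attributed to Ergun [12] above is the classical packed-bed correlation. A minimal sketch of the corresponding pressure gradient, with illustrative fluid and bed parameters (the exact coupling-force form used in the coupled code may differ):

```python
def ergun_pressure_gradient(U, eps, d_p, rho_f=1000.0, mu_f=1.0e-3):
    """Ergun correlation: pressure gradient [Pa/m] through a bed of spheres.
    U: superficial fluid velocity [m/s], eps: porosity, d_p: grain size [m]."""
    viscous = 150.0 * mu_f * (1.0 - eps) ** 2 / (eps**3 * d_p**2) * U
    inertial = 1.75 * rho_f * (1.0 - eps) / (eps**3 * d_p) * U * abs(U)
    return viscous + inertial

# Example: slow upward water flow through a fine sand-like bed
print(ergun_pressure_gradient(U=0.001, eps=0.4, d_p=0.5e-3))  # ~3.4e3 Pa/m
```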

  1. Investigation of Coupled model of Pore network and Continuum in shale gas

    NASA Astrophysics Data System (ADS)

    Cao, G.; Lin, M.

    2016-12-01

    Flow in shale spans many scales, which renders most conventional treatment methods ineffective. For effective simulation, a coupled model of pore scale and continuum scale is proposed in this paper. Based on SEM images, we decompose organic-rich shale into two subdomains: kerogen and inorganic matrix. In kerogen, the nanoscale pore network is the main storage space and migration pathway, so molecular phenomena (slip and diffusive transport) are significant. In the inorganic matrix, with relatively large pores and micro-fractures, the flow is approximately Darcy flow. We use pore-scale network models (PNM) to represent kerogen and continuum-scale models (FVM or FEM) to represent the matrix. Finite element mortars are employed to couple pore- and continuum-scale models by enforcing continuity of pressures and fluxes at shared boundary interfaces. In our method, the process in the coupled model is described by a pressure-squared equation with Dirichlet boundary conditions. We discuss several problems: the optimal element number of mortar faces, the two categories of boundary faces of the pore network, the difference between 2D and 3D models, and the difference between the continuum models FVM and FEM in the mortars. We conclude that: (1) too coarse a mesh in the mortars decreases the accuracy, while too fine a mesh leads to an ill-conditioned, even singular, system; the optimal element number depends on the number of boundary pores and nodes; (2) where pore network models are adjacent to two different mortar faces (PNM to PNM, PNM to continuum model), incidental repeated mortar nodes must be deleted; (3) 3D models can be replaced by 2D models under certain conditions; (4) FVM is more convenient than FEM, owing to its simplicity in assigning interface node pressures and calculating interface fluxes. This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB10020302), the 973 Program (2014CB239004), and the Key Instrument Developing Project of the …

  2. BEM for wave equation with boundary in arbitrary motion and applications to compressible potential aerodynamics of airplanes and helicopters

    NASA Technical Reports Server (NTRS)

    Morino, Luigi; Bharadvaj, Bala K.; Freedman, Marvin I.; Tseng, Kadin

    1988-01-01

    The wave equation for an object in arbitrary motion is investigated analytically using a BEM approach, and practical applications to potential flows of compressible fluids around aircraft wings and helicopter rotors are considered. The treatment accounts for arbitrary combined rotational and translational motion of the reference frame and for the wake motion. The numerical implementation as a computer algorithm is demonstrated on problems with prescribed and free wakes, the former in compressible flows and the latter for incompressible flows; results are presented graphically and briefly characterized.

  3. The Reduction of Ducted Fan Engine Noise Via A Boundary Integral Equation Method

    NASA Technical Reports Server (NTRS)

    Tweed, J.; Dunn, M.

    1997-01-01

    The development of a Boundary Integral Equation Method (BIEM) for the prediction of ducted fan engine noise is discussed. The method is motivated by the need for an efficient and versatile computational tool to assist in parametric noise reduction studies. In this research, the work in reference 1 was extended to include passive noise control treatment on the duct interior. The BIEM considers the scattering of incident sound generated by spinning point thrust dipoles in a uniform flow field by a thin cylindrical duct. The acoustic field is written as a superposition of spinning modes, and modal coefficients of acoustic pressure are calculated term by term. The BIEM theoretical framework is based on Helmholtz potential theory. A boundary value problem is converted to a boundary integral equation formulation with unknown single and double layer densities on the duct wall. After solving for the unknown densities, the acoustic field is easily calculated. The main feature of the BIEM is the ability to compute any portion of the sound field without the need to compute the entire field. Other noise prediction methods such as CFD and finite element methods lack this property. Additional BIEM attributes include versatility, ease of use, rapid noise predictions, coupling of propagation and radiation both forward and aft, implementation on midrange personal computers, and validity over a wide range of frequencies.

  4. Translating building information modeling to building energy modeling using model view definition.

    PubMed

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  5. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    PubMed Central

    Kim, Jong Bum; Clayton, Mark J.; Haberl, Jeff S.

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented, declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using the LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data in building energy simulation without an import/export process. PMID:25309954

  6. A broadband fast multipole accelerated boundary element method for the three dimensional Helmholtz equation.

    PubMed

    Gumerov, Nail A; Duraiswami, Ramani

    2009-01-01

    The development of a fast multipole method (FMM) accelerated iterative solution of the boundary element method (BEM) for the Helmholtz equations in three dimensions is described. The FMM for the Helmholtz equation is significantly different for problems with low and high kD (where k is the wavenumber and D the domain size), and for large problems the method must be switched between levels of the hierarchy. The BEM requires several approximate computations (numerical quadrature, approximations of the boundary shapes using elements), and these errors must be balanced against approximations introduced by the FMM and the convergence criterion for iterative solution. These different errors must all be chosen in a way that, on the one hand, excess work is not done and, on the other, that the error achieved by the overall computation is acceptable. Details of translation operators for low and high kD, choice of representations, and BEM quadrature schemes, all consistent with these approximations, are described. A novel preconditioner using a low accuracy FMM accelerated solver as a right preconditioner is also described. Results of the developed solvers for large boundary value problems with 0.0001 ≲ kD ≲ 500 are presented and shown to perform close to theoretical expectations.
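    The low-/high-kD switching described here is commonly driven by a truncation-order estimate. A minimal sketch, assuming the widely quoted "excess bandwidth" rule of thumb, which is a heuristic and not necessarily the exact criterion used in the paper:

        import numpy as np

        # "Excess bandwidth" rule of thumb for the multipole truncation order
        # at wave size ka; the 1.8 * digits**(2/3) correction is a commonly
        # quoted heuristic, not necessarily the paper's exact criterion.
        def truncation_order(ka, digits=6):
            return int(np.ceil(ka + 1.8 * digits ** (2.0 / 3.0) * ka ** (1.0 / 3.0)))

        # p is dominated by the accuracy correction at low ka and grows
        # linearly at high ka, which is why a broadband code switches
        # representations between levels of the hierarchy.
        for ka in (0.01, 1.0, 50.0, 250.0):
            print(ka, truncation_order(ka))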

  7. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

    Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE), in order to probe the internal structure of the medium, is investigated in the present work. The forward model of FD-RTE is solved via the finite volume method (FVM). The regularization term, formulated by the generalized Gaussian Markov random field model, is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates that the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).
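    The reconstruction loop pairs an adjoint-style gradient with conjugate-gradient minimization of a regularized objective. A minimal sketch of that structure, with a random linear operator standing in for the FD-RTE forward model and plain Tikhonov regularization in place of the generalized Gaussian Markov random field term (all names and sizes are hypothetical):

        import numpy as np

        # Nonlinear (Fletcher-Reeves) conjugate gradient minimization of
        # F(x) = ||A x - y||^2 + lam * ||x||^2. The random matrix A is a
        # hypothetical linear surrogate for the FD-RTE forward model.
        rng = np.random.default_rng(0)
        m, n = 80, 50
        A = rng.standard_normal((m, n))
        x_true = np.zeros(n); x_true[20:30] = 1.0
        y = A @ x_true + 0.01 * rng.standard_normal(m)
        lam = 1e-2

        def grad(x):
            return 2.0 * A.T @ (A @ x - y) + 2.0 * lam * x

        x = np.zeros(n); g = grad(x); d = -g
        for _ in range(200):
            Ad = A @ d
            t = -(g @ d) / (2.0 * (Ad @ Ad + lam * d @ d))  # exact quadratic line search
            x = x + t * d
            g_new = grad(x)
            d = -g_new + (g_new @ g_new) / (g @ g) * d      # Fletcher-Reeves update
            g = g_new
            if np.linalg.norm(g) < 1e-8:
                break
        print(np.linalg.norm(x - x_true))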

  8. Relevance of aerodynamic modelling for load reduction control strategies of two-bladed wind turbines

    NASA Astrophysics Data System (ADS)

    Luhmann, B.; Cheng, P. W.

    2014-06-01

    A new load reduction concept is being developed for the two-bladed prototype of the Skywind 3.5MW wind turbine. Due to transport and installation advantages both offshore and in complex terrain, two-bladed turbine designs are potentially more cost-effective than comparable three-bladed configurations. A disadvantage of two-bladed wind turbines is the increased fatigue loading, which is a result of asymmetrically distributed rotor forces. The innovative load reduction concept of the Skywind prototype consists of a combination of cyclic pitch control and tumbling rotor kinematics to mitigate periodic structural loading. Aerodynamic design tools must be able to model the advanced dynamics of the rotor correctly. In this paper the impact of the aerodynamic modelling approach is investigated for critical operational modes of a two-bladed wind turbine. Using a lifting line free wake vortex code (FVM), the physical limitations of the classical blade element momentum theory (BEM) can be evaluated. During regular operation, vertical shear and yawed inflow are the main contributors to periodic blade load asymmetry. It is shown that the near wake interaction of the blades under such conditions is not fully captured by the correction models of the BEM approach. The differing prediction of local induction causes a high fatigue load uncertainty, especially for two-bladed turbines. The implementation of both cyclic pitch control and a tumbling rotor can mitigate the fatigue loading by increasing the aerodynamic and structural damping. The influence of the time and space variant vorticity distribution in the near wake is evaluated in detail for different cyclic pitch control functions and tumble dynamics respectively. It is demonstrated that dynamic inflow as well as wake blade interaction have a significant impact on the calculated blade forces and need to be accounted for by the aerodynamic modelling approach. Aeroelastic simulations are carried out using the high fidelity multi body
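    The classical BEM referred to here is blade element momentum theory. A minimal sketch of its fixed-point iteration for the induction factors at one blade station, using thin-airfoil lift with no drag, tip-loss, or high-induction corrections; all numbers are illustrative, not Skywind data:

        import numpy as np

        def bem_induction(tsr_local, sigma, twist, tol=1e-8):
            # Axial (a) and tangential (ap) induction at one blade station.
            # tsr_local = omega*r/V, sigma = local solidity, twist in radians.
            # Thin-airfoil lift Cl = 2*pi*alpha, zero drag -- illustrative only.
            a, ap = 0.3, 0.0
            for _ in range(500):
                phi = np.arctan2(1.0 - a, (1.0 + ap) * tsr_local)  # inflow angle
                alpha = phi - twist
                Cl = 2.0 * np.pi * alpha
                Cn = Cl * np.cos(phi)              # normal force coefficient
                Ct = Cl * np.sin(phi)              # tangential force coefficient
                a_new = 1.0 / (4.0 * np.sin(phi)**2 / (sigma * Cn) + 1.0)
                ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * Ct) - 1.0)
                if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
                    break
                a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)  # relax for stability
            return a, ap

        print(bem_induction(tsr_local=6.0, sigma=0.05, twist=np.radians(2.0)))

    It is exactly the local, quasi-steady character of this iteration, with empirical corrections bolted on, that the free wake vortex comparison in the paper probes.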

  9. Blade pitch optimization methods for vertical-axis wind turbines

    NASA Astrophysics Data System (ADS)

    Kozak, Peter

    Vertical-axis wind turbines (VAWTs) offer an inherently simpler design than horizontal-axis machines, while their lower blade speed mitigates safety and noise concerns, potentially allowing for installation closer to populated and ecologically sensitive areas. While VAWTs do offer significant operational advantages, development has been hampered by the difficulty of modeling the aerodynamics involved, further complicated by their rotating geometry. This thesis presents results from a simulation of a baseline VAWT computed using Star-CCM+, a commercial finite-volume method (FVM) code. VAWT aerodynamics are shown to be dominated at low tip-speed ratios by dynamic stall phenomena and at high tip-speed ratios by wake-blade interactions. Several optimization techniques have been developed for the adjustment of blade pitch based on finite-volume simulations and streamtube models. The effectiveness of the optimization procedure is evaluated and a basic architecture for a feedback control system is proposed. Implementation of variable blade pitch is shown to increase a baseline turbine's power output by 40-100%, depending on the optimization technique, improving the turbine's competitiveness compared with a commercially available horizontal-axis turbine.

  10. A Novel Quasi-3D Method for Cascade Flow Considering Axial Velocity Density Ratio

    NASA Astrophysics Data System (ADS)

    Chen, Zhiqiang; Zhou, Ming; Xu, Quanyong; Huang, Xudong

    2018-03-01

    A novel quasi-3D Computational Fluid Dynamics (CFD) method for mid-span flow simulation of compressor cascades is proposed. The two-dimensional (2D) Reynolds-Averaged Navier-Stokes (RANS) method is shown to face challenges in predicting mid-span flow with a unity Axial Velocity Density Ratio (AVDR). The three-dimensional (3D) RANS solution also shows distinct discrepancies if the AVDR is not predicted correctly. In this paper, the discrepancies between 2D and 3D CFD results are analyzed and a novel quasi-3D CFD method is proposed. The new quasi-3D model is derived by reducing the 3D RANS Finite Volume Method (FVM) discretization over a one-spanwise-layer structured mesh cell. The sidewall effect is considered in two parts. The first part is the explicit interface fluxes of mass, momentum and energy as well as turbulence. The second part is a cell boundary scaling factor representing sidewall boundary layer contraction. The performance of the novel quasi-3D method is validated on mid-span pressure distribution, pressure loss and shock prediction for two typical cascades. The results show good agreement with the experimental data for cascade SJ301-20 and cascade AC6-10 at all test conditions. The proposed quasi-3D method shows superior accuracy over the traditional 2D RANS method and the 3D RANS method in performance prediction of compressor cascades.

  11. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly singular integral equations of the first kind in 2D. We analyze a residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  12. Roughness and pH changes of enamel surface induced by soft drinks in vitro-applications of stylus profilometry, focus variation 3D scanning microscopy and micro pH sensor.

    PubMed

    Fujii, Mie; Kitasako, Yuichi; Sadr, Alireza; Tagami, Junji

    2011-01-01

    This study aimed to evaluate enamel surface roughness (Ra) and pH before and after erosion by soft drinks. Enamel was exposed to a soft drink (cola, orange juice or green tea) for 1, 5 or 60 min; Ra was measured using contact-stylus surface profilometry (SSP) and a non-contact focus variation 3D microscope (FVM). Surface pH was measured using a micro pH sensor. Data were analyzed at a significance level of α = 0.05. There was a significant correlation in Ra between SSP and FVM. FVM images showed no changes in the surface morphology after various periods of exposure to green tea. Unlike cola and orange juice, exposure to green tea did not significantly affect Ra or pH. A significant correlation was observed between surface pH and Ra change after exposure to the drinks. Optical surface analysis and a micro pH sensor may be useful tools for non-damaging, quantitative assessment of soft drink erosion on enamel.

  13. Geometric and boundary element method simulations of acoustic reflections from rough, finite, or non-planar surfaces

    NASA Astrophysics Data System (ADS)

    Rathsam, Jonathan

    This dissertation seeks to advance the current state of computer-based sound field simulations for room acoustics. The first part of the dissertation assesses the reliability of geometric sound-field simulations, which are approximate in nature. The second part of the dissertation uses the rigorous boundary element method (BEM) to learn more about reflections from finite reflectors: planar and non-planar. Acoustical designers commonly use geometric simulations to predict sound fields quickly. Geometric simulation of reflections from rough surfaces is still under refinement. The first project in this dissertation investigates the scattering coefficient, which quantifies the degree of diffuse reflection from rough surfaces. The main result is that predicted reverberation time varies inversely with scattering coefficient if the sound field is nondiffuse. Additional results include a flow chart that enables acoustical designers to gauge how sensitive predicted results are to their choice of scattering coefficient. Geometric acoustics is a high-frequency approximation to wave acoustics. At low frequencies, more pronounced wave phenomena cause deviations between real-world values and geometric predictions. Acoustical designers encounter the limits of geometric acoustics in particular when simulating the low frequency response from finite suspended reflector panels. This dissertation uses the rigorous BEM to develop an improved low-frequency radiation model for smooth, finite reflectors. The improved low frequency model is suggested in two forms for implementation in geometric models. Although BEM simulations require more computation time than geometric simulations, BEM results are highly accurate. The final section of this dissertation uses the BEM to investigate the sound field around non-planar reflectors. The author has added convex edges rounded away from the source side of finite, smooth reflectors to minimize coloration of reflections caused by interference from

  14. Numerical analysis on interactions between fluid flow and structure deformation in plate-fin heat exchanger by Galerkin method

    NASA Astrophysics Data System (ADS)

    Liu, Jing-cheng; Wei, Xiu-ting; Zhou, Zhi-yong; Wei, Zhen-wen

    2018-03-01

    The fluid-structure interaction performance of the plate-fin heat exchanger (PFHE) with serrated fins in large-scale air-separation equipment was investigated in this paper. The stress and deformation of the fins were analyzed, and the interaction equations were derived by the Galerkin method. The governing equations of fluid flow and heat transfer in the PFHE were derived by the finite volume method (FVM). The distributions of strain and stress were calculated in large-scale air separation equipment, and the coupling behaviour of the serrated fins under laminar conditions was analyzed. The results indicated that the interactions between the fins and the fluid flow in the exchanger have significant impacts on heat transfer enhancement, and that the strain and stress on the fins, arising from the dynamic pressure of the sealing head and from flow impact, grow with increasing flow velocity. The impacts are especially significant at the junction of two fins because of fin misalignment. It can be concluded that the soldering process and channel width led to structural deformation of the fins in the exchanger and degraded the heat transfer efficiency.

  15. Pregnancy outcome and placental pathology in small for gestational age neonates in relation to the severity of their growth restriction.

    PubMed

    Gluck, Ohad; Schreiber, Letizia; Marciano, Adi; Mizrachi, Yossi; Bar, Jacob; Kovo, Michal

    2017-12-03

    To investigate neonatal outcome and placental pathology in pregnancies complicated by small for gestational age (SGA) neonates, in relation to the severity of growth restriction. The medical records and placental histology reports of all neonates with a birth weight (BW) ≤10th percentile, born between 24-42 weeks during 2010-2015, were reviewed. Placental lesions were classified into maternal and fetal vascular malperfusion (MVM and FVM) lesions. Results were compared between neonates with BW <5th percentile (severe SGA group), neonates with BW between the 5th-10th percentiles (mild SGA group) and a control group of appropriate for gestational age (AGA) neonates. Composite neonatal outcome was defined as one or more early complications. Overall, 753 neonates were included: 238 in the severe SGA group, 266 in the mild SGA group, and 249 in the control group. The severe SGA group had higher rates of composite adverse neonatal outcome than the mild SGA and control groups (37.2% versus 17.6% versus 24.5%, respectively; p < .001). The SGA groups were characterized by higher rates of placental MVM and FVM lesions compared with controls (p < .001 for both). After controlling for confounders using a multivariate regression analysis, the likelihood of detecting placental MVM and FVM lesions increased as neonatal birth weight decreased. Worse neonatal outcome and more placental MVM and FVM lesions correlate with the severity of neonatal growth restriction in a "dose-dependent" manner.

  16. Coupled heat transfer model and experiment study of semitransparent barrier materials in aerothermal environment

    NASA Astrophysics Data System (ADS)

    Wang, Da-Lin; Qi, Hong

    Semi-transparent materials (such as IR optical windows) are widely used for heat protection and transfer, temperature and image measurement, and safety in energy, space, military, and information technology applications. They are used, for instance, as ceramic coatings for the thermal barriers of spacecraft or gas turbine blades, and for thermal image observation in extreme or hazardous environments. In this paper, a coupled conduction and radiation heat transfer model is established to describe the temperature distribution of a semitransparent thermal barrier medium in an aerothermal environment. To investigate this numerical model, one semi-transparent sample with a black coating was considered, and its photothermal properties were measured. The finite volume method (FVM) was then used to solve the coupled model, and the temperature responses of the sample surfaces were obtained. An experimental study was also carried out: aerodynamic heat flux was simulated by an electrical heater, and two experimental cases differing in the duration of aerodynamic heating were designed. In the first case the heater irradiates one surface of the sample continuously until the temperature of the opposite surface becomes constant; in the second case the heater operates for only 130 s. The surface temperature responses of both cases were recorded. Finally, the FVM model of the coupled conduction-radiation heat transfer was validated against the experimental study, with a relative error of less than 5%.
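    A much reduced sketch of the kind of FVM marching involved: explicit 1-D finite volume conduction through a slab with an imposed aerodynamic-like heat flux on the front face and grey-body radiative loss on the back face. All material and loading values below are hypothetical, and the radiative participation of the semitransparent medium itself is omitted for brevity:

        import numpy as np

        # Explicit 1-D finite volume conduction through a slab: imposed
        # front-face heat flux, grey-body radiative loss at the back face.
        L, N = 0.01, 50                   # thickness [m], control volumes
        kc, rho, cp = 2.0, 3000.0, 800.0  # conductivity, density, specific heat
        eps, sigma = 0.9, 5.67e-8         # emissivity, Stefan-Boltzmann constant
        q_in, T_inf = 5.0e4, 300.0        # front-face flux [W/m^2], surroundings [K]
        dx = L / N
        dt = 0.4 * rho * cp * dx**2 / kc  # safely below the explicit stability limit
        T = np.full(N, 300.0)

        for _ in range(20000):
            q = -kc * np.diff(T) / dx                    # interface fluxes
            q_rad = eps * sigma * (T[-1]**4 - T_inf**4)  # back-face radiation
            dT = np.empty(N)
            dT[0] = (q_in - q[0]) / (rho * cp * dx)
            dT[1:-1] = (q[:-1] - q[1:]) / (rho * cp * dx)
            dT[-1] = (q[-1] - q_rad) / (rho * cp * dx)
            T += dt * dT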

  17. On the Treatment of Field Quantities and Elemental Continuity in FEM Solutions.

    PubMed

    Jallepalli, Ashok; Docampo-Sanchez, Julia; Ryan, Jennifer K; Haimes, Robert; Kirby, Robert M

    2018-01-01

    As the finite element method (FEM) and the finite volume method (FVM), both traditional and high-order variants, continue their proliferation into various applied engineering disciplines, it is important that the visualization techniques and corresponding data analysis tools that act on the results produced by these methods faithfully represent the underlying data. To state this another way: the interpretation of data generated by simulation needs to be consistent with the numerical schemes that underpin the specific solver technology. As the verifiable visualization literature has demonstrated, visual artifacts produced by the introduction of either explicit or implicit data transformations, such as data resampling, can sometimes distort or even obfuscate key scientific features in the data. In this paper, we focus on the handling of elemental continuity, where fields are often only C0-continuous or piecewise discontinuous, when visualizing primary or derived fields from FEM or FVM simulations. We demonstrate that traditional data handling and visualization of these fields introduce visual errors. In addition, we show how the use of the recently proposed line-SIAC filter provides a way of handling elemental continuity issues in an accuracy-conserving manner, with the added benefit of casting the data in a smooth context even if the representation is element discontinuous.

  18. International comparison of observation-specific spatial buffers: maximizing the ability to estimate physical activity.

    PubMed

    Frank, Lawrence D; Fox, Eric H; Ulmer, Jared M; Chapman, James E; Kershaw, Suzanne E; Sallis, James F; Conway, Terry L; Cerin, Ester; Cain, Kelli L; Adams, Marc A; Smith, Graham R; Hinckson, Erica; Mavoa, Suzanne; Christiansen, Lars B; Hino, Adriano Akira F; Lopes, Adalberto A S; Schipperijn, Jasper

    2017-01-23

    Advancements in geographic information systems over the past two decades have increased the specificity by which an individual's neighborhood environment may be spatially defined for physical activity and health research. This study investigated how different types of street network buffering methods compared in measuring a set of commonly used built environment measures (BEMs) and tested their performance on associations with physical activity outcomes. An internationally developed set of objective BEMs using three different spatial buffering techniques was used to evaluate the relative differences in resulting explanatory power on self-reported physical activity outcomes. BEMs were developed in five countries using 'sausage,' 'detailed-trimmed,' and 'detailed' network buffers at a distance of 1 km around participant household addresses (n = 5883). BEM values were significantly different (p < 0.05) for 96% of sausage versus detailed-trimmed buffer comparisons and 89% of sausage versus detailed network buffer comparisons. Results showed that BEM coefficients in physical activity models did not differ significantly across buffering methods, and in most cases BEM associations with physical activity outcomes had the same level of statistical significance across buffer types. However, BEM coefficients differed in significance for 9% of the sausage versus detailed models, which may warrant further investigation. Results of this study inform the selection of spatial buffering methods to estimate physical activity outcomes using an internationally consistent set of BEMs. Using three different network-based buffering methods, the findings indicate significant variation among BEM values; however, associations with physical activity outcomes were similar across each buffering technique. The study advances knowledge by presenting consistently assessed relationships between three different network buffer types and utilitarian travel, sedentary behavior, and leisure

  19. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.

  20. Application of the Boundary Element Method to Elastic Wave Scattering Problems in Ultrasonic Nondestructive Evaluation.

    NASA Astrophysics Data System (ADS)

    Schafbuch, Paul Jay

    The boundary element method (BEM) is used to numerically simulate the interaction of ultrasonic waves with material defects such as voids, inclusions, and open cracks. The time harmonic formulation is in 3D and therefore allows flaws of arbitrary shape to be modeled. The BEM makes such problems feasible because the underlying boundary integral equation only requires a surface (2D) integration and difficulties associated with the seemingly infinite extent of the host domain are not encountered. The computer code utilized in this work is built upon recent advances in elastodynamic boundary element theory such as a scheme for self adjusting integration order and singular integration regularization. Incident fields may be taken as compressional or shear plane waves or predicted by an approximate Gauss -Hermite beam model. The code is highly optimized for voids and has been coupled with computer aided engineering packages for automated flaw shape definition and mesh generation. Subsequent graphical display of intermediate results supports model refinement and physical interpretation. Final results are typically cast in a nondestructive evaluation (NDE) context as either scattering amplitudes or flaw signals (via a measurement model based on a reciprocity integral). The near field is also predicted which allows for improved physical insight into the scattering process and the evaluation of certain modeling approximations. The accuracy of the BEM approach is first examined by comparing its predictions to those of other models for single, isolated scatterers. The comparisons are with the predictions of analytical solutions for spherical defects and with MOOT and T-matrix calculations for axisymmetric flaws. Experimental comparisons are also made for volumetric shapes with different characteristic dimensions in all three directions, since no other numerical approach has yet produced results of this type. Theoretical findings regarding the fictitious eigenfrequency

  1. Application of the boundary element method to elastic wave scattering problems in ultrasonic nondestructive evaluation

    NASA Astrophysics Data System (ADS)

    Schafbuch, Paul Jay

    1991-02-01

    The boundary element method (BEM) is used to numerically simulate the interaction of ultrasonic waves with material defects such as voids, inclusions, and open cracks. The time harmonic formulation is in 3D and therefore allows flaws of arbitrary shape to be modeled. The BEM makes such problems feasible because the underlying boundary integral equation only requires a surface (2D) integration and difficulties associated with the seemingly infinite extent of the host domain are not encountered. The computer code utilized in this work is built upon recent advances in elastodynamic boundary element theory such as a scheme for self adjusting integration order and singular integration regularization. Incident fields may be taken as compressional or shear plane waves or predicted by an approximate Gauss-Hermite beam model. The code is highly optimized for voids and has been coupled with computer aided engineering packages for automated flaw shape definition and mesh generation. Subsequent graphical display of intermediate results supports model refinement and physical interpretation. Final results are typically cast in a nondestructive evaluation (NDE) context as either scattering amplitudes or flaw signals (via a measurement model based on a reciprocity integral). The near field is also predicted which allows for improved physical insight into the scattering process and the evaluation of certain modeling approximations. The accuracy of the BEM approach is first examined by comparing its predictions to those of other models for single, isolated scatterers. The comparisons are with the predictions of analytical solutions for spherical defects and with MOOT and T-matrix calculations for axisymmetric flaws. Experimental comparisons are also made for volumetric shapes with different characteristic dimensions in all three directions, since no other numerical approach has yet produced results of this type. Theoretical findings regarding the fictitious eigenfrequency difficulty

  2. Boundary element method for 2D materials and thin films.

    PubMed

    Hrtoň, M; Křápek, V; Šikola, T

    2017-10-02

    2D materials are emerging as a viable platform for the control of light at the nanoscale. In this context the need has arisen for a fast and reliable tool capable of capturing their strictly 2D nature in 3D light scattering simulations. So far, 2D materials and their patterned structures (ribbons, discs, etc.) have mostly been treated as very thin films of subnanometer thickness with an effective dielectric function derived from their 2D optical conductivity. In this study, an extension of the existing boundary element method (BEM) framework is presented in which 2D materials are treated as a conductive interface between two media. Testing our enhanced method on problems with known analytical solutions reveals that for certain types of tasks the new modification is faster than the original BEM algorithm. Furthermore, the representation of 2D materials as an interface allows us to simulate problems in which their optical properties depend on spatial coordinates. Such spatial dependence can occur naturally or can be tailored artificially to attain new functional properties.

  3. 3D Higher Order Modeling in the BEM/FEM Hybrid Formulation

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Wilton, D. R.

    2000-01-01

    Higher order divergence- and curl-conforming bases have been shown to provide significant benefits, in both convergence rate and accuracy, in the 2D hybrid finite element/boundary element formulation (P. Fink and D. Wilton, National Radio Science Meeting, Boulder, CO, Jan. 2000). A critical issue in achieving the potential for accuracy of the approach is the accurate evaluation of all matrix elements. These involve products of high order polynomials and, in some instances, singular Green's functions. In the 2D formulation, the use of a generalized Gaussian quadrature method was found to greatly facilitate the computation and to improve the accuracy of the boundary integral equation self-terms. In this paper, a 3D, hybrid electric field formulation employing higher order bases and higher order elements is presented. The improvements in convergence rate and accuracy, compared to those resulting from lower order modeling, are established. Techniques developed to facilitate the computation of the boundary integral self-terms are also shown to improve the accuracy of these terms. Finally, simple preconditioning techniques are used in conjunction with iterative solution procedures to solve the resulting linear system efficiently. In order to handle the boundary integral singularities in the 3D formulation, the parent element (either a triangle or a rectangle) is subdivided into a set of sub-triangles with a common vertex at the singularity. The contribution to the integral from each of the sub-triangles is computed using the Duffy transformation to remove the singularity. This method is shown to greatly facilitate the self-term computation when the bases are of higher order. In addition, the sub-triangles can be further divided to achieve near arbitrary accuracy in the self-term computation. An efficient method for subdividing the parent element is presented. The accuracy obtained using higher order bases is compared to that obtained using lower order bases when the number
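    The Duffy transformation mentioned above maps the unit square onto a triangle so that the Jacobian cancels a 1/r vertex singularity. A minimal sketch for a reference triangle with the singularity at the origin, checked against a closed-form value; this is a generic quadrature demonstration, not the paper's code:

        import numpy as np

        # Duffy transform: integrate f(x, y) / r over the reference triangle
        # {0 <= y <= x <= 1} with r = |(x, y)| singular at the origin vertex.
        # The map x = u, y = u * v has Jacobian u, which cancels 1/r exactly.
        def duffy_integrate(f, n=16):
            t, w = np.polynomial.legendre.leggauss(n)
            t, w = 0.5 * (t + 1.0), 0.5 * w              # Gauss rule on [0, 1]
            u, v = np.meshgrid(t, t, indexing="ij")
            x, y = u, u * v
            r = np.hypot(x, y)
            # integrand * Jacobian = (f / r) * u, a smooth function of (u, v)
            return float(np.sum(np.outer(w, w) * f(x, y) * u / r))

        approx = duffy_integrate(lambda x, y: np.ones_like(x))
        print(approx, np.arcsinh(1.0))    # exact value is ln(1 + sqrt(2))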

  4. Gender Differences: Examination of the 12-Item Bem Sex Role Inventory (BSRI-12) in an Older Brazilian Population

    PubMed Central

    Carver, Lisa F.; Vafaei, Afshin; Guerra, Ricardo; Freire, Aline; Phillips, Susan P.

    2013-01-01

    Objectives Although gender is often acknowledged as a determinant of health, measuring its components, other than biological sex, is uncommon. The Bem Sex Role Inventory (BSRI) quantifies self-attribution of traits, indicative of gender roles. The BSRI has been used with participants across cultures and countries, but rarely in an older population in Brazil, as we have done in this study. Our primary objective was to determine whether the BSRI-12 can be used to explore gender in an older Brazilian population. Methods The BSRI was completed by volunteer participants, all community-dwelling adults aged 65+ living in Natal, Brazil. Exploratory factor analysis was performed, followed by a varimax rotation (orthogonal solution) for iteration to examine the underlying gender roles of feminine, masculine, androgynous and undifferentiated, and to validate the BSRI in older adults in Brazil. Results The 278 participants (80 men, 198 women) were 65–99 years old (average 73.6 for men, 74.7 for women). The age difference between sexes was not significant (p = 0.22). A 12-item version of the BSRI (BSRI-12), previously validated among Spanish seniors, was used and showed validity, with 5 BSRI-12 items (Cronbach's α = 0.66) loading as feminine, 6 items (Cronbach's α = 0.51) loading onto masculine roles, and neither overlapping with the biological sex of the respondent. Conclusions Although the BSRI-12 appears to be a valid indicator of gender among elderly Brazilians, the gender role status identified with the BSRI-12 was not correlated with being male or female. PMID:24098482

  5. An assessment of the DORT method on simple scatterers using boundary element modelling.

    PubMed

    Gélat, P; Ter Haar, G; Saffari, N

    2015-05-07

    The ability to focus through ribs overcomes an important limitation of a high-intensity focused ultrasound (HIFU) system for the treatment of liver tumours. Whilst it is important to generate high enough acoustic pressures at the treatment location for tissue lesioning, it is also paramount to ensure that the resulting ultrasonic dose on the ribs remains below a specified threshold, since ribs both strongly absorb and reflect ultrasound. The DORT (décomposition de l'opérateur de retournement temporel) method has the ability to focus on and through scatterers immersed in an acoustic medium selectively without requiring prior knowledge of their location or geometry. The method requires a multi-element transducer and is implemented via a singular value decomposition of the measured matrix of inter-element transfer functions. The efficacy of a method of focusing through scatterers is often assessed by comparing the specific absorption rate (SAR) at the surface of the scatterer, and at the focal region. The SAR can be obtained from a knowledge of the acoustic pressure magnitude and the acoustic properties of the medium and scatterer. It is well known that measuring acoustic pressures with a calibrated hydrophone at or near a hard surface presents experimental challenges, potentially resulting in increased measurement uncertainties. Hence, the DORT method is usually assessed experimentally by measuring the SAR at locations on the surface of the scatterer after the latter has been removed from the acoustic medium. This is also likely to generate uncertainties in the acoustic pressure measurement. There is therefore a strong case for assessing the efficacy of the DORT method through a validated theoretical model. The boundary element method (BEM) applied to exterior acoustic scattering problems is well-suited for such an assessment. In this study, BEM was used to implement the DORT method theoretically on locally reacting spherical scatterers, and to assess its focusing
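    The core of DORT is a singular value decomposition of the measured inter-element transfer matrix; phase-conjugating a singular vector gives a transmit law that focuses on the associated scatterer. A minimal synthetic sketch of that pipeline, with a free-space 2-D Green's function kept only up to constant factors and a hypothetical array and scatterer geometry; no BEM is involved here:

        import numpy as np

        # DORT in miniature: synthetic transfer matrix K for two point
        # scatterers, SVD, then conjugate the first right singular vector.
        k = 2.0 * np.pi / 1.5e-3                           # ~1 MHz in water [1/m]
        xe = np.linspace(-0.02, 0.02, 64)
        elems = np.stack([xe, np.zeros_like(xe)], axis=1)  # 64-element line array
        scat = np.array([[0.004, 0.05], [-0.006, 0.07]])   # two point scatterers
        refl = np.diag([1.0, 0.3])                         # scattering strengths

        def G(a, b):   # 2-D free-space Green's function, up to constant factors
            r = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return np.exp(1j * k * r) / np.sqrt(r)

        K = G(elems, scat) @ refl @ G(scat, elems)         # inter-element transfer matrix
        U, s, Vh = np.linalg.svd(K)
        focus_law = np.conj(Vh[0])   # transmit weights focusing on the strongest scatterer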

  6. Analysis Method for Laterally Loaded Pile Groups Using an Advanced Modeling of Reinforced Concrete Sections.

    PubMed

    Stacul, Stefano; Squeglia, Nunziante

    2018-02-15

    A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil by a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction by increasing the stiffness of shallow portions of soil and modeled using the Modified Kovacs model; pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile groups analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing results from data from full scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data on a laterally loaded fixed-head pile group composed by reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.

  7. Analysis Method for Laterally Loaded Pile Groups Using an Advanced Modeling of Reinforced Concrete Sections

    PubMed Central

    2018-01-01

    A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil by a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction by increasing the stiffness of shallow portions of soil and modeled using the Modified Kovacs model; pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile groups analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing results from data from full scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data on a laterally loaded fixed-head pile group composed by reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ. PMID:29462857

  8. Numerical analysis of heat transfer in the exhaust gas flow in a diesel power generator

    NASA Astrophysics Data System (ADS)

    Brito, C. H. G.; Maia, C. B.; Sodré, J. R.

    2016-09-01

    This work presents a numerical study of heat transfer in the exhaust duct of a diesel power generator. The analysis was performed using two different approaches: the Finite Difference Method (FDM) and the Finite Volume Method (FVM), the latter by means of the commercial software ANSYS CFX®. In the FDM, the energy conservation equation was solved taking into account the estimated velocity profile for fully developed turbulent flow inside a tube and literature correlations for heat transfer. In the FVM, the mass conservation, momentum and energy equations were solved, together with transport equations for the turbulent quantities from the k-ω SST model. In both methods, variable properties were considered for the exhaust gas, composed of six species: CO2, H2O, H2, O2, CO and N2. The entry conditions for the numerical simulations were given by available experimental data. The results were evaluated for the engine operating under loads of 0, 10, 20, and 37.5 kW. Mesh and convergence tests were performed to determine the numerical error and uncertainty of the simulations. The results showed a trend of increasing temperature gradient with increasing load. The general behaviour of the velocity and temperature profiles obtained by the two numerical models was similar, with some divergence arising due to the assumptions made in the resolution of the models.
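    In the FDM branch, the bulk-gas energy balance along the duct reduces to a first-order ODE that can be marched explicitly. A heavily simplified sketch under hypothetical constants (constant properties, a single lumped film coefficient, fixed wall temperature), not the paper's model:

        import numpy as np

        # Explicit march of the bulk-gas energy balance along the duct:
        # m_dot * cp * dT/dx = -h * P * (T - T_wall)
        m_dot, cp = 0.05, 1100.0     # mass flow [kg/s], specific heat [J/kg.K]
        h, D = 35.0, 0.06            # film coefficient [W/m^2.K], diameter [m]
        P = np.pi * D                # wetted perimeter [m]
        T_wall, L, N = 350.0, 2.0, 200
        dx = L / N

        T = np.empty(N + 1)
        T[0] = 800.0                 # inlet gas temperature [K]
        for i in range(N):
            T[i + 1] = T[i] - dx * h * P * (T[i] - T_wall) / (m_dot * cp)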

  9. Imaging quality analysis of computer-generated holograms using the point-based method and slice-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.

    2017-06-01

    Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief calculation algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in imaging quality between them have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are given.
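    A minimal sketch of the point-based idea: superpose the spherical wave of each object point on the hologram plane and keep only the phase, giving a phase-only (kinoform) hologram. Wavelength, pixel pitch, and object points below are hypothetical; PB-FZP would replace the exact spherical phase by its paraxial quadratic (zone plate) approximation:

        import numpy as np

        lam = 532e-9                          # wavelength [m]
        k = 2.0 * np.pi / lam
        pitch, N = 8e-6, 512                  # pixel pitch [m], pixels per side
        xs = (np.arange(N) - N / 2) * pitch
        X, Y = np.meshgrid(xs, xs)

        points = [(0.0, 0.0, 0.10, 1.0),      # (x, y, z, amplitude)
                  (1e-3, -5e-4, 0.12, 0.8)]

        field = np.zeros((N, N), dtype=complex)
        for px, py, pz, amp in points:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += amp * np.exp(1j * k * r) / r   # spherical wave per point

        phase_hologram = np.angle(field)      # phase-only hologram in [-pi, pi)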

  10. Development Of A Numerical Tow Tank With Wave Generation To Supplement Experimental Efforts

    DTIC Science & Technology

    2017-12-01

    In 2016, NPS student Ensign Ryan Tran adapted an existing vertical plunging wedge wave maker design used at the U.S. Naval...

  11. A combined application of boundary-element and Runge-Kutta methods in three-dimensional elasticity and poroelasticity

    NASA Astrophysics Data System (ADS)

    Igumnov, Leonid; Ipatov, Aleksandr; Belov, Aleksandr; Petrov, Andrey

    2015-09-01

    The report presents the development of the time-boundary element methodology and a description of the related software, based on a stepped method of numerical inversion of the integral Laplace transform in combination with a family of Runge-Kutta methods, for analyzing 3-D mixed initial boundary-value problems of the dynamics of inhomogeneous elastic and poroelastic bodies. The results of the numerical investigation are presented. The investigation methodology is based on direct-approach boundary integral equations of the 3-D isotropic linear theories of elasticity and poroelasticity in Laplace transforms. Poroelastic media are described using Biot models with four and five base functions. With the help of the boundary-element method, solutions in time are obtained using the stepped method of numerically inverting the Laplace transform on the nodes of Runge-Kutta methods. The boundary-element method is used in combination with the collocation method and local element-by-element approximation based on the matched interpolation model. The results of analyzing wave problems of the effect of a non-stationary force on elastic and poroelastic finite bodies, a poroelastic half-space (also with a fictitious boundary), a layered half-space weakened by a cavity, and a half-space with a trench are presented. Excitation of a slow wave in a poroelastic medium is studied using the stepped BEM scheme on the nodes of Runge-Kutta methods.

  12. Stabilization of time domain acoustic boundary element method for the interior problem with impedance boundary conditions.

    PubMed

    Jang, Hae-Won; Ih, Jeong-Guon

    2012-04-01

    The time domain boundary element method (BEM) is associated with numerical instability that typically stems from the time marching scheme. In this work, a formulation of time domain BEM is derived to deal with all types of boundary conditions adopting a multi-input, multi-output, infinite impulse response structure. The fitted frequency domain impedance data are converted into a time domain expression as a form of an infinite impulse response filter, which can also invoke a modeling error. In the calculation, the response at each time step is projected onto the wave vector space of natural radiation modes, which can be obtained from the eigensolutions of the single iterative matrix. To stabilize the computation, unstable oscillatory modes are nullified, and the same decay rate is used for two nonoscillatory modes. As a test example, a transient sound field within a partially lined, parallelepiped box is used, within which a point source is excited by an octave band impulse. In comparison with the results of the inverse Fourier transform of a frequency domain BEM, the average of relative difference norm in the stabilized time response is found to be 4.4%.
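    The stabilization step can be pictured as eigenvalue surgery on the time-marching operator: decompose, zero the modes whose eigenvalues leave the unit circle, and reassemble. A toy sketch with a synthetic matrix; the paper instead operates on the single iterative matrix of the time domain BEM and additionally equalizes the decay rates of two nonoscillatory modes, which is omitted here:

        import numpy as np

        # Spectral "surgery" on a linear time-marching update x_{n+1} = M x_n.
        rng = np.random.default_rng(2)
        M = rng.standard_normal((6, 6)) / 2.2
        lam, V = np.linalg.eig(M)
        lam = np.where(np.abs(lam) > 1.0, 0.0, lam)        # nullify unstable modes
        M_stab = (V @ np.diag(lam) @ np.linalg.inv(V)).real
        print(np.max(np.abs(np.linalg.eigvals(M_stab))))   # spectral radius <= 1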

  13. Assessment of different radiative transfer equation solvers for combined natural convection and radiation heat transfer problems

    NASA Astrophysics Data System (ADS)

    Sun, Yujia; Zhang, Xiaobing; Howell, John R.

    2017-06-01

    This work investigates the performance of the DOM, FVM, P1, SP3 and P3 methods for 2D combined natural convection and radiation heat transfer for an absorbing, emitting medium. The Monte Carlo method is used to solve the RTE coupled with the energy equation, and its results are used as benchmark solutions. Effects of the Rayleigh number, Planck number and optical thickness are considered, all covering several orders of magnitude. Temperature distributions, heat transfer rate and computational performance in terms of accuracy and computing time are presented and analyzed.

  14. Numerical approach for ECT by using boundary element method with Laplace transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enokizono, M.; Todaka, T.; Shibao, K.

    1997-03-01

    This paper presents an inverse analysis using BEM with the Laplace transform. The method is applied to a simple problem in eddy current testing (ECT). Some crack shapes in a conductive specimen are estimated from distributions of the transient eddy current on its sensing surface and of the magnetic flux density in the liftoff space. Because the transient behavior includes information on various frequency components, the method is applicable to the shape estimation of comparatively small cracks.

  15. Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report

    NASA Technical Reports Server (NTRS)

    Ahmad, Shahid

    1991-01-01

    An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration and linear and nonlinear transient dynamic problems involving two- and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real-variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of the complexity of the formulation they have never been implemented in exact form. In the present work, linear and nonlinear time-domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all existing formulations of the BEM in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher-order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, so that not only can problems of layered media and soil-structure interaction be analyzed, but a large problem can also be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons

  16. THOR: an open-source exo-GCM

    NASA Astrophysics Data System (ADS)

    Grosheintz, Luc; Mendonça, João; Käppeli, Roger; Lukas Grimm, Simon; Mishra, Siddhartha; Heng, Kevin

    2015-12-01

    In this talk, I will present THOR, the first fully conservative, GPU-accelerated exo-GCM (general circulation model) on a nearly uniform, global grid that treats shocks and is non-hydrostatic. THOR will be freely available to the community as a standard tool. Unlike most GCMs, THOR solves the full, non-hydrostatic Euler equations instead of the primitive equations. The equations are solved on a global three-dimensional icosahedral grid by a second-order Finite Volume Method (FVM). Icosahedral grids are nearly uniform refinements of an icosahedron. We've implemented three different versions of this grid. The FVM conserves the prognostic variables (density, momentum and energy) exactly and doesn't require a diffusion term (artificial viscosity) in the Euler equations to stabilize our solver. Historically, FVM was designed to treat discontinuities correctly. Hence it excels at resolving shocks, including those present in hot exoplanetary atmospheres. Atmospheres are generally in near hydrostatic equilibrium. We therefore implement a well-balancing technique recently developed at ETH Zurich. This well-balancing ensures that our FVM maintains hydrostatic equilibrium to machine precision. Better yet, it is able to resolve pressure perturbations from this equilibrium as small as one part in 100,000. It is important to realize that these perturbations are significantly smaller than the truncation error of the same scheme without well-balancing. If during the course of the simulation (due to forcing) the atmosphere becomes non-hydrostatic, our solver continues to function correctly. THOR just passed an important milestone: we've implemented the explicit part of the solver. The explicit solver is useful for studying instabilities or local problems on relatively short time scales. I'll show some nice properties of the explicit THOR. An explicit solver is not appropriate for climate studies because the time step is limited by the sound speed. Therefore, we are working on the first fully

  17. Boundary Element Method in a Self-Gravitating Elastic Half-Space and Its Application to Deformation Induced by Magma Chambers

    NASA Astrophysics Data System (ADS)

    Fang, M.; Hager, B. H.

    2014-12-01

    In geophysical applications the boundary element method (BEM) often carries the essential physics in addition to being an efficient numerical scheme. For use of the BEM in a self-gravitating uniform half-space, we made extra effort and succeeded in deriving the fundamental solution analytically in closed-form. A problem that goes deep into the heart of the classic BEM is encountered when we try to apply the new fundamental solution in BEM for deformation field induced by a magma chamber or a fluid-filled reservoir. The central issue of the BEM is the singular integral arising from determination of the boundary values. A widely employed technique is to rescale the singular boundary point into a small finite volume and then shrink it to extract the limits. This operation boils down to the calculation of the so-called C-matrix. Authors in the past take the liberty of either adding or subtracting a small volume. By subtracting a small volume, the C-matrix is (1/2)I on a smooth surface, where I is the identity matrix; by adding a small volume, we arrive at the same C-matrix in the form of I - (1/2)I. This evenness is a result of the spherical symmetry of Kelvin's fundamental solution employed. When the spherical symmetry is broken by gravity, the C-matrix is polarized. And we face the choice between right and wrong, for adding and subtracting a small volume yield different C-matrices. Close examination reveals that both derivations, addition and subtraction of a small volume, are ad hoc. To resolve the issue we revisit the Somigliana identity with a new derivation and careful step-by-step anatomy. The result proves that even though both adding and subtracting a small volume appear to twist the original boundary, only addition essentially modifies the original boundary and consequently modifies the physics of the original problem in a subtle way. The correct procedure is subtraction. We complete a new BEM theory by introducing in full analytical form what we call the

  18. The validity of the 12-item Bem Sex Role Inventory in older Spanish population: an examination of the androgyny model.

    PubMed

    Vafaei, Afshin; Alvarado, Beatriz; Tomás, Concepcion; Muro, Carmen; Martinez, Beatriz; Zunzunegui, Maria Victoria

    2014-01-01

    The Bem Sex Role Inventory (BSRI) is the most commonly used and validated gender role measurement tool across countries and age groups. However, it has rarely been validated in older adults and only sporadically used in aging and health studies. Perceived gender role is a crucial part of a person's identity and an established determinant of health. The androgyny model suggests that those with high levels of both masculinity and femininity (androgynous) are more adaptive and hence have better health. Our objectives were to explore the validity of the BSRI in an older Spanish population, to compare different standard methods of measuring gender roles, and to examine their impact on health indicators. The BSRI and health indicator questions were completed by 120 community-dwelling adults aged 65+ living in Aragon, Spain. Exploratory factor analysis was performed to examine the psychometric properties of the BSRI. Androgyny was measured by three approaches: the geometric mean, the t-ratio, and the traditional four-gender-groups classification. Relationships between health indicators and gender roles were explored. Factor analysis resulted in a two-factor solution consistent with the original masculine and feminine items, with high loadings and good reliability. There were no associations between biological sex and gender roles. Different gender role measurement approaches classified participants differently into gender role groups. Overall, androgyny was associated with better mobility and physical and mental health. The traditional four-groups approach showed higher compatibility with the androgyny model and was better able to disentangle the differential impact of gender roles on health.

  19. Passive interior noise reduction analysis of King Air 350 turboprop aircraft using boundary element method/finite element method (BEM/FEM)

    NASA Astrophysics Data System (ADS)

    Dandaroy, Indranil; Vondracek, Joseph; Hund, Ron; Hartley, Dayton

    2005-09-01

    The objective of this study was to develop a vibro-acoustic computational model of the Raytheon King Air 350 turboprop aircraft with an intent to reduce propfan noise in the cabin. To develop the baseline analysis, an acoustic cavity model of the aircraft interior and a structural dynamics model of the aircraft fuselage were created. The acoustic model was an indirect boundary element method representation using SYSNOISE, while the structural model was a finite-element method normal modes representation in NASTRAN and subsequently imported to SYSNOISE. In the acoustic model, the fan excitation sources were represented employing the Ffowcs Williams-Hawkings equation. The acoustic and the structural models were fully coupled in SYSNOISE and solved to yield the baseline response of acoustic pressure in the aircraft interior and vibration on the aircraft structure due to fan noise. Various vibration absorbers, tuned to fundamental blade passage tone (100 Hz) and its first harmonic (200 Hz), were applied to the structural model to study their effect on cabin noise reduction. Parametric studies were performed to optimize the number and location of these passive devices. Effects of synchrophasing and absorptive noise treatments applied to the aircraft interior were also investigated for noise reduction.
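    Tuning a passive absorber to a target tone reduces to matching its natural frequency, f = (1/(2π))·sqrt(k/m). A small sketch computing the spring stiffness required at the two tones treated in the study; the absorber mass is a hypothetical value, not from the paper:

        import math

        # A tuned absorber's natural frequency is f = (1/(2*pi)) * sqrt(k/m),
        # so the stiffness needed for a target tone is k = m * (2*pi*f)**2.
        def absorber_stiffness(mass_kg, f_hz):
            return mass_kg * (2.0 * math.pi * f_hz) ** 2   # [N/m]

        for f in (100.0, 200.0):    # blade-passage tone and first harmonic
            print(f"{f:.0f} Hz: k = {absorber_stiffness(5.0, f):.3e} N/m")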

  20. Singular boundary method for global gravity field modelling

    NASA Astrophysics Data System (ADS)

    Cunderlik, Robert

    2014-05-01

    The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid the singular numerical integration, as well as the mesh generation, of the traditional boundary element method (BEM). SBM was proposed to overcome a main drawback of MFS: its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we apply SBM to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all three methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
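
    For readers unfamiliar with these collocation schemes, the sketch below shows the plain MFS for an exterior 3-D Laplace problem, with fictitious sources placed inside the boundary; SBM differs precisely in placing the sources on the boundary itself and regularizing the singular diagonal through origin intensity factors, which is not reproduced here. All geometry and values are illustrative.

    ```python
    import numpy as np

    def fundamental_solution(x, y):
        """3-D Laplace fundamental solution G(x, y) = 1 / (4*pi*|x - y|)."""
        r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
        return 1.0 / (4.0 * np.pi * r)

    rng = np.random.default_rng(0)

    # Collocation nodes on the unit sphere (the "real" boundary) ...
    n = 400
    v = rng.normal(size=(n, 3))
    nodes = v / np.linalg.norm(v, axis=1, keepdims=True)
    # ... and fictitious MFS sources on a smaller sphere inside it.
    sources = 0.8 * nodes

    # Boundary data from a known exterior harmonic field: a unit point
    # source at the origin, u(x) = 1/(4*pi*|x|), so u = 1/(4*pi) on |x| = 1.
    u_boundary = np.full(n, 1.0 / (4.0 * np.pi))

    # Collocation system: sum_j c_j * G(x_i, s_j) = u(x_i).
    A = fundamental_solution(nodes, sources)
    coeffs, *_ = np.linalg.lstsq(A, u_boundary, rcond=None)

    # Evaluate the MFS expansion at an exterior point and compare.
    x_test = np.array([[0.0, 0.0, 2.0]])
    u_mfs = fundamental_solution(x_test, sources) @ coeffs
    print(u_mfs[0], 1.0 / (4.0 * np.pi * 2.0))   # should nearly agree
    ```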

  1. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
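
    The weakly singular and hyper-singular BEM estimators analyzed in the paper are more technical, but the underlying ZZ idea (compare a recovered, smoothed gradient against the raw elementwise one) is easy to sketch for a 1-D P1 FEM solution, together with Dörfler marking for the adaptive loop. This is an illustrative FEM analogue, not the paper's BEM estimator.

    ```python
    import numpy as np

    def zz_estimate_and_mark(x, u, theta=0.5):
        """ZZ-type indicator in 1-D: recover a continuous gradient by
        distance-weighted nodal averaging, use its deviation from the raw
        piecewise-constant gradient as a local error indicator, then mark
        elements with Doerfler's criterion (smallest set carrying a
        fraction theta of the total estimated error)."""
        h = np.diff(x)
        g = np.diff(u) / h                    # elementwise gradient
        G = np.empty(len(x))                  # recovered nodal gradient
        G[1:-1] = (g[:-1] * h[1:] + g[1:] * h[:-1]) / (h[:-1] + h[1:])
        G[0], G[-1] = g[0], g[-1]
        # Exact L2 norm of (linear interpolant of G) - g on each element.
        a, b = G[:-1] - g, G[1:] - g
        eta = np.sqrt(h / 3.0 * (a * a + a * b + b * b))
        order = np.argsort(eta)[::-1]
        cum = np.cumsum(eta[order] ** 2)
        k = np.searchsorted(cum, theta * cum[-1]) + 1
        marked = np.zeros(len(eta), bool)
        marked[order[:k]] = True
        return eta, marked

    x = np.linspace(0.0, 1.0, 21) ** 2        # graded mesh on [0, 1]
    u = np.sin(np.pi * x)                      # stand-in FEM solution
    eta, marked = zz_estimate_and_mark(x, u)
    print(np.flatnonzero(marked))              # elements to refine next
    ```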

  2. Language Practitioners' Reflections on Method-Based and Post-Method Pedagogies

    ERIC Educational Resources Information Center

    Soomro, Abdul Fattah; Almalki, Mansoor S.

    2017-01-01

    Method-based pedagogies are commonly applied in teaching English as a foreign language all over the world. However, in the last quarter of the 20th century, the concept of such pedagogies, based on the application of a single best method in EFL, started to be viewed with concern by some scholars. In response to the growing concern against the…

  3. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concept of analytical redundancy relations (ARRs).

  4. The Base 32 Method: An Improved Method for Coding Sibling Constellations.

    ERIC Educational Resources Information Center

    Perfetti, Lawrence J. Carpenter

    1990-01-01

    Offers new sibling constellation coding method (Base 32) for genograms using binary and base 32 numbers that saves considerable microcomputer memory. Points out that new method will result in greater ability to store and analyze larger amounts of family data. (Author/CM)

  5. 3D Simulation of Multiple Simultaneous Hydraulic Fractures with Different Initial Lengths in Rock

    NASA Astrophysics Data System (ADS)

    Tang, X.; Rayudu, N. M.; Singh, G.

    2017-12-01

    Hydraulic fracturing is a widely used technique for extracting shale gas. During this process, fractures with various initial lengths are induced in the rock mass by hydraulic pressure. Understanding the mechanisms of propagation and interaction of these induced hydraulic cracks is critical for optimizing the fracking process. In this work, numerical results are presented for investigating the effect of in-situ parameters and fluid properties on the growth and interaction of multiple simultaneous hydraulic fractures. A fully coupled 3D fracture simulator, TOUGH-GFEM, is used to simulate the effect of key parameters, including in-situ stress, initial fracture length, fracture spacing, fluid viscosity and flow rate, on induced hydraulic fracture growth. The TOUGH-GFEM simulator is based on the 3D finite volume method (FVM) and the partition of unity element method (PUM). The displacement correlation method (DCM) is used for calculating multi-mode (Mode I, II, III) stress intensity factors, and a maximum principal stress criterion is used for crack propagation. Key words: hydraulic fracturing, TOUGH, partition of unity element method, displacement correlation method, 3D fracturing simulator
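
    As background on the SIF evaluation step, the displacement correlation idea can be sketched for pure Mode I under plane strain: the near-tip crack opening behaves as Δu = (8 K_I / E′)√(r / 2π) with E′ = E/(1 − ν²), so sampling Δu at a small distance r behind the tip yields K_I. The 3-D multi-mode version used in TOUGH-GFEM is more involved; the values below are illustrative only.

    ```python
    import math

    def sif_mode_i_dcm(delta_u, r, E, nu):
        """Mode-I stress intensity factor via displacement correlation,
        plane strain: delta_u = (8*K_I/E') * sqrt(r/(2*pi)) with
        E' = E/(1 - nu**2), hence K_I = (E'/8) * sqrt(2*pi/r) * delta_u.

        delta_u: crack opening [m] sampled at distance r [m] behind the tip.
        """
        e_prime = E / (1.0 - nu**2)
        return (e_prime / 8.0) * math.sqrt(2.0 * math.pi / r) * delta_u

    # Illustrative numbers: 0.1 mm opening 5 mm behind the tip in rock.
    print(sif_mode_i_dcm(delta_u=1e-4, r=5e-3, E=30e9, nu=0.25))  # K_I [Pa*sqrt(m)]
    ```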

  6. Active Electro-Location of Objects in the Underwater Environment Based on the Mixed Polarization Multiple Signal Classification Algorithm

    PubMed Central

    Guo, Lili; Qi, Junwei; Xue, Wei

    2018-01-01

    This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal or insulator target in the underwater environment using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target by means of a matrix equation. In this method, an electric dipole source, as part of the locating system, is set perpendicular to the plane of the UCA. As a result, the UCA receives only the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data, so there is no need to obtain the field component in each direction, in contrast to conventional field-based localization methods; this can be easily implemented in practical engineering applications. A simulation model and a physical experiment are constructed. The simulation and experiment results show accurate positioning performance, verifying the effectiveness of the proposed localization method for underwater target location. PMID:29439495
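
    The sketch below shows generic narrowband MUSIC with a UCA steering vector (the mixed-polarization variant of the paper is not reproduced): form the sample covariance, split off the noise subspace, and scan a grid of candidate directions. Array geometry and signal values are illustrative.

    ```python
    import numpy as np

    def music_spectrum(X, steering, n_sources):
        """Generic narrowband MUSIC: project candidate steering vectors
        onto the noise subspace of the sample covariance and invert.

        X: (n_elements, n_snapshots) complex snapshots.
        steering: (n_elements, n_angles) candidate steering vectors.
        """
        R = X @ X.conj().T / X.shape[1]            # sample covariance
        w, V = np.linalg.eigh(R)                   # ascending eigenvalues
        En = V[:, : X.shape[0] - n_sources]        # noise subspace
        proj = np.linalg.norm(En.conj().T @ steering, axis=0) ** 2
        return 1.0 / proj

    # Uniform circular array: element m at azimuth phi_m, radius a (in
    # wavelengths); plane-wave steering exp(j*2*pi*a*cos(theta - phi_m)).
    M, a = 8, 0.5
    phi = 2 * np.pi * np.arange(M) / M
    grid = np.linspace(-np.pi, np.pi, 721)
    A = np.exp(1j * 2 * np.pi * a * np.cos(grid[None, :] - phi[:, None]))

    # Synthetic test: one source at 40 degrees plus noise.
    rng = np.random.default_rng(1)
    s = rng.normal(size=200) + 1j * rng.normal(size=200)
    a0 = np.exp(1j * 2 * np.pi * a * np.cos(np.deg2rad(40.0) - phi))
    X = a0[:, None] * s[None, :] + 0.1 * (rng.normal(size=(M, 200))
                                          + 1j * rng.normal(size=(M, 200)))
    P = music_spectrum(X, A, n_sources=1)
    print(np.rad2deg(grid[np.argmax(P)]))          # peak near 40 degrees
    ```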

  7. Highly efficient preparation of sphingoid bases from glucosylceramides by chemoenzymatic method

    PubMed Central

    Gowda, Siddabasave Gowda B.; Usuki, Seigo; Hammam, Mostafa A. S.; Murai, Yuta; Igarashi, Yasuyuki; Monde, Kenji

    2016-01-01

    Sphingoid base derivatives have attracted increasing attention as promising chemotherapeutic candidates against lifestyle diseases such as diabetes and cancer. Natural sphingoid bases can be a potential resource, in place of those derived by time-consuming total organic synthesis. In particular, glucosylceramides (GlcCers) in food plants are enriched sources of sphingoid bases, differing from those of animals. Several chemical methodologies to transform GlcCers to sphingoid bases have already been investigated; however, these conventional methods using acid or alkaline hydrolysis are inefficient due to poor reaction yields, producing complex by-products and resulting in separation problems. In this study, an extremely efficient and practical chemoenzymatic transformation method has been developed using microwave-enhanced butanolysis of GlcCers and a large amount of readily available almond β-glucosidase for the deglycosylation of lysoGlcCers. The method is superior to conventional acid/base hydrolysis methods in its rapidity and the cleanness of its reactions (no isomerization, no rearrangement), with excellent overall yield. PMID:26667669

  8. Modeling and Analysis of the Static Characteristics and Dynamic Responses of Herringbone-grooved Thrust Bearings

    NASA Astrophysics Data System (ADS)

    Yu, Yunluo; Pu, Guang; Jiang, Kyle

    2017-12-01

    This paper describes a theoretical investigation of the static and dynamic characteristics of herringbone-grooved air thrust bearings. First, the finite difference method (FDM) and the finite volume method (FVM) are used in combination to solve the non-linear Reynolds equation, finding the pressure distribution of the film and the total load capacity of the bearing. The influence of design parameters on air film characteristics, including the air film thickness, the depth of the groove and the rotating speed, is analyzed based on the FDM model. The simulation results show that hydrostatic thrust bearings achieve a better load capacity with less air consumption than herringbone-grooved thrust bearings at low compressibility numbers; that herringbone-grooved thrust bearings achieve a higher load capacity, but with more air consumption, than hydrostatic thrust bearings at high compressibility numbers; and that herringbone-grooved thrust bearings lose stability at high rotating speeds, with stability increasing with the depth of the grooves.
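
    To show the discretization pattern behind such solvers, here is a finite-difference solve of the 1-D incompressible Reynolds equation d/dx(h³ dp/dx) = 6μU dh/dx for a linearly converging slider. The actual air-bearing problem is compressible and nonlinear, so this is only a structural sketch with illustrative values.

    ```python
    import numpy as np

    def slider_pressure(h, dx, mu, U):
        """Finite-difference solve of d/dx(h^3 dp/dx) = 6*mu*U*dh/dx with
        p = 0 (gauge) at both ends.  h: film thickness at the n nodes."""
        n = len(h)
        h3 = ((h[:-1] + h[1:]) / 2.0) ** 3        # face values of h^3
        hf = (h[:-1] + h[1:]) / 2.0               # face values of h
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0                 # Dirichlet ends
        for i in range(1, n - 1):
            A[i, i - 1] = h3[i - 1]
            A[i, i] = -(h3[i - 1] + h3[i])
            A[i, i + 1] = h3[i]
            b[i] = 6.0 * mu * U * (hf[i] - hf[i - 1]) * dx
        return np.linalg.solve(A, b)

    # Linearly converging air gap, illustrative values: 20 um -> 10 um.
    x = np.linspace(0.0, 0.02, 201)
    h = 2e-5 - 5e-4 * x
    p = slider_pressure(h, x[1] - x[0], mu=1.8e-5, U=10.0)
    print(p.max())                                 # peak gauge pressure [Pa]
    ```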

  9. An RBF-based compression method for image-based relighting.

    PubMed

    Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung

    2006-04-01

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.

  10. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed, since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentration. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method; however, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method. When estimated UCRs were lower for the group in the numerator (for example, NHW), these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
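
    A minimal numerical sketch of the two corrections (variable names and the synthetic data are illustrative): the ratio method divides by UCR, while the model-based method enters log UCR as a regressor, so a group difference in UCR itself does not leak into the corrected analyte comparison.

    ```python
    import numpy as np

    def model_based_group_diff(log_analyte, log_ucr, group):
        """Model-based correction: regress log analyte on log UCR plus a
        group indicator; beta[2] is the UCR-adjusted group difference."""
        X = np.column_stack([np.ones_like(log_ucr), log_ucr, group])
        beta, *_ = np.linalg.lstsq(X, log_analyte, rcond=None)
        return beta[2]

    rng = np.random.default_rng(2)
    n = 5000
    group = rng.integers(0, 2, n).astype(float)      # e.g. 0=female, 1=male
    log_ucr = 0.3 * group + rng.normal(0, 0.5, n)    # UCR differs by group
    log_y = 0.8 * log_ucr + rng.normal(0, 0.3, n)    # analyte: hydration only

    # Ratio method: log(analyte / UCR) shows a spurious group gap ...
    ratio = log_y - log_ucr
    print(ratio[group == 1].mean() - ratio[group == 0].mean())   # ~ -0.06
    # ... while the model-based estimate is near zero, as it should be.
    print(model_based_group_diff(log_y, log_ucr, group))
    ```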

  11. Effect of a chamber orchestra on direct sound and early reflections for performers on stage: A boundary element method study.

    PubMed

    Panton, Lilyan; Holloway, Damien; Cabrera, Densil

    2017-04-01

    Early reflections are known to be important to musicians performing on stage, but acoustic measurements are usually made on empty stages. This work investigates how a chamber orchestra setup on stage affects early reflections from the stage enclosure. A boundary element method (BEM) model of a chamber orchestra is validated against full scale measurements with seated and standing subjects in an anechoic chamber and against auditorium measurements, demonstrating that the BEM simulation gives realistic results. Using the validated BEM model, an investigation of how a chamber orchestra attenuates and scatters both the direct sound and the first-order reflections is presented for two different sized "shoe-box" stage enclosures. The first-order reflections from the stage are investigated individually: at and above the 250 Hz band, horizontal reflections from stage walls are attenuated to varying degrees, while the ceiling reflection is relatively unaffected. Considering the overall effect of the chamber orchestra on the direct sound and first-order reflections, differences of 2-5 dB occur in the 1000 Hz octave band when the ceiling reflection is excluded (slightly reduced when including the unobstructed ceiling reflection). A tilted side wall case showed the orchestra has a reduced effect with a small elevation of the lateral reflections.

  12. Case-based explanation of non-case-based learning methods.

    PubMed Central

    Caruana, R.; Kangarloo, H.; Dionisio, J. D.; Sinha, U.; Johnson, D.

    1999-01-01

    We show how to generate case-based explanations for non-case-based learning methods such as artificial neural nets or decision trees. The method uses the trained model (e.g., the neural net or the decision tree) as a distance metric to determine which cases in the training set are most similar to the case that needs to be explained. This approach is well suited to medical domains, where it is important to understand predictions made by complex machine learning models, and where training and clinical practice make users adept at case interpretation. PMID:10566351
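
    One concrete instance of this idea uses random-forest proximity as the model-defined distance: two cases are similar when the trained trees route them to the same leaves. A sketch (dataset and parameters are illustrative, not from the paper):

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train any non-case-based model, then reuse it as a distance metric.
    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    def most_similar_cases(x_new, X_train, model, k=3):
        """Model-defined similarity: fraction of trees in which x_new and
        a training case land in the same leaf (random-forest proximity)."""
        leaves_train = model.apply(X_train)          # (n_train, n_trees)
        leaves_new = model.apply(x_new.reshape(1, -1))
        proximity = (leaves_train == leaves_new).mean(axis=1)
        top = np.argsort(proximity)[::-1][:k]
        return top, proximity[top]

    idx, prox = most_similar_cases(X[0], X, model, k=3)
    print(idx, prox)   # training cases that "explain" the prediction for X[0]
    ```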

  13. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(Nlog N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
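
    For orientation, the charge-charge pair term of a GB model is typically evaluated with Still's interpolation formula; the sketch below computes the resulting polarization energy by direct summation (the O(N log N) treecode and the GBr6 descreening radii are not reproduced). Units and values are illustrative.

    ```python
    import numpy as np

    def gb_polarization_energy(q, pos, born_radii, eps_solvent=78.5):
        """Generalized Born polarization energy with Still's interpolation:
        f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2 / (4*Ri*Rj))); the i == j term
        reduces to f_GB = Ri (the Born self energy).  Charges in e,
        distances in Angstrom, energy in kcal/mol (Coulomb constant 332).
        """
        r2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=2)
        RiRj = born_radii[:, None] * born_radii[None, :]
        f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
        qq = q[:, None] * q[None, :]
        return -0.5 * 332.0 * (1.0 - 1.0 / eps_solvent) * np.sum(qq / f_gb)

    # Two-atom toy system, illustrative values.
    print(gb_polarization_energy(np.array([0.5, -0.5]),
                                 np.array([[0.0, 0.0, 0.0],
                                           [0.0, 0.0, 3.0]]),
                                 np.array([1.5, 1.7])))
    ```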

  14. Pyrolyzed-parylene based sensors and method of manufacture

    NASA Technical Reports Server (NTRS)

    Tai, Yu-Chong (Inventor); Liger, Matthieu (Inventor); Miserendino, Scott (Inventor); Konishi, Satoshi (Inventor)

    2007-01-01

    A method (and resulting structure) for fabricating a sensing device. The method includes providing a substrate comprising a surface region and forming an insulating material overlying the surface region. The method also includes forming a film of carbon-based material overlying the insulating material and treating the film to pyrolyze the carbon-based material, causing formation of a film of substantially carbon-based material having a resistivity within a predetermined range. The method also provides at least a portion of the pyrolyzed carbon-based material in a sensor application and uses that portion in the sensing application. In a specific embodiment, the sensing application is selected from chemical, humidity, piezoelectric, radiation, mechanical strain or temperature sensing.

  15. METHOD OF JOINING CARBIDES TO BASE METALS

    DOEpatents

    Krikorian, N.H.; Farr, J.D.; Witteman, W.G.

    1962-02-13

    A method is described for joining a refractory metal carbide such as UC or ZrC to a refractory metal base such as Ta or Nb. The method comprises carburizing the surface of the metal base and then sintering the base and carbide at temperatures of about 2000 deg C in a non-oxidizing atmosphere, the base and carbide being held in contact during the sintering step. To reduce the sintering temperature and time, a sintering aid such as iron, nickel, or cobalt is added to the carbide, not to exceed 5 wt%. (AEC)

  16. NOTE: Solving the ECG forward problem by means of a meshless finite element method

    NASA Astrophysics Data System (ADS)

    Li, Z. S.; Zhu, S. A.; He, Bin

    2007-07-01

    The conventional numerical computational techniques such as the finite element method (FEM) and the boundary element method (BEM) require laborious and time-consuming model meshing. The new meshless FEM uses only the boundary description and the node distribution, and no meshing of the model is required. This paper presents the fundamentals and implementation of the meshless FEM, which is adapted to solve the electrocardiography (ECG) forward problem. The method is evaluated on a single-layer torso model, for which an analytical solution exists, and tested on a realistic-geometry homogeneous torso model, with satisfactory results obtained. The present results suggest that the meshless FEM may provide an alternative for ECG forward solutions.

  17. Adaptive Set-Based Methods for Association Testing

    PubMed Central

    Su, Yu-Chen; Gauderman, W. James; Berhane, Kiros; Lewinger, Juan Pablo

    2017-01-01

    With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371
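
    A minimal sketch of the adaptive RTP construction follows: compute the truncated product statistic at several truncation points, turn each into a permutation p-value, and calibrate the minimum over truncation points by the same permutations. In practice the null p-value matrix comes from permuting phenotypes; here it is assumed given, and all data are synthetic.

    ```python
    import numpy as np

    def rtp_stat(pvals, k):
        """Rank truncated product statistic on the log scale: sum of logs
        of the k smallest p-values (smaller = stronger evidence)."""
        return np.sum(np.log(np.sort(pvals)[:k]))

    def adaptive_rtp(p_obs, p_perm, ks=(1, 5, 10, 25)):
        """p_obs: observed per-SNP p-values for the set.
        p_perm: (n_perm, n_snps) per-SNP p-values under the null."""
        n_perm = p_perm.shape[0]
        t_obs = np.array([rtp_stat(p_obs, k) for k in ks])
        t_perm = np.array([[rtp_stat(row, k) for k in ks] for row in p_perm])
        # k-specific permutation p-values for the observed data ...
        p_k_obs = (1 + (t_perm <= t_obs).sum(axis=0)) / (n_perm + 1)
        # ... and for each permutation (rank of each null among the nulls).
        ranks = np.argsort(np.argsort(t_perm, axis=0), axis=0) + 1
        p_k_perm = ranks / n_perm
        # Calibrate the adaptive min-over-k with the same permutations.
        m = (p_k_perm.min(axis=1) <= p_k_obs.min()).sum()
        return (1 + m) / (n_perm + 1)

    rng = np.random.default_rng(3)
    p_obs = rng.uniform(size=100)
    p_obs[:5] = rng.uniform(0.0, 0.01, size=5)     # five associated SNPs
    p_perm = rng.uniform(size=(999, 100))          # stand-in null p-values
    print(adaptive_rtp(p_obs, p_perm))             # small set-level p-value
    ```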

  18. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  19. Kinematics of a vertical axis wind turbine with a variable pitch angle

    NASA Astrophysics Data System (ADS)

    Jakubowski, Mateusz; Starosta, Roman; Fritzkowski, Pawel

    2018-01-01

    A computational model for the kinematics of a vertical axis wind turbine (VAWT) is presented. An H-type rotor turbine with a controlled pitch angle is considered; the aim of this solution is to improve VAWT productivity. The method discussed belongs to a computational branch based on blade element momentum (BEM) theory. The paper can be regarded as a theoretical basis and an introduction to further studies applying BEM. The obtained torque values show the main advantage of using a variable pitch angle.
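
    The kinematic core of such models is the local angle of attack seen by the blade as a function of azimuth, which a pitch law can then trim. A sketch neglecting induced velocities, with an assumed sinusoidal pitch law purely for illustration:

    ```python
    import numpy as np

    def angle_of_attack(theta, tsr, pitch):
        """Local angle of attack of an H-rotor blade (no induction): the
        relative wind combines free stream and blade motion, giving
        alpha = atan2(sin(theta), tsr + cos(theta)) - pitch.

        theta: azimuth [rad], tsr: tip-speed ratio, pitch: pitch [rad]."""
        return np.arctan2(np.sin(theta), tsr + np.cos(theta)) - pitch

    theta = np.linspace(0.0, 2.0 * np.pi, 361)
    alpha_fixed = angle_of_attack(theta, tsr=3.0, pitch=0.0)
    # A hypothetical sinusoidal pitch law that trims the extremes:
    alpha_var = angle_of_attack(theta, tsr=3.0, pitch=0.3 * np.sin(theta))
    print(np.rad2deg(alpha_fixed).max(), np.rad2deg(alpha_var).max())
    ```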

  20. High speed propeller acoustics and aerodynamics - A boundary element approach

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Myers, M. K.; Dunn, M. H.

    1989-01-01

    The Boundary Element Method (BEM) is applied in this paper to the problems of acoustics and aerodynamics of high speed propellers. The underlying theory is described, based on the linearized Ffowcs Williams-Hawkings equation. In the aerodynamic problem the surface pressure on the blade is assumed unknown; it is obtained by solving a singular integral equation. The acoustic problem is then solved by moving the field point inside the fluid medium and evaluating some surface and line integrals. Thus the BEM provides a powerful technique for the calculation of high speed propeller aerodynamics and acoustics.

  1. Infrared identification of internal overheating components inside an electric control cabinet by inverse heat transfer problem

    NASA Astrophysics Data System (ADS)

    Yang, Li; Wang, Ye; Liu, Huikai; Yan, Guanghui; Kou, Wei

    2014-11-01

    Overheating components inside an object, such as an electric control cabinet, a moving object, or a running machine, can easily lead to equipment failure or fire. In recent years, infrared remote sensing of the surface temperature of an object has been used to identify overheating components inside it; using infrared thermal imaging of surface temperature to identify internal overheating elements in an electric control cabinet has important practical applications. In this paper, a test bench was built and an experimental study was conducted on the inverse identification of internal overheating components in an electric control cabinet using infrared thermal imaging. A heat transfer model of the cabinet was built, and the temperature distribution of the cabinet with an internal overheating element was simulated using the finite volume method (FVM). The outer surface temperature of the cabinet was measured with an infrared thermal imager. Combining computer image processing and infrared temperature measurement, the surface temperature distribution of the cabinet was extracted, and the position and temperature of the internal overheating element were identified using an inverse heat transfer problem (IHTP) identification algorithm. The results show that for a single overheating element inside the cabinet, the identification errors of temperature and position were 2.11% and 5.32%; for multiple overheating elements, the identification errors of temperature and position were 3.28% and 15.63%. The feasibility and effectiveness of the IHTP approach and the correctness of the FVM-based identification algorithm were validated.

  2. Description logic-based methods for auditing frame-based medical terminological systems.

    PubMed

    Cornet, Ronald; Abu-Hanna, Ameen

    2005-07-01

    Medical terminological systems (TSs) play an increasingly important role in health care by supporting recording, retrieval and analysis of patient information. As the size and complexity of TSs are growing, the need arises for means to audit them, i.e. verify and maintain (logical) consistency and (semantic) correctness of their contents. This is not only important for the management of TSs but also for providing their users with confidence about the reliability of their contents. Formal methods have the potential to play an important role in the audit of TSs, although there are few empirical studies to assess the benefits of using these methods. In this paper we propose a method based on description logics (DLs) for the audit of TSs. This method is based on the migration of the medical TS from a frame-based representation to a DL-based one. Our method is characterized by a process in which initially stringent assumptions are made about concept definitions. The assumptions allow the detection of concepts and relations that might comprise a source of logical inconsistency. If the assumptions hold then definitions are to be altered to eliminate the inconsistency, otherwise the assumptions are revised. In order to demonstrate the utility of the approach in a real-world case study we audit a TS in the intensive care domain and discuss decisions pertaining to building DL-based representations. This case study demonstrates that certain types of inconsistencies can indeed be detected by applying the method to a medical terminological system. The added value of the method described in this paper is that it provides a means to evaluate the compliance to a number of common modeling principles in a formal manner. The proposed method reveals potential modeling inconsistencies, helping to audit and (if possible) improve the medical TS. In this way, it contributes to providing confidence in the contents of the terminological system.

  3. Comparison of Text-Based and Visual-Based Programming Input Methods for First-Time Learners

    ERIC Educational Resources Information Center

    Saito, Daisuke; Washizaki, Hironori; Fukazawa, Yoshiaki

    2017-01-01

    Aim/Purpose: When learning to program, both text-based and visual-based input methods are common. However, it is unclear which method is more appropriate for first-time learners (first learners). Background: The differences in learning effect between text-based and visual-based input methods for first learners are compared using a…

  4. Graph reconstruction using covariance-based methods.

    PubMed

    Sulaimanov, Nurgazy; Koeppl, Heinz

    2016-12-01

    Methods based on correlation and partial correlation are today employed in the reconstruction of a statistical interaction graph from high-throughput omics data. These dedicated methods work well even for the case when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related by using Neumann series and transitive closure and through discussing concrete small examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are directly estimated from the data.
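
    To make the covariance/concentration distinction concrete: marginal correlations come from the correlation matrix, while partial correlations come from the inverse covariance (concentration) matrix. The sketch below assumes the well-conditioned n > p case; the p > n setting discussed in the paper needs shrinkage or sparse estimators instead of a plain inverse.

    ```python
    import numpy as np

    def correlation_and_partial_correlation(X):
        """Correlation and partial-correlation matrices of X
        (n_samples, n_vars).  Partial correlations come from the
        concentration matrix Omega = Cov(X)^{-1} via
        rho_ij|rest = -Omega_ij / sqrt(Omega_ii * Omega_jj)."""
        C = np.corrcoef(X, rowvar=False)
        Omega = np.linalg.inv(np.cov(X, rowvar=False))
        d = np.sqrt(np.diag(Omega))
        P = -Omega / np.outer(d, d)
        np.fill_diagonal(P, 1.0)
        return C, P

    # Chain graph 0 - 1 - 2: variables 0 and 2 are marginally correlated
    # but conditionally independent given 1, so P[0, 2] should be ~0.
    rng = np.random.default_rng(4)
    x1 = rng.normal(size=5000)
    x0 = x1 + rng.normal(size=5000)
    x2 = x1 + rng.normal(size=5000)
    C, P = correlation_and_partial_correlation(np.column_stack([x0, x1, x2]))
    print(round(C[0, 2], 2), round(P[0, 2], 2))    # e.g. ~0.5 vs ~0.0
    ```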

  5. T-matrix method in plasmonics: An overview

    NASA Astrophysics Data System (ADS)

    Khlebtsov, Nikolai G.

    2013-07-01

    Optical properties of isolated and coupled plasmonic nanoparticles (NPs) are of great interest for many applications in nanophotonics, nanobiotechnology, and nanomedicine owing to rapid progress in fabrication, characterization, and surface functionalization technologies. To simulate optical responses from plasmonic nanostructures, various electromagnetic analytical and numerical methods have been adapted, tested, and used during the past two decades. Currently, the most popular numerical techniques are those that do not suffer from geometrical and composition limitations, e.g., the discrete dipole approximation (DDA), the boundary (finite) element method (BEM, FEM), the finite difference time domain method (FDTDM), and others. However, the T-matrix method still has its own niche in plasmonic science because of its great numerical efficiency, especially for systems with randomly oriented particles and clusters. In this review, I consider the application of the T-matrix method to various plasmonic problems, including dipolar, multipolar, and anisotropic properties of metal NPs; sensing applications; surface enhanced Raman scattering; optics of 1D-3D nanoparticle assemblies; plasmonic particles and clusters near and on substrates; and manipulation of plasmonic NPs with laser tweezers.

  6. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
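
    A compact sketch of the PCA, decision-tree selection, and SVM stages, using synthetic stand-in data; note that in a rigorous evaluation the PCA and selection steps would be refit inside each cross-validation fold rather than once on all data, as done here for brevity.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for preprocessed EEG trials: rows = trials, cols = features.
    X, y = make_classification(n_samples=200, n_features=100,
                               n_informative=8, random_state=0)

    # Step 1: PCA feature extraction.
    Z = PCA(n_components=20, random_state=0).fit_transform(X)

    # Step 2: decision-tree-based selection -- keep the components the
    # tree actually splits on (nonzero impurity-based importances).
    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Z, y)
    selected = np.flatnonzero(tree.feature_importances_ > 0)

    # Step 3: classify the selected features with an SVM.
    score = cross_val_score(SVC(), Z[:, selected], y, cv=5).mean()
    print(selected, round(score, 3))
    ```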

  7. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.

  8. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics]

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  9. Shrinkage regression-based methods for microarray missing value imputation.

    PubMed

    Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng

    2013-01-01

    Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, regression-based methods are very popular and have been shown to perform better than other types of methods on many testing microarray datasets. To further improve the performance of regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analysis because most downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
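
    The paper's shrinkage estimator is specific, but the overall recipe can be sketched: pick the genes most correlated with the target, fit least squares on the observed samples, shrink the slopes, and predict the missing entries. The constant shrinkage factor below is an assumption for illustration, not the paper's estimator.

    ```python
    import numpy as np

    def impute_missing(target, candidates, k=5, shrink=0.5):
        """target: expression vector with np.nan at missing entries.
        candidates: (n_genes, n_samples) matrix of complete genes."""
        obs = ~np.isnan(target)
        t = target[obs]
        c = candidates[:, obs]
        # Pearson correlation of each candidate gene with the target.
        tz = (t - t.mean()) / t.std()
        cz = (c - c.mean(1, keepdims=True)) / c.std(1, keepdims=True)
        top = np.argsort(np.abs(cz @ tz / t.size))[::-1][:k]
        # Least squares on the observed samples, then shrink the slopes.
        A = np.column_stack([np.ones(t.size), candidates[top][:, obs].T])
        beta, *_ = np.linalg.lstsq(A, t, rcond=None)
        beta[1:] *= shrink                                 # assumed factor
        beta[0] = t.mean() - (A[:, 1:] @ beta[1:]).mean()  # re-center
        # Predict the missing entries from the same genes.
        B = np.column_stack([np.ones((~obs).sum()),
                             candidates[top][:, ~obs].T])
        out = target.copy()
        out[~obs] = B @ beta
        return out

    # Tiny synthetic check: hide two entries of a gene that tracks others.
    rng = np.random.default_rng(6)
    base = rng.normal(size=(30, 12))
    gene = base[0] * 0.9 + rng.normal(0, 0.2, 12)
    gene_missing = gene.copy(); gene_missing[[3, 7]] = np.nan
    print(gene[[3, 7]], impute_missing(gene_missing, base)[[3, 7]])
    ```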

  10. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    PubMed

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging, as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates, even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is as accurate as individual BEMs. Moreover, by using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  11. Development of BEM for ceramic composites

    NASA Technical Reports Server (NTRS)

    Henry, D. P.; Banerjee, P. K.; Dargush, G. F.

    1991-01-01

    It is evident that for proper micromechanical analysis of ceramic composites, one needs to use a numerical method that is capable of idealizing the individual fibers or individual bundles of fibers embedded within a three-dimensional ceramic matrix. The analysis must be able to account for high stress or temperature gradients from diffusion of stress or temperature from the fiber to the ceramic matrix and allow for interaction between the fibers through the ceramic matrix. The analysis must be sophisticated enough to deal with the failure of fibers described by a series of increasingly sophisticated constitutive models. Finally, the analysis must deal with micromechanical modeling of the composite under nonlinear thermal and dynamic loading. This report details progress made towards the development of a boundary element code designed for the micromechanical studies of an advanced ceramic composite. Additional effort has been made in generalizing the implementation to allow the program to be applicable to real problems in the aerospace industry.

  12. Ontology-Based Method for Fault Diagnosis of Loaders.

    PubMed

    Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-02-28

    This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) An ontology-based fault diagnosis model is proposed to achieve the integrating, sharing and reusing of fault diagnosis knowledge for loaders; (2) combined with ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case-retrieval, case-matching and case-updating); and (3) in order to cover the shortages of the CBR method due to the lack of concerned cases, ontology based RBR (rule-based reasoning) is put forward through building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through analyzing a case study.

  13. Ontology-Based Method for Fault Diagnosis of Loaders

    PubMed Central

    Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-01-01

    This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) An ontology-based fault diagnosis model is proposed to achieve the integrating, sharing and reusing of fault diagnosis knowledge for loaders; (2) combined with ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case-retrieval, case-matching and case-updating); and (3) in order to cover the shortages of the CBR method due to the lack of concerned cases, ontology based RBR (rule-based reasoning) is put forward through building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through analyzing a case study. PMID:29495646

  14. Evaluation of medical students of teacher-based and student-based teaching methods in Infectious diseases course.

    PubMed

    Ghasemzadeh, I; Aghamolaei, T; Hosseini-Parandar, F

    2015-01-01

    Introduction: In recent years, medical education has changed dramatically, and many medical schools around the world have been trying to expand modern training methods. The purpose of this research is to assess medical students' evaluations of teacher-based and student-based teaching methods in the Infectious diseases course at the Medical School of Hormozgan University of Medical Sciences. Methods: In this interventional study, a total of 52 medical students taking the Infectious diseases course were included. About 50% of the course was presented by a teacher-based teaching method (lecture) and 50% by a student-based teaching method (problem-based learning). The satisfaction of students with these methods was assessed by a questionnaire, and a test was used to measure their learning. Data were analyzed with SPSS 19 using paired t-tests. Results: Students were more satisfied with the student-based teaching method (problem-based learning) than with the teacher-based teaching method (lecture). The mean score of students under the teacher-based teaching method was 12.03 (SD=4.08), and under the student-based teaching method it was 15.50 (SD=4.26), a significant difference (p<0.001). Conclusion: The use of the student-based teaching method (problem-based learning), in comparison with the teacher-based teaching method (lecture), for the Infectious diseases course led to student satisfaction and provided additional learning opportunities.

  15. Gender differences: examination of the 12-item Bem Sex Role Inventory (BSRI-12) in an older Brazilian population.

    PubMed

    Carver, Lisa F; Vafaei, Afshin; Guerra, Ricardo; Freire, Aline; Phillips, Susan P

    2013-01-01

    Although gender is often acknowledged as a determinant of health, measuring its components, other than biological sex, is uncommon. The Bem Sex Role Inventory (BSRI) quantifies self-attribution of traits indicative of gender roles. The BSRI has been used with participants across cultures and countries, but rarely in an older population in Brazil, as we have done in this study. Our primary objective was to determine whether the BSRI-12 can be used to explore gender in an older Brazilian population. The BSRI was completed by volunteer participants, all community-dwelling adults aged 65+ living in Natal, Brazil. Exploratory factor analysis was performed, followed by a varimax rotation (orthogonal solution) for iteration, to examine the underlying gender roles of feminine, masculine, androgynous and undifferentiated, and to validate the BSRI in older adults in Brazil. The 278 participants (80 men, 198 women) were 65-99 years old (average 73.6 for men, 74.7 for women); the age difference between sexes was not significant (p = 0.22). A 12-item version of the BSRI (BSRI-12), previously validated among Spanish seniors, was used and showed validity, with 5 BSRI-12 items (Cronbach's alpha = 0.66) loading as feminine, 6 items (Cronbach's alpha = 0.51) loading onto masculine roles, and neither overlapping with the biological sex of the respondent. Although the BSRI-12 appears to be a valid indicator of gender among elderly Brazilians, the gender role status identified with the BSRI-12 was not correlated with being male or female.

  16. Drug exposure in register-based research—An expert-opinion based evaluation of methods

    PubMed Central

    Taipale, Heidi; Koponen, Marjaana; Tolppanen, Anna-Maija; Hartikainen, Sirpa; Ahonen, Riitta; Tiihonen, Jari

    2017-01-01

    Background: In register-based pharmacoepidemiological studies, construction of drug exposure periods from drug purchases is a major methodological challenge. Various methods have been applied, but their validity is rarely evaluated. Our objective was to conduct an expert-opinion based evaluation of the correctness of the drug use periods produced by different methods. Methods: Drug use periods were calculated with three fixed methods (time windows, assumption of one Defined Daily Dose (DDD) per day, and one tablet per day) and with PRE2DUP, which is based on modelling of individual drug purchasing behavior. The expert-opinion based evaluation was conducted with 200 randomly selected purchase histories of warfarin, bisoprolol, simvastatin, risperidone and mirtazapine in the MEDALZ-2005 cohort (28,093 persons with Alzheimer's disease). Two experts reviewed the purchase histories and judged which methods had joined the correct purchases and gave the correct duration, for each of 1000 drug exposure periods. Results: The evaluated correctness of drug use periods was 70–94% for PRE2DUP and, depending on grace periods and time window lengths, 0–73% for tablet methods, 0–41% for DDD methods and 0–11% for time window methods. The highest rate of evaluated correct solutions in each method class was observed for one tablet per day with a 180-day grace period (TAB_1_180, 43–73%) and one DDD per day with a 180-day grace period (1–41%). Time window methods produced at most 11% correct solutions. The best performing fixed method, TAB_1_180, reached its highest correctness for simvastatin at 73% (95% CI 65–81%), whereas 89% (95% CI 84–94%) of PRE2DUP periods were judged as correct. Conclusions: This study shows the inaccuracy of fixed methods and the urgent need for new data-driven methods. In the expert-opinion based evaluation, the lowest error rates were observed with the data-driven method PRE2DUP. PMID:28886089
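
    Of the fixed methods, the tablet-based one is easy to state precisely: each purchase of n tablets covers n days, and purchases separated by no more than the grace period are joined into one use period. A sketch of that rule (PRE2DUP's behavioral modelling is not reproduced; the history is illustrative):

    ```python
    from datetime import date, timedelta

    def tablet_periods(purchases, grace_days=180):
        """Fixed 'one tablet per day' method (cf. TAB_1_180).

        purchases: list of (purchase_date, n_tablets), sorted by date.
        Returns a list of (start, end) drug use periods."""
        periods = []
        for when, n_tablets in purchases:
            end = when + timedelta(days=n_tablets)
            if periods and (when - periods[-1][1]).days <= grace_days:
                # Within the grace period: join into the current period.
                periods[-1] = (periods[-1][0], max(periods[-1][1], end))
            else:
                periods.append((when, end))
        return periods

    history = [(date(2010, 1, 1), 100), (date(2010, 5, 1), 100),
               (date(2011, 6, 1), 100)]          # long gap -> new period
    for start, end in tablet_periods(history):
        print(start, "->", end)
    ```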

  17. Crack image segmentation based on improved DBC method

    NASA Astrophysics Data System (ADS)

    Cao, Ting; Yang, Nan; Wang, Fengping; Gao, Ting; Wang, Weixing

    2017-11-01

    With the development of computer vision technology, crack detection based on digital image segmentation has attracted global attention among researchers and transportation ministries. Since cracks exhibit random shapes and complex textures, reliable crack detection remains a challenge. Therefore, a novel crack image segmentation method based on fractal DBC (differential box counting) is introduced in this paper. The proposed method estimates a fractal feature for every pixel based on neighborhood information, considering the contributions from all possible directions in the related block. The block moves one pixel at a time so that it covers all pixels in the crack image. Unlike the classic DBC method, which only describes a fractal feature for the related region, this method achieves effective crack image segmentation according to the fractal feature of every pixel. Experiments show that the proposed method achieves satisfactory results in crack detection.
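
    For reference, classic DBC on a grayscale block works as sketched below; one common variant of the per-cell box count is used, and the per-pixel neighborhood scheme proposed in the paper is not reproduced.

    ```python
    import numpy as np

    def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
        """Classic differential box counting on a square grayscale block.

        For grid size s, tile the image with s x s cells; with box height
        h = s * G / M (G gray levels, M image side), count the boxes
        spanned by each cell as nr = ceil((max - min) / h) + 1 (one common
        form).  The fractal dimension is the slope of log N(s) vs log(1/s).
        """
        M = img.shape[0]              # assume square, side divisible by s
        G = img.max() + 1
        logN, logInv = [], []
        for s in sizes:
            h = s * G / M
            n = 0
            for i in range(0, M, s):
                for j in range(0, M, s):
                    cell = img[i:i + s, j:j + s]
                    n += int(np.ceil((cell.max() - cell.min()) / h)) + 1
            logN.append(np.log(n))
            logInv.append(np.log(1.0 / s))
        slope, _ = np.polyfit(logInv, logN, 1)
        return slope

    rng = np.random.default_rng(5)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    print(dbc_fractal_dimension(img))   # rough texture -> high dimension
    ```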

  18. A shape-based inter-layer contours correspondence method for ICT-based reverse engineering

    PubMed Central

    Duan, Liming; Yang, Shangpeng; Zhang, Gui; Feng, Fei; Gu, Minghui

    2017-01-01

    The correspondence of a stack of planar contours in ICT (industrial computed tomography)-based reverse engineering, a key step in surface reconstruction, is difficult when the contours or topology of the object are complex. Given the regularity of industrial parts and similarity of the inter-layer contours, a specialized shape-based inter-layer contours correspondence method for ICT-based reverse engineering was presented to solve the above problem based on the vectorized contours. In this paper, the vectorized contours extracted from the slices consist of three graphical primitives: circles, arcs and segments. First, the correspondence of the inter-layer primitives is conducted based on the characteristics of the primitives. Second, based on the corresponded primitives, the inter-layer contours correspond with each other using the proximity rules and exhaustive search. The proposed method can make full use of the shape information to handle industrial parts with complex structures. The feasibility and superiority of this method have been demonstrated via the related experiments. This method can play an instructive role in practice and provide a reference for the related research. PMID:28489867

  19. A shape-based inter-layer contours correspondence method for ICT-based reverse engineering.

    PubMed

    Duan, Liming; Yang, Shangpeng; Zhang, Gui; Feng, Fei; Gu, Minghui

    2017-01-01

    The correspondence of a stack of planar contours in ICT (industrial computed tomography)-based reverse engineering, a key step in surface reconstruction, is difficult when the contours or topology of the object are complex. Given the regularity of industrial parts and similarity of the inter-layer contours, a specialized shape-based inter-layer contours correspondence method for ICT-based reverse engineering was presented to solve the above problem based on the vectorized contours. In this paper, the vectorized contours extracted from the slices consist of three graphical primitives: circles, arcs and segments. First, the correspondence of the inter-layer primitives is conducted based on the characteristics of the primitives. Second, based on the corresponded primitives, the inter-layer contours correspond with each other using the proximity rules and exhaustive search. The proposed method can make full use of the shape information to handle industrial parts with complex structures. The feasibility and superiority of this method have been demonstrated via the related experiments. This method can play an instructive role in practice and provide a reference for the related research.

  20. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but it quantifies the reliability with which the process indexes reach the standard, within the acceptable probability threshold. In addition, with the probability-based method there is no abrupt change in probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
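
    A minimal sketch of the probability-based calculation (the process model, noise level, specification and threshold are all hypothetical stand-ins): at each candidate operating point, perturb the model prediction with simulated experimental error and count how often the quality specification is met.

    ```python
    import numpy as np

    def probability_map(grid, model, specs, n_sim=10000, noise_sd=0.02,
                        seed=0):
        """Estimate, at each operating point, the probability that the
        response meets the (low, high) specification under simulated
        multiplicative experimental error."""
        rng = np.random.default_rng(seed)
        low, high = specs
        probs = []
        for xv in grid:
            y = model(xv) * (1.0 + rng.normal(0.0, noise_sd, n_sim))
            probs.append(np.mean((y >= low) & (y <= high)))
        return np.array(probs)

    # Hypothetical one-factor process model: yield vs. solvent ratio.
    xs = np.linspace(6.0, 12.0, 31)
    p = probability_map(xs, lambda x: 40.0 + 6.0 * x - 0.35 * x**2,
                        specs=(62.0, 68.0))
    inside = xs[p >= 0.9]              # acceptable probability threshold
    print(inside.min(), inside.max())  # extent of the design space
    ```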

  1. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
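
    In the simplest setting, a known linear law relating directly measured parameters, "least violation" reduces to projecting the measurement vector onto the law's solution set. A sketch follows; the paper's treatment of nonsmooth laws, estimator fusion and fuser classes goes well beyond this linear case.

    ```python
    import numpy as np

    def fuse_with_linear_law(measurements, A, b):
        """Fuse direct measurements of a parameter vector using the least
        violation of a known linear physical law A x = b: return the x
        closest (least squares) to the measurements that satisfies the
        law, i.e. the orthogonal projection onto {x : A x = b}."""
        m = np.asarray(measurements, float)
        correction = A.T @ np.linalg.solve(A @ A.T, A @ m - b)
        return m - correction

    # Toy example: three flows must balance at a junction, x0 = x1 + x2.
    A = np.array([[1.0, -1.0, -1.0]])
    b = np.array([0.0])
    m = np.array([10.3, 6.1, 3.8])     # raw readings (violate the law)
    x = fuse_with_linear_law(m, A, b)
    print(x, A @ x - b)                # adjusted values, residual ~0
    ```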

  2. An XML-based method for astronomy software designing

    NASA Astrophysics Data System (ADS)

    Liao, Mingxue; Aili, Yusupu; Zhang, Jin

    An XML-based method for standardizing software design is introduced, analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for extracting time information from the new FT206 digital clock in the antenna control program is introduced. With FT206, there is no need to compute with sophisticated formulas how many centuries have passed since a certain day, and it is no longer necessary to set the correct UT time on the computer controlling the antenna, because the year, month and day are all deduced from the Julian day maintained by FT206 rather than from the computer time. With an XML-based method and standard for software design, various existing design methods are unified, communication and collaboration between developers are facilitated, and an Internet-based mode of software development becomes possible. The trend of development of XML-based design methods is predicted.
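
    For readers curious about the date arithmetic being avoided: converting a Julian day number to a Gregorian calendar date takes only a few integer operations (Fliegel and Van Flandern's algorithm), sketched below.

    ```python
    def jdn_to_gregorian(jdn):
        """Convert a Julian day number to (year, month, day) in the
        Gregorian calendar (Fliegel & Van Flandern integer algorithm)."""
        l = jdn + 68569
        n = 4 * l // 146097
        l = l - (146097 * n + 3) // 4
        i = 4000 * (l + 1) // 1461001
        l = l - 1461 * i // 4 + 31
        j = 80 * l // 2447
        day = l - 2447 * j // 80
        l = j // 11
        month = j + 2 - 12 * l
        year = 100 * (n - 49) + i + l
        return year, month, day

    print(jdn_to_gregorian(2451545))   # (2000, 1, 1)
    ```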

  3. The Boundary Integral Equation Method for Porous Media Flow

    NASA Astrophysics Data System (ADS)

    Anderson, Mary P.

    Just as groundwater hydrologists are breathing sighs of relief after the exertions of learning the finite element method, a new technique has reared its nodes—the boundary integral equation method (BIEM) or the boundary equation method (BEM), as it is sometimes called. As Liggett and Liu put it in the preface to The Boundary Integral Equation Method for Porous Media Flow, “Lately, the Boundary Integral Equation Method (BIEM) has emerged as a contender in the computation Derby.” In fact, in July 1984, the 6th International Conference on Boundary Element Methods in Engineering will be held aboard the Queen Elizabeth II, en route from Southampton to New York. These conferences are sponsored by the Department of Civil Engineering at Southampton College (UK), whose members are proponents of BIEM. The conferences have featured papers on applications of BIEM to all aspects of engineering, including flow through porous media. Published proceedings are available, as are textbooks on application of BIEM to engineering problems. There is even a 10-minute film on the subject.

  4. 3-D Wave-Structure Interaction with Coastal Sediments - A Multi-Physics/Multi-Solution-Techniques Approach

    DTIC Science & Technology

    2008-01-01

    element method (BEM). Reynolds averaged Navier-Stokes (RANS) and the particle finite element method (PFEM) will be used in the water/mine/sand domain...and deformable sandy seabed (median grain diameter: 0.2 mm). [Remaining slide residue: solver pairings SOLID/FEM, SAND/SPH geomaterials, FNPF/BEM, RANS/PFEM.]

  5. Gel-based methods in redox proteomics.

    PubMed

    Charles, Rebecca; Jayawardhana, Tamani; Eaton, Philip

    2014-02-01

    The key to understanding the full significance of oxidants in health and disease is the development of tools and methods that allow the study of proteins that sense and transduce changes in cellular redox. Oxidant-reactive deprotonated thiols commonly operate as redox sensors in proteins and a variety of methods have been developed that allow us to monitor their oxidative modification. This outline review specifically focuses on gel-based methods used to detect, quantify and identify protein thiol oxidative modifications. The techniques we discuss fall into one of two broad categories. Firstly, methods that allow oxidation of thiols in specific proteins or the global cellular pool to be monitored are discussed. These typically utilise thiol-labelling reagents that add a reporter moiety (e.g. affinity tag, fluorophore, chromophore), in which loss of labelling signifies oxidation. Secondly, we outline methods that allow specific thiol oxidation states of proteins (e.g. S-sulfenylation, S-nitrosylation, S-thionylation and interprotein disulfide bond formation) to be investigated. A variety of different gel-based methods for identifying thiol proteins that are sensitive to oxidative modifications have been developed. These methods can aid the detection and quantification of thiol redox state, as well as identifying the sensor protein. By understanding how cellular redox is sensed and transduced to a functional effect by protein thiol redox sensors, this will help us better appreciate the role of oxidants in health and disease. This article is part of a Special Issue entitled Current methods to study reactive oxygen species - pros and cons and biophysics of membrane proteins. Guest Editor: Christine Winterbourn. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Numerical Modelling and Analysis of Hydrostatic Thrust Air Bearings for High Loading Capacities and Low Air Consumption

    NASA Astrophysics Data System (ADS)

    Yu, Yunluo; Pu, Guang; Jiang, Kyle

    2017-12-01

    The paper presents a numerical simulation study of hydrostatic thrust air bearings to assess their load capacity, compressed air consumption and dynamic response. The Finite Difference Method (FDM) and Finite Volume Method (FVM) are combined to solve the non-linear Reynolds equation for the pressure distribution of the air-bearing gas film and the total load capacity of the bearing. The influence of design parameters on the air film characteristics, including the air film thickness, supply pressure, depth of the groove and external load, is investigated based on the proposed FDM model. The simulation results show that thrust air bearings with a groove have a higher load capacity and air consumption than those without a groove, and that both the load capacity and the air consumption increase with the depth of the groove. Bearings without the groove are better damped than those with grooves, and the stability of the thrust bearing decreases as the groove depth increases. The stability of the thrust bearings is also affected by their loading.
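
    The abstract summarizes the discretization only at a high level; the sketch below shows a finite-difference Reynolds-equation solve reduced to the simplest analogous setting, a 1D incompressible slider bearing with a fixed film profile (the paper's bearing is compressible and grooved, which this toy does not attempt). All dimensions and fluid properties are illustrative.

```python
import numpy as np

# 1D incompressible slider bearing: d/dx(h^3 dp/dx) = 6*mu*U*dh/dx,
# with ambient (gauge zero) pressure at both ends. A minimal stand-in
# for the paper's compressible, grooved thrust bearing.
L, n = 0.1, 201                    # bearing length [m], grid points
mu, U = 1.8e-5, 1.0                # air viscosity [Pa s], wall speed [m/s]
x = np.linspace(0.0, L, n)
h = 50e-6 - 30e-6 * x / L          # linearly converging film thickness [m]
dx = x[1] - x[0]

hf3 = ((h[:-1] + h[1:]) / 2.0) ** 3   # h^3 evaluated at cell faces
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1] = hf3[i - 1] / dx**2
    A[i, i + 1] = hf3[i] / dx**2
    A[i, i] = -(hf3[i - 1] + hf3[i]) / dx**2
    b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) / (2.0 * dx)
A[0, 0] = A[-1, -1] = 1.0          # Dirichlet rows: p = 0 at both ends

p = np.linalg.solve(A, b)
# trapezoid rule for the load per unit width (end pressures are zero)
print("load capacity per unit width [N/m]:", dx * p[1:-1].sum())
```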

  7. Pitting corrosion as a mixed system: coupled deterministic-probabilistic simulation of pit growth

    NASA Astrophysics Data System (ADS)

    Ibrahim, Israr B. M.; Fonna, S.; Pidaparti, R.

    2018-05-01

    The stochastic behavior of pitting corrosion poses a unique challenge for its computational analysis; at the same time, pitting stems from the same deterministic electrochemical activity that causes general corrosion, making it a mixed system. In this paper, a framework for corrosion pit growth simulation based on coupling the Cellular Automaton (CA) and Boundary Element Method (BEM) is presented. The framework assumes that pitting corrosion is controlled by the electrochemical activity inside the pit cavity. The BEM provides the prediction of electrochemical activity given the geometrical data and polarization curves, while the CA is used to simulate the evolution of pit shapes based on the electrochemical activity provided by the BEM. To demonstrate the methodology, a sample case of local corrosion cells formed in pitting corrosion with varied dimensions and polarization functions is considered. Results show that certain shapes tend to grow in certain types of environments. Some pit shapes appear to pose a higher risk by being potentially significant stress raisers or by potentially increasing the rate of corrosion under the surface. Furthermore, these pits are comparable to pit shapes commonly observed in general corrosion environments.
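
    A toy rendering of the CA half of the coupling, with the BEM solve replaced by a synthetic current-density field (the function current_density below is a hypothetical stand-in; in the paper that field comes from a boundary element solve over the evolving pit geometry):

```python
import numpy as np

rng = np.random.default_rng(0)

# Metal cells on a 2D grid dissolve with a probability driven by a local
# anodic current density.
nx, ny, steps = 60, 40, 200
metal = np.ones((ny, nx), dtype=bool)    # True = solid metal
metal[0, :] = False                      # row 0 is the electrolyte

def current_density(shape):
    """Stand-in for BEM output: current decays with depth into the pit."""
    depth = np.arange(shape[0])[:, None]
    return np.exp(-depth / 15.0)

for _ in range(steps):
    i_a = current_density(metal.shape)
    dissolved = ~metal
    # surface cells: solid cells with at least one dissolved 4-neighbor
    neighbor = np.zeros_like(metal)
    neighbor[1:, :] |= dissolved[:-1, :]
    neighbor[:-1, :] |= dissolved[1:, :]
    neighbor[:, 1:] |= dissolved[:, :-1]
    neighbor[:, :-1] |= dissolved[:, 1:]
    surface = metal & neighbor
    # stochastic dissolution with rate proportional to the local current
    metal[surface & (rng.random(metal.shape) < 0.05 * i_a)] = False

print("pit depth at centre column (cells):", (~metal[:, nx // 2]).sum() - 1)
```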

  8. Different Sarcocystis spp. are present in bovine eosinophilic myositis.

    PubMed

    Vangeel, Lieve; Houf, Kurt; Geldhof, Peter; De Preter, Katleen; Vercruysse, Jozef; Ducatelle, Richard; Chiers, Koen

    2013-11-08

    It has been suggested that Sarcocystis species are associated with bovine eosinophilic myositis (BEM). To date, parasite identification in this myopathy has been based on morphological techniques. The aim of the present study was to use molecular techniques to identify Sarcocystis species inside lesions of BEM. Histologically, BEM lesions of 97 condemned carcasses were examined for the presence of Sarcocystis species. Intralesional and extralesional cysts were collected using laser capture microdissection and the species was determined with a PCR-based technique based on 18S rDNA. Intralesional sarcocysts or remnants were found in BEM lesions in 28% of the carcasses. The majority (82%) of intralesional Sarcocystis species were found to be S. hominis. However, S. cruzi and S. hirsuta were also found, as well as an unidentified species. It can be concluded that the Sarcocystis species present in lesions of BEM are not restricted to a single species. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Experimental investigation of performance and dynamic loading of an axial-flow marine hydrokinetic turbine with comparison to predicted design values from BEM computations

    NASA Astrophysics Data System (ADS)

    van Ness, Katherine; Hill, Craig; Aliseda, Alberto; Polagye, Brian

    2017-11-01

    Experimental measurements of a 0.45-m diameter, variable-pitch marine hydrokinetic (MHK) turbine were collected in a tow tank at different tip speed ratios and blade pitch angles. The coefficients of power and thrust are computed from direct measurements of torque, force and angular speed at the hub level. Loads on individual blades were measured with a six-degree-of-freedom load cell mounted at the root of one of the turbine blades. This information is used to validate the performance predictions provided by blade element model (BEM) simulations used in the turbine design, specifically the open-source code WTPerf developed by the National Renewable Energy Lab (NREL). Predictions of blade and hub loads by NREL's AeroDyn are also validated for the first time for an axial-flow MHK turbine. The influence of design twist angle, combined with the variable pitch angle, on the flow separation and subsequent blade loading will be analyzed with the complementary information from simulations and experiments. Funding for this research was provided by the United States Naval Facilities Engineering Command.

  10. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo, Wurigen; Shashkov, Mikhail

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  11. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  12. Lunar-base construction equipment and methods evaluation

    NASA Technical Reports Server (NTRS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-01-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  13. Development of BEM for ceramic composites

    NASA Technical Reports Server (NTRS)

    Henry, D. P.; Banerjee, P. K.; Dargush, G. F.

    1990-01-01

    Details on the progress made during the first three years of a five-year program towards the development of a boundary element code are presented. This code was designed for the micromechanical studies of advanced ceramic composites. Additional effort was made in generalizing the implementation to allow the program to be applicable to real problems in the aerospace industry. The ceramic composite formulations developed were implemented in the three-dimensional boundary element computer code BEST3D. BEST3D was adopted as the base for the ceramic composite program, so that many of the enhanced features of this general-purpose boundary element code could be utilized. Some of these facilities include sophisticated numerical integration, the capability of local definition of boundary conditions, and the use of quadratic shape functions for modeling geometry and field variables on the boundary. The multi-region implementation permits a body to be modeled in substructural parts, thus dramatically reducing the cost of the analysis. Furthermore, it allows a body consisting of regions of different ceramic matrices and inserts to be studied.

  14. Utility of Combining a Simulation-Based Method With a Lecture-Based Method for Fundoscopy Training in Neurology Residency.

    PubMed

    Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C

    2017-10-01

    Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the American Council of Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3

  15. Boundary element analysis of corrosion problems for pumps and pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyasaka, M.; Amaya, K.; Kishimoto, K.

    1995-12-31

    Three-dimensional (3D) and axi-symmetric boundary element methods (BEM) were developed to quantitatively estimate cathodic protection and macro-cell corrosion. For 3D analysis, a multiple-region method (MRM) was developed in addition to a single-region method (SRM). The validity and usefulness of the BEMs were demonstrated by comparing numerical results with experimental data from galvanic corrosion systems of a cylindrical model and a seawater pipe, and from a cathodic protection system of an actual seawater pump. It was shown that a highly accurate analysis could be performed for fluid machines handling seawater with complex 3D fields (e.g. seawater pump) by taking account of the flow rate and time dependencies of the polarization curve. Compared to the 3D BEM, the axi-symmetric BEM permitted large reductions in the numbers of elements and nodes, which greatly simplified analysis of axi-symmetric fields such as pipes. Computational accuracy and CPU time were compared between analyses using two approximation methods for polarization curves: a logarithmic-approximation method and a linear-approximation method.

  16. The reduced basis method for the electric field integral equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two-step procedure. The first step consists of a computationally intense assembling of the reduced basis, which needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.

  17. Unsteady Fast Random Particle Mesh method for efficient prediction of tonal and broadband noises of a centrifugal fan unit

    NASA Astrophysics Data System (ADS)

    Heo, Seung; Cheong, Cheolung; Kim, Taehoon

    2015-09-01

    In this study, an efficient numerical method is proposed for predicting the tonal and broadband noises of a centrifugal fan unit. The proposed method is based on Hybrid Computational Aero-Acoustic (H-CAA) techniques combined with the Unsteady Fast Random Particle Mesh (U-FRPM) method. The U-FRPM method is developed by extending the FRPM method proposed by Ewert et al. and is utilized to synthesize the turbulent flow field from unsteady RANS solutions. The H-CAA technique combined with the U-FRPM method is applied to predict broadband as well as tonal noises of a centrifugal fan unit in a household refrigerator. First, the unsteady flow field driven by the rotating fan is computed by solving the RANS equations with Computational Fluid Dynamics (CFD) techniques. Main source regions around the rotating fan are identified by examining the computed flow fields. Then, turbulent flow fields in the main source regions are synthesized by applying the U-FRPM method. The acoustic analogy is applied to model acoustic sources in the main source regions. Finally, the centrifugal fan noise is predicted by feeding the modeled acoustic sources into an acoustic solver based on the Boundary Element Method (BEM). The sound spectral levels predicted using the current numerical method show good agreement with the measured spectra at the Blade Pass Frequencies (BPFs) as well as in the high-frequency range. Moreover, the present method enables quantitative assessment of the relative contributions of the identified source regions to the sound field by comparing the predicted sound pressure spectra due to the modeled sources.

  18. Implementing and testing a panel-based method for modeling acoustic scattering from CFD input

    NASA Astrophysics Data System (ADS)

    Swift, S. Hales

    Exposure of sailors to high levels of noise in the aircraft carrier deck environment is a problem that has serious human and economic consequences. A variety of approaches to quieting exhaust jets from high-performance aircraft are undergoing development. However, testing of noise abatement solutions at full scale may be prohibitively costly when many possible nozzle treatments are under consideration. A relatively efficient and accurate means of predicting the noise levels resulting from engine-quieting technologies at personnel locations is needed. This is complicated by the need to model both the direct and the scattered sound field in order to determine the resultant spectrum and levels. While the direct sound field may be obtained using CFD plus surface integral methods such as the Ffowcs Williams-Hawkings method, the scattered sound field is complicated by its dependence on the geometry of the scattering surface--the aircraft carrier deck, aircraft control surfaces and other nearby structures. In this work, a time-domain boundary element method, or TD-BEM (sometimes referred to in terms of source panels), is proposed and developed that takes advantage of and offers beneficial effects for the substantial planar components of the aircraft carrier deck environment and uses pressure gradients as its input. This method is applied to and compared with analytical results for planar surfaces, corners and spherical surfaces using an analytic point source as input. The method can also accept input from CFD data on an acoustic data surface by using the G1A pressure gradient formulation to obtain pressure gradients on the surface from the flow variables contained on the acoustic data surface. The method is also applied to a planar scattering surface characteristic of an aircraft carrier flight deck with an acoustic data surface from a supersonic jet large eddy simulation, or LES, as input to the scattering model. In this way, the process for modeling the complete

  19. Linking data sources for measurement of effective coverage in maternal and newborn health: what do we learn from individual- vs ecological-linking methods?

    PubMed

    Willey, Barbara; Waiswa, Peter; Kajjo, Darious; Munos, Melinda; Akuze, Joseph; Allen, Elizabeth; Marchant, Tanya

    2018-06-01

    Improving maternal and newborn health requires improvements in the quality of facility-based care. This is challenging to measure: routine data may be unreliable; respondents in population surveys may be unable to accurately report on quality indicators; and facility assessments lack population level denominators. We explored methods for linking access to skilled birth attendance (SBA) from household surveys to data on provision of care from facility surveys with the aim of estimating population level effective coverage reflecting access to quality care. We used data from Mayuge District, Uganda. Data from household surveys on access to SBA were linked to health facility assessment census data on readiness to provide basic emergency obstetric and newborn care (BEmONC) in the same district. One individual- and two ecological-linking methods were applied. All methods used household survey reports on where care at birth was accessed. The individual-linking method linked this to data about facility readiness from the specific facility where each woman delivered. The first ecological-linking approach used a district-wide mean estimate of facility readiness. The second used an estimate of facility readiness adjusted by level of health facility accessed. Absolute differences between estimates derived from the different linking methods were calculated, and agreement examined using Lin's concordance correlation coefficient. A total of 1177 women resident in Mayuge reported a birth during 2012-13. Of these, 664 took place in facilities within Mayuge, and were eligible for linking to the census of the district's 38 facilities. 55% were assisted by a SBA in a facility. Using the individual-linking method, effective coverage of births that took place with an SBA in a facility ready to provide BEmONC was just 10% (95% confidence interval CI 3-17). The absolute difference between the individual- and ecological-level linking method adjusting for facility level was one percentage

  20. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
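
    The flavor of the memory-bounded computation can be sketched by streaming pairwise distances in fixed-size blocks into a fixed-size histogram instead of materializing the O(n^2) distance matrix. This is a simplification of the idea, not the authors' algorithm (whose memory use is constant rather than proportional to the block size); all names and bin settings are illustrative.

```python
import numpy as np

# Accumulate a histogram of all pairwise distances in fixed-size bins,
# streaming over blocks of points instead of building the full n x n
# distance matrix. Peak memory here is O(block * n), not O(n^2).
rng = np.random.default_rng(1)
pts = rng.random((5_000, 2))               # firm coordinates (unit square)
bins = np.linspace(0.0, np.sqrt(2.0), 201)
hist = np.zeros(len(bins) - 1)
block = 500

for start in range(0, len(pts), block):
    chunk = pts[start:start + block]
    d = np.linalg.norm(chunk[:, None, :] - pts[None, :, :], axis=-1)
    # count each unordered pair once: only partners with a larger index
    cols = np.arange(len(pts))[None, :]
    rows = (start + np.arange(len(chunk)))[:, None]
    hist += np.histogram(d[cols > rows], bins=bins)[0]

density = hist / hist.sum()                # empirical distance density
print("modal distance bin starts at:", bins[np.argmax(density)])
```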

  1. A Comparison of Trans Women, Trans Men, Genderqueer Individuals, and Cisgender Brothers and Sisters on the Bem Sex-Role Inventory: Ratings by Self and Siblings.

    PubMed

    Factor, Rhonda J; Rothblum, Esther D

    2017-01-01

    A U.S. national sample of 295 transgender adults (trans women, trans men, and genderqueer individuals) and their cisgender siblings completed the Bem Sex-Role Inventory about their siblings as well as themselves, which enabled a comparison between self-perceptions and sibling's perceptions of personality characteristics. Self-reported personality characteristics scored as feminine of trans women were not statistically different from those of their cisgender sisters, but they were significantly higher than self-reported femininity scores of trans men, genderqueer individuals, and cisgender brothers. Self-reported personality characteristics scored as masculine of trans men did not differ significantly from those of their cisgender brothers, but they were higher than those of trans women. Trans men and cisgender brothers were viewed by their siblings in a more sex-typed way than they rated themselves, whereas trans women and cisgender sisters were rated by their siblings in a less sex-typed way than they viewed themselves.

  2. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  3. Propensity Score-Based Methods versus MTE-Based Methods in Causal Inference: Identification, Estimation, and Application

    ERIC Educational Resources Information Center

    Zhou, Xiang; Xie, Yu

    2016-01-01

    Since the seminal introduction of the propensity score (PS) by Rosenbaum and Rubin, PS-based methods have been widely used for drawing causal inferences in the behavioral and social sciences. However, the PS approach depends on the ignorability assumption: there are no unobserved confounders once observed covariates are taken into account. For…

  4. A flower image retrieval method based on ROI feature.

    PubMed

    Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan

    2004-07-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).

  5. Flow of a Casson fluid through a locally-constricted porous channel: a numerical study

    NASA Astrophysics Data System (ADS)

    Amlimohamadi, Haleh; Akram, Maryammosadat; Sadeghy, Kayvan

    2016-05-01

    Flow of a Casson fluid through a two-dimensional porous channel containing a local constriction is numerically investigated assuming that the resistance offered by the porous medium obeys Darcy's law. Treating the constriction as another porous medium which obeys the Darcy-Forchheimer model, the equations governing fluid flow in the main channel and the constriction itself are numerically solved using the finite-volume method (FVM) based on the pseudo-transient SIMPLE algorithm. It is shown that an increase in the porosity of the channel decreases the shear stress exerted on the constriction. On the other hand, an increase in the fluid's yield stress is predicted to increase the maximum shear stress experienced by the constriction near its crest. The porosity of the constriction itself is predicted to have a negligible effect on the plaque's shear stress, but the momentum of the weak flow passing through the constriction is argued to hinder the bulk flow from separating downstream of the constriction.

  6. DNA-Based Methods in the Immunohematology Reference Laboratory

    PubMed Central

    Denomme, Gregory A

    2010-01-01

    Although hemagglutination serves the immunohematology reference laboratory well, when used alone, it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups are required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs. PMID:21257350

  7. Effects of Gas Rarefaction on Dynamic Characteristics of Micro Spiral-Grooved Thrust Bearing.

    PubMed

    Liu, Ren; Wang, Xiao-Li; Zhang, Xiao-Qing

    2012-04-01

    The effects of gas rarefaction on the dynamic characteristics of micro spiral-grooved thrust bearings are studied. The Reynolds equation is modified by the first-order slip model, and the corresponding perturbation equations are then obtained on the basis of the linear small-perturbation method. In the transformed spiral-curve coordinate system, the finite volume method (FVM) is employed to discretize the surface domain of the micro bearing. The results show that, compared with the continuum-flow model, under the slip-flow regime the decrease in pressure and stiffness becomes pronounced as the compressibility number increases. Moreover, as the relative gas-film thickness decreases, the deviations in the dynamic coefficients between the slip-flow and continuum-flow models increase.

  8. Optimization of the gypsum-based materials by the sequential simplex method

    NASA Astrophysics Data System (ADS)

    Doleželová, Magdalena; Vimmrová, Alena

    2017-11-01

    The application of the sequential simplex optimization method to the design of gypsum-based materials is described. The principles of the simplex method are explained and several examples of its use for the optimization of lightweight gypsum and ternary gypsum-based materials are given. By this method, lightweight gypsum-based materials with the desired properties and a ternary gypsum-based material with higher strength (16 MPa) were successfully developed. The simplex method is a useful tool for optimizing gypsum-based materials, but the objective of the optimization has to be formulated appropriately.
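
    As a sketch of what a sequential-simplex search over mix proportions looks like, the snippet below runs a Nelder-Mead simplex on a made-up smooth "strength" response standing in for a laboratory measurement; in practice each function evaluation would be a casting and a strength test, which is exactly where a derivative-free simplex search pays off. All variable names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def strength(x):
    """Made-up smooth response surface standing in for a lab strength test."""
    w, a = x                        # water/gypsum ratio, additive fraction
    return -(16.0 - 40.0 * (w - 0.55) ** 2 - 300.0 * (a - 0.05) ** 2)

# Nelder-Mead walks a simplex of trial mixes toward the optimum without
# derivatives -- convenient when every evaluation is a physical casting.
res = minimize(strength, x0=np.array([0.70, 0.02]), method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-3})
print("best mix (w, a):", res.x)
print("predicted strength [MPa]:", -res.fun)
```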

  9. Deterministic and fuzzy-based methods to evaluate community resilience

    NASA Astrophysics Data System (ADS)

    Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo

    2018-04-01

    Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from the literature, and each indicator is linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as an input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data, while accounting for the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open-source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.

  10. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variant methods are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures needed by the NLM method, due to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40-km-long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
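
    For concreteness, a minimal explicit routing loop for the NLM storage relation S = K[xI + (1-x)O]^m (the Gill formulation) is sketched below; the parameter values and the synthetic inflow hydrograph are invented, not the calibrated values from the study.

```python
import numpy as np

# Explicit routing loop for the three-parameter nonlinear Muskingum model
# S = K * (x*I + (1-x)*O)**m. Parameters and inflow are illustrative.
K, x, m = 0.9, 0.25, 1.1           # storage coefficient, weight, exponent
dt = 1.0                           # time step [h]
t = np.arange(0.0, 96.0, dt)
inflow = 20.0 + 80.0 * np.exp(-((t - 24.0) / 8.0) ** 2)  # synthetic hydrograph

S = K * inflow[0] ** m             # start at steady state (O = I)
outflow = [inflow[0]]
for I_now, I_next in zip(inflow[:-1], inflow[1:]):
    O = ((S / K) ** (1.0 / m) - x * I_now) / (1.0 - x)   # storage -> outflow
    S += dt * (I_now - O)          # continuity: dS/dt = I - O
    outflow.append(((S / K) ** (1.0 / m) - x * I_next) / (1.0 - x))

print("peak inflow :", round(inflow.max(), 2))
print("peak outflow:", round(max(outflow), 2))
```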

  11. Web-Based Training Methods for Behavioral Health Providers: A Systematic Review.

    PubMed

    Jackson, Carrie B; Quetsch, Lauren B; Brabson, Laurel A; Herschell, Amy D

    2018-07-01

    There has been an increase in the use of web-based training methods to train behavioral health providers in evidence-based practices. This systematic review focuses solely on the efficacy of web-based training methods for training behavioral health providers. A literature search yielded 45 articles meeting inclusion criteria. Results indicated that the serial instruction training method was the most commonly studied web-based training method. While the current review has several notable limitations, findings indicate that participating in a web-based training may result in greater post-training knowledge and skill, in comparison to baseline scores. Implications and recommendations for future research on web-based training methods are discussed.

  12. Excitation of secondary Love and Rayleigh waves in a three-dimensional sedimentary basin evaluated by the direct boundary element method with normal modes

    NASA Astrophysics Data System (ADS)

    Hatayama, Ken; Fujiwara, Hiroyuki

    1998-05-01

    This paper aims to present a new method to calculate surface waves in 3-D sedimentary basin models, based on the direct boundary element method (BEM) with vertical boundaries and normal modes, and to evaluate the excitation of the secondary surface waves that are so prominently observed in basins. Many authors have so far developed numerical techniques to calculate the total 3-D wavefield. However, the calculation of the total wavefield does not match our purpose, because the secondary surface waves excited on the basin boundaries will be contaminated by other undesirable waves. In this paper, we prove that, in principle, it is possible to extract surface waves excited on part of the basin boundaries from the total 3-D wavefield with a formulation that uses the reflection and transmission operators defined in the space domain. In realizing this extraction in the BEM algorithm, we encounter the problem arising from the lateral and vertical truncations of boundary surfaces extending infinitely in the half-space. To compensate for the truncations, we first introduce an approximate algorithm using 2.5-D and 1-D wavefields for reference media, where a 2.5-D wavefield means a 3-D wavefield with a 2-D subsurface structure, and we then demonstrate the extraction. Finally, we calculate the secondary surface waves excited on the arc shape (horizontal section) of a vertical basin boundary subject to incident SH and SV plane waves propagating perpendicularly to the chord of the arc. As a result, we find that in the SH-incidence case the Love waves are predominantly excited rather than the Rayleigh waves, and that in the SV-incidence case the Love waves as well as the Rayleigh waves are excited. This suggests that the Love waves are more detectable than the Rayleigh waves in the horizontal components of observed recordings.

  13. Qualitative Assessment of Inquiry-Based Teaching Methods

    ERIC Educational Resources Information Center

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  14. Discretization of the induced-charge boundary integral equation.

    PubMed

    Bardhan, Jaydeep P; Eisenberg, Robert S; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.
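
    To make the "centroid-collocation" baseline concrete, here is a sketch of that assembly for the simplest analogous problem: the 2D Laplace single-layer operator on flat panels of a circle (radius chosen away from the degenerate unit-capacity scale), verified by reproducing the harmonic field of an exterior point source. The paper's setting is the 3D dielectric-interface equation, and the qualocation alternative is not reproduced here.

```python
import numpy as np

# Centroid collocation: off-diagonal entries use one-point (centroid)
# quadrature G(c_i, c_j) * len_j; the diagonal uses the exact integral of
# the log kernel over a flat panel collocated at its midpoint.
n, a = 128, 0.8
ang = 2 * np.pi * np.arange(n + 1) / n
verts = a * np.c_[np.cos(ang), np.sin(ang)]
cent = 0.5 * (verts[:-1] + verts[1:])              # panel centroids
hlen = np.linalg.norm(verts[1:] - verts[:-1], axis=1)

G = lambda r: -np.log(r) / (2 * np.pi)             # 2D Laplace kernel

r = np.linalg.norm(cent[:, None, :] - cent[None, :, :], axis=-1)
np.fill_diagonal(r, 1.0)                           # placeholder, replaced below
A = G(r) * hlen[None, :]
np.fill_diagonal(A, -hlen * (np.log(hlen / 2.0) - 1.0) / (2.0 * np.pi))

x0 = np.array([3.0, 0.5])                          # point source outside
u = G(np.linalg.norm(cent - x0, axis=1))           # Dirichlet data at centroids
sigma = np.linalg.solve(A, u)                      # induced layer density

xi = np.array([0.2, -0.1])                         # interior evaluation point
u_bem = (G(np.linalg.norm(xi - cent, axis=1)) * hlen * sigma).sum()
print("BEM value:", u_bem, "  exact:", G(np.linalg.norm(xi - x0)))
```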

  15. Discretization of the induced-charge boundary integral equation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Eisenberg, Robert S.; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  16. Riser Feeding Evaluation Method for Metal Castings Using Numerical Analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Nadiah

    One of the design aspects that continues to create a challenge for casting designers is the optimum design of casting feeders (risers). As liquid metal solidifies, the metal shrinks and forms cavities inside the casting. In order to avoid shrinkage cavities, risers are added to the casting shape to supply additional molten metal when shrinkage occurs during solidification. The shrinkage cavities in the casting are compensated by controlling the cooling rate to promote directional solidification. This control can be achieved by designing the casting such that the cooling begins at the sections that are farthest away from the risers and ends at the risers. Therefore, the risers solidify last and feed the casting with molten metal. As a result, the shrinkage cavities formed during solidification are in the risers, which are later removed from the casting. Since casting designers usually have to go through iterative processes of validating casting designs, which are very costly due to expensive simulation processes or manual trial and error on actual casting processes, this study investigates more efficient methods that will help casting designers utilize their casting experience systematically to develop good initial casting designs. The objective is to reduce the casting design iterations, thereby reducing the cost involved in the design process. This research aims at finding a method that can help casting designers design effective risers for the sand casting process of aluminum-silicon alloys by utilizing the analysis of solidification simulation. The analysis focuses on studying the significance of the pressure distribution of the liquid metal at the early stage of casting solidification, when heat transfer and convective fluid flow are taken into account in the solidification simulation. The mathematical model of casting solidification was solved using the finite volume method (FVM). This study focuses to improve our

  17. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods one based on adaptive detection of shape features and one based on adaptive color segmentation to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume in the order of milliwatts of power.

  18. Predictors of Weapon Carrying in Youth Attending Drop-in Centers

    PubMed Central

    Blumberg, Elaine J.; Liles, Sandy; Kelley, Norma J.; Hovell, Melbourne F.; Bousman, Chad A.; Shillington, Audrey M.; Ji, Ming; Clapp, John

    2012-01-01

    Objective To test and compare 2 predictive models of weapon carrying in youth (n=308) recruited from 4 drop-in centers in San Diego and Imperial counties. Methods Both models were based on the Behavioral Ecological Model (BEM). Results The first and second models significantly explained 39% and 53% of the variance in weapon carrying, respectively, and both full models shared the significant predictors of being black(−), being Hispanic (−), peer modeling of weapon carrying/jail time(+), and school suspensions(+). Conclusions Results suggest that the BEM offers a generalizable conceptual model that may inform prevention strategies for youth at greatest risk of weapon carrying. PMID:19320622

  19. A Quantum-Based Similarity Method in Virtual Screening.

    PubMed

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2015-10-02

    One of the most widely used techniques for ligand-based virtual screening is similarity searching. This study adopts concepts from quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers. Hence, this study proposes three techniques to embed and re-represent molecular compounds in complex-number form. The quantum-based similarity method developed in this study, which depends on the complex pure Hilbert space of molecules, is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and a significance test was used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets were used for the experiments and were represented by 2D fingerprints. Simulated virtual screening experiments show that the effectiveness of the SQB method increased significantly, owing to the representational power of molecular compounds in complex-number form, compared with the Tanimoto benchmark similarity measure.
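
    The Tanimoto benchmark against which SQB is compared is simple to state on binary fingerprints: T(a, b) = |a AND b| / |a OR b|. A sketch with random placeholder fingerprints follows (the SQB complex-number embedding itself is not reproduced here):

```python
import numpy as np

# Tanimoto similarity on binary fingerprints: T = |a & b| / |a | b|.
# Random bit vectors stand in for MDDR/MUV/DUD 2D fingerprints.
rng = np.random.default_rng(7)
query = rng.random(1024) < 0.1                 # query molecule
library = rng.random((1000, 1024)) < 0.1       # screening library

inter = (library & query).sum(axis=1)
union = (library | query).sum(axis=1)
tanimoto = inter / np.maximum(union, 1)        # guard empty fingerprints

top = np.argsort(tanimoto)[::-1][:10]          # ranked retrieval
print("best Tanimoto scores:", np.round(tanimoto[top], 3))
```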

  20. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    PubMed

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2017-08-01

    For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges; followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.

  1. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method

    PubMed Central

    2011-01-01

    Background Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. Methods We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. Results The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for a filmless system and 23.6% for a film-based system. Conclusions The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater-value services directly to patients. PMID:21961846
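
    A toy version of the ABC allocation step: indirect cost pools are attached to the activities named above and assigned to examination types through activity drivers. All figures below are invented for illustration and are not the study's data.

```python
# Activity-based costing sketch: cost pools -> activities -> cost objects.
activities = {                      # activity -> (indirect cost pool, driver)
    "calling patient":     (120_000, "exams"),
    "explanation of scan":  (90_000, "exams"),
    "take photographs":    (300_000, "views"),
    "aftercare":            (60_000, "exams"),
}
exams = {                           # cost object -> (annual exams, views/exam)
    "lumbar": (800, 6), "knee": (600, 3), "wrist": (400, 2), "other": (700, 1),
}

total_exams = sum(n for n, _ in exams.values())
total_views = sum(n * v for n, v in exams.values())

for name, (n, views) in exams.items():
    cost = 0.0
    for pool, driver in activities.values():
        if driver == "exams":       # allocate pool per examination
            cost += pool * n / total_exams
        else:                       # allocate pool per view taken
            cost += pool * n * views / total_views
    print(f"{name}: {cost / n:,.0f} yen per examination")
```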

  2. Evolutionary game theory using agent-based methods.

    PubMed

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions, require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.
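
    A minimal agent-based simulation in the spirit described: a well-mixed population playing a one-shot Prisoner's Dilemma under death-birth updating with mutation, where each agent carries its own strategy and the outcome is obtained by evolving the population forward in time. Payoffs, rates and population size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Prisoner's Dilemma payoffs (T > R > P > S) and simulation settings.
R, S, T, P = 3.0, 0.0, 5.0, 1.0
N, mu, gens = 200, 0.01, 2000          # population, mutation rate, steps
coop = rng.random(N) < 0.5             # each agent's strategy "gene"

for _ in range(gens):
    nc = coop.sum()
    # expected payoff against a random co-player (excluding self)
    f_c = (R * (nc - 1) + S * (N - nc)) / (N - 1)
    f_d = (T * nc + P * (N - nc - 1)) / (N - 1)
    fit = np.where(coop, f_c, f_d) + 1e-9
    # death-birth: a random agent dies, a fitness-weighted parent reproduces
    dead = rng.integers(N)
    parent = rng.choice(N, p=fit / fit.sum())
    coop[dead] = coop[parent] ^ (rng.random() < mu)   # mutate with prob. mu

print("final cooperator fraction:", coop.mean())
```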

  3. Experimental validation of a numerical 3-D finite model applied to wind turbines design under vibration constraints: TREVISE platform

    NASA Astrophysics Data System (ADS)

    Sellami, Takwa; Jelassi, Sana; Darcherif, Abdel Moumen; Berriri, Hanen; Mimouni, Med Faouzi

    2018-04-01

    With the advancement of wind turbines towards complex structures, the requirement for trustworthy structural models has become more apparent. Hence, the vibration characteristics of the wind turbine components, like the blades and the tower, have to be extracted under vibration constraints. Although extracting the modal properties of blades is a simple task, calculating precise modal data for the whole wind turbine coupled to its tower/foundation is still a perplexing task. In this framework, this paper focuses on the investigation of the structural modeling approach of modern commercial micro-turbines. Thus, the structural model of a wind turbine with a complex design, the Rutland 504, is established based on both experimental and numerical methods. A three-dimensional (3-D) numerical model of the structure was set up based on the finite volume method (FVM) using the academic finite element analysis software ANSYS. To validate the created model, experimental vibration tests were carried out using the vibration test system of the TREVISE platform at ECAM-EPMI. The tests were based on the experimental modal analysis (EMA) technique, which is one of the most efficient techniques for identifying the parameters of structures. Indeed, the poles and residues of the frequency response functions (FRF) between input and output spectra were calculated to extract the mode shapes and the natural frequencies of the structure. Based on the obtained modal parameters, the numerical model was updated.

  4. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature-based representations. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HOG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  5. Measuring geographic access to health care: raster and network-based methods

    PubMed Central

    2012-01-01

    Background Inequalities in geographic access to health care result from the configuration of facilities, population distribution, and the transportation infrastructure. In recent accessibility studies, the traditional distance measure (Euclidean) has been replaced with more plausible measures such as travel distance or time. Both network and raster-based methods are often utilized for estimating travel time in a Geographic Information System. Therefore, exploring the differences in the underlying data models and associated methods and their impact on geographic accessibility estimates is warranted. Methods We examine the assumptions present in population-based travel time models. Conceptual and practical differences between raster and network data models are reviewed, along with methodological implications for service area estimates. Our case study investigates Limited Access Areas defined by Michigan’s Certificate of Need (CON) Program. Geographic accessibility is calculated by identifying the number of people residing more than 30 minutes from an acute care hospital. Both network and raster-based methods are implemented and their results are compared. We also examine sensitivity to changes in travel speed settings and population assignment. Results In both methods, the areas identified as having limited accessibility were similar in their location, configuration, and shape. However, the number of people identified as having limited accessibility varied substantially between methods. Over all permutations, the raster-based method identified more area and people with limited accessibility. The raster-based method was more sensitive to travel speed settings, while the network-based method was more sensitive to the specific population assignment method employed in Michigan. Conclusions Differences between the underlying data models help to explain the variation in results between raster and network-based methods. Considering that the choice of data model/method may

  6. A constraint optimization based virtual network mapping method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen

    2013-03-01

    The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint optimization based mapping method for solving the virtual network mapping problem. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for solving the two phases, respectively. The node mapping algorithm adopts a greedy strategy, mainly considering two factors: the available resources supplied by the nodes and the distance between the nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization approach, which guarantees an optimal mapping with the minimum network cost. Finally, simulation experiments are used to validate the method, and results show that it performs very well.
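
    A minimal sketch of the greedy node-mapping phase described above, under assumed data structures (a cpu_free table of substrate capacities, a per-virtual-node demand table, and a dist function returning substrate hop distance); the paper's exact scoring rule and the distributed link-mapping phase are not reproduced.

```python
# Hypothetical schema: cpu_free and demand are dicts, dist(u, v) gives the
# hop distance between two substrate nodes.
def greedy_node_mapping(virtual_nodes, substrate_nodes, cpu_free, demand, dist):
    """Map each virtual node to the substrate node maximizing a score that
    rewards spare resources and closeness to already-mapped nodes."""
    mapping = {}
    for v in sorted(virtual_nodes, key=lambda v: demand[v], reverse=True):
        candidates = [s for s in substrate_nodes
                      if cpu_free[s] >= demand[v] and s not in mapping.values()]
        if not candidates:
            raise RuntimeError(f"no substrate node can host {v}")

        def score(s):
            # More free CPU is better; long paths to mapped nodes are worse.
            return cpu_free[s] - sum(dist(s, m) for m in mapping.values())

        best = max(candidates, key=score)
        mapping[v] = best
        cpu_free[best] -= demand[v]
    return mapping
```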

  7. Effects of a Format-based Second Language Teaching Method in Kindergarten.

    ERIC Educational Resources Information Center

    Uilenburg, Noelle; Plooij, Frans X.; de Glopper, Kees; Damhuis, Resi

    2001-01-01

    Focuses on second language teaching with a format-based method. The differences between a format-based teaching method and a standard approach used as treatments in a quasi-experimental, non-equivalent control group are described in detail. Examines whether the effects of a format-based teaching method and a standard foreign language method differ…

  8. Fast Reduction Method in Dominance-Based Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real-world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves on the efficiency of the traditional method, especially for large-scale data.
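
    For concreteness, a naive O(n^2 m) baseline for computing dominance classes is sketched below (assuming all attributes are numeric and preference-ordered so that larger is better); the paper's accelerated computation is not reproduced here.

```python
import numpy as np

def dominating_sets(X):
    """For each object i, return the set of objects that dominate it, i.e.
    score at least as well on every attribute. X has shape
    (n_objects, n_attributes); this is the straightforward baseline."""
    return [set(np.where((X >= X[i]).all(axis=1))[0]) for i in range(len(X))]

# Example: object 0 is dominated by objects 0 and 2, object 2 only by itself.
X = np.array([[1, 2], [2, 1], [2, 3]])
print(dominating_sets(X))   # [{0, 2}, {1, 2}, {2}]
```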

  9. Forest Vegetation Management: Developments in the Science and Practice

    Treesearch

    James H. Miller

    2006-01-01

    The practices of forest vegetation management (FVM) have been widely adopted and continue to undergo country-specific modifications through extensive research. Beginnings of this component discipline of silviculture were in weed science in the 1960s and focused primarily on translating developing herbicide technology underway in agriculture to forestry uses. It was an...

  10. Chapter 11. Community analysis-based methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.; Wu, C.H.; Andersen, G.L.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  11. Therapy Decision Support Based on Recommender System Methods

    PubMed Central

    Gräßer, Felix; Beckert, Stefanie; Küster, Denise; Schmitt, Jochen; Abraham, Susanne; Malberg, Hagen

    2017-01-01

    We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely, a Collaborative Recommender and a Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and recommend the therapy which is assumed to provide the best outcome for a specific patient and time, that is, consultation. The proposed methods are evaluated using a clinical database incorporating patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and better recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from being combined into an overall recommender system. PMID:29065657
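
    A minimal sketch of a collaborative (neighbourhood-based) outcome predictor in the spirit described above; the patient schema and the similarity measure are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def predict_outcome(target_features, patients, k=5):
    """Similarity-weighted mean of the outcomes of the k most similar
    patients who received the candidate therapy. `patients` is a list of
    (feature_vector, outcome) pairs -- a hypothetical schema."""
    scored = []
    for feats, outcome in patients:
        sim = 1.0 / (1.0 + np.linalg.norm(target_features - feats))
        scored.append((sim, outcome))
    scored.sort(key=lambda t: t[0], reverse=True)
    top = scored[:k]
    weights = np.array([s for s, _ in top])
    outcomes = np.array([o for _, o in top])
    return float(weights @ outcomes / weights.sum())

# The recommended therapy is the option whose predicted outcome is best.
```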

  12. MR Imaging-based Semi-quantitative Methods for Knee Osteoarthritis

    PubMed Central

    JARRAYA, Mohamed; HAYASHI, Daichi; ROEMER, Frank Wolfgang; GUERMAZI, Ali

    2016-01-01

    Magnetic resonance imaging (MRI)-based semi-quantitative (SQ) methods applied to knee osteoarthritis (OA) have been introduced during the last decade and have fundamentally changed our understanding of knee OA pathology since then. Several epidemiological studies and clinical trials have used MRI-based SQ methods to evaluate different outcome measures. Interest in MRI-based SQ scoring systems has led to continuous updates and refinement. This article reviews the different SQ approaches for MRI-based whole-organ assessment of knee OA and also discusses practical aspects of whole-joint assessment. PMID:26632537

  13. Advanced development of BEM for elastic and inelastic dynamic analysis of solids

    NASA Technical Reports Server (NTRS)

    Banerjee, P. K.; Ahmad, S.; Wang, H. C.

    1989-01-01

    Direct Boundary Element formulations and their numerical implementation for periodic and transient elastic as well as inelastic transient dynamic analyses of two-dimensional, axisymmetric and three-dimensional solids are presented. The inelastic formulation is based on an initial stress approach and is the first of its kind in the field of Boundary Element Methods. This formulation employs the Navier-Cauchy equation of motion, Graffi's dynamic reciprocal theorem, Stokes' fundamental solution, and the divergence theorem, together with kinematical and constitutive equations to obtain the pertinent integral equations of the problem in the time domain within the context of the small displacement theory of elastoplasticity. The dynamic (periodic, transient as well as nonlinear transient) formulations have been applied to a range of problems. The numerical formulations presented here are included in the BEST3D and GPBEST systems.

  14. Boundary element methods for the analysis of crack growth in the presence of residual stress fields

    NASA Astrophysics Data System (ADS)

    Leitao, V. M. A.; Aliabadi, M. H.; Rooke, D. P.; Cook, R.

    1998-06-01

    Two boundary element methods of simulating crack growth in the presence of residual stress fields are presented, and the results are compared to experimental measurements. The first method utilizes linear elastic fracture mechanics (LEFM) and superimposes the solutions due to the applied load and the residual stress field. In this method, the residual stress fields are obtained from an elastoplastic BEM analysis, and numerical weight functions are used to obtain the stress intensity factors due to the fatigue loading. The second method presented is an elastoplastic fracture mechanics (EPFM) approach for crack growth simulation. A nonlinear J-integral is used in the fatigue life calculations. The methods are shown to agree well with experimental measurements of crack growth in prestressed open hole specimens. Results are also presented for the case where the prestress is applied to specimens that have been precracked.

  15. Design method of ARM based embedded iris recognition system

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting

    2008-03-01

    With the advantages of non-invasiveness, uniqueness, stability and a low false recognition rate, iris recognition has been successfully applied in many fields. Up to now, most iris recognition systems have been based on PCs. However, a PC is not portable and consumes more power. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of iris image acquisition and the recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm based on this platform, and finally realized the design of an ARM-based iris imaging and recognition system. Experimental results show that the ARM platform we used is fast enough to run the iris recognition algorithm, and that the data stream flows smoothly between the camera and the ARM chip based on the embedded Linux system. This is an effective way to realize a portable embedded iris recognition system on ARM.

  16. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    To address the shortcomings of traditional image mosaicking, this paper describes a new image mosaic algorithm based on Harris corners. Firstly, a Harris operator, combined with a low-pass smoothing filter constructed from spline functions and a circular-window search, is applied to detect image corners; this gives better localisation performance and effectively avoids corner clustering. Secondly, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus (RANSAC). Finally, a weighted trigonometric function combined with interpolation is used for image fusion. Experiments show that this method can effectively remove splicing ghosting and improve the accuracy of image mosaicking.
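
    For reference, the standard Harris corner response can be computed with a few array operations (a minimal NumPy/SciPy sketch); the paper's spline-based low-pass filter and circular-window search are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.5, k=0.04):
    """Harris response R = det(M) - k * trace(M)^2, where M is the
    Gaussian-smoothed structure tensor of the image gradients."""
    Ix = sobel(img, axis=1)
    Iy = sobel(img, axis=0)
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Corners are local maxima of R above a threshold, e.g.
# corners = np.argwhere(R > 0.01 * R.max())
```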

  17. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false-positive rate.

  18. Functional connectivity analysis of the neural bases of emotion regulation: A comparison of independent component method with density-based k-means clustering method.

    PubMed

    Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo

    2016-04-29

    Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for the cognitive reappraisal of emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that, compared with the ICA method, the density-based k-means clustering method provides higher clustering sensitivity. In addition, it is more sensitive to relatively weak functional connection regions. The study concludes that, in the process of receiving emotional stimuli, the most clearly activated areas are mainly distributed in the frontal lobe, cingulum and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable basis for follow-up studies of brain functional connectivity.

  19. Comparison of contact conditions obtained by direct simulation with statistical analysis for normally distributed isotropic surfaces

    NASA Astrophysics Data System (ADS)

    Uchidate, M.

    2018-09-01

    In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (A_r/A_a) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions in terms of the power index and correlation length were used for the investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of A_r/A_a from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with the watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.

  20. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. Firstly, a VGG16 net is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and image labels are then used to train a BP neural network, finally achieving color image definition evaluation. The method is tested on images from the CSIQ database, blurred at different levels, yielding 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. For each category, the high-dimensional features of 300 out of 400 samples are used to train the VGG16 net and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
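
    A hedged sketch of the described pipeline using PyTorch/torchvision (assuming torchvision >= 0.13 for the weights API): VGG16 is truncated before its final classification layers to yield the 4,096-dimensional features, which feed a small fully connected (BP-style) classifier for three blur levels. Data loading and the training loop are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# Feature extractor: VGG16 truncated after its second fully connected layer,
# producing the 4,096-dimensional representation described above.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.eval()
feature_extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten(),
                                  *list(vgg.classifier.children())[:-3])

# Small BP (fully connected) network mapping features to 3 blur levels.
classifier = nn.Sequential(nn.Linear(4096, 256), nn.ReLU(), nn.Linear(256, 3))

def classify(batch):
    """batch: (N, 3, 224, 224) tensor of normalized images."""
    with torch.no_grad():
        feats = feature_extractor(batch)
    return classifier(feats).argmax(dim=1)
```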

  1. Methods for preparing colloidal nanocrystal-based thin films

    DOEpatents

    Kagan, Cherie R.; Fafarman, Aaron T.; Choi, Ji-Hyuk; Koh, Weon-kyu; Kim, David K.; Oh, Soong Ju; Lai, Yuming; Hong, Sung-Hoon; Saudari, Sangameshwar Rao; Murray, Christopher B.

    2016-05-10

    Methods of exchanging ligands to form colloidal nanocrystals (NCs) with chalcogenocyanate (xCN)-based ligands and apparatuses using the same are disclosed. The ligands may be exchanged by assembling NCs into a thin film and immersing the thin film in a solution containing xCN-based ligands. The ligands may also be exchanged by mixing a xCN-based solution with a dispersion of NCs, flocculating the mixture, centrifuging the mixture, discarding the supernatant, adding a solvent to the pellet, and dispersing the solvent and pellet to form dispersed NCs with exchanged xCN-ligands. The NCs with xCN-based ligands may be used to form thin film devices and/or other electronic, optoelectronic, and photonic devices. Devices comprising nanocrystal-based thin films and methods for forming such devices are also disclosed. These devices may be constructed by depositing NCs on to a substrate to form an NC thin film and then doping the thin film by evaporation and thermal diffusion.

  2. Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K

    2007-07-07

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations is presented that compares the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved

  3. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved

  4. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. The parameters are estimated using particle swarm optimization (PSO), which searches for the optimal solution of a multivariate cost function. The procedure for camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images, establishment of the cost function based on the Kruppa equations, and PSO optimization using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method combining the maximal information coefficient (MIC) and the maximum asymmetry score (MAS) was used, together with the RANSAC algorithm, to remove false matching pairs. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (derived from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO to obtain the optimal solution. To prevent the optimization from being trapped in a local optimum, LiDAR data were used to determine the initialization scope, based on the solution of the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were carried out and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors of less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was better than 1.0 cm. Experimental and simulated results demonstrated that the proposed method is highly accurate and robust.
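
    A minimal particle swarm optimization loop of the kind that could minimize such a multivariate cost function; the cost function and search bounds below are toy placeholders, with the bounds standing in for the LiDAR-derived initialization scope.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize `cost` over the box `bounds` = (low, high) with a basic PSO."""
    low, high = map(np.asarray, bounds)
    rng = np.random.default_rng(0)
    x = rng.uniform(low, high, (n_particles, low.size))  # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_val.argmin()]                        # global best
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, low, high)
        vals = np.apply_along_axis(cost, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()]
    return g, pbest_val.min()

# Toy quadratic cost with four "intrinsic parameters"; g converges to them.
g, best = pso(lambda p: np.sum((p - np.array([800., 820., 320., 240.])) ** 2),
              (np.array([500., 500., 200., 150.]),
               np.array([1200., 1200., 500., 400.])))
print(g, best)
```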

  5. A New Intrusion Detection Method Based on Antibody Concentration

    NASA Astrophysics Data System (ADS)

    Zeng, Jie; Li, Tao; Li, Guiyang; Li, Haibo

    An antibody is a kind of protein that fights against harmful antigens in the human immune system. In modern medical examinations, the health status of a human body can be diagnosed by detecting the intrusion intensity of a specific antigen and the concentration indicator of the corresponding antibody in the body's serum. In this paper, inspired by the principle of antigen-antibody reactions, we present a New Intrusion Detection Method Based on Antibody Concentration (NIDMBAC) that reduces the false alarm rate without affecting the detection rate. In the proposed method, the basic definitions of self, nonself, antigen and detector in the intrusion detection domain are given. Then, according to the antigen intrusion intensity, the change in antibody number is recorded during the clone proliferation process for detectors, based on classified antigen recognition. Finally, building upon the above, a probabilistic calculation method for raising intrusion alarms, based on the correlation between antigen intrusion intensity and antibody concentration, is proposed. Our theoretical analysis and experimental results show that the proposed method performs better than traditional methods.

  6. A Review on Human Activity Recognition Using Vision-Based Method.

    PubMed

    Zhang, Shugang; Wei, Zhiqiang; Nie, Jie; Huang, Lei; Wang, Shuang; Li, Zhen

    2017-01-01

    Human activity recognition (HAR) aims to recognize activities from a series of observations on the actions of subjects and the environmental conditions. The vision-based HAR research is the basis of many applications including video surveillance, health care, and human-computer interaction (HCI). This review highlights the advances of state-of-the-art activity recognition approaches, especially for the activity representation and classification methods. For the representation methods, we sort out a chronological research trajectory from global representations to local representations, and recent depth-based representations. For the classification methods, we conform to the categorization of template-based methods, discriminative models, and generative models and review several prevalent methods. Next, representative and available datasets are introduced. Aiming to provide an overview of those methods and a convenient way of comparing them, we classify the existing literature with a detailed taxonomy including representation and classification methods, as well as the datasets they used. Finally, we investigate the directions for future research.

  7. Adaptive target binarization method based on a dual-camera system

    NASA Astrophysics Data System (ADS)

    Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing

    2018-01-01

    An adaptive target binarization method based on a dual-camera system that contains two dynamic vision sensors was proposed. First, a denoising preprocessing procedure is introduced to remove the noise events generated by the sensors. Second, the complete edge of the target is retrieved and represented by events, based on an event mosaicking method. Third, the region of the target is confirmed by an event-to-event method. Finally, a postprocessing procedure using morphological open and close operations is adopted to remove the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots and successfully compared with other well-known binarization methods. The experimental results, which are based on visual and misclassification error criteria, show that the proposed method performs well and has better robustness on the binarization of degraded images.

  8. A Review on Human Activity Recognition Using Vision-Based Method

    PubMed Central

    Nie, Jie

    2017-01-01

    Human activity recognition (HAR) aims to recognize activities from a series of observations on the actions of subjects and the environmental conditions. The vision-based HAR research is the basis of many applications including video surveillance, health care, and human-computer interaction (HCI). This review highlights the advances of state-of-the-art activity recognition approaches, especially for the activity representation and classification methods. For the representation methods, we sort out a chronological research trajectory from global representations to local representations, and recent depth-based representations. For the classification methods, we conform to the categorization of template-based methods, discriminative models, and generative models and review several prevalent methods. Next, representative and available datasets are introduced. Aiming to provide an overview of those methods and a convenient way of comparing them, we classify the existing literature with a detailed taxonomy including representation and classification methods, as well as the datasets they used. Finally, we investigate the directions for future research. PMID:29065585

  9. Biogas slurry pricing method based on nutrient content

    NASA Astrophysics Data System (ADS)

    Zhang, Chang-ai; Guo, Honghai; Yang, Zhengtao; Xin, Shurong

    2017-11-01

    In order to promote the commercialization of biogas slurry, a method is put forward to value biogas slurry based on its nutrient contents. Firstly, the element contents of the biogas slurry were measured; secondly, each element was valued based on its market price; transportation cost, usage cost and market effects were then taken into account, and the pricing method for biogas slurry was finally obtained. This method can be useful in practical production. Taking cattle-manure biogas slurry and corn-stalk biogas slurry as examples, their prices were 38.50 yuan RMB per ton and 28.80 yuan RMB per ton, respectively. This paper will be useful for recognizing the value of biogas projects, keeping biogas projects running, and guiding the cyclic utilization of biomass resources in China.
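
    A toy arithmetic sketch of the nutrient-content valuation described: value each element at a fertilizer-equivalent market price, then adjust for transportation cost, usage cost, and a market-effect factor. All prices and factors below are illustrative placeholders, not the paper's values.

```python
# Placeholder fertilizer-equivalent prices, yuan per kg of element.
NUTRIENT_PRICE = {"N": 4.0, "P": 5.0, "K": 3.5}

def slurry_price(content_kg_per_t, transport_cost=5.0, usage_cost=3.0,
                 market_factor=0.9):
    """Price (yuan/t) = market-factor-adjusted nutrient value minus the
    transportation and usage costs per ton."""
    nutrient_value = sum(content_kg_per_t[e] * NUTRIENT_PRICE[e]
                         for e in NUTRIENT_PRICE)
    return (nutrient_value - transport_cost - usage_cost) * market_factor

print(slurry_price({"N": 6.0, "P": 1.5, "K": 4.0}))  # hypothetical slurry
```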

  10. Method of coating an iron-based article

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magdefrau, Neal; Beals, James T.; Sun, Ellen Y.

    A method of coating an iron-based article includes a first heating step of heating a substrate that includes an iron-based material in the presence of an aluminum source material and halide diffusion activator. The heating is conducted in a substantially non-oxidizing environment, to cause the formation of an aluminum-rich layer in the iron-based material. In a second heating step, the substrate that has the aluminum-rich layer is heated in an oxidizing environment to oxidize the aluminum in the aluminum-rich layer.

  11. Comparison of Different Recruitment Methods for Sexual and Reproductive Health Research: Social Media-Based Versus Conventional Methods.

    PubMed

    Motoki, Yoko; Miyagi, Etsuko; Taguri, Masataka; Asai-Sato, Mikiko; Enomoto, Takayuki; Wark, John Dennis; Garland, Suzanne Marie

    2017-03-10

    Prior research about the sexual and reproductive health of young women has relied mostly on self-reported survey studies. Thus, participant recruitment using Web-based methods can improve sexual and reproductive health research about cervical cancer prevention. In our prior study, we reported that Facebook is a promising way to reach young women for sexual and reproductive health research. However, it remains unknown whether Web-based or other conventional recruitment methods (ie, face-to-face or flyer distribution) yield comparable survey responses from similar participants. We conducted a survey to determine whether there was a difference in the sexual and reproductive health survey responses of young Japanese women based on recruitment methods: social media-based and conventional methods. From July 2012 to March 2013 (9 months), we invited women of ages 16-35 years in Kanagawa, Japan, to complete a Web-based questionnaire. They were recruited either through a social media-based method (social networking site, SNS, group) or by conventional methods (conventional group). All participants enrolled were required to fill out and submit their responses through a Web-based questionnaire about their sexual and reproductive health for cervical cancer prevention. Of the 243 participants, 52.3% (127/243) were recruited by SNS, whereas 47.7% (116/243) were recruited by conventional methods. We found no differences between recruitment methods in responses to behaviors and attitudes in the sexual and reproductive health survey, although more participants from the conventional group (15%, 14/95) chose not to answer the age of first intercourse compared with those from the SNS group (5.2%, 6/116; P=.03). No differences were found between recruitment methods in the responses of young Japanese women to a Web-based sexual and reproductive health survey.

  12. External Aiding Methods for IMU-Based Navigation

    DTIC Science & Technology

    2016-11-26

    Carlo simulation and particle filtering. This approach allows for the utilization of highly complex systems in a black box configuration with minimal... An alternative method, which has the advantage of being less computationally demanding, is to use a Kalman filtering-based approach. The particular... Kalman filtering-based approach used here is known as linear covariance analysis. In linear covariance analysis, the nonlinear systems describing the

  13. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    NASA Astrophysics Data System (ADS)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem; the optimization procedure is then executed using the metamodel. Application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test shows that the presented elastic-plastic damage model is adequate to describe the material's mechanical behaviour, and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.

  14. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method.

    PubMed

    Muto, Hiroshi; Tani, Yuji; Suzuki, Shigemasa; Yokooka, Yuki; Abe, Tamotsu; Sase, Yuji; Terashita, Takayoshi; Ogasawara, Katsuhiko

    2011-09-30

    Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system with a film-based system using the ABC method. We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for the filmless system and 23.6% for the film-based system. The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater-value services directly to patients.

  15. A marker-based watershed method for X-ray image segmentation.

    PubMed

    Zhang, Xiaodong; Jia, Fucang; Luo, Suhuai; Liu, Guiying; Hu, Qingmao

    2014-03-01

    Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer-aided diagnosis (CAD), it is desirable to exclude the image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consists of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging and background extraction. One hundred clinical direct-radiograph X-ray images were used to validate the method. Manual thresholding and a multiscale gradient-based watershed method were implemented for comparison. The proposed method yielded a Dice coefficient of 0.964±0.069, which was better than that of manual thresholding (0.937±0.119) and that of the multiscale gradient-based watershed method (0.942±0.098). Special means were adopted to decrease the computational cost, including discarding the few pixels with the highest grayscale values via a percentile threshold, calculating the gradient magnitude through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072×3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, more than twice as fast as the multiscale gradient-based watershed method. The proposed method could be a potential tool for the diagnosis and quantification of X-ray images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
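
    A minimal marker-based watershed sketch using scikit-image, in the spirit of the pipeline described (gradient computation, marker extraction, watershed from markers); the thresholds are illustrative and the region-merging stage is omitted.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def background_mask(xray, lo=0.1, hi=0.6):
    """Label confident background/foreground pixels as markers, flood the
    gradient image, and return a Boolean background mask.
    `xray` is a float image scaled to [0, 1]; lo/hi are illustrative."""
    gradient = sobel(xray)
    markers = np.zeros(xray.shape, dtype=np.int32)
    markers[xray < lo] = 1   # confident background
    markers[xray > hi] = 2   # confident foreground (anatomy)
    labels = watershed(gradient, markers)
    return labels == 1
```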

  16. Numerical solution of a non-linear conservation law applicable to the interior dynamics of partially molten planets

    NASA Astrophysics Data System (ADS)

    Bower, Dan J.; Sanan, Patrick; Wolf, Aaron S.

    2018-01-01

    The energy balance of a partially molten rocky planet can be expressed as a non-linear diffusion equation using mixing length theory to quantify heat transport by both convection and mixing of the melt and solid phases. Crucially, in this formulation the effective or eddy diffusivity depends on the entropy gradient, ∂S/∂r, as well as entropy itself. First we present a simplified model with semi-analytical solutions that highlights the large dynamic range of ∂S/∂r (around 12 orders of magnitude) for physically relevant parameters. It also elucidates the thermal structure of a magma ocean during the earliest stage of crystal formation. This motivates the development of a simple yet stable numerical scheme able to capture the large dynamic range of ∂S/∂r and hence provide a flexible and robust method for time-integrating the energy equation. Using insight gained from the simplified model, we consider a full model, which includes energy fluxes associated with convection, mixing, gravitational separation, and conduction that all depend on the thermophysical properties of the melt and solid phases. This model is discretised and evolved by applying the finite volume method (FVM), allowing for extended precision calculations and using ∂S/∂r as the solution variable. The FVM is well-suited to this problem since it is naturally energy conserving, flexible, and intuitive to incorporate arbitrary non-linear fluxes that rely on lookup data. Special attention is given to the numerically challenging scenario in which crystals first form in the centre of a magma ocean. The computational framework we devise is immediately applicable to modelling high melt fraction phenomena in Earth and planetary science research. Furthermore, it provides a template for solving similar non-linear diffusion equations that arise in other science and engineering disciplines, particularly for non-linear functional forms of the diffusion coefficient.
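
    A minimal 1-D finite-volume sketch of one explicit update of a non-linear diffusion equation, with fluxes evaluated at cell faces; because the same face flux leaves one cell and enters its neighbour, the scheme conserves the integral of the solution by construction. The entropy formulation, lookup-table fluxes, and extended-precision arithmetic of the actual model are not reproduced.

```python
import numpy as np

def fvm_step(u, dr, dt, D):
    """One explicit FVM update of du/dt = d/dr( D(u, du/dr) * du/dr )."""
    grad = np.diff(u) / dr                       # gradient at interior faces
    u_face = 0.5 * (u[:-1] + u[1:])              # face-averaged state
    flux = -D(u_face, grad) * grad               # non-linear face flux
    flux = np.concatenate(([0.0], flux, [0.0]))  # zero-flux boundaries
    return u - dt / dr * np.diff(flux)

# Gradient-dependent diffusivity, loosely echoing an eddy diffusivity that
# depends on the gradient of the solution variable (purely illustrative).
D = lambda u, g: 1e-3 + np.abs(g)
u = np.exp(-np.linspace(-3.0, 3.0, 200) ** 2)
for _ in range(1000):
    u = fvm_step(u, dr=6.0 / 199, dt=1e-4, D=D)
```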

  17. A multivariate quadrature based moment method for LES based modeling of supersonic combustion

    NASA Astrophysics Data System (ADS)

    Donde, Pratik; Koo, Heeseok; Raman, Venkat

    2012-07-01

    The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.

  18. A MUSIC-based method for SSVEP signal processing.

    PubMed

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    Research on brain computer interfaces (BCIs) has become a hotspot in recent years because BCIs offer disabled people a means of communicating with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are more widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC) based method is proposed for multi-dimensional SSVEP feature extraction. Two-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous-mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attains good frequency resolution. In most situations, the recognition accuracy was higher than canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
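
    A minimal sketch of the MUSIC pseudospectrum applied to a single-channel SSVEP epoch; the embedding dimension, signal-subspace size, and candidate frequencies are illustrative choices, not the paper's settings.

```python
import numpy as np

def music_spectrum(x, freqs, fs, m=32, p=4):
    """MUSIC pseudospectrum of signal x at candidate frequencies `freqs`.
    m: embedding dimension; p: assumed signal-subspace dimension."""
    N = len(x) - m + 1
    X = np.stack([x[i:i + m] for i in range(N)], axis=1)  # m x N windows
    R = X @ X.T / N                                       # covariance
    _, V = np.linalg.eigh(R)                              # ascending order
    En = V[:, :m - p]                                     # noise subspace
    spec = [1.0 / np.linalg.norm(En.T @ np.exp(2j * np.pi * f / fs
                                               * np.arange(m))) ** 2
            for f in freqs]
    return np.array(spec)

# Toy epoch: 10 Hz SSVEP response in noise, 2 s at 250 Hz sampling.
fs = 250
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
cand = np.array([8.0, 10.0, 12.0, 15.0])     # typical stimulus frequencies
print(cand[np.argmax(music_spectrum(x, cand, fs))])  # expected: 10.0
```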

  19. Frame synchronization methods based on channel symbol measurements

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.

    1989-01-01

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.
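
    A minimal sketch of marker correlation on soft channel symbols, the core operation behind the channel-symbol sync statistics discussed above; the marker length, noise level, and detection-by-peak rule are illustrative.

```python
import numpy as np

def marker_correlation(symbols, marker):
    """Correlate received soft symbols with a +/-1 marker at every offset.
    Sync is declared at the peak, subject to a threshold that trades the
    miss probability against the false-alarm probability."""
    marker = np.asarray(marker, dtype=float)
    m = len(marker)
    corr = np.array([symbols[i:i + m] @ marker
                     for i in range(len(symbols) - m + 1)])
    return int(corr.argmax()), corr

rng = np.random.default_rng(1)
marker = rng.choice([-1.0, 1.0], 16)
stream = rng.choice([-1.0, 1.0], 200)
stream[70:86] = marker                        # embed the marker at offset 70
offset, _ = marker_correlation(stream + 0.5 * rng.normal(size=200), marker)
print(offset)                                 # expected: 70
```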

  20. A velocity-correction projection method based immersed boundary method for incompressible flows

    NASA Astrophysics Data System (ADS)

    Cai, Shanggui

    2014-11-01

    In the present work we propose a novel direct forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method can be considered a dual of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly without consideration of the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. This work was supported by the China Scholarship Council.

  1. PBSM3D: A finite volume, scalar-transport blowing snow model for use with variable resolution meshes

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Wayand, N. E.; Pomeroy, J. W.; Wheater, H. S.; Spiteri, R. J.

    2017-12-01

    Blowing snow redistribution results in heterogeneous snowcovers that are ubiquitous in cold, windswept environments. Capturing this spatial and temporal variability is important for melt and runoff simulations. Point scale blowing snow transport models are difficult to apply in fully distributed hydrological models due to landscape heterogeneity and complex wind fields. Many existing distributed snow transport models have empirical wind flow and/or simplified wind direction algorithms that perform poorly in calculating snow redistribution where there are divergent wind flows, sharp topography, and over large spatial extents. Herein, a steady-state scalar transport model is discretized using the finite volume method (FVM), with parameterizations from the Prairie Blowing Snow Model (PBSM). PBSM has been applied in hydrological response units and grids to prairie, arctic, glacier, and alpine terrain and shows a good capability to represent snow redistribution over complex terrain. The FVM discretization takes advantage of the variable resolution mesh in the Canadian Hydrological Model (CHM) to ensure efficient calculations over small and large spatial extents. Variable resolution unstructured meshes preserve surface heterogeneity but result in fewer computational elements than high-resolution structured (raster) grids. Snowpack, soil moisture, and streamflow observations were used to evaluate CHM-modelled outputs in a sub-arctic and an alpine basin. Newly developed remotely sensed snowcover indices allowed for validation over large basins. CHM simulations of snow hydrology were improved by inclusion of the blowing snow model. The results demonstrate the key role of snow transport processes in creating pre-melt snowcover heterogeneity and therefore governing post-melt soil moisture and runoff generation dynamics.

  2. Development of BEM for ceramic composites

    NASA Technical Reports Server (NTRS)

    Henry, D. P.; Banerjee, P. K.; Dargush, G. F.; Hopkins, D. A.; Goldberg, R. K.

    1993-01-01

    BEST-CMS (boundary element solution technology - composite modeling system) is an advanced engineering system for the micro-analysis of fiber composite structures. BEST-CMS is based upon the boundary element program BEST3D which was developed for NASA by Pratt and Whitney Aircraft and the State University of New York at Buffalo under contract NAS3-23697. BEST-CMS presently has the capabilities for elastostatic analysis, steady-state and transient heat transfer analysis, steady-state and transient concurrent thermoelastic analysis, and elastoplastic and creep analysis. The fibers are assumed to be perfectly bonded to the composite matrix, or in the case of static or steady-state analysis, the fibers may be assumed to have spring connections, thermal resistance, and/or frictional sliding between the fibers and the composite matrix. The primary objective of this user's manual is to provide an overview of all BEST-CMS capabilities, along with detailed descriptions of the input data requirements. In the next chapter, a brief review of the theoretical background is presented for each analysis category. Then, chapter three discusses the key aspects of the numerical implementation, while chapter four provides a tutorial for the beginning BEST-CMS user. The heart of the manual, however, is in chapter five, where a complete description of all data input items is provided. Within this chapter, the individual entries are grouped on a functional basis for a more coherent presentation. Chapter six includes sample problems and should be of considerable assistance to the novice. Chapter seven includes capsules of a number of fiber-composite analysis problems that have been solved using BEST-CMS. This chapter is primarily descriptive in nature and is intended merely to illustrate the level of analysis that is possible within the present BEST-CMS system. Chapter eight contains a detailed description of the BEST-CMS Neutral File which is helpful in writing an interface between BEST

  3. Recommendation advertising method based on behavior retargeting

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce. Ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting, which can avoid ad clicks being missed for incidental reasons and can track changes in the user's interests over time. Experiments show that the new method achieves a significant effect and can further be applied to online systems.

  4. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Posner, Richard G.; Marrone, Babetta L.; Hammond, Mark L.; Simpson, Daniel J.

    1995-01-01

    Method for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand.

  5. Numerical study of ambient pressure for laser-induced bubble near a rigid boundary

    NASA Astrophysics Data System (ADS)

    Li, BeiBei; Zhang, HongChao; Han, Bing; Lu, Jian

    2012-07-01

    The dynamics of the laser-induced bubble at different ambient pressures was numerically studied by the finite volume method (FVM). The velocity of the bubble wall, the liquid jet velocity at collapse, and the pressure of the water hammer when the liquid jet impacts the boundary are found to increase nonlinearly with increasing ambient pressure. The collapse time and the formation time of the liquid jet are found to decrease nonlinearly with increasing ambient pressure. The ratio of the jet formation time to the collapse time, and the ratio of the displacement of the bubble center at jet formation to the maximum radius, remain invariant as the ambient pressure changes; these ratios are independent of ambient pressure.

  6. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. A multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is then defined following the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely the popular MFS-based and the more recent MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method in the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.
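
    A simplified 1-D MF-DMA sketch of the h(q) estimate that such a method would compute pixel-wise on local patches; the window sizes, the centered (θ = 0.5) moving average, and the q value are illustrative assumptions.

        import numpy as np

        def hurst_mfdma(x, q, scales):
            y = np.cumsum(x - np.mean(x))                # profile of the series
            logF, logn = [], []
            for n in scales:
                kernel = np.ones(n) / n
                y_ma = np.convolve(y, kernel, mode='same')   # centered moving average
                e = (y - y_ma)[n:-n]                         # drop edge effects
                m = len(e) // n
                seg = e[:m * n].reshape(m, n)
                F2 = np.mean(seg ** 2, axis=1)               # per-segment fluctuation
                Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)   # q-th order average
                logF.append(np.log(Fq)); logn.append(np.log(n))
            return np.polyfit(logn, logF, 1)[0]              # slope = h(q)

        rng = np.random.default_rng(0)
        x = rng.standard_normal(4096)
        print(hurst_mfdma(x, q=2, scales=[8, 16, 32, 64, 128]))  # ~0.5 for white noise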

  7. A geometrically based method for automated radiosurgery planning.

    PubMed

    Wagner, T H; Yi, T; Meeks, S L; Bova, F J; Brechner, B L; Chen, Y; Buatti, J M; Friedman, W A; Foote, K D; Bouchet, L G

    2000-12-01

    A geometrically based method of multiple isocenter linear accelerator radiosurgery treatment planning optimization was developed, based on a target's solid shape. Our method uses an edge detection process to determine the optimal sphere packing arrangement with which to cover the planning target. The sphere packing arrangement is converted into a radiosurgery treatment plan by substituting the isocenter locations and collimator sizes for the spheres. This method is demonstrated on a set of 5 irregularly shaped phantom targets, as well as a set of 10 clinical example cases ranging from simple to very complex in planning difficulty. Using a prototype implementation of the method and standard dosimetric radiosurgery treatment planning tools, feasible treatment plans were developed for each target. The treatment plans generated for the phantom targets showed excellent dose conformity and acceptable dose homogeneity within the target volume. The algorithm was able to generate a radiosurgery plan conforming to the Radiation Therapy Oncology Group (RTOG) guidelines on radiosurgery for every clinical and phantom target examined. This automated planning method can serve as a valuable tool to assist treatment planners in rapidly and consistently designing conformal multiple isocenter radiosurgery treatment plans.
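
    A hedged sketch of the geometric core described above, assuming a greedy strategy driven by a Euclidean distance transform; the collimator radii and the carving rule are illustrative, not the authors' implementation.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def pack_spheres(mask, collimators):
            """mask: 3-D boolean target volume; collimators: available radii in voxels."""
            spheres, remaining = [], mask.copy()
            grid = np.indices(mask.shape)
            while True:
                dist = distance_transform_edt(remaining)
                fits = [c for c in collimators if c <= dist.max()]
                if not fits:
                    break
                center = np.unravel_index(np.argmax(dist), dist.shape)
                r = max(fits)                       # largest collimator that still fits
                d2 = sum((g - c) ** 2 for g, c in zip(grid, center))
                remaining &= d2 > r ** 2            # carve the sphere from the target
                spheres.append((center, r))         # isocenter location + collimator size
            return spheres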

  8. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected in order to preserve the edges of the image in segmentation. The shortcoming of Otsu's method, which is based on gray-level histograms, is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, and the edge energy function of an image is approximated by discretizing this integral. An optimal threshold selection method that maximizes the edge energy function is given. Several experimental results are presented and compared with Otsu's method.
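
    A minimal stand-in for the idea, assuming the edge energy is approximated by the mean gradient magnitude along the binarization boundary (the paper's discretized line integral may differ); an 8-bit grayscale image is assumed.

        import numpy as np

        def edge_energy_threshold(img):
            gy, gx = np.gradient(img.astype(float))
            grad = np.hypot(gx, gy)
            best_t, best_e = 0, -np.inf
            for t in range(1, 255):
                b = img >= t
                # boundary pixels: binarization changes between 4-neighbours
                edge = (b[:-1, :] != b[1:, :])[:, :-1] | (b[:, :-1] != b[:, 1:])[:-1, :]
                if edge.any():
                    e = grad[:-1, :-1][edge].mean()   # mean gradient on the boundary
                    if e > best_e:
                        best_t, best_e = t, e
            return best_t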

  9. Fluence-based and microdosimetric event-based methods for radiation protection in space

    NASA Technical Reports Server (NTRS)

    Curtis, Stanley B.; Meinhold, C. B. (Principal Investigator)

    2002-01-01

    The National Council on Radiation Protection and Measurements (NCRP) has recently published a report (Report #137) that discusses various aspects of the concepts used in radiation protection and the difficulties in measuring the radiation environment in spacecraft for the estimation of radiation risk to space travelers. Two novel dosimetric methodologies, fluence-based and microdosimetric event-based methods, are discussed and evaluated, along with the more conventional quality factor/LET method. It was concluded that, for the present, there is no compelling reason to switch to a new methodology. It is suggested, however, that because of certain drawbacks in the conventional method presently in use, these alternative methodologies should be kept in mind; as new data become available and dosimetric techniques become more refined, the question should be revisited, and significant improvement might be realized in the future. In addition, such concepts as equivalent dose and organ dose equivalent are discussed, and various problems regarding the measurement/estimation of these quantities are presented.

  10. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Posner, R.G.; Marrone, B.L.; Hammond, M.L.; Simpson, D.J.

    1995-04-11

    A method is described for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand. 4 figures.

  11. Iron-based amorphous alloys and methods of synthesizing iron-based amorphous alloys

    DOEpatents

    Saw, Cheng Kiong; Bauer, William A.; Choi, Jor-Shan; Day, Dan; Farmer, Joseph C.

    2016-05-03

    A method according to one embodiment includes combining an amorphous iron-based alloy and at least one metal selected from a group consisting of molybdenum, chromium, tungsten, boron, gadolinium, nickel phosphorous, yttrium, and alloys thereof to form a mixture, wherein the at least one metal is present in the mixture from about 5 atomic percent (at %) to about 55 at %; and ball milling the mixture at least until an amorphous alloy of the iron-based alloy and the at least one metal is formed. Several amorphous iron-based metal alloys are also presented, including corrosion-resistant amorphous iron-based metal alloys and radiation-shielding amorphous iron-based metal alloys.

  12. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
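
    RuleFit itself is not reproduced here; as a rough illustration of deriving a few short, 2-4 cue decision rules from data, a depth-limited decision tree (sklearn) can be printed as readable rules. The cue names and outcome below are synthetic placeholders.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 4))                         # four hypothetical cues
        y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.0)).astype(int)   # synthetic outcome

        tree = DecisionTreeClassifier(max_depth=2).fit(X, y)  # at most 2 cues per path
        print(export_text(tree, feature_names=[f"cue_{i}" for i in range(4)]))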

  13. A trust-based recommendation method using network diffusion processes

    NASA Astrophysics Data System (ADS)

    Chen, Ling-Jiao; Gao, Jian

    2018-09-01

    A variety of rating-based recommendation methods have been extensively studied, including the well-known collaborative filtering approaches and some network diffusion-based methods; however, social trust relations are not sufficiently considered when making recommendations. In this paper, we contribute to the literature by proposing a trust-based recommendation method, named CosRA+T, which integrates the information of trust relations into the resource-redistribution process. Specifically, a tunable parameter is used to scale the resources received by trusted users before the redistribution back to the objects. Interestingly, we find an optimal scaling parameter for the proposed CosRA+T method to achieve its best recommendation accuracy, and the optimal value appears to be universal under several evaluation metrics across different datasets. Moreover, results of extensive experiments on two real-world rating datasets with trust relations, Epinions and FriendFeed, suggest that CosRA+T yields a remarkable improvement in overall accuracy, diversity and novelty. Our work takes a step towards designing better recommendation algorithms by employing multiple resources of social network information.
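
    A simplified stand-in for the trust-scaled redistribution idea, assuming a plain two-step mass diffusion on the user-object bipartite network with a tunable boost lam for trusted users; this is not the published CosRA+T update rule.

        import numpy as np

        def trust_diffusion(A, T, user, lam=1.5):
            """A: users x objects 0/1 rating matrix; T: users x users 0/1 trust matrix."""
            k_obj = np.maximum(A.sum(axis=0), 1)     # object degrees
            k_usr = np.maximum(A.sum(axis=1), 1)     # user degrees
            f = A[user].astype(float)                # unit resource on collected objects
            r = A @ (f / k_obj)                      # step 1: objects -> users
            r[T[user] > 0] *= lam                    # scale resources of trusted users
            scores = A.T @ (r / k_usr)               # step 2: users -> objects
            scores[A[user] > 0] = -np.inf            # exclude already-collected objects
            return np.argsort(scores)[::-1]          # ranked recommendation list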

  14. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computational cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating the model parameters with a novel time-varying energy balancer that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme for lower quantization error than the finite difference method previously used in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms state-of-the-art video extrapolation methods in terms of image quality and computational cost.
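
    A minimal 1-D sketch of the constrained interpolation profile (CIP) advection scheme referenced above, assuming constant velocity c > 0 and periodic boundaries; both the field f and its spatial derivative g are transported with an upwind cubic profile.

        import numpy as np

        N, dx, c, dt = 200, 1.0, 1.0, 0.5            # CFL = c*dt/dx = 0.5
        x = np.arange(N) * dx
        f = np.exp(-((x - 50) / 10.0) ** 2)           # initial Gaussian pulse
        g = np.gradient(f, dx)                        # its spatial derivative

        for _ in range(200):
            up, gup = np.roll(f, 1), np.roll(g, 1)    # upwind neighbour (c > 0)
            D, xi = -dx, -c * dt
            a = (g + gup) / D**2 + 2.0 * (f - up) / D**3
            b = 3.0 * (up - f) / D**2 - (2.0 * g + gup) / D
            f = a * xi**3 + b * xi**2 + g * xi + f    # cubic profile shifted by xi
            g = 3.0 * a * xi**2 + 2.0 * b * xi + g    # derivative advected consistently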

  15. A method of extracting impervious surface based on rule algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Shuangyun; Hong, Liang; Xu, Quanli

    2018-02-01

    The impervious surface has become an important index to evaluate urban environmental quality and to measure the level of urbanization. At present, remote sensing technology has become the main way to extract the impervious surface. In this paper, a method to extract the impervious surface based on a rule algorithm is proposed. The main idea of the method is to use a rule-based algorithm that exploits the characteristics of, and the differences between, the impervious surface and the other three types of objects (water, soil and vegetation) in the seven original bands, NDWI and NDVI. The procedure can be divided into three steps: 1) first, vegetation is extracted according to the principle that vegetation is higher in the near-infrared band than in the other bands; 2) then, water is extracted according to the characteristic that water has the highest NDWI and the lowest NDVI; 3) finally, the impervious surface is extracted based on the fact that it has a higher NDWI value than soil and the lowest NDVI value. In order to test the accuracy of the rule algorithm, this paper applies the linear spectral mixture decomposition algorithm, the CART algorithm, and the NDII index algorithm for extracting the impervious surface to six remote sensing images of the Dianchi Lake Basin from 1999 to 2014. The accuracy of these three methods is then compared with that of the rule algorithm using the overall classification accuracy. It is found that the accuracy of the extraction method based on the rule algorithm is markedly higher than that of the above three methods.
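
    A compact sketch of the three-step rule cascade, assuming per-pixel band arrays; the NDVI/NDWI thresholds below are illustrative placeholders rather than the paper's calibrated rules.

        import numpy as np

        def classify_pixels(nir, red, green, swir):
            ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
            ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
            cls = np.zeros(nir.shape, dtype=np.uint8)          # 0 = soil (default)
            veg = nir > np.maximum.reduce([red, green, swir])  # step 1: vegetation
            water = (ndwi > 0.3) & (ndvi < 0.0)                # step 2: water
            imperv = (ndwi > -0.1) & (ndvi < 0.05)             # step 3: impervious
            # later assignments take precedence, mirroring the extraction order
            cls[imperv] = 3
            cls[water] = 2
            cls[veg] = 1
            return cls   # 1 = vegetation, 2 = water, 3 = impervious, 0 = soil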

  16. Real-time biscuit tile image segmentation method based on edge detection.

    PubMed

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify the produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use on a biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Bearing diagnostics: A method based on differential geometry

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

    The structures around bearings are complex, and the working environment is variable. These conditions cause the collected vibration signals to exhibit nonlinear, non-stationary, and chaotic characteristics that make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with superiority in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by both selecting the neighborhood radius based on empirical mode decomposition and determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are subsequently extracted to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information; the Gaussian mixture model is utilized to calculate confidence values, which directly reflect the health status. Case studies on Lorenz signals and vibration datasets of bearings demonstrate the effectiveness of the proposed methods.
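
    A hedged sketch of the feature-extraction step, assuming sliding-window segments of a vibration signal embedded with Hessian locally linear embedding (sklearn) and compressed via singular values; the window length, neighbour count and embedding dimension are illustrative.

        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        rng = np.random.default_rng(0)
        t = np.linspace(0, 400 * np.pi, 20000)
        signal = np.sin(t) + 0.1 * rng.standard_normal(t.size)   # toy vibration signal

        win = 100                                    # sliding-window segment length
        segments = signal[: signal.size // win * win].reshape(-1, win)

        hlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method='hessian')
        embedded = hlle.fit_transform(segments)      # manifold features per segment

        # compress the manifold features through singular values, as described above
        print(np.linalg.svd(embedded, compute_uv=False))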

  18. Deviation-based spam-filtering method via stochastic approach

    NASA Astrophysics Data System (ADS)

    Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun

    2018-03-01

    In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play very important roles for a buyer making a final purchase decision. Perfectly objective rating is impossible to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem with using a simple average rating is that it can easily be polluted by careless users whose evaluations of products cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability of each user based on the user's rating pattern across all products she evaluated. We call the proposed method deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
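
    A minimal sketch of the deviation-based idea, assuming a simple z-score heuristic for the significance of each user's rating pattern; the letter's exact reliability measure may differ.

        import numpy as np

        def reliability_weighted_ratings(R):
            """R: users x items rating matrix with np.nan for missing ratings."""
            item_mean = np.nanmean(R, axis=0)
            item_std = np.nanstd(R, axis=0) + 1e-9
            z = np.abs((R - item_mean) / item_std)      # deviation of each rating
            user_dev = np.nanmean(z, axis=1)            # average deviation per user
            w = 1.0 / (1.0 + user_dev)                  # low weight for erratic raters
            W = np.where(np.isnan(R), 0.0, w[:, None])  # zero weight where no rating
            return (np.nan_to_num(R) * W).sum(axis=0) / np.maximum(W.sum(axis=0), 1e-9)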

  19. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can produce large errors in the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors in actually scaled values of color image prints as measured by the method of paired comparison.
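
    A sketch of such a simulation, assuming Thurstone Case V scaling and binomially sampled choices; the stimulus scale values and sample sizes are illustrative.

        import numpy as np
        from scipy.stats import norm

        def scale_error(true_scale, n_obs, n_rep=200, seed=0):
            rng = np.random.default_rng(seed)
            p_true = norm.cdf(true_scale[:, None] - true_scale[None, :])  # P(i over j)
            true_centered = true_scale - true_scale.mean()
            errors = []
            for _ in range(n_rep):
                wins = rng.binomial(n_obs, p_true)          # simulated comparisons
                p = np.clip(wins / n_obs, 0.01, 0.99)       # avoid infinite z-scores
                z = norm.ppf(p)
                np.fill_diagonal(z, 0.0)                    # no self-comparisons
                est = z.mean(axis=1)                        # Case V scale values
                errors.append(est - est.mean() - true_centered)
            return np.std(errors)                           # avg error of scaled values

        print(scale_error(np.array([0.0, 0.3, 0.8, 1.5]), n_obs=20))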

  20. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual-model architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual-model architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method. For this purpose, we implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Alternative methods of flexible base compaction acceptance.

    DOT National Transportation Integrated Search

    2012-05-01

    In the Texas Department of Transportation, flexible base construction is governed by a series of stockpile and field tests. A series of concerns with these existing methods, along with some premature failures in the field, led to this project inv...

  2. Method for extruding pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A method and apparatus for extruding pitch-based foam is disclosed. The method includes the steps of: forming a viscous pitch foam; passing the precursor through an extrusion tube; and subjecting the precursor in said extrusion tube to a temperature gradient which varies along the length of the tube to form an extruded carbon foam. The apparatus includes an extrusion tube having a passageway communicatively connected to a chamber, wherein a viscous pitch foam formed in the chamber passes through the extrusion tube, and a heating mechanism in thermal communication with the tube for heating the viscous pitch foam along its length in accordance with a predetermined temperature gradient.

  3. Calculation of the Maxwell stress tensor and the Poisson-Boltzmann force on a solvated molecular surface using hypersingular boundary integrals

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Hou, Tingjun; McCammon, J. Andrew

    2005-08-01

    The electrostatic interaction among molecules solvated in ionic solution is governed by the Poisson-Boltzmann equation (PBE). Here the hypersingular integral technique is used in a boundary element method (BEM) for the three-dimensional (3D) linear PBE to calculate the Maxwell stress tensor on the solvated molecular surface, and then the PB forces and torques can be obtained from the stress tensor. Compared with the variational method (also in a BEM frame) that we proposed recently, this method provides an even more efficient way to calculate the full intermolecular electrostatic interaction force, especially for macromolecular systems. Thus, it may be more suitable for the application of Brownian dynamics methods to study the dynamics of protein/protein docking as well as the assembly of large 3D architectures involving many diffusing subunits. The method has been tested on two simple cases to demonstrate its reliability and efficiency, and also compared with our previous variational method used in BEM.

  4. On the possibility of singularities in the acoustic field of supersonic sources when BEM is applied to a wave equation

    NASA Technical Reports Server (NTRS)

    De Bernardis, E.; Farassat, F.

    1989-01-01

    Using a time domain method based on the Ffowcs Williams-Hawkings equation, a reliable explanation is provided for the origin of singularities observed in the numerical prediction of supersonic propeller noise. In the last few years Tam and, more recently, Amiet have analyzed the phenomenon from different points of view. The method proposed here offers a clear interpretation of the singularities based on a new description of sources, related to the behavior of lines where the propeller blade surface exhibits a slope discontinuity.

  5. Accurate Solution of Multi-Region Continuum Biomolecule Electrostatic Problems Using the Linearized Poisson-Boltzmann Equation with Curved Boundary Elements

    PubMed Central

    Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce

    2009-01-01

    We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry
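
    A minimal sketch of the matrix-free solver strategy described in this record, assuming SciPy's GMRES driven by a LinearOperator whose matvec stands in for a compressed (e.g. FFTSVD-accelerated) BEM product; the matvec below is a trivial placeholder.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        n = 1000
        d = np.linspace(1.0, 2.0, n)         # stand-in for a well-conditioned operator

        def matvec(x):
            # a real BEM code would apply the compressed integral operator here
            return d * x + 0.01 * x.sum()

        A = LinearOperator((n, n), matvec=matvec, dtype=float)
        b = np.ones(n)
        x, info = gmres(A, b)                # info == 0 signals convergence
        print(info, np.linalg.norm(A.dot(x) - b))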

  6. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    ERIC Educational Resources Information Center

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  7. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  8. Convergence acceleration of computer methods for grounding analysis in stratified soils

    NASA Astrophysics Data System (ADS)

    Colominas, I.; París, J.; Navarrina, F.; Casteleiro, M.

    2010-06-01

    The design of safe grounding systems in electrical installations is essential to ensure the protection of the equipment, the continuity of the power supply and the safety of persons. In order to achieve these goals, it is necessary to compute the equivalent electrical resistance of the system and the potential distribution on the earth surface when a fault condition occurs. In recent years the authors have developed a numerical formulation based on the BEM for the analysis of grounding systems embedded in uniform and layered soils. As is known, in practical cases the underlying series converge slowly, and the use of multilayer soil models entails a prohibitive computational cost. In this paper we present an efficient technique based on the Aitken δ2-process to improve the rate of convergence of the involved series expansions.
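
    A minimal illustration of the Aitken δ2-process, applied here to the partial sums of the slowly convergent Leibniz series for π/4 rather than to the grounding-analysis kernels themselves.

        import numpy as np

        s = np.cumsum([(-1) ** k / (2 * k + 1) for k in range(20)])  # partial sums

        def aitken(s):
            # s'_n = s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2 s_{n+1} + s_n)
            return s[:-2] - (s[1:-1] - s[:-2]) ** 2 / (s[2:] - 2 * s[1:-1] + s[:-2])

        acc = aitken(aitken(s))                     # two sweeps of acceleration
        print(abs(s[-1] - np.pi / 4), abs(acc[-1] - np.pi / 4))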

  9. A dynamic access control method based on QoS requirement

    NASA Astrophysics Data System (ADS)

    Li, Chunquan; Wang, Yanwei; Yang, Baoye; Hu, Chunyang

    2013-03-01

    A dynamic access control method is put forward to ensure the security of the sharing service in cloud manufacturing, according to the application characteristics of cloud manufacturing collaborative tasks. In this method, the role-based access control (RBAC) model is extended according to the characteristics of cloud manufacturing. Beyond traditional static authorization, constraints derived from the QoS requirements of the task context are applied to access control. Fuzzy policy rules are established over the weighted interval values of permissions. The access control authorities of executable services for users are dynamically adjusted through fuzzy reasoning based on the QoS requirements of the task. The main elements of the model are described, and a fuzzy reasoning algorithm on weighted interval values based on QoS requirements is studied. An effective method is thus provided to resolve access control in cloud manufacturing.

  10. Reconstruction of metabolic pathways by combining probabilistic graphical model-based and knowledge-based methods

    PubMed Central

    2014-01-01

    Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped into organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method can produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614

  11. Comparison of two DSC-based methods to predict drug-polymer solubility.

    PubMed

    Rask, Malte Bille; Knopp, Matthias Manne; Olesen, Niels Erik; Holm, René; Rades, Thomas

    2018-04-05

    The aim of the present study was to compare two DSC-based methods to predict drug-polymer solubility (melting point depression method and recrystallization method) and propose a guideline for selecting the most suitable method based on physicochemical properties of both the drug and the polymer. Using the two methods, the solubilities of celecoxib, indomethacin, carbamazepine, and ritonavir in polyvinylpyrrolidone, hydroxypropyl methylcellulose, and Soluplus® were determined at elevated temperatures and extrapolated to room temperature using the Flory-Huggins model. For the melting point depression method, it was observed that a well-defined drug melting point was required in order to predict drug-polymer solubility, since the method is based on the depression of the melting point as a function of polymer content. In contrast to previous findings, it was possible to measure melting point depression up to 20 °C below the glass transition temperature (Tg) of the polymer for some systems. Nevertheless, in general it was possible to obtain solubility measurements at lower temperatures using polymers with a low Tg. Finally, for the recrystallization method it was found that the experimental composition dependence of the Tg must be differentiable for compositions ranging from 50 to 90% drug (w/w) so that one Tg corresponds to only one composition. Based on these findings, a guideline for selecting the most suitable thermal method to predict drug-polymer solubility based on the physicochemical properties of the drug and polymer is suggested in the form of a decision tree. Copyright © 2018 Elsevier B.V. All rights reserved.
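
    A hedged sketch of the extrapolation step, assuming the commonly used Flory-Huggins melting-point-depression form 1/Tm - 1/Tm0 = -(R/dHf) [ln v_d + (1 - 1/m) v_p + chi v_p^2]; all numbers below are illustrative, not data from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        # illustrative constants: gas constant [J/mol/K], pure drug Tm [K],
        # enthalpy of fusion [J/mol], polymer-to-drug molar volume ratio
        R, Tm0, dHf, m = 8.314, 435.0, 26000.0, 100.0

        def tm(v_drug, chi):
            v_p = 1.0 - v_drug
            rhs = -(R / dHf) * (np.log(v_drug) + (1.0 - 1.0 / m) * v_p + chi * v_p ** 2)
            return 1.0 / (1.0 / Tm0 + rhs)

        v = np.array([0.95, 0.90, 0.85, 0.80, 0.75])        # drug volume fractions
        T = np.array([433.0, 430.5, 427.5, 424.0, 420.0])   # depressed Tm data [K]
        (chi,), _ = curve_fit(tm, v, T, p0=[0.0])
        print(chi)   # extrapolate solubility by solving tm(v) = temperature of interest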

  12. A rule-based named-entity recognition method for knowledge extraction of evidence-based dietary recommendations

    PubMed Central

    2017-01-01

    Evidence-based dietary information represented as unstructured text is crucial information that needs to be accessible in order to help dietitians follow the new knowledge that arrives daily in newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They focus on, for example, extracting gene mentions, protein mentions, relationships between genes and proteins, chemical concepts, and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER method that consists of two phases. The first involves the detection and determination of entity mentions, and the second involves the selection and extraction of the entities. We evaluate the method using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations. PMID:28644863

  13. Evaluating team-based, lecture-based, and hybrid learning methods for neurology clerkship in China: a method-comparison study

    PubMed Central

    2014-01-01

    Background Neurology is complex, abstract, and difficult for students to learn. However, a good learning method for neurology clerkship training is required to help students quickly develop strong clinical thinking as well as problem-solving skills. Both the traditional lecture-based learning (LBL) and the relatively new team-based learning (TBL) methods have inherent strengths and weaknesses when applied to neurology clerkship education. However, the strengths of each method may complement the weaknesses of the other. Combining TBL with LBL may produce better learning outcomes than TBL or LBL alone. We propose a hybrid method (TBL + LBL) and designed an experiment to compare the learning outcomes with those of pure LBL and pure TBL. Methods One hundred twenty-seven fourth-year medical students attended a two-week neurology clerkship program organized by the Department of Neurology, Sun Yat-Sen Memorial Hospital. All of the students were from Grade 2007, Department of Clinical Medicine, Zhongshan School of Medicine, Sun Yat-Sen University. These students were assigned to one of three groups randomly: Group A (TBL + LBL, with 41 students), Group B (LBL, with 43 students), and Group C (TBL, with 43 students). The learning outcomes were evaluated by a questionnaire and two tests covering basic knowledge of neurology and clinical practice. Results The practice test scores of Group A were similar to those of Group B, but significantly higher than those of Group C. The theoretical test scores and the total scores of Group A were significantly higher than those of Groups B and C. In addition, 100% of the students in Group A were satisfied with the combination of TBL + LBL. Conclusions Our results support our proposal that the combination of TBL + LBL is acceptable to students and produces better learning outcomes than either method alone in neurology clerkships. In addition, the proposed hybrid method may also be suited for other medical clerkships that

  14. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, such imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.

  15. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    PubMed

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  16. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.

  17. Particle-based and meshless methods with Aboria

    NASA Astrophysics Data System (ADS)

    Robinson, Martin; Bruna, Maria

    Aboria is a powerful and flexible C++ library for the implementation of particle-based numerical methods. The particles in such methods can represent actual particles (e.g. Molecular Dynamics) or abstract particles used to discretise a continuous function over a domain (e.g. Radial Basis Functions). Aboria provides a particle container, compatible with the Standard Template Library, spatial search data structures, and a Domain Specific Language to specify non-linear operators on the particle set. This paper gives an overview of Aboria's design, an example of use, and a performance benchmark.

  18. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted a content analysis of evaluation methods in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability) and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it may not be useful to consider evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  19. Fast and robust standard-deviation-based method for bulk motion compensation in phase-based functional OCT.

    PubMed

    Wei, Xiang; Camino, Acner; Pi, Shaohua; Cepurna, William; Huang, David; Morrison, John C; Jia, Yali

    2018-05-01

    Phase-based optical coherence tomography (OCT), such as OCT angiography (OCTA) and Doppler OCT, is sensitive to the confounding phase shift introduced by subject bulk motion. Traditional bulk motion compensation methods are limited in accuracy and computational cost-effectiveness. In this Letter, we present what is, to the best of our knowledge, a novel bulk motion compensation method for phase-based functional OCT. The bulk-motion-associated phase shift can be derived directly by solving its equation using the standard deviation of phase-based OCTA and Doppler OCT flow signals. This method was evaluated on rodent retinal images acquired by a prototype visible-light OCT and on human retinal images acquired by a commercial system. Image quality and computational speed were significantly improved compared to two conventional phase compensation methods.

  20. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
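
    A compact sketch of the fusion pipeline, assuming a box-filter guided filter in the He et al. formulation and gradient plus well-exposedness weights; the window size, eps and exposedness sigma are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(I, p, r=8, eps=1e-3):
            mI, mp = uniform_filter(I, 2 * r + 1), uniform_filter(p, 2 * r + 1)
            cov = uniform_filter(I * p, 2 * r + 1) - mI * mp
            var = uniform_filter(I * I, 2 * r + 1) - mI * mI
            a = cov / (var + eps)
            b = mp - a * mI
            return uniform_filter(a, 2 * r + 1) * I + uniform_filter(b, 2 * r + 1)

        def fuse(images):                      # grayscale float images in [0, 1]
            weights = []
            for img in images:
                gy, gx = np.gradient(img)
                grad = np.hypot(gx, gy)                               # gradient feature
                expo = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))   # well-exposedness
                w = guided_filter(img, grad * expo + 1e-6)            # refine weights
                weights.append(np.clip(w, 0, None))
            W = np.sum(weights, axis=0) + 1e-9
            return sum(w * img for w, img in zip(weights, images)) / W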

  1. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
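
    A generic sketch of the adaptive surrogate loop shared by ASMO-style methods, assuming a Gaussian process surrogate and random candidate search; expensive_model below is a placeholder objective, not a real land surface model.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive_model(x):                    # placeholder calibration objective
            return np.sum((x - 0.3) ** 2)

        rng = np.random.default_rng(0)
        dim, n_init, n_iter = 2, 10, 30
        X = rng.uniform(size=(n_init, dim))        # initial design (e.g. LHS in ASMO)
        y = np.array([expensive_model(x) for x in X])

        for _ in range(n_iter):
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)  # cheap surrogate
            cand = rng.uniform(size=(2000, dim))   # search the surrogate by sampling
            x_new = cand[np.argmin(gp.predict(cand))]
            X = np.vstack([X, x_new])              # evaluate the expensive model once
            y = np.append(y, expensive_model(x_new))

        print(X[np.argmin(y)], y.min())            # best parameters found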

  2. Method of plasma etching Ga-based compound semiconductors

    DOEpatents

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl.sub.4 gas into the chamber, flowing Ar gas into the chamber, and flowing H.sub.2 gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  3. Predictors of Weapon Carrying in Youth Attending Drop-in Centers

    ERIC Educational Resources Information Center

    Blumberg, Elaine J.; Liles, Sandy; Kelley, Norma J.; Hovell, Melbourne F.; Bousman, Chad A.; Shillington, Audrey M.; Ji, Ming; Clapp, John

    2009-01-01

    Objective: To test and compare 2 predictive models of weapon carrying in youth (n=308) recruited from 4 drop-in centers in San Diego and Imperial counties. Methods: Both models were based on the Behavioral Ecological Model (BEM). Results: The first and second models significantly explained 39% and 53% of the variance in weapon carrying,…

  4. A Coarse-Alignment Method Based on the Optimal-REQUEST Algorithm

    PubMed Central

    Zhu, Yongyun

    2018-01-01

    In this paper, we propose a coarse-alignment method for strapdown inertial navigation systems based on attitude determination. The observation vectors, which can be obtained by inertial sensors, usually contain various types of noise, which affects the convergence rate and the accuracy of the coarse alignment. Given this drawback, we studied an attitude-determination method named optimal-REQUEST, which is an optimal method for attitude determination based on observation vectors. Compared to the traditional attitude-determination method, the filtering gain of the proposed method is tuned autonomously; thus, the convergence rate of the attitude determination is faster than in the traditional method. Within the proposed method, we developed an iterative method for determining the attitude quaternion. We carried out simulation and turntable tests to validate the proposed method's performance. The experimental results show that the convergence rate of the proposed optimal-REQUEST algorithm is faster and that the coarse alignment's stability is higher. In summary, the proposed method has high applicability to practical systems. PMID:29337895
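
    For context, a sketch of the classical Davenport q-method on which REQUEST-type estimators build, assuming weighted vector observations in body (b) and reference (r) frames; the quaternion ordering/sign convention is one of several in use.

        import numpy as np

        def q_method(b_vecs, r_vecs, weights):
            """Optimal quaternion [x, y, z, w] from weighted vector pairs."""
            B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
            z = sum(w * np.cross(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
            K = np.zeros((4, 4))
            K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
            K[:3, 3] = K[3, :3] = z
            K[3, 3] = np.trace(B)
            vals, vecs = np.linalg.eigh(K)           # K is symmetric
            return vecs[:, np.argmax(vals)]          # eigenvector of largest eigenvalue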

  5. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.

  6. An efficient temporal database design method based on EER

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Huang, Jiping; Miao, Hua

    2007-12-01

    Many existing methods of modeling temporal information are based on logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the author attempts to analyse and abstract temporal information in the phase of conceptual modelling according to the concrete requirement to history information. Then a temporal data model named BTEER is presented. BTEER not only retains all designing ideas and methods of EER which makes BTEER have good upward compatibility, but also supports the modelling of valid time and transaction time effectively at the same time. In addition, BTEER can be transformed to EER easily and automatically. It proves in practice, this method can model the temporal information well.

  7. Computation of leaky guided waves dispersion spectrum using vibroacoustic analyses and the Matrix Pencil Method: a validation study for immersed rectangular waveguides.

    PubMed

    Mazzotti, M; Bartoli, I; Castellazzi, G; Marzani, A

    2014-09-01

    The paper aims at validating a recently proposed Semi Analytical Finite Element (SAFE) formulation coupled with a 2.5D Boundary Element Method (2.5D BEM) for the extraction of dispersion data in immersed waveguides of generic cross-section. To this end, three-dimensional vibroacoustic analyses are carried out on two waveguides of square and rectangular cross-section immersed in water using the commercial Finite Element software Abaqus/Explicit. Real wavenumber and attenuation dispersive data are extracted by means of a modified Matrix Pencil Method. It is demonstrated that the results obtained using the two techniques are in very good agreement. Copyright © 2014 Elsevier B.V. All rights reserved.
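
    A minimal Matrix Pencil sketch for extracting complex exponentials (poles) from a sampled response, the family of technique used above for the dispersion data; noise-free data and a known model order are assumed (the cited modified method adds SVD-based filtering).

        import numpy as np

        def matrix_pencil(x, M, L=None):
            """x: samples of a sum of M complex exponentials; returns the M poles."""
            N = len(x)
            L = L or N // 2                        # pencil parameter
            Y = np.array([x[i:i + L + 1] for i in range(N - L)])   # Hankel data matrix
            Y1, Y2 = Y[:, :-1], Y[:, 1:]
            z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
            return sorted(z, key=abs, reverse=True)[:M]

        t = np.arange(64)
        x = np.exp((-0.05 + 0.9j) * t) + 0.5 * np.exp((-0.02 + 0.3j) * t)
        print(matrix_pencil(x, M=2))   # ~ exp(-0.05+0.9j) and exp(-0.02+0.3j)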

  8. A method for radiological characterization based on fluence conversion coefficients

    NASA Astrophysics Data System (ADS)

    Froeschl, Robert

    2018-06-01

    Radiological characterization of components in accelerator environments is often required to ensure adequate radiation protection during maintenance, transport and handling as well as for the selection of the proper disposal pathway. The relevant quantities are typical the weighted sums of specific activities with radionuclide-specific weighting coefficients. Traditional methods based on Monte Carlo simulations are radionuclide creation-event based or the particle fluences in the regions of interest are scored and then off-line weighted with radionuclide production cross sections. The presented method bases the radiological characterization on a set of fluence conversion coefficients. For a given irradiation profile and cool-down time, radionuclide production cross-sections, material composition and radionuclide-specific weighting coefficients, a set of particle type and energy dependent fluence conversion coefficients is computed. These fluence conversion coefficients can then be used in a Monte Carlo transport code to perform on-line weighting to directly obtain the desired radiological characterization, either by using built-in multiplier features such as in the PHITS code or by writing a dedicated user routine such as for the FLUKA code. The presented method has been validated against the standard event-based methods directly available in Monte Carlo transport codes.

  9. On evaluating the robustness of spatial-proximity-based regionalization methods

    NASA Astrophysics Data System (ADS)

    Lebecherel, Laure; Andréassian, Vazken; Perrin, Charles

    2016-08-01

    In the absence of streamflow data to calibrate a hydrological model, its parameters must be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfer hydrological information (typically calibrated parameter sets) from neighboring gauged stations to the target ungauged station. The efficiency of any spatial-proximity-based regionalization method depends on the density of the available streamgauging network, and the purpose of this note is to discuss how to assess the robustness of the regionalization method (i.e., its resilience to an increasingly sparse hydrometric network). We compare two options: (i) the random hydrometrical reduction (HRand) method, which consists in sub-sampling the existing gauging network around the target ungauged station, and (ii) the hydrometrical desert method (HDes), which consists in ignoring the closest gauged stations. Our tests suggest that the HDes method should be preferred, because it provides a more realistic view of regionalization performance.

  10. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by alternating minimization. The proposed method addresses the difficulty of choosing a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. It ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
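
    A toy alternating-minimization sketch in the spirit of blind compressed sensing, alternating a soft-thresholded gradient step on the sparse code with a gradient step on the dictionary; a single measurement vector and all parameters are illustrative only (real blind CS operates on many signals and constrains the dictionary to guarantee uniqueness).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 64, 32, 64                     # signal dim, measurements, atoms
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
    y = A @ x_true                           # compressed measurements

    D = rng.standard_normal((n, k))
    D /= np.linalg.norm(D, axis=0)           # random initial dictionary
    s = np.zeros(k)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    for _ in range(300):
        G = A @ D
        step = 1.0 / np.linalg.norm(G, 2) ** 2
        s = soft(s + step * G.T @ (y - G @ s), step * 1e-3)  # sparse-code step
        r = y - A @ (D @ s)
        D += 0.5 * np.outer(A.T @ r, s)                      # dictionary step
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12

    print(np.linalg.norm(y - A @ (D @ s)))   # data residual shrinks over iterations
    ```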

  11. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. While such feature-matching methods achieve high operating efficiency, they can suffer from limited accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory, and a series of constraint conditions to improve feature point detection and matching accuracy. First, the color invariant transformation is applied to the two images to retain more color information during the matching process, and information entropy theory is used to extract the most informative content of the two images. Then the SURF algorithm is applied to detect and describe feature points in the images. Finally, constraint conditions including Delaunay triangulation construction, a similarity function, and a projective invariant are employed to eliminate mismatches and improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.

  12. Lagrangian based methods for coherent structure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allshouse, Michael R., E-mail: mallshouse@chaos.utexas.edu; Peacock, Thomas, E-mail: tomp@mit.edu

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  13. Lagrangian based methods for coherent structure detection

    NASA Astrophysics Data System (ADS)

    Allshouse, Michael R.; Peacock, Thomas

    2015-09-01

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.
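
    A compact sketch of the benchmark named above: the double-gyre velocity field with its standard parameters, advected with a midpoint (RK2) scheme, and a finite-time Lyapunov exponent (FTLE) field as one common realization of the geometric approach; the grid sizes, integration horizon, and step are arbitrary.

    ```python
    import numpy as np

    # Double-gyre field (standard parameters A=0.1, eps=0.25, period 10) and a
    # forward FTLE field from finite differences of the RK2 flow map.
    A, eps, om = 0.1, 0.25, 2 * np.pi / 10

    def vel(t, x, y):
        a = eps * np.sin(om * t)
        b = 1 - 2 * eps * np.sin(om * t)
        f, dfdx = a * x**2 + b * x, 2 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return u, v

    T, dt = 15.0, 0.05
    x0, y0 = np.meshgrid(np.linspace(0, 2, 201), np.linspace(0, 1, 101))
    x, y = x0.copy(), y0.copy()
    for k in range(int(T / dt)):               # RK2 (midpoint) advection
        u1, v1 = vel(k * dt, x, y)
        u2, v2 = vel((k + .5) * dt, x + .5 * dt * u1, y + .5 * dt * v1)
        x, y = x + dt * u2, y + dt * v2

    # Cauchy-Green tensor from flow-map gradients, then FTLE = ln(sigma_max)/T
    dx, dy = x0[0, 1] - x0[0, 0], y0[1, 0] - y0[0, 0]
    xX, xY = np.gradient(x, dx, axis=1), np.gradient(x, dy, axis=0)
    yX, yY = np.gradient(y, dx, axis=1), np.gradient(y, dy, axis=0)
    C11, C12, C22 = xX**2 + yX**2, xX * xY + yX * yY, xY**2 + yY**2
    lmax = 0.5 * (C11 + C22) + np.sqrt(0.25 * (C11 - C22)**2 + C12**2)
    ftle = np.log(np.maximum(lmax, 1e-12)) / (2 * T)  # ridges ~ transport barriers
    ```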

  14. Image Mosaic Method Based on SIFT Features of Line Segment

    PubMed Central

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and similar variations between the two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Experiments on four pairs of images show that our method is strongly robust to changes in resolution, lighting, rotation, and scaling. PMID:24511326

  15. Base oils and methods for making the same

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohler, Nicholas; Fisher, Karl; Tirmizi, Shakeel

    Provided herein are isoparaffins derived from hydrocarbon terpenes such as myrcene, ocimene and farnesene, and methods for making the same. In certain variations, the isoparaffins have utility as lubricant base stocks.

  16. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed the emergence of division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture the polarization properties of the optical field at every imaging frame, but they suffer from large registration errors as well as reduced spatial resolution in their output. These drawbacks can be mitigated by applying proper image interpolation methods in the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods using visual examples and root mean square error (RMSE) comparisons. We found that the proposed gradient-based interpolation method achieves better visual results while maintaining a lower RMSE than the other interpolation methods across scene dynamic ranges from dim to bright conditions.
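
    A hedged sketch of one generic gradient-guided interpolation for a 2x2 micropolarizer mosaic (0/45/90/135 degrees), filling each polarization channel along the axis with the smaller gradient; this illustrates the idea but is not the paper's exact scheme, and it fills interior pixels only.

    ```python
    import numpy as np

    def fill_once(img, known):
        """Fill unknown pixels that have two known horizontal or vertical
        neighbors, interpolating along the axis with the smaller gradient."""
        out, now_known = img.copy(), known.copy()
        H, W = img.shape
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                if known[i, j]:
                    continue
                h_ok = known[i, j - 1] and known[i, j + 1]
                v_ok = known[i - 1, j] and known[i + 1, j]
                if not (h_ok or v_ok):
                    continue                    # handled in the next pass
                gh = abs(img[i, j - 1] - img[i, j + 1]) if h_ok else np.inf
                gv = abs(img[i - 1, j] - img[i + 1, j]) if v_ok else np.inf
                if gh <= gv:                    # smoother horizontally
                    out[i, j] = 0.5 * (img[i, j - 1] + img[i, j + 1])
                else:
                    out[i, j] = 0.5 * (img[i - 1, j] + img[i + 1, j])
                now_known[i, j] = True
        return out, now_known

    raw = np.random.rand(64, 64)                # mosaicked sensor frame (toy)
    known = np.zeros_like(raw, dtype=bool)
    known[0::2, 0::2] = True                    # e.g. the 0-degree channel sites
    chan = raw * known
    for _ in range(2):                          # two passes cover all interior pixels
        chan, known = fill_once(chan, known)
    ```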

  17. Connectivity-based, all-hexahedral mesh generation method and apparatus

    DOEpatents

    Tautges, Timothy James; Mitchell, Scott A.; Blacker, Ted D.; Murdoch, Peter

    1998-01-01

    The present invention is a computer-based method and apparatus for constructing all-hexahedral finite element meshes for finite element analysis. The present invention begins with a three-dimensional geometry and an all-quadrilateral surface mesh, then constructs hexahedral element connectivity from the outer boundary inward, and then resolves invalid connectivity. The result of the present invention is a complete representation of hex mesh connectivity only; actual mesh node locations are determined later. The basic method of the present invention comprises the step of forming hexahedral elements by making crossings of entities referred to as "whisker chords." This step, combined with a seaming operation in space, is shown to be sufficient for meshing simple block problems. Entities that appear when meshing more complex geometries, namely blind chords, merged sheets, and self-intersecting chords, are described. A method for detecting invalid connectivity in space, based on repeated edges, is also described, along with its application to various cases of invalid connectivity introduced and resolved by the method.

  18. Global positioning method based on polarized light compass system

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Yang, Jiangtao; Wang, Yubo; Tang, Jun; Shen, Chong

    2018-05-01

    This paper presents a global positioning method based on a polarized light compass system. A main limitation of polarization-based positioning is the environment: weak or locally destroyed polarization patterns degrade the measurement. The solution proposed in this paper is polarization image de-noising and segmentation, for which a pulse coupled neural network is employed to enhance positioning performance. The prominent advantages of the present positioning technique are as follows: (i) compared to existing positioning methods based on polarized light, better sun tracking accuracy can be achieved, and (ii) the robustness and accuracy of positioning under weak and locally destroyed polarization environments, such as cloud cover or building shielding, are improved significantly. Finally, field experiments are presented to demonstrate the effectiveness and applicability of the proposed global positioning technique. The experiments show that our proposed method outperforms the conventional polarization positioning method, achieving real-time longitude and latitude accuracies of up to 0.0461° and 0.0911°, respectively.

  19. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected SPECT images into scatter-corrected ones. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I^AC_μb, the image reconstructed with Chang's attenuation correction factor. The scatter component image is estimated by convolving I^AC_μb with a scatter function, followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and with 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
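
    A schematic of the scatter estimation step, with a Gaussian kernel standing in for the scatter function and a constant standing in for the image-based scatter fraction function; both choices, and the parameter values, are placeholders rather than the paper's calibrated quantities.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ibsc_sketch(img_ac, sigma_px=6.0, scatter_fraction=0.3):
        """Estimate scatter by convolving the attenuation-corrected image with
        a scatter kernel, scale by a scatter fraction, and subtract."""
        scatter = scatter_fraction * gaussian_filter(img_ac, sigma_px)
        return img_ac - scatter              # scatter-corrected image

    corrected = ibsc_sketch(np.random.rand(128, 128))   # toy input image
    ```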

  20. Connotations of pixel-based scale effect in remote sensing and the modified fractal-based analysis method

    NASA Astrophysics Data System (ADS)

    Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu

    2017-06-01

    Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technological system, the connotations of scale and scale effect in remote sensing are not yet sufficiently understood. This paper therefore first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measurement for analyzing pixel-based scale; however, traditional fractal dimension calculation does not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. This paper therefore proposes using spatial resolution as a modified scale parameter in two fractal methods to further analyze the pixel-based scale effect. To verify the two modified methods (MFBM, the Modified Windowed Fractal Brownian Motion method based on surface area, and MDBM, the Modified Windowed Double Blanket Method), an existing scale effect analysis method (the information entropy method) is used for evaluation, and six sub-regions of building areas and farmland areas cut out from QuickBird images serve as the experimental data. The results show that both the fractal dimension and the information entropy present the same trend with decreasing spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. The experimental results therefore indicate that the modified fractal methods effectively reflect the pixel-based scale effect in remote sensing.

  1. A rule-based automatic sleep staging method.

    PubMed

    Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian

    2012-03-30

    In this paper, a rule-based automatic sleep staging method is proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, were utilized. Normalization was applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules was constructed for sleep stage classification. Finally, a smoothing process considering temporal contextual information was applied to preserve continuity. The overall agreement and kappa coefficient of the proposed method, applied to the all-night polysomnography (PSG) recordings of seventeen healthy subjects and compared with manual scoring by the R&K rules, reached 86.68% and 0.79, respectively. This method can be integrated with portable PSG systems for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectral imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. To utilize these data effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far from Earth, and their spectra are usually contaminated by various noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectral classification. Furthermore, the nearest neighbor method is one of the most typical, classic, and mature algorithms in pattern recognition and data mining, and it is often used as a benchmark when developing novel algorithms. For applicability in practice, it is shown that the recognition rate of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in processing massive spectral data. In conclusion, the results of this work are helpful for the classification of galaxy and quasar spectra.
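
    A minimal nearest-neighbor classifier over spectra, with Gaussian toy vectors standing in for galaxy and quasar spectra; the Euclidean metric and all array sizes are arbitrary choices for illustration.

    ```python
    import numpy as np

    def nn_classify(train_X, train_y, test_X):
        """1-NN with Euclidean distance over spectra (rows = flux vectors)."""
        d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
        return train_y[np.argmin(d2, axis=1)]

    rng = np.random.default_rng(1)
    galaxies = rng.normal(0.0, 1.0, (50, 200))   # toy stand-ins for spectra
    quasars = rng.normal(0.5, 1.0, (50, 200))
    X = np.vstack([galaxies, quasars])
    y = np.array([0] * 50 + [1] * 50)
    noisy = X[:5] + 0.1 * rng.standard_normal((5, 200))
    print(nn_classify(X, y, noisy))              # expected: [0 0 0 0 0]
    ```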

  3. Index cost estimate based BIM method - Computational example for sports fields

    NASA Astrophysics Data System (ADS)

    Zima, Krzysztof

    2017-07-01

    The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, geometry of construction objects, and unit costs of sports facilities is shown. Calculations with the Index Cost Estimate Based BIM method, using Case Based Reasoning (CBR), are presented as well. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of the cost calculation based on the CBR method is presented as the final result.
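
    A minimal sketch of the local/global similarity aggregation typical of case-based reasoning; the attributes, weights, and value spans below are hypothetical, and a real estimator would retrieve the most similar stored case and adapt its unit cost to the BIM quantity takeoff.

    ```python
    def local_sim(a, b, span):
        """Numeric attribute similarity scaled to [0, 1]."""
        return 1.0 - abs(a - b) / span

    def global_sim(case, query, weights, spans):
        """Global similarity as the weighted mean of local similarities."""
        s = sum(w * local_sim(case[k], query[k], spans[k])
                for k, w in weights.items())
        return s / sum(weights.values())

    case = {"area_m2": 6200, "stands": 500, "floodlit": 1}    # stored project
    query = {"area_m2": 7000, "stands": 800, "floodlit": 1}   # new design (BIM)
    weights = {"area_m2": 0.5, "stands": 0.3, "floodlit": 0.2}
    spans = {"area_m2": 10000, "stands": 5000, "floodlit": 1}
    print(global_sim(case, query, weights, spans))  # retrieve most similar case
    ```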

  4. Comparing K-mer based methods for improved classification of 16S sequences.

    PubMed

    Vinje, Hilde; Liland, Kristian Hovde; Almøy, Trygve; Snipen, Lars

    2015-07-01

    The need for precise and stable taxonomic classification is highly relevant in modern microbiology. Parallel to the explosion in the amount of accessible sequence data, there has also been a shift in focus for classification methods. Previously, alignment-based methods were the most applicable tools; now, methods based on counting K-mers in sliding windows are the most interesting classification approach with respect to both speed and accuracy. Here, we present a systematic comparison of five different K-mer based classification methods for the 16S rRNA gene. The methods differ from each other both in data usage and in modelling strategies. We based our study on the commonly known and well-used naïve Bayes classifier from the RDP project, and four other methods were implemented and tested on two different data sets, on full-length sequences as well as on fragments of typical read length. The differences in classification error between the methods were small but stable across both data sets. The preprocessed nearest-neighbour (PLSNN) method performed best for full-length 16S rRNA sequences, significantly better than the naïve Bayes RDP method. On fragmented sequences, the naïve Bayes Multinomial method performed best, significantly better than all other methods. For both data sets explored, and on both full-length and fragmented sequences, all five methods reached an error plateau. We conclude that no K-mer based method is universally best for classifying both full-length sequences and fragments (reads). All methods approach an error plateau, indicating that improved training data are needed for further improvement. Classification errors occur most frequently for genera with few sequences present. For improving the taxonomy and testing new classification methods, a better, more universal, and more robust training data set is crucial.
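
    For concreteness, a sliding-window K-mer profile of the kind these classifiers consume (K=4 here for brevity); the downstream model (naïve Bayes, PLSNN, etc.) is omitted.

    ```python
    from collections import Counter
    from itertools import product

    def kmer_profile(seq, k=4):
        """Normalized sliding-window K-mer counts over the ACGT alphabet."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        total = max(sum(counts.values()), 1)
        return [counts["".join(p)] / total for p in product("ACGT", repeat=k)]

    # Profiles like this are the feature vectors fed to the compared classifiers.
    print(sum(kmer_profile("ACGTACGTGGCCTTAA")))   # ~1.0 (normalized)
    ```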

  5. Methods and approaches in the topology-based analysis of biological pathways

    PubMed Central

    Mitrea, Cristina; Taghavi, Zeinab; Bokanizad, Behzad; Hanoudi, Samer; Tagett, Rebecca; Donato, Michele; Voichiţa, Călin; Drăghici, Sorin

    2013-01-01

    The goal of pathway analysis is to identify the pathways significantly impacted in a given phenotype. Many current methods are based on algorithms that consider pathways as simple gene lists, dramatically under-utilizing the knowledge that such pathways are meant to capture. During the past few years, a plethora of methods claiming to incorporate various aspects of the pathway topology have been proposed. These topology-based methods, sometimes referred to as “third generation,” have the potential to better model the phenomena described by pathways. Although there is now a large variety of approaches used for this purpose, no review is currently available to offer guidance for potential users and developers. This review covers 22 such topology-based pathway analysis methods published in the last decade. We compare these methods based on: type of pathways analyzed (e.g., signaling or metabolic), input (subset of genes, all genes, fold changes, gene p-values, etc.), mathematical models, pathway scoring approaches, output (one or more pathway scores, p-values, etc.) and implementation (web-based, standalone, etc.). We identify and discuss challenges, arising both in methodology and in pathway representation, including inconsistent terminology, different data formats, lack of meaningful benchmarks, and the lack of tissue and condition specificity. PMID:24133454

  6. Leakage Characteristics of Base of Riverbank by Self Potential Method and Examination of Effectiveness of Self Potential Method to Health Monitoring of Base of Riverbank

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

    Field measurements with the self-potential method using copper sulfate electrodes were performed at the base of a riverbank of the Watarase River, where leakage problems occur, to examine the leakage characteristics. The measurement results showed a typical S-shape, which indicates the existence of flowing groundwater. The results agreed well with measurements by the Ministry of Land, Infrastructure and Transport. Results of 1 m depth ground temperature detection and chain-array detection also showed good agreement with the results of the self-potential method. The correlation between self-potential values and groundwater velocity was examined in a model experiment, and an apparent correlation was found. These results indicate that the self-potential method is an effective way to examine the groundwater characteristics at the base of a riverbank with leakage problems.

  7. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient detection of object edges in digital images, this paper studies traditional methods and an algorithm based on SVM. Analysis shows that the Canny edge detection algorithm produces some pseudo-edges and has poor noise immunity. In order to provide a reliable edge extraction method, a new detection algorithm based on a fuzzy support vector machine (FSVM) is proposed. It contains several steps: first, the classifier is trained on labeled samples, assigning a different membership function to different samples. Then, a new training set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and experiments with added noise show that the method has good noise robustness.

  8. A Comparative Study of Two Azimuth Based Non Standard Location Methods

    DTIC Science & Technology

    2017-03-23

    Rongsong Jih (U.S. Department of State, Arms Control, Verification, and Compliance Bureau). The so-called "Yin Zhong Xian" ("引中线" in Chinese) algorithm, hereafter the YZX method, is an Oriental version of an IPB-based procedure.

  9. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size typical of GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification; nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficients are each independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension-reduction-based methods.

  10. Comparative study on gene set and pathway topology-based enrichment methods.

    PubMed

    Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim

    2015-10-22

    Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the new group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis, both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which show considerable gene overlaps with each other; in this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created from unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods, although their sensitivity was lower. We conducted one of the first comprehensive comparative works on evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways, but they were not conclusively better in the other scenarios. This suggests that the simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis.

  11. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) for removing continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method, in contrast, uses an ANN to filter out the baseline; previous studies have demonstrated that this is one of the most effective approaches for baseline removal. The evaluation of both methods was carried out using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. The results demonstrate that the PCA-based method outperforms the ANN-based one both in performance and in simplicity. © The Author(s) 2016.
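
    A minimal sketch of the PCA-based variant, assuming synthetic low-order polynomial curves as the continuous-baseline learning matrix; the basis size and toy spectrum are illustrative, and a narrow peak only partially leaks into the smooth basis.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 300)
    # Learning matrix of continuous baselines (synthetic quadratic stand-ins)
    baselines = np.array([a + b * x + c * x**2
                          for a, b, c in rng.uniform(0, 1, (200, 3))])
    mean = baselines.mean(axis=0)
    _, _, Vt = np.linalg.svd(baselines - mean, full_matrices=False)
    B = Vt[:3]                                      # sampled PCA basis vectors

    peak = np.exp(-0.5 * ((x - 0.4) / 0.01) ** 2)   # narrow emission line
    measured = peak + 0.3 + 0.8 * x                 # line on a sloped baseline
    baseline_hat = mean + B.T @ (B @ (measured - mean))  # projection onto basis
    corrected = measured - baseline_hat             # ~ isolated peak
    ```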

  12. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to a hang glider design, in which both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.

  13. Comparing the Principle-Based SBH Maieutic Method to Traditional Case Study Methods of Teaching Media Ethics

    ERIC Educational Resources Information Center

    Grant, Thomas A.

    2012-01-01

    This quasi-experimental study at a Northwest university compared two methods of teaching media ethics, a class taught with the principle-based SBH Maieutic Method (n = 25) and a class taught with a traditional case study method (n = 27), with a control group (n = 21) that received no ethics training. Following a 16-week intervention, a one-way…

  14. [Application of case-based method in genetics and eugenics teaching].

    PubMed

    Li, Ya-Xuan; Zhao, Xin; Zhang, Fei-Xiong; Hu, Ying-Kao; Yan, Yue-Ming; Cai, Min-Hua; Li, Xiao-Hui

    2012-05-01

    Genetics and Eugenics is a cross-discipline between genetics and eugenics and a common curriculum in many Chinese universities. In order to increase learning interest, we introduced the case-based teaching method and achieved a better teaching effect. Based on our teaching practice, we summarize some experiences with this subject. In this article, the main problems of applying the case-based method in Genetics and Eugenics teaching are discussed.

  15. Comparison of Standard Culture-Based Method to Culture-Independent Method for Evaluation of Hygiene Effects on the Hand Microbiome.

    PubMed

    Zapka, C; Leff, J; Henley, J; Tittl, J; De Nardo, E; Butler, M; Griggs, R; Fierer, N; Edmonds-Wilson, S

    2017-03-28

    Hands play a critical role in the transmission of microbiota on one's own body, between individuals, and on environmental surfaces. Effectively measuring the composition of the hand microbiome is important to hand hygiene science, which has implications for human health. Hand hygiene products are evaluated using standard culture-based methods, but standard test methods for culture-independent microbiome characterization are lacking. We sampled the hands of 50 participants using swab-based and glove-based methods prior to and following four hand hygiene treatments (using a nonantimicrobial hand wash, alcohol-based hand sanitizer [ABHS], a 70% ethanol solution, or tap water). We compared results among culture plate counts, 16S rRNA gene sequencing of DNA extracted directly from hands, and sequencing of DNA extracted from culture plates. Glove-based sampling yielded higher numbers of unique operational taxonomic units (OTUs) but had less diversity in bacterial community composition than swab-based sampling. We detected treatment-induced changes in diversity only by using swab-based samples (P < 0.001); we were unable to detect changes with glove-based samples. Bacterial cell counts significantly decreased with use of the ABHS (P < 0.05) and ethanol control (P < 0.05). Skin hydration at baseline correlated with bacterial abundances, bacterial community composition, pH, and redness across subjects. The importance of the method choice was substantial. These findings are important to ensure improvement of hand hygiene industry methods and for future hand microbiome studies. On the basis of our results and previously published studies, we propose recommendations for best practices in hand microbiome research. IMPORTANCE The hand microbiome is a critical area of research for diverse fields, such as public health and forensics. The suitability of culture-independent methods for assessing effects of hygiene products on microbiota has not been demonstrated.

  16. Assessment of fire emission inventories during the South American Biomass Burning Analysis (SAMBBA) experiment

    NASA Astrophysics Data System (ADS)

    Pereira, Gabriel; Siqueira, Ricardo; Rosário, Nilton E.; Longo, Karla L.; Freitas, Saulo R.; Cardozo, Francielle S.; Kaiser, Johannes W.; Wooster, Martin J.

    2016-06-01

    Fires associated with land use and land cover changes release large amounts of aerosols and trace gases into the atmosphere. Although several inventories of biomass burning emissions cover Brazil, there are still considerable uncertainties and differences among them. While most fire emission inventories utilize the parameters of burned area, vegetation fuel load, emission factors, and other parameters to estimate the biomass burned and its associated emissions, several more recent inventories apply an alternative method based on fire radiative power (FRP) observations to estimate the amount of biomass burned and the corresponding emissions of trace gases and aerosols. The Brazilian Biomass Burning Emission Model (3BEM) and the Fire Inventory from NCAR (FINN) are examples of the first, while the Brazilian Biomass Burning Emission Model with FRP assimilation (3BEM_FRP) and the Global Fire Assimilation System (GFAS) are examples of the latter. These four biomass burning emission inventories were used during the South American Biomass Burning Analysis (SAMBBA) field campaign. This paper analyzes and inter-compares them, focusing on eight regions in Brazil and the time period of 1 September-31 October 2012. Aerosol optical thickness (AOT550 nm) derived from measurements made by the Moderate Resolution Imaging Spectroradiometer (MODIS) operating on board the Terra and Aqua satellites is also applied to assess the inventories' consistency. The daily area-averaged pyrogenic carbon monoxide (CO) emission estimates exhibit significant linear correlations (r, p > 0.05 level, Student t test) between 3BEM and FINN and between 3BEM_FRP and GFAS, with values of 0.86 and 0.85, respectively. These results indicate that emission estimates in this region derived via similar methods tend to agree with one another. However, they differ more from the estimates derived via the alternative approach. The evaluation of MODIS AOT550 nm indicates that model simulation driven by 3BEM and FINN

  17. An endoscopic diffuse optical tomographic method with high resolution based on the improved FOCUSS method

    NASA Astrophysics Data System (ADS)

    Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei

    2017-02-01

    Endoscopic DOT has potential application to cancer-related imaging in tubular organs. Although DOT in general has a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space inside tubular tissue and thus has a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under this limited measurement condition. To improve the resolution, a new FOCUSS algorithm combined with an image reconstruction algorithm based on the effective detection range (EDR) is developed. The algorithm uses the region of interest (ROI) to reduce the dimensions of the matrix, and this shrinking cuts down the computational burden. To further reduce the computational complexity, a double conjugate gradient method is used in the matrix inversion. For a typical inner size and typical optical properties of cervix-like tubular tissue, reconstructed images from simulation data demonstrate that the proposed method achieves image quality equivalent to that of the EDR-based method when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients reach 70% and 80%, respectively, at 5 mm depth. Furthermore, two close targets at different depths can be separated from each other. The proposed method should be useful for the development of endoscopic DOT technologies for tubular organs.
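
    A basic FOCUSS iteration (iteratively reweighted minimum-norm) on a generic underdetermined system, which is the core algorithm the paper builds on; the matrix sizes, reweighting exponent, and regularization below are illustrative, not the paper's modified variant.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 20, 80                               # underdetermined: y = A x
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[[5, 37, 60]] = [1.0, -0.8, 0.5]      # sparse ground truth
    y = A @ x_true

    x = np.ones(n)                              # uninformative start
    for _ in range(30):
        W = np.diag(np.sqrt(np.abs(x)))         # reweight from previous iterate
        AW = A @ W
        q = AW.T @ np.linalg.solve(AW @ AW.T + 1e-9 * np.eye(m), y)
        x = W @ q                               # x = W (A W)^+ y
    print(np.flatnonzero(np.abs(x) > 1e-3))     # ideally the support {5, 37, 60}
    ```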

  18. 3-D Wave-Structure Interaction with Coastal Sediments - A Multi-Physics/Multi-Solution Techniques Approach

    DTIC Science & Technology

    2007-01-01

    Reynolds-Averaged Navier-Stokes (RANS) and the particle finite element method (PFEM) will be used in the water/mine/sand domain. Sand and the geomaterials around the sand will... [Figure: wave propagation over a bottom mine at various time steps (Soil and Foam model); solver/domain assignments listed: SOLID/FEM, SAND/SPH, GEOMATERIALS, FNPF/BEM, RANS/PFEM]

  19. Multi person detection and tracking based on hierarchical level-set method

    NASA Astrophysics Data System (ADS)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to detect objects effectively. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture, and shape information; these features are enrolled in a covariance matrix as the region descriptor. The method is fully automatic, without the need to manually specify the initial level-set contour, as it combines person detection and background subtraction. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries, and inhibit region fusion. The computational cost of the level set is reduced by using a narrow-band technique. Experiments on challenging video sequences show the effectiveness of the proposed method.

  20. The knowledge and understanding of preanalytical phase among biomedicine students at the University of Zagreb

    PubMed Central

    Dukic, Lora; Jokic, Anja; Kules, Josipa; Pasalic, Daria

    2016-01-01

    Introduction The educational program for health care personnel is important for reducing preanalytical errors and improving quality of laboratory test results. The aim of our study was to assess the level of knowledge on preanalytical phase in population of biomedicine students through a cross-sectional survey. Materials and methods A survey was sent to students on penultimate and final year of Faculty of Pharmacy and Biochemistry – study of medical biochemistry (FPB), Faculty of Veterinary Medicine (FVM) and School of Medicine (SM), University of Zagreb, Croatia, using the web tool SurveyMonkey. Survey was composed of demographics and 14 statements regarding the preanalytical phase of laboratory testing. Comparison of frequencies and proportions of correct answers was done with Fisher’s exact test and test of comparison of proportions, respectively. Results Study included 135 participants, median age 24 (23-40) years. Students from FPB had higher proportion of correct answers (86%) compared to students from other biomedical faculties 62%, P < 0.001. Students from FPB were more conscious of the importance of specimen mixing (P = 0.027), prevalence of preanalytical errors (P = 0.001), impact of hemolysis (P = 0.032) and lipemia interferences (P = 0.010), proper choice of anticoagulants (P = 0.001), transport conditions for ammonia sample (P < 0.001) and order of draw during blood specimen collection (P < 0.001), in comparison with students from SM and FVM. Conclusions Students from FPB are more conscious of the importance of preanalytical phase of testing in comparison with their colleagues from other biomedical faculties. No difference in knowledge between penultimate and final year of the same faculty was found. PMID:26981023

  1. On the role of heat and mass transfer into laser processability during selective laser melting AlSi12 alloy based on a randomly packed powder-bed

    NASA Astrophysics Data System (ADS)

    Wang, Lianfeng; Yan, Biao; Guo, Lijie; Gu, Dongdong

    2018-04-01

    A new transient mesoscopic model with a randomly packed powder bed is proposed to investigate the heat and mass transfer and the laser process quality between neighboring tracks during selective laser melting (SLM) of AlSi12 alloy, using the finite volume method (FVM) and considering the solid/liquid phase transition, temperature-dependent material properties, and interfacial forces. The results clearly reveal that both the operating temperature and the resultant cooling rate are elevated by increasing the laser power. Accordingly, the viscosity of the liquid is significantly reduced at large laser power and the melt is characterized by a large velocity, which tends to produce more intensive convection within the pool. In this case, sufficient heat and mass transfer occurs at the interface between the previously fabricated tracks and the track currently being built, producing strong spreading between neighboring tracks and a resultant high-quality surface without obvious porosity. By contrast, the surface quality of SLM-processed components at relatively low laser power is notably degraded due to the limited, insufficient heat and mass transfer at the interface of neighboring tracks. Furthermore, the experimentally observed top-surface morphologies are in full accordance with the results calculated via simulation.

  2. Heat transfer enhancement and pumping power optimization using CuO-water nanofluid through rectangular corrugated pipe

    NASA Astrophysics Data System (ADS)

    Salehin, Musfequs; Ehsan, Mohammad Monjurul; Islam, A. K. M. Sadrul

    2017-06-01

    Heat transfer enhancement by corrugation of the fluid domain is a popular method, and the rate of improvement is larger when a highly thermally conductive fluid is used as the heating or cooling medium. In the present study, heat transfer augmentation was investigated numerically by implementing corrugation in the fluid domain and using a nanofluid as the working fluid in the turbulent forced convection regime. The finite volume method (FVM) was applied to solve the continuity, momentum, and energy equations, and all simulations assumed single-phase flow. A rectangular corrugated pipe with a constant heat flux of 5000 W/m2 applied to the corrugated wall was considered as the fluid domain. In the Reynolds number range of 15000 to 40000, the thermo-physical and hydrodynamic behavior was investigated using CuO-water nanofluid at volume fractions from 1% to 5% flowing through the corrugated domain. The corrugation geometry was adjusted by changing the corrugation amplitude and wavelength to obtain an increased heat transfer rate with minimum pumping power. With CuO-water nanofluid, both the heat transfer and the pumping power requirement in the rectangular corrugated pipe increased with the Reynolds number and the nanofluid volume fraction; the pumping power was therefore optimized for economic operation.

  3. Comparison of a New Cobinamide-Based Method to a Standard Laboratory Method for Measuring Cyanide in Human Blood

    PubMed Central

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045

  4. Deghosting based on the transmission matrix method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong

    2017-12-01

    As seismic exploration and subsequent seismic exploitation advance, marine acquisition systems with towed streamers have become an important means of seismic data acquisition. However, the reflective air-water interface generates surface-related multiples, including ghosts, which can affect the accuracy and performance of subsequent seismic data processing algorithms. We therefore derive a deghosting method from a new perspective, i.e. using the transmission matrix (T-matrix) method instead of inverse scattering series. The T-matrix-based deghosting algorithm includes all scattering effects and converges absolutely. The effectiveness of the proposed method is first demonstrated on synthetic data obtained from a designed layered model, and its noise resistance is illustrated on synthetic data contaminated by random noise. Numerical examples on complicated data from the open SMAART Pluto model and on field marine data further demonstrate the validity and flexibility of the proposed method. After deghosting, low-frequency components are recovered reasonably and spurious high-frequency components are attenuated; the recovered low-frequency components will be useful for subsequent full waveform inversion. The proposed deghosting method is currently suitable for two-dimensional towed-streamer cases with accurate constant-depth information, and its extension to variable-depth streamers in three-dimensional cases will be studied in the future.

  5. A Study of Impact Point Detecting Method Based on Seismic Signal

    NASA Astrophysics Data System (ADS)

    Huo, Pengju; Zhang, Yu; Xu, Lina; Huang, Yong

    The projectile landing position has to be determined for projectile recovery and range measurement in targeting tests. In this paper, a global search method based on velocity variance is proposed. To verify the applicability of this method, a simulation analysis over an area of four million square meters was conducted using the same array structure as the commonly used linear positioning method, and MATLAB was used to compare and analyze the two methods. The simulation results show that the global search method based on velocity variance has high positioning accuracy and stability, and can meet the needs of impact point location.
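
    A hedged reading of the idea as a sketch: for each candidate point on a grid, pairwise distance differences divided by arrival-time differences give implied propagation speeds, and the candidate minimizing the variance of those speeds is selected; the sensor layout, speed, and noise level are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    sensors = np.array([[0., 0.], [2000., 0.], [0., 2000.], [2000., 2000.]])
    c_true, impact = 300.0, np.array([700.0, 1200.0])   # m/s, true impact point
    t = np.linalg.norm(sensors - impact, axis=1) / c_true
    t = t - t.min() + rng.normal(0, 1e-3, t.shape)      # relative arrival times

    grid = np.linspace(0, 2000, 101)
    best, best_pt = np.inf, None
    for gx in grid:
        for gy in grid:
            d = np.linalg.norm(sensors - np.array([gx, gy]), axis=1)
            # candidate speeds from pairwise differences: (d_i - d_j)/(t_i - t_j)
            v = [(d[i] - d[j]) / (t[i] - t[j])
                 for i in range(4) for j in range(i + 1, 4)
                 if abs(t[i] - t[j]) > 1e-6]
            if v and np.var(v) < best:
                best, best_pt = np.var(v), (gx, gy)
    print(best_pt)   # expected near (700, 1200)
    ```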

  6. Burton-Miller-type singular boundary method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Fu, Zhuo-Jia; Chen, Wen; Gu, Yan

    2014-08-01

    This paper proposes the singular boundary method (SBM) in conjunction with Burton and Miller's formulation for acoustic radiation and scattering. The SBM is a strong-form collocation boundary discretization technique using the singular fundamental solutions, which is mathematically simple, easy-to-program, meshless and introduces the concept of source intensity factors (SIFs) to eliminate the singularities of the fundamental solutions. Therefore, it avoids singular numerical integrals in the boundary element method (BEM) and circumvents the troublesome placement of the fictitious boundary in the method of fundamental solutions (MFS). In the present method, we derive the SIFs of exterior Helmholtz equation by means of the SIFs of exterior Laplace equation owing to the same order of singularities between the Laplace and Helmholtz fundamental solutions. In conjunction with the Burton-Miller formulation, the SBM enhances the quality of the solution, particularly in the vicinity of the corresponding interior eigenfrequencies. Numerical illustrations demonstrate efficiency and accuracy of the present scheme on some benchmark examples under 2D and 3D unbounded domains in comparison with the analytical solutions, the boundary element solutions and Dirichlet-to-Neumann finite element solutions.
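
    For contrast with the SBM described above, here is a minimal method-of-fundamental-solutions sketch for the 2D sound-soft exterior Helmholtz problem; the SBM instead collocates sources on the boundary itself and replaces the singular diagonal with source intensity factors. The wavenumber, fictitious source radius, and point counts are arbitrary.

    ```python
    import numpy as np
    from scipy.special import hankel1

    k, n = 5.0, 80
    th = 2 * np.pi * np.arange(n) / n
    bd = np.c_[np.cos(th), np.sin(th)]            # collocation points (unit circle)
    src = 0.5 * bd                                # fictitious interior sources

    def G(p, q):
        """2D Helmholtz fundamental solution (i/4) H0^(1)(k r) between point sets."""
        r = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
        return 0.25j * hankel1(0, k * r)

    uinc = lambda p: np.exp(1j * k * p[:, 0])     # plane wave along +x
    coef = np.linalg.solve(G(bd, src), -uinc(bd)) # sound-soft: total field = 0

    test = np.c_[np.cos(th + 0.03), np.sin(th + 0.03)]
    print(np.abs(G(test, src) @ coef + uinc(test)).max())  # small boundary residual
    ```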

  7. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, G.F.; Steindler, M.J.

    1985-05-21

    A method of removing a phosphorus-based poisonous substance from contaminated water is presented, in which the toxicity of the phosphorus-based substance is also subsequently destroyed. A water-immiscible organic solvent is first immobilized on a supported liquid membrane; the contaminated water is then contacted with one side of the membrane to absorb the phosphorus-based substance into the organic solvent. The other side of the supported liquid membrane is contacted with a hydroxy-affording strong base, which reacts with the solvated phosphorus-based species to form a non-toxic product.

  8. Provider cost analysis supports results-based contracting out of maternal and newborn health services: an evidence-based policy perspective.

    PubMed

    Hatcher, Peter; Shaikh, Shiraz; Fazli, Hassan; Zaidi, Shehla; Riaz, Atif

    2014-11-13

    There is a dearth of evidence on the provider cost of contracted-out services, particularly for Maternal and Newborn Health (MNH), leaving a weak evidence base for policy makers to estimate the resources required for scaling up contracting. This paper ascertains provider unit costs and expenditure distribution at contracted-out government primary health centers to inform the development of optimal resource envelopes for contracting out MNH services. This is a case study of provider costs of MNH services at two government Rural Health Centers (RHCs) contracted out to a non-governmental organization in Pakistan. It reports on four selected Basic Emergency Obstetrical and Newborn Care (BEmONC) services provided in one RHC and six Comprehensive Emergency Obstetrical and Newborn Care (CEmONC) services in the other. Data were collected using staff interviews and record review to compile resource inputs and service volumes, and analyzed using the CORE Plus tool. Unit costs are based on actual costs of MNH services and are calculated for actual volumes in 2011 and for volumes projected to meet need with optimal resource inputs. The unit costs per service for actual 2011 volumes at the BEmONC RHC were antenatal care (ANC) visit US$ 18.78, normal delivery US$ 84.61, newborn care US$ 16.86, and postnatal care (PNC) visit US$ 13.86; at the CEmONC RHC they were ANC visit US$ 45.50, normal delivery US$ 148.43, assisted delivery US$ 167.43, C-section US$ 183.34, newborn care US$ 41.07, and PNC visit US$ 27.34. The unit costs for the projected volumes needed were lower due to optimal utilization of resources. The percentage distribution of expenditures at both RHCs was largest for salaries of technical staff, followed by salaries of administrative staff, and then operating costs, medicines, and medical and diagnostic supplies. The unit costs of MNH services at the two contracted-out government rural facilities remain higher than is optimal, primarily due to underutilization. Provider cost analysis

  9. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require locating the initial contour near the final boundary of the object, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the sensitivity of the level set method to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcomings of over-segmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and less false segmentation in pancreas extraction. PMID:24066016

  10. System and method for integrating hazard-based decision making tools and processes

    DOEpatents

    Hodgin, C Reed [Westminster, CO

    2012-03-20

    A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.

  11. A MULTICORE BASED PARALLEL IMAGE REGISTRATION METHOD

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2012-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a nonregular data partition algorithm that uses K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
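
    As a rough illustration of the data-partition idea (not the authors' Cell/B.E. code), the sketch below groups landmark coordinates with k-means into one cluster per available core and processes the clusters in parallel; match_cluster is a hypothetical stand-in for the per-cluster correspondence search.

```python
import multiprocessing as mp
import numpy as np
from sklearn.cluster import KMeans

def match_cluster(points):
    # Hypothetical stand-in for the per-cluster landmark correspondence search.
    return points.mean(axis=0)

def parallel_match(landmarks, n_cores=None):
    """Partition landmarks spatially with k-means, one cluster per core."""
    n_cores = n_cores or mp.cpu_count()
    labels = KMeans(n_clusters=n_cores, n_init=10).fit_predict(landmarks)
    clusters = [landmarks[labels == k] for k in range(n_cores)]
    with mp.Pool(n_cores) as pool:                 # one worker per cluster
        return pool.map(match_cluster, clusters)

if __name__ == "__main__":
    pts = np.random.rand(1000, 2)                  # 2-D landmark coordinates
    print(parallel_match(pts, n_cores=4))
```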

  12. Comparison of Standard Culture-Based Method to Culture-Independent Method for Evaluation of Hygiene Effects on the Hand Microbiome

    PubMed Central

    Leff, J.; Henley, J.; Tittl, J.; De Nardo, E.; Butler, M.; Griggs, R.; Fierer, N.

    2017-01-01

    ABSTRACT Hands play a critical role in the transmission of microbiota on one’s own body, between individuals, and on environmental surfaces. Effectively measuring the composition of the hand microbiome is important to hand hygiene science, which has implications for human health. Hand hygiene products are evaluated using standard culture-based methods, but standard test methods for culture-independent microbiome characterization are lacking. We sampled the hands of 50 participants using swab-based and glove-based methods prior to and following four hand hygiene treatments (using a nonantimicrobial hand wash, alcohol-based hand sanitizer [ABHS], a 70% ethanol solution, or tap water). We compared results among culture plate counts, 16S rRNA gene sequencing of DNA extracted directly from hands, and sequencing of DNA extracted from culture plates. Glove-based sampling yielded higher numbers of unique operational taxonomic units (OTUs) but had less diversity in bacterial community composition than swab-based sampling. We detected treatment-induced changes in diversity only by using swab-based samples (P < 0.001); we were unable to detect changes with glove-based samples. Bacterial cell counts significantly decreased with use of the ABHS (P < 0.05) and ethanol control (P < 0.05). Skin hydration at baseline correlated with bacterial abundances, bacterial community composition, pH, and redness across subjects. The importance of the method choice was substantial. These findings are important to ensure improvement of hand hygiene industry methods and for future hand microbiome studies. On the basis of our results and previously published studies, we propose recommendations for best practices in hand microbiome research. PMID:28351915

  13. Interpreting the Coulomb-field approximation for generalized-Born electrostatics using boundary-integral equation theory.

    PubMed

    Bardhan, Jaydeep P

    2008-10-14

    The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement
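
    For context, the Still interpolation formula that such GB models rely on (standard generalized-Born background, not a result of this paper) estimates the electrostatic solvation free energy as

```latex
\Delta G_{\mathrm{solv}} \approx
-\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}-\frac{1}{\epsilon_{\mathrm{out}}}\right)
\sum_{i,j}\frac{q_i q_j}{f_{ij}^{\mathrm{GB}}},
\qquad
f_{ij}^{\mathrm{GB}}=\sqrt{r_{ij}^2+R_i R_j\exp\!\left(-\frac{r_{ij}^2}{4R_i R_j}\right)},
```

    where the q_i are solute partial charges, the r_ij are interatomic distances, and the R_i are the effective Born radii that the CFA is used to estimate.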

  14. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1987-10-07

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  15. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1990-10-09

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  16. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Moyzis, Robert K.; Ratliff, Robert L.; Shera, E. Brooks; Stewart, Carleton C.

    1990-01-01

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed.

  17. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
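
    To make the model-based idea concrete, the sketch below shows the standard log-distance path-loss model that frequency-aware methods of this kind calibrate per signal type, followed by a least-squares trilateration step. This is our generic illustration, not the MFAM implementation; the path-loss exponent n and reference powers are assumed values that MFAM would instead adapt continuously.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.5):
    """Invert the log-distance model RSSI = P0 - 10*n*log10(d); d in meters."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(anchors, distances):
    """Least-squares 2-D position from >= 3 anchor positions and ranges."""
    (x0, y0), d0 = anchors[0], distances[0]
    a, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        a.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0 ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(np.array(a), np.array(b), rcond=None)
    return pos

# Fusing two frequencies: estimate one range per signal type, then average.
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
rssi_wifi = [-55.0, -62.0, -60.0]                  # 2.4 GHz readings (dBm)
rssi_868 = [-50.0, -57.0, -54.0]                   # 868 MHz readings (dBm)
ranges = [(rssi_to_distance(w) + rssi_to_distance(h, p0_dbm=-35.0)) / 2
          for w, h in zip(rssi_wifi, rssi_868)]
print(trilaterate(anchors, ranges))
```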

  19. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    PubMed Central

    Juric, Matjaz B.

    2018-01-01

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage. PMID:29587352

  20. Connectivity-based, all-hexahedral mesh generation method and apparatus

    DOEpatents

    Tautges, T.J.; Mitchell, S.A.; Blacker, T.D.; Murdoch, P.

    1998-06-16

    The present invention is a computer-based method and apparatus for constructing all-hexahedral finite element meshes for finite element analysis. The present invention begins with a three-dimensional geometry and an all-quadrilateral surface mesh, then constructs hexahedral element connectivity from the outer boundary inward, and then resolves invalid connectivity. The result of the present invention is a complete representation of hex mesh connectivity only; actual mesh node locations are determined later. The basic method of the present invention comprises the step of forming hexahedral elements by making crossings of entities referred to as "whisker chords." This step, combined with a seaming operation in space, is shown to be sufficient for meshing simple block problems. Entities that appear when meshing more complex geometries, namely blind chords, merged sheets, and self-intersecting chords, are described. A method for detecting invalid connectivity in space, based on repeated edges, is also described, along with its application to various cases of invalid connectivity introduced and resolved by the method. 79 figs.

  1. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review

    PubMed Central

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; De Marchi, Ana Carolina Bertoletti

    2016-01-01

    Background Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. Objective This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. Methods The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Of the 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. Results In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user’s age and limitations. Conclusions Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for

  2. A Case-Based Reasoning Method with Rank Aggregation

    NASA Astrophysics Data System (ADS)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper introduces a new CBR framework built on the principle of rank aggregation. First, ranking methods are defined in each attribute subspace of a case, yielding the ordering relation between cases on each attribute; from these orderings a ranking matrix is obtained. Second, the retrieval of similar cases from the ranking matrix is transformed into a rank aggregation optimization problem, solved using the Kemeny optimal. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of the RA-CBR algorithm is higher than that of Euclidean-distance CBR and Mahalanobis-distance CBR, so we can conclude that the RA-CBR method can increase the performance and efficiency of CBR.
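
    A brute-force sketch of the Kemeny-optimal aggregation principle (our illustration; exact Kemeny aggregation is NP-hard in general, so practical systems use heuristics for more than a handful of cases): choose the candidate ordering that minimizes the summed Kendall-tau disagreement with the per-attribute rankings.

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Count case pairs that the two rankings order differently."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum((pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
               for a, b in combinations(r1, 2))

def kemeny_aggregate(rankings):
    """Ranking minimizing summed Kendall-tau distance (feasible for few cases)."""
    cases = rankings[0]
    return min(permutations(cases),
               key=lambda cand: sum(kendall_tau(cand, r) for r in rankings))

# Three attribute-wise orderings of four retrieved cases.
rankings = [("c1", "c2", "c3", "c4"),
            ("c2", "c1", "c3", "c4"),
            ("c1", "c3", "c2", "c4")]
print(kemeny_aggregate(rankings))   # consensus ordering of the cases
```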

  3. Damage evaluation by a guided wave-hidden Markov model based method

    NASA Astrophysics Data System (ADS)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges in practical engineering applications is the accurate interpretation of guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model (HMM) based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM-based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading conditions and on a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.

  4. Effective properties of dispersed phase reinforced composite materials with perfect and imperfect interfaces

    NASA Astrophysics Data System (ADS)

    Han, Ru

    This thesis focuses on the analysis of dispersed phase reinforced composite materials with perfect as well as imperfect interfaces using the Boundary Element Method (BEM). Two problems of interest are considered, namely, to determine the limitations in the use of effective properties and the analysis of failure progression at the inclusion-matrix interface. The effective moduli (effective Young's modulus, effective Poisson's ratio, effective shear modulus, and effective bulk modulus) of composite materials can be determined at the mesoscopic level using three-dimensional parallel BEM simulations. By comparing the mesoscopic BEM results and the macroscopic results based on effective properties, limitations in the effective property approach can be determined. Decohesion is an important failure mode associated with fiber-reinforced composite materials. Analysis of failure progression at the fiber-matrix interface in fiber-reinforced composite materials is considered using a softening decohesion model consistent with thermodynamic concepts. In this model, the initiation of failure is given directly by a failure criterion. Damage is interpreted by the development of a discontinuity of displacement. The formulation describing the potential development of damage is governed by a discrete decohesive constitutive equation. Numerical simulations are performed using the direct boundary element method. Incremental decohesion simulations illustrate the progressive evolution of debonding zones and the propagation of cracks along the interfaces. The effect of decohesion on the macroscopic response of composite materials is also investigated.

  5. Self-Alignment MEMS IMU Method Based on the Rotation Modulation Technique on a Swing Base

    PubMed Central

    Chen, Zhiyong; Yang, Haotian; Wang, Chengbin; Lin, Zhihui; Guo, Meifeng

    2018-01-01

    The micro-electro-mechanical-system (MEMS) inertial measurement unit (IMU) has been widely used in the field of inertial navigation due to its small size, low cost, and light weight, but aligning MEMS IMUs remains a challenge for researchers. MEMS IMUs have been conventionally aligned on a static base, requiring other sensors, such as magnetometers or satellites, to provide auxiliary information, which limits its application range to some extent. Therefore, improving the alignment accuracy of MEMS IMU as much as possible under swing conditions is of considerable value. This paper proposes an alignment method based on the rotation modulation technique (RMT), which is completely self-aligned, unlike the existing alignment techniques. The effect of the inertial sensor errors is mitigated by rotating the IMU. Then, inertial frame-based alignment using the rotation modulation technique (RMT-IFBA) achieved coarse alignment on the swing base. The strong tracking filter (STF) further improved the alignment accuracy. The performance of the proposed method was validated with a physical experiment, and the results of the alignment showed that the standard deviations of pitch, roll, and heading angle were 0.0140°, 0.0097°, and 0.91°, respectively, which verified the practicality and efficacy of the proposed method for the self-alignment of the MEMS IMU on a swing base. PMID:29649150

  6. Project-Based Learning in Undergraduate Environmental Chemistry Laboratory: Using EPA Methods to Guide Student Method Development for Pesticide Quantitation

    ERIC Educational Resources Information Center

    Davis, Eric J.; Pauls, Steve; Dick, Jonathan

    2017-01-01

    Presented is a project-based learning (PBL) laboratory approach for an upper-division environmental chemistry or quantitative analysis course. In this work, a combined laboratory class of 11 environmental chemistry students developed a method based on published EPA methods for the extraction of dichlorodiphenyltrichloroethane (DDT) and its…

  7. Needs and Opportunities for Uncertainty-Based Multidisciplinary Design Methods for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Hemsch, Michael J.; Hilburger, Mark W.; Kenny, Sean P.; Luckring, James M.; Maghami, Peiman; Padula, Sharon L.; Stroud, W. Jefferson

    2002-01-01

    This report consists of a survey of the state of the art in uncertainty-based design together with recommendations for a base research activity in this area for the NASA Langley Research Center. This report identifies the needs and opportunities for computational and experimental methods that provide accurate, efficient solutions to nondeterministic multidisciplinary aerospace vehicle design problems. Barriers to the adoption of uncertainty-based design methods are identified, and the benefits of the use of such methods are explained. Particular research needs are listed.

  8. Comparison of landmark-based and automatic methods for cortical surface registration

    PubMed Central

    Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David; Bernstein, Lynne E.; Damasio, Hanna; Leahy, Richard M.

    2009-01-01

    Group analysis of structure or function in cerebral cortex typically involves as a first step the alignment of the cortices. A surface based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and also that automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696

  9. Web-based emergency response exercise management systems and methods thereof

    DOEpatents

    Goforth, John W.; Mercer, Michael B.; Heath, Zach; Yang, Lynn I.

    2014-09-09

    According to one embodiment, a method for simulating portions of an emergency response exercise includes generating situational awareness outputs associated with a simulated emergency and sending the situational awareness outputs to a plurality of output devices. Also, the method includes outputting to a user device a plurality of decisions associated with the situational awareness outputs at a decision point, receiving a selection of one of the decisions from the user device, generating new situational awareness outputs based on the selected decision, and repeating the sending, outputting and receiving steps based on the new situational awareness outputs. Other methods, systems, and computer program products are included according to other embodiments of the invention.

  10. Validation of a radiosonde-based cloud layer detection method against a ground-based remote sensing method at multiple ARM sites

    NASA Astrophysics Data System (ADS)

    Zhang, Jinqiang; Li, Zhanqing; Chen, Hongbin; Cribb, Maureen

    2013-01-01

    Cloud vertical structure is a key quantity in meteorological and climate studies, but it is also among the most difficult quantities to observe. In this study, we develop a long-term (10 years) radiosonde-based cloud profile product for the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) program Southern Great Plains (SGP), Tropical Western Pacific (TWP), and North Slope of Alaska (NSA) sites and a shorter-term product for the ARM Mobile Facility (AMF) deployed in Shouxian, Anhui Province, China (AMF-China). The AMF-China site was in operation from 14 May to 28 December 2008; the ARM sites have been collecting data for over 15 years. The Active Remote Sensing of Cloud (ARSCL) value-added product (VAP), which combines data from the 95-GHz W-band ARM Cloud Radar (WACR) and/or the 35-GHz Millimeter Microwave Cloud Radar (MMCR), is used in this study to validate the radiosonde-based cloud layer retrieval method. The performance of the radiosonde-based cloud layer retrieval method applied to data from different climate regimes is evaluated. Overall, cloud layers derived from the ARSCL VAP and radiosonde data agree very well at the SGP and AMF-China sites. At the TWP and NSA sites, the radiosonde tends to detect more cloud layers in the upper troposphere.

  11. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2017-02-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: different ways of extracting local histograms to capture spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining a color histogram. In this paper, the performance of CBIR based on different global and local color histograms is surveyed in three different color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, quadratic, and histogram intersection, to choose an appropriate method for future research.
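
    A minimal sketch of one surveyed combination, a global joint color histogram compared with the histogram intersection measure (our illustration with assumed bin counts; the image is taken to be already converted to the chosen color space and scaled to [0, 1]):

```python
import numpy as np

def color_histogram(image, bins=8):
    """Joint 3-D color histogram of an (H, W, 3) image, normalized to sum to 1."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 1), (0, 1), (0, 1)))
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical normalized histograms."""
    return np.minimum(h1, h2).sum()

img_a = np.random.rand(64, 64, 3)   # stand-ins for query and database images
img_b = np.random.rand(64, 64, 3)
print(histogram_intersection(color_histogram(img_a), color_histogram(img_b)))
```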

  12. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
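
    The core building block is easy to state in code. Below is a generic conjugate gradients solver in which capping the iteration count plays the regularizing role described above; the sensitivity matrix and measurement vector are random stand-ins, not EIT data.

```python
import numpy as np

def conjugate_gradients(A, b, n_iter=10, tol=1e-10):
    """Solve A x = b for symmetric positive (semi)definite A.

    Limiting n_iter acts as regularization for ill-conditioned systems:
    the dominant, stable solution components converge first.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# One linearized inverse step: normal equations J^T J dx = J^T dv, where J is a
# hypothetical sensitivity matrix and dv a measured change in mutual impedance.
J = np.random.rand(104, 300)
dv = np.random.rand(104)
dx = conjugate_gradients(J.T @ J, J.T @ dv, n_iter=8)
```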

  13. A quasiparticle-based multi-reference coupled-cluster method.

    PubMed

    Rolik, Zoltán; Kállay, Mihály

    2014-10-07

    The purpose of this paper is to introduce a quasiparticle-based multi-reference coupled-cluster (MRCC) approach. The quasiparticles are introduced via a unitary transformation which allows us to represent a complete active space reference function and other elements of an orthonormal multi-reference (MR) basis in a determinant-like form. The quasiparticle creation and annihilation operators satisfy the fermion anti-commutation relations. On the basis of these quasiparticles, a generalization of the normal-ordered operator products for the MR case can be introduced as an alternative to the approach of Mukherjee and Kutzelnigg [Recent Prog. Many-Body Theor. 4, 127 (1995); Mukherjee and Kutzelnigg, J. Chem. Phys. 107, 432 (1997)]. Based on the new normal ordering any quasiparticle-based theory can be formulated using the well-known diagram techniques. Beyond the general quasiparticle framework we also present a possible realization of the unitary transformation. The suggested transformation has an exponential form where the parameters, holding exclusively active indices, are defined in a form similar to the wave operator of the unitary coupled-cluster approach. The definition of our quasiparticle-based MRCC approach strictly follows the form of the single-reference coupled-cluster method and retains several of its beneficial properties. Test results for small systems are presented using a pilot implementation of the new approach and compared to those obtained by other MR methods.

  14. Hybrid Orientation Based Human Limbs Motion Tracking Method

    PubMed Central

    Glonek, Grzegorz; Wojciechowski, Adam

    2017-01-01

    One of the key technologies that lies behind human–machine interaction and human motion diagnosis is limb motion tracking. To make limb tracking efficient, it must be able to estimate a precise and unambiguous position of each tracked human joint and the resulting body part pose. In recent years, body pose estimation has become very popular and broadly available to home users because of easy access to cheap tracking devices. Robustness can be improved by fusing data from different tracking modes. The paper defines a novel approach, orientation-based data fusion, instead of the position-based approach dominating in the literature, for two classes of tracking devices: depth sensors (i.e., Microsoft Kinect) and inertial measurement units (IMU). A detailed analysis of their working characteristics allowed a new method to be elaborated that fuses limb orientation data from both devices more precisely and compensates for their imprecisions. The paper presents a series of experiments that verified the method’s accuracy. This novel approach outperformed the precision of position-based joint tracking, the methods dominating in the literature, by up to 18%. PMID:29232832
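
    The simplest orientation-domain fusion is a complementary filter that blends an integrated gyroscope rate with an absolute angle estimate (such as one derived from a depth sensor). The single-axis sketch below is our illustration of the general idea, not the paper's Kinect/IMU method; the blend weight alpha is an assumed constant.

```python
import random

def complementary_filter(gyro_rates, absolute_angles, dt=0.01, alpha=0.98):
    """Fuse gyro rate (rad/s) with an absolute angle (rad) on a single axis.

    The integrated gyro term tracks fast motion; the small absolute-angle
    term continuously removes the gyro's drift.
    """
    angle = absolute_angles[0]
    fused = []
    for rate, abs_angle in zip(gyro_rates, absolute_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * abs_angle
        fused.append(angle)
    return fused

# Toy stream: constant 0.5 rad/s rotation, noisy absolute measurements.
rates = [0.5] * 100
absolute = [0.5 * 0.01 * i + random.gauss(0.0, 0.05) for i in range(100)]
print(complementary_filter(rates, absolute)[-1])   # close to 0.5 rad
```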

  15. Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women

    PubMed Central

    Ashman, Amy M.; Collins, Clare E.; Brown, Leanne J.; Rae, Kym M.; Rollo, Megan E.

    2017-01-01

    Image-based dietary records could lower participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days; once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete), median age 29 years, 15 primiparas, eight Aboriginal Australians, completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women. PMID:28106758
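
    The agreement analysis used here is straightforward to reproduce. A minimal sketch of the Bland-Altman bias and 95% limits of agreement (generic statistics on hypothetical numbers, not the study's data):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

energy_images = [8.1, 9.4, 7.7, 10.2, 8.8]   # hypothetical MJ/day, image-based
energy_recalls = [8.5, 9.0, 8.1, 9.8, 9.1]   # hypothetical MJ/day, 24-h recalls
print(bland_altman(energy_images, energy_recalls))
```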

  16. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral calibration and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step count and the transmitted wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  17. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
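
    As a point of reference for the comparison the abstract mentions, the classic explicit 1-D FDTD (Yee) update in free space is only a few lines; this is a minimal sketch with assumed grid parameters, not the paper's implicit LU/AF scheme.

```python
import numpy as np

# 1-D FDTD in free space with normalized units (c = 1, dx = 1).
nx, n_steps = 400, 600
courant = 0.5                     # dt / dx; the explicit scheme needs <= 1 in 1-D
ez = np.zeros(nx)                 # electric field at integer grid points
hy = np.zeros(nx - 1)             # magnetic field at half grid points

for n in range(n_steps):
    hy += courant * (ez[1:] - ez[:-1])           # H update (staggered grid)
    ez[1:-1] += courant * (hy[1:] - hy[:-1])     # E update
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)    # soft Gaussian source

print(float(np.abs(ez).max()))
```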

  18. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (1) two electrodes, platinum and a reference electrode, (2) a buffer amplifier, and (3) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 1 million cells/ml to 7 h for 1 cell/ml. For each 10-fold decrease in inoculum, length of the lag period increased 60 to 70 min. Based on the linear relationship between inoculum and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.

  19. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to- fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency- reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
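
    A rough sketch of the synthesis idea using PyWavelets (our reading of the approach, with an assumed wavelet, decomposition level, and attenuation factor): damping the detail coefficients before reconstruction yields a smooth, high-frequency-reduced surface that can serve as a local threshold.

```python
import numpy as np
import pywt

def wavelet_threshold_surface(image, wavelet="db2", level=3, attenuation=0.1):
    """Reconstruct with damped detail coefficients -> smooth threshold map."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    damped = [coeffs[0]] + [tuple(attenuation * d for d in details)
                            for details in coeffs[1:]]
    surface = pywt.waverec2(damped, wavelet)
    return surface[:image.shape[0], :image.shape[1]]   # trim any padding

image = np.random.rand(128, 128) + np.linspace(0, 1, 128)   # uneven background
binary = image > wavelet_threshold_surface(image)           # local thresholding
```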

  20. Comments on Baumrind's "Are Androgynous Individuals More Effective Persons and Parents?"

    ERIC Educational Resources Information Center

    Spence, Janet T.

    1982-01-01

    Argues that Baumrind (1982), in her discussion of studies employing Bem Sex Role Inventory (BSRI) and Personal Attitudes Questionnaire, confuses theories proposed by Bem (1974) and by Spence and Helmreich (1978, 1979), which are based on different assumptions and have different implications. Outlines differences between the two and points out…

  1. Infrared dim small target segmentation method based on ALI-PCNN model

    NASA Astrophysics Data System (ADS)

    Zhao, Shangnan; Song, Yong; Zhao, Yufei; Li, Yun; Li, Xu; Jiang, Yurong; Li, Lin

    2017-10-01

    The Pulse Coupled Neural Network (PCNN) is improved by Adaptive Lateral Inhibition (ALI), and a method for infrared (IR) dim small target segmentation based on the ALI-PCNN model is proposed in this paper. Firstly, the feeding input signal is modulated by a lateral inhibition network to suppress the background. Then, the linking input is modulated by ALI, and the linking weight matrix is generated adaptively by calculating the ALI coefficient of each pixel. Finally, the binary image is generated through the nonlinear modulation and the pulse generator in the PCNN. The experimental results show that the segmentation effect, as well as the contrast-across-region and uniformity-across-region values, of the proposed method are better than those of the OTSU method, the maximum entropy method, and methods based on conventional PCNN and visual attention, and that the proposed method has excellent performance in extracting IR dim small targets from complex backgrounds.
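
    For readers unfamiliar with the model, the sketch below iterates a classic discrete PCNN (our simplified illustration, without the paper's adaptive lateral inhibition); the coupling kernel and decay constants are assumed values.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(image, n_iter=10, beta=0.3, v_theta=20.0, a_theta=0.2,
                 a_f=0.1, a_l=1.0, v_f=0.5, v_l=0.2):
    """Classic discrete PCNN; returns the binary firing map after n_iter steps."""
    stim = image.astype(float) / (image.max() + 1e-8)    # normalized stimulus
    w = np.array([[0.5, 1.0, 0.5],                       # local coupling kernel
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    feed = np.zeros_like(stim)
    link = np.zeros_like(stim)
    fire = np.zeros_like(stim)
    theta = np.ones_like(stim)
    for _ in range(n_iter):
        fb = convolve(fire, w, mode="constant")
        feed = np.exp(-a_f) * feed + v_f * fb + stim       # feeding input
        link = np.exp(-a_l) * link + v_l * fb              # linking input
        u = feed * (1.0 + beta * link)                     # internal activity
        fire = (u > theta).astype(float)                   # pulse output
        theta = np.exp(-a_theta) * theta + v_theta * fire  # dynamic threshold
    return fire.astype(bool)

mask = pcnn_segment(np.random.rand(64, 64))
```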

  2. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., the Nash model based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and a power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
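
    For reference, the standard SCS-CN rainfall-runoff relation that such models couple with the IUSG, in its textbook form (not this paper's specific parameterisation):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from rainfall P (mm) via the SCS-CN method.

    S = 25400 / CN - 254 is the potential maximum retention (mm) and
    Ia = ia_ratio * S is the initial abstraction (0.2 is conventional);
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0.
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    return (p_mm - ia) ** 2 / (p_mm - ia + s) if p_mm > ia else 0.0

print(scs_cn_runoff(p_mm=80.0, cn=75))   # about 26.9 mm of direct runoff
```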

  3. A novel dose-based positioning method for CT image-guided proton therapy

    PubMed Central

    Cheung, Joey P.; Park, Peter C.; Court, Laurence E.; Ronald Zhu, X.; Kudchadker, Rajat J.; Frank, Steven J.; Dong, Lei

    2013-01-01

    Purpose: Proton dose distributions can potentially be altered by anatomical changes in the beam path despite perfect target alignment using traditional image guidance methods. In this simulation study, the authors explored the use of dosimetric factors instead of only anatomy to set up patients for proton therapy using in-room volumetric computed tomographic (CT) images. Methods: To simulate patient anatomy in a free-breathing treatment condition, weekly time-averaged four-dimensional CT data near the end of treatment for 15 lung cancer patients were used in this study for a dose-based isocenter shift method to correct dosimetric deviations without replanning. The isocenter shift was obtained using the traditional anatomy-based image guidance method as the starting position. Subsequent isocenter shifts were established based on dosimetric criteria using a fast dose approximation method. For each isocenter shift, doses were calculated every 2 mm up to ±8 mm in each direction. The optimal dose alignment was obtained by imposing a target coverage constraint that at least 99% of the target would receive at least 95% of the prescribed dose and by minimizing the mean dose to the ipsilateral lung. Results: The authors found that 7 of 15 plans did not meet the target coverage constraint when using only the anatomy-based alignment. After the authors applied dose-based alignment, all met the target coverage constraint. For all but one case in which the target dose was met using both anatomy-based and dose-based alignment, the latter method was able to improve normal tissue sparing. Conclusions: The authors demonstrated that a dose-based adjustment to the isocenter can improve target coverage and/or reduce dose to nearby normal tissue. PMID:23635262
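
    The search the authors describe has a simple structure: enumerate isocenter shifts on a 2 mm grid up to ±8 mm, keep those satisfying the coverage constraint, and minimize the lung dose. The sketch below is schematic; dose_at is a hypothetical stand-in for the fast dose approximation.

```python
import itertools

def dose_based_shift(dose_at, step_mm=2, max_mm=8):
    """Pick the isocenter shift meeting coverage and minimizing lung dose.

    dose_at(shift) must return (coverage_fraction, lung_mean_dose) for a
    given (dx, dy, dz) shift in mm; here it abstracts the dose engine.
    """
    offsets = range(-max_mm, max_mm + 1, step_mm)
    best, best_lung = None, float("inf")
    for shift in itertools.product(offsets, repeat=3):
        coverage, lung = dose_at(shift)
        # Constraint: >= 99% of the target receives >= 95% of prescription.
        if coverage >= 0.99 and lung < best_lung:
            best, best_lung = shift, lung
    return best

# Hypothetical evaluator: full coverage near the origin, lung dose grows with shift.
def toy_dose_at(shift):
    dist = sum(s * s for s in shift) ** 0.5
    return (1.0 if dist <= 6 else 0.95), 10.0 + 0.3 * dist

print(dose_based_shift(toy_dose_at))   # -> (0, 0, 0)
```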

  4. System and method for deriving a process-based specification

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael Gerard (Inventor); Rouff, Christopher A. (Inventor); Rash, James Larry (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  5. Weaving a Formal Methods Education with Problem-Based Learning

    NASA Astrophysics Data System (ADS)

    Gibson, J. Paul

    The idea of weaving formal methods through computing (or software engineering) degrees is not a new one. However, there has been little success in developing and implementing such a curriculum. Formal methods continue to be taught as stand-alone modules and students, in general, fail to see how fundamental these methods are to the engineering of software. A major problem is one of motivation: how can the students be expected to enthusiastically embrace a challenging subject when the learning benefits, beyond passing an exam and achieving curriculum credits, are not clear? Problem-based learning has gradually moved from being an innovative pedagogical technique, commonly used to better motivate students, to being widely adopted in the teaching of many different disciplines, including computer science and software engineering. Our experience shows that a good problem can be re-used throughout a student's academic life. In fact, the best computing problems can be used with children (young and old), undergraduates and postgraduates. In this paper we present a process for weaving formal methods through a University curriculum that is founded on the application of problem-based learning and a library of good software engineering problems, where students learn about formal methods without sitting a traditional formal methods module. The process of constructing good problems and integrating them into the curriculum is shown to be analogous to the process of engineering software. This approach is not intended to replace more traditional formal methods modules: it will better prepare students for such specialised modules and ensure that all students have an understanding and appreciation for formal methods even if they do not go on to specialise in them.

  6. Active treatments for amblyopia: a review of the methods and evidence base.

    PubMed

    Suttle, Catherine M

    2010-09-01

    Treatment for amblyopia commonly involves passive methods such as occlusion of the non-amblyopic eye. An evidence base for these methods is provided by animal models of visual deprivation and plasticity in early life and randomised controlled studies in humans with amblyopia. Other treatments of amblyopia, intended to be used instead of or in conjunction with passive methods, are known as 'active' because they require some activity on the part of the patient. Active methods are intended to enhance treatment of amblyopia in a number of ways, including increased compliance and attention during the treatment periods (due to activities that are interesting for the patient) and the use of stimuli designed to activate and to encourage connectivity between certain cortical cell types. Active methods of amblyopia treatment are widely available and are discussed to some extent in the literature, but in many cases the evidence base is unclear, and effectiveness has not been thoroughly tested. This review looks at the techniques and evidence base for a range of these methods and discusses the need for an evidence-based approach to the acceptance and use of active amblyopia treatments.

  7. A fast button surface defects detection method based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Liu, Lizhe; Cao, Danhua; Wu, Songlin; Wu, Yubin; Wei, Taoran

    2018-01-01

    Considering the complexity of button surface textures and the variety of buttons and defects, we propose a fast visual method for button surface defect detection based on a convolutional neural network (CNN). A CNN has the ability to extract the essential features by training, avoiding the design of complex feature operators adapted to different kinds of buttons, textures and defects. Firstly, we obtain the normalized button region and then use a HOG-SVM method to identify the front and back side of the button. Finally, a convolutional neural network is developed to recognize the defects. Aiming at detecting subtle defects, we propose a network structure with multiple input feature channels. To deal with defects of different scales, we adopt a strategy of multi-scale image block detection. The experimental results show that our method is valid for a variety of buttons and able to recognize all kinds of defects that occurred, including dents, cracks, stains, holes, wrong paint and unevenness. The detection rate exceeds 96%, which is much better than that of traditional methods based on SVM and on template matching. Our method reaches a speed of 5 fps on a DSP-based smart camera with 600 MHz frequency.
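
    A minimal PyTorch sketch of a small defect classifier of this general shape (our illustration; the paper's architecture, channel counts, and multi-channel input strategy are not reproduced):

```python
import torch
import torch.nn as nn

class ButtonDefectNet(nn.Module):
    """Tiny CNN: two conv blocks, then a linear classifier over 7 classes
    (normal plus the six defect types named in the abstract)."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                         # x: (N, 3, 64, 64) button crops
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = ButtonDefectNet()(torch.randn(8, 3, 64, 64))   # -> shape (8, 7)
```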

  8. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies can be classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of the facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, which is trained to recognize same- and different-expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. The expression of the input facial image is determined as the class whose SVM output is at a minimum, which greatly enhances the accuracy of the expression recognition. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.

  9. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now widely regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification, and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.

  10. Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification

    NASA Astrophysics Data System (ADS)

    Wang, X. P.; Hu, Y.; Chen, J.

    2018-04-01

    Graph based semi-supervised classification methods are widely used for hyperspectral image classification. We present a couple graph based label propagation method, which combines an adjacency graph and a similarity graph. We propose to construct the similarity graph using a similarity probability, which exploits the label similarity among examples. The adjacency graph is utilized by a common manifold learning method, which effectively improves the classification accuracy for hyperspectral data. The experiments indicate that the couple graph Laplacian, which unites both the adjacency graph and the similarity graph, produces superior classification results compared with other manifold-learning-based and sparse-representation-based graph Laplacians in the label propagation framework.

  11. Numerical study for forced MHD convection heat transfer of a nanofluid in a square cavity with a cylinder of constant heat flux

    NASA Astrophysics Data System (ADS)

    Hassanpour, Amin; Ranjbar, A. A.; Sheikholeslami, M.

    2018-02-01

    In this research, the flow and forced convection heat transfer of a water-copper nanofluid in the presence of a magnetic field are studied. The walls of the square ventilated cavity are insulated. The governing equations are solved with the finite-volume method (FVM) using the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm. The effects of the Hartmann number, nanoparticle volume fraction, and Reynolds number on the flow and heat transfer characteristics were examined. The results demonstrate that increasing the Reynolds and Hartmann numbers leads to an increase in the average Nusselt number. By evaluating the geometrical parameters, it was found that the size and number of vortices in the flow field decrease with increasing inlet width, while the average Nusselt number increases with the inlet width. Moreover, the effect of the Hartmann number is more pronounced at higher Reynolds numbers.

  12. [A retrieval method of drug molecules based on graph collapsing].

    PubMed

    Qu, J W; Lv, X Q; Liu, Z M; Liao, Y; Sun, P H; Wang, B; Tang, Z

    2018-04-18

    To establish a compact and efficient hypergraph representation and a graph-similarity-based retrieval method for molecules to achieve effective and efficient medicine information retrieval. Chemical structural formula (CSF) was a primary search target as a unique and precise identifier for each compound at the molecular level in the research field of medicine information retrieval. To retrieve medicine information effectively and efficiently, a complete workflow of the graph-based CSF retrieval system was introduced. This system accepted the photos taken with smartphones and the sketches drawn on tablet personal computers as CSF inputs, and formalized the CSFs as the corresponding graphs. Then this paper proposed a compact and efficient hypergraph representation for molecules on the basis of analyzing the factors that directly affected the efficiency of graph matching. According to the characteristics of CSFs, a hierarchical collapsing method combining graph isomorphism and frequent subgraph mining was adopted. There was yet a fundamental challenge, subgraph overlapping during the collapsing procedure, which hindered the method from establishing the correct compact hypergraph of an original CSF graph. Therefore, a graph-isomorphism-based algorithm was proposed to select dominant acyclic subgraphs on the basis of overlapping analysis. Finally, the spatial similarity among graphical CSFs was evaluated by multi-dimensional measures of similarity. To evaluate the performance of the proposed method, the proposed system was first compared with Wikipedia Chemical Structure Explorer (WCSE), the state-of-the-art system that allowed CSF similarity searching within the Wikipedia molecules dataset, on retrieval accuracy. The system achieved higher values on mean average precision, discounted cumulative gain, rank-biased precision, and expected reciprocal rank than WCSE from the top-2 to the top-10 retrieved results. Specifically, the system achieved 10%, 1.41, 6.42%, and 1
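
    As a toy illustration of the collapsing idea, the sketch below finds a six-membered ring in a small molecular graph by isomorphism against a template and contracts it into a single hypernode with networkx. The molecule, the template, and the use of a cycle basis are illustrative stand-ins for the paper's hierarchical collapsing and frequent-subgraph mining.

      # Minimal sketch: detect a frequent subgraph (a six-membered ring,
      # verified by isomorphism) and collapse it into one hypernode.
      import networkx as nx

      # toy CSF graph: a hexagonal ring with one substituent attached at atom 0
      mol = nx.cycle_graph(6)
      mol.add_edge(0, 6)

      ring_template = nx.cycle_graph(6)

      # find the ring among the graph's cycle basis and verify by isomorphism
      for cycle in nx.cycle_basis(mol):
          sub = mol.subgraph(cycle)
          if nx.is_isomorphic(sub, ring_template):
              # contract all ring atoms into the first one -> one hypernode
              collapsed = mol.copy()
              anchor = cycle[0]
              for node in cycle[1:]:
                  collapsed = nx.contracted_nodes(collapsed, anchor, node,
                                                  self_loops=False)
              print("nodes before/after:", mol.number_of_nodes(),
                    collapsed.number_of_nodes())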

  13. A combined emitter threat assessment method based on ICW-RCM

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Wang, Hongwei; Guo, Xiaotao; Wang, Yubing

    2017-08-01

    Considering that traditional emitter threat assessment methods have difficulty intuitively reflecting the degree of target threat and suffer from deficiencies in real-time performance and complexity, an emitter combined threat assessment algorithm based on ICW-RCM (improved combination weighting method, ICW; radar chart method, RCM) is proposed. Coarse sorting is integrated with fine sorting in the combined threat assessment: the emitter threat level is first sorted roughly according to the radar operation mode, reducing the task priority of low-threat emitters; emitters with the same radar operation mode are then sorted finely on the basis of ICW-RCM, and the final threat assessment results are obtained through the coarse and fine sorting. Simulation analyses show the correctness and effectiveness of this algorithm. Compared with the classical CW-RCM emitter threat assessment method, the algorithm is more intuitive and works quickly with lower complexity.
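
    The sketch below illustrates one way a combination-weighting radar-chart score could be computed: subjective and objective weights are blended, and the weighted indicators are scored by the area of the radar-chart polygon. The indicators, weights, blending factor, and area-based score are all assumptions for illustration; the paper's ICW and RCM details are not reproduced.

      # Minimal sketch of a combination-weighting radar-chart threat score.
      import numpy as np

      def radar_area(values):
          """Area of the radar-chart polygon spanned by the indicator values."""
          n = len(values)
          theta = 2 * np.pi / n
          # sum of triangle areas between consecutive radar spokes
          return 0.5 * np.sin(theta) * np.sum(values * np.roll(values, -1))

      def threat_score(indicators, w_subjective, w_objective, alpha=0.5):
          w = alpha * w_subjective + (1 - alpha) * w_objective  # combined weights
          w /= w.sum()
          return radar_area(indicators * w * len(w))  # weighted radar area

      # toy emitter indicators, e.g. RF proximity, PRF, scan mode, distance
      indicators = np.array([0.9, 0.6, 0.8, 0.4])
      w_subj = np.array([0.4, 0.2, 0.3, 0.1])   # e.g., expert/AHP weights
      w_obj = np.array([0.25, 0.25, 0.3, 0.2])  # e.g., entropy-method weights
      print("threat score:", round(threat_score(indicators, w_subj, w_obj), 3))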

  14. Practical quantum mechanics-based fragment methods for predicting molecular crystal properties.

    PubMed

    Wen, Shuhao; Nanda, Kaushik; Huang, Yuanhang; Beran, Gregory J O

    2012-06-07

    Significant advances in fragment-based electronic structure methods have created a real alternative to force-field and density functional techniques in condensed-phase problems such as molecular crystals. This perspective article highlights some of the important challenges in modeling molecular crystals and discusses techniques for addressing them. First, we survey recent developments in fragment-based methods for molecular crystals. Second, we use examples from our own recent research on a fragment-based QM/MM method, the hybrid many-body interaction (HMBI) model, to analyze the physical requirements for a practical and effective molecular crystal model chemistry. We demonstrate that it is possible to predict molecular crystal lattice energies to within a couple of kJ mol⁻¹ and lattice parameters to within a few percent in small-molecule crystals. Fragment methods provide a systematically improvable approach to making predictions in the condensed phase, which is critical to making robust predictions regarding the subtle energy differences found in molecular crystals.

  15. Transistor-based particle detection systems and methods

    DOEpatents

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  16. Magnetic Signature: Small Arms Testing of Multiple Examples of Same Model Weapons

    DTIC Science & Technology

    2009-04-01

    inside the wooden building, showing a three-axis fluxgate magnetometer, north-south path lines, and instrumentation system... the FVM-400 Vector Fluxgate Magnetometer by Macintyre Electronics Design Associates, Inc. (MEDA) was used, and in other cases two DFM100G2 Digital Fluxgate Magnetometers made by Billingsley Magnetics were used. The majority of the data was obtained with the latter. The MEDA has a 1 nT

  17. Comparing team-based and mixed active-learning methods in an ambulatory care elective course.

    PubMed

    Zingone, Michelle M; Franks, Andrea S; Guirguis, Alexander B; George, Christa M; Howard-Thompson, Amanda; Heidel, Robert E

    2010-11-10

    To assess students' performance and perceptions of team-based and mixed active-learning methods in 2 ambulatory care elective courses, and to describe faculty members' perceptions of team-based learning. Students' grades under the 2 teaching methods were compared. Students' perceptions were assessed through 2 anonymous course evaluation instruments. Faculty members who taught courses using the team-based learning method were surveyed regarding their impressions of team-based learning. The ambulatory care course was offered to 64 students using team-based learning (n = 37) and mixed active learning (n = 27) formats. The mean quality points earned were 3.7 (team-based learning) and 3.3 (mixed active learning), p < 0.001. Course evaluations for both courses were favorable. All faculty members who used the team-based learning method reported that they would consider using team-based learning in another course. Students were satisfied with both teaching methods; however, student grades were significantly higher in the team-based learning course. Faculty members recognized team-based learning as an effective teaching strategy for small-group active learning.

  18. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods

    PubMed Central

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-01-01

    Background Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. Objectives The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a Gene Co-expression Network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. Materials and Methods In the foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman’s rank correlation coefficient and Blomqvist’s measure, and compared them with Pearson’s correlation coefficient. We surveyed six genes of venous thrombosis disease, made a matrix entry representing the score for the corresponding gene pair, and obtained two-way interactions using Pearson’s correlation, Spearman’s rank correlation, and Blomqvist’s coefficient. Finally, these methods were compared with Cytoscape, based on BIND, and Gene Ontology, based on molecular function visual methods; R software version 3.2 and Bioconductor were used to perform these methods. Results Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist’s coefficient was not confirmed by the visual methods. Conclusions Some of the correlation-coefficient results do not agree with the visualization; the reason may be the small amount of data. PMID:27621916
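
    A minimal sketch of the pairwise scoring and thresholding steps, using SciPy's Pearson and Spearman coefficients on simulated expression data; the gene names, the data, and the threshold are illustrative (Blomqvist's measure is omitted, since SciPy does not provide it).

      # Minimal sketch of a correlation-based two-way gene network: score all
      # gene pairs and connect those whose scores exceed a threshold.
      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      rng = np.random.default_rng(2)
      genes = ["F5", "F2", "PROC", "PROS1", "SERPINC1", "MTHFR"]  # illustrative
      expr = rng.normal(size=(6, 30))                  # 6 genes x 30 samples
      expr[1] = expr[0] + rng.normal(scale=0.3, size=30)  # plant one real link

      threshold = 0.7
      edges = []
      for i in range(len(genes)):
          for j in range(i + 1, len(genes)):
              r_p, _ = pearsonr(expr[i], expr[j])
              r_s, _ = spearmanr(expr[i], expr[j])
              if abs(r_p) > threshold and abs(r_s) > threshold:
                  edges.append((genes[i], genes[j], round(r_p, 2)))
      print(edges)                    # expect the (F5, F2) pair to appear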

  19. Object-based change detection method using refined Markov random field

    NASA Astrophysics Data System (ADS)

    Peng, Daifeng; Zhang, Yongjun

    2017-01-01

    In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, the two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is implemented to measure the distance among different histogram distributions. Meanwhile, object heterogeneity is calculated by combining the spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
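
    The G-statistic distance between two object histograms can be sketched as a likelihood-ratio G-test with pooled expected counts, as below; the pooled-expectation form and the toy histograms are assumptions, since the abstract does not spell out the exact formulation.

      # Minimal sketch of a G-statistic distance between two object histograms.
      import numpy as np

      def g_statistic(h1, h2):
          h1 = np.asarray(h1, float)
          h2 = np.asarray(h2, float)
          g = 0.0
          for obs in (h1, h2):
              # expected counts if both objects shared one distribution
              exp = (h1 + h2) * obs.sum() / (h1.sum() + h2.sum())
              mask = obs > 0
              g += 2.0 * np.sum(obs[mask] * np.log(obs[mask] / exp[mask]))
          return g

      # two 8-bin gray-level histograms from hypothetical image objects
      h_t1 = [12, 30, 44, 25, 10, 4, 2, 1]
      h_t2 = [10, 28, 40, 27, 12, 6, 3, 2]   # similar object -> small G
      h_t3 = [2, 4, 8, 15, 30, 40, 20, 9]    # changed object -> large G
      print(g_statistic(h_t1, h_t2), g_statistic(h_t1, h_t3))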

  20. Metaphoric Investigation of the Phonic-Based Sentence Method

    ERIC Educational Resources Information Center

    Dogan, Birsen

    2012-01-01

    This study aimed to understand the views of prospective teachers on the "phonic-based sentence method" through metaphoric images. In this descriptive study, the participants were prospective teachers taking reading-writing instruction courses in the Primary School Classroom Teaching Program of the Education Faculty of Pamukkale…

  1. Towards robust and repeatable sampling methods in eDNA based studies.

    PubMed

    Dickie, Ian A; Boyer, Stephane; Buckley, Hannah; Duncan, Richard P; Gardner, Paul; Hogg, Ian D; Holdaway, Robert J; Lear, Gavin; Makiola, Andreas; Morales, Sergio E; Powell, Jeff R; Weaver, Louise

    2018-05-26

    DNA based techniques are increasingly used for measuring the biodiversity (species presence, identity, abundance and community composition) of terrestrial and aquatic ecosystems. While there are numerous reviews of molecular methods and bioinformatic steps, there has been little consideration of the methods used to collect samples upon which these later steps are based. This represents a critical knowledge gap, as methodologically sound field sampling is the foundation for subsequent analyses. We reviewed field sampling methods used for metabarcoding studies of both terrestrial and freshwater ecosystem biodiversity over a nearly three-year period (n = 75). We found that 95% (n = 71) of these studies used subjective sampling methods, inappropriate field methods, and/or failed to provide critical methodological information. It would be possible for researchers to replicate only 5% of the metabarcoding studies in our sample, a poorer level of reproducibility than for ecological studies in general. Our findings suggest greater attention to field sampling methods and reporting is necessary in eDNA-based studies of biodiversity to ensure robust outcomes and future reproducibility. Methods must be fully and accurately reported, and protocols developed that minimise subjectivity. Standardisation of sampling protocols would be one way to help to improve reproducibility, and have additional benefits in allowing compilation and comparison of data from across studies.

  2. Attribute-Based Methods

    Treesearch

    Thomas P. Holmes; Wiktor L. Adamowicz

    2003-01-01

    Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...

  3. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review.

    PubMed

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; Rieder, Rafael; De Marchi, Ana Carolina Bertoletti

    2016-10-04

    Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Of the 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology

  4. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    DOT National Transportation Integrated Search

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels of the mixture-based specification for flexible base, and details on aggregate and test methods employed, along with agency and co...

  5. Scene-based method for spatial misregistration detection in hyperspectral imagery.

    PubMed

    Dell'Endice, Francesco; Nieke, Jens; Schläpfer, Daniel; Itten, Klaus I

    2007-05-20

    Hyperspectral imaging (HSI) sensors suffer from spatial misregistration, an artifact that prevents the accurate acquisition of the spectra. Physical considerations let us assume that the influence of spatial misregistration on the acquired data depends both on the wavelength and on the across-track position. A scene-based method, based on edge detection, is therefore proposed. This procedure measures the variation in the spatial location of an edge between its various monochromatic projections, giving an estimate of the spatial misregistration and also allowing identification of misalignments. The method has been applied to several hyperspectral sensors of either prism- or grating-based design. The results confirm the assumed dependence on λ and θ, the spectral wavelength and the across-track pixel, respectively. Suggestions are also given to correct for spatial misregistration.
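
    A minimal sketch of the underlying measurement: locate the same edge in two monochromatic projections via a gradient-weighted centroid and difference the positions. The synthetic logistic edge profiles and the centroid estimator are illustrative assumptions.

      # Minimal sketch: estimate band-to-band spatial misregistration from
      # the shift of one edge between two spectral bands.
      import numpy as np

      def edge_position(profile):
          grad = np.abs(np.diff(profile))
          x = np.arange(len(grad))
          return (x * grad).sum() / grad.sum()  # gradient-weighted edge centroid

      x = np.linspace(0, 10, 200)
      band_a = 1.0 / (1.0 + np.exp(-(x - 5.0) * 4))   # edge at pixel ~100
      band_b = 1.0 / (1.0 + np.exp(-(x - 5.1) * 4))   # same edge shifted ~2 px

      shift = edge_position(band_b) - edge_position(band_a)
      print(f"estimated spatial misregistration: {shift:.2f} pixels")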

  6. Summary of water body extraction methods based on ZY-3 satellite

    NASA Astrophysics Data System (ADS)

    Zhu, Yu; Sun, Li Jian; Zhang, Chuan Yin

    2017-12-01

    Extracting water bodies from remote sensing images is one of the main means of obtaining water information. Owing to spectral characteristics, many methods cannot be applied to ZY-3 satellite imagery. To solve this problem, we summarize the extraction methods applicable to ZY-3 and analyze the results of existing methods. According to the characteristics of the extraction results, a method combining a water index (WI) with a single-band threshold and a method of texture filtering based on probability statistics are explored. In addition, the advantages and disadvantages of all methods are compared, which provides a reference for research on water extraction from images. The conclusions are as follows. 1) The NIR band has higher water sensitivity; consequently, when the surface reflectance in the study area is not similar to that of water, a single-band threshold or a multi-band operation can achieve the desired effect. 2) Compared with the water index and HIS optimal index methods, rule-based object extraction, which takes into account not only the spectral information of water but also spatial and textural feature constraints, can achieve better extraction results, although the image segmentation process is time-consuming and the definition of the rules requires some expertise. 3) Combining spectral relationships with a water index can eliminate the interference of shadows to a certain extent. When there is little small-scale water, or small water bodies are not considered in further study, texture filtering based on probability statistics can effectively reduce noise in the result and avoid confusing shadows or paddy fields with water to a certain extent.
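
    A minimal sketch of the WI-plus-single-band-threshold route, assuming the common NDWI = (Green - NIR) / (Green + NIR); the band arrays and both thresholds are illustrative and not calibrated for ZY-3.

      # Minimal sketch: intersect an NDWI water index with an NIR threshold.
      import numpy as np

      def water_mask(green, nir, ndwi_thresh=0.2, nir_thresh=0.15):
          ndwi = (green - nir) / (green + nir + 1e-9)
          # water is bright in green, dark in NIR: require both conditions
          return (ndwi > ndwi_thresh) & (nir < nir_thresh)

      rng = np.random.default_rng(3)
      green = rng.uniform(0.05, 0.3, size=(4, 4))
      nir = rng.uniform(0.05, 0.4, size=(4, 4))
      nir[1:3, 1:3] = 0.03                  # a small dark-NIR "lake"
      print(water_mask(green, nir).astype(int))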

  7. Secure Method for Biometric-Based Recognition with Integrated Cryptographic Functions

    PubMed Central

    Chiou, Shin-Yan

    2013-01-01

    Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, certification in biometric systems need not achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied. PMID:23762851

  8. Secure method for biometric-based recognition with integrated cryptographic functions.

    PubMed

    Chiou, Shin-Yan

    2013-01-01

    Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, certification in biometric systems need not achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.

  9. Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth

    NASA Astrophysics Data System (ADS)

    Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.

    2017-09-01

    With the rapid development of LBS (location-based services), demand for the commercialization of indoor positioning has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor positioning, the complexity of the algorithm, and the cost of positioning are difficult to balance simultaneously, which still restricts the adoption and application of mainstream positioning technologies. Therefore, this paper proposes a knowledge-based optimization method for indoor positioning based on Bluetooth Low Energy. The main steps include: 1) establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning on the simulation data. The proposed scheme is a process of dynamic knowledge accumulation rather than a single positioning run. The scheme uses inexpensive equipment and provides a new idea for the theory and method of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value in commercial promotion.
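
    Step 3 (elimination of positioning gross errors) can be sketched as robust outlier rejection on per-beacon RSSI readings, for example with a median/MAD rule as below; the readings and the cut-off factor are illustrative assumptions rather than the paper's actual criterion.

      # Minimal sketch: discard Bluetooth RSSI readings that deviate from the
      # per-beacon median by more than a MAD-based bound before positioning.
      import numpy as np

      def reject_gross_errors(rssi, k=3.0):
          rssi = np.asarray(rssi, float)
          med = np.median(rssi)
          mad = np.median(np.abs(rssi - med))
          sigma = 1.4826 * mad              # MAD -> robust std-dev estimate
          keep = np.abs(rssi - med) <= k * sigma
          return rssi[keep]

      # one beacon's RSSI series with two outliers (e.g., multipath spikes)
      readings = [-62, -61, -63, -60, -62, -85, -61, -40, -62]
      print(reject_gross_errors(readings))  # the -85 and -40 samples are dropped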

  10. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.

    PubMed

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-06-01

    Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a Gene Co-expression Network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In the foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes of venous thrombosis disease, made a matrix entry representing the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with Cytoscape, based on BIND, and Gene Ontology, based on molecular function visual methods; R software version 3.2 and Bioconductor were used to perform these methods. Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Some of the correlation-coefficient results do not agree with the visualization; the reason may be the small amount of data.

  11. Salient object detection method based on multiple semantic features

    NASA Astrophysics Data System (ADS)

    Wang, Chunyang; Yu, Chunyan; Song, Meiping; Wang, Yulei

    2018-04-01

    Existing salient object detection models can only detect the approximate location of a salient object, or mistakenly highlight the background. To resolve this problem, a salient object detection method based on multiple image semantic features is proposed. First, three novel saliency features are presented in this paper: an object edge density feature (EF), an object semantic feature based on the convex hull (CF), and an object lightness contrast feature (LF). Second, the multiple saliency features are trained with random detection windows. Third, a Naive Bayesian model is used to combine these features for salient detection. Results on public datasets showed that our method performs well: the location of the salient object can be fixed, and the salient object can be accurately detected and marked by the specific window.
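
    The feature-combination step can be sketched with a Gaussian Naive Bayes model over candidate windows, as below; the three feature columns (EF, CF, LF), their distributions, and the windows are simulated stand-ins for the trained features described above.

      # Minimal sketch: combine several saliency features with Naive Bayes
      # over candidate detection windows.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(4)
      # columns: EF, CF, LF for 300 random training windows
      X_obj = rng.normal(loc=[0.7, 0.8, 0.6], scale=0.15, size=(150, 3))
      X_bg = rng.normal(loc=[0.3, 0.2, 0.3], scale=0.15, size=(150, 3))
      X = np.vstack([X_obj, X_bg])
      y = np.hstack([np.ones(150), np.zeros(150)])  # 1 = salient window

      nb = GaussianNB().fit(X, y)

      # score candidate windows of a test image; the top window marks the object
      windows = np.array([[0.65, 0.75, 0.55],
                          [0.35, 0.25, 0.30],
                          [0.50, 0.50, 0.45]])
      saliency = nb.predict_proba(windows)[:, 1]
      print("most salient window:", int(np.argmax(saliency)))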

  12. A supervoxel-based segmentation method for prostate MR images.

    PubMed

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Xue, Jianru; Fei, Baowei

    2017-02-01

    Segmentation of the prostate on MR images has many applications in prostate cancer management. In this work, we propose a supervoxel-based segmentation method for prostate MR images. A supervoxel is a set of pixels that have similar intensities, locations, and textures in a 3D image volume. The prostate segmentation problem is considered as assigning a binary label to each supervoxel, which is either the prostate or background. A supervoxel-based energy function with data and smoothness terms is used to model the label. The data term estimates the likelihood of a supervoxel belonging to the prostate by using a supervoxel-based shape feature. The geometric relationship between two neighboring supervoxels is used to build the smoothness term. The 3D graph cut is used to minimize the energy function to get the labels of the supervoxels, which yields the prostate segmentation. A 3D active contour model is then used to get a smooth surface by using the output of the graph cut as an initialization. The performance of the proposed algorithm was evaluated on 30 in-house MR image volumes and the PROMISE12 dataset. The mean Dice similarity coefficients are 87.2 ± 2.3% and 88.2 ± 2.8% for our 30 in-house MR volumes and the PROMISE12 dataset, respectively. The proposed segmentation method yields a satisfactory result for prostate MR images. The proposed supervoxel-based method can accurately segment prostate MR images and can have a variety of applications in prostate cancer diagnosis and therapy. © 2016 American Association of Physicists in Medicine.

  13. Picking vs Waveform based detection and location methods for induced seismicity monitoring

    NASA Astrophysics Data System (ADS)

    Grigoli, Francesco; Boese, Maren; Scarabello, Luca; Diehl, Tobias; Weber, Bernd; Wiemer, Stefan; Clinton, John F.

    2017-04-01

    Microseismic monitoring is a common operation in various industrial activities related to geo-resources, such as oil and gas, mining, and geothermal energy exploitation. In microseismic monitoring we generally deal with large datasets from dense monitoring networks that require robust automated analysis procedures. The seismic sequences being monitored are often characterized by very many events with short inter-event times, which can even produce overlapping seismic signatures. In these situations, traditional approaches that identify seismic events using dense seismic networks based on detections, phase identification, and event association can fail, leading to missed detections and/or reduced location resolution. In recent years, to improve the quality of automated catalogues, various waveform-based methods for the detection and location of microseismicity have been proposed. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Although this family of methods has been applied to different induced seismicity datasets, an extensive comparison with sophisticated pick-based detection and location methods is still lacking. We aim here to perform a systematic comparison in terms of performance between the waveform-based method LOKI and the pick-based detection and location methods (SCAUTOLOC and SCANLOC) implemented within the SeisComP3 software package. SCANLOC is a new detection and location method specifically designed for seismic monitoring at local scale. Although recent applications have proved promising, an extensive test with induced seismicity datasets has not yet been performed. This method is based on a cluster search algorithm to associate detections to one or many potential earthquake sources. On the other hand, SCAUTOLOC is a more "conventional" method and is the basic tool for seismic event detection and location in SeisComP3. This approach was specifically designed for

  14. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  15. Explorations in Using Arts-Based Self-Study Methods

    ERIC Educational Resources Information Center

    Samaras, Anastasia P.

    2010-01-01

    Research methods courses typically require students to conceptualize, describe, and present their research ideas in writing. In this article, the author describes her exploration in using arts-based techniques for teaching research to support the development of students' self-study research projects. The pedagogical approach emerged from the…

  16. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study

    PubMed Central

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-01-01

    Purpose: To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Methods: Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude, and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients’ breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and
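
    Steps (1) and (2) can be sketched as follows: split a simulated breathing trace into cycles at end-exhalation minima, characterize each cycle by amplitude and period, and keep (amplitude, period) groups holding more than 10% of all cycles as main breathing patterns. The signal, the binning grid, and the minima-based cycle boundaries are illustrative assumptions.

      # Minimal sketch of decomposing and grouping breathing cycles.
      import numpy as np
      from scipy.signal import find_peaks

      t = np.linspace(0, 120, 6000)                     # 2 min at 50 Hz
      amp = 1.0 + 0.25 * (np.sin(0.05 * t) > 0)         # two amplitude regimes
      signal = amp * np.sin(2 * np.pi * t / 4.0)        # ~4 s breathing period

      minima, _ = find_peaks(-signal)                   # cycle boundaries
      cycles = []
      for a, b in zip(minima[:-1], minima[1:]):
          seg = signal[a:b]
          cycles.append((seg.max() - seg.min(), t[b] - t[a]))  # (amp, period)
      cycles = np.array(cycles)

      # group cycles on a coarse (amplitude, period) grid; keep groups that
      # hold more than 10% of all cycles as main breathing patterns
      keys = np.round(cycles / [0.25, 0.5]).astype(int)  # bin sizes assumed
      uniq, counts = np.unique(keys, axis=0, return_counts=True)
      main = uniq[counts > 0.1 * len(cycles)]
      print(f"{len(cycles)} cycles, {len(main)} main breathing pattern(s)")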

  17. Betweenness-Based Method to Identify Critical Transmission Sectors for Supply Chain Environmental Pressure Mitigation.

    PubMed

    Liang, Sai; Qu, Shen; Xu, Ming

    2016-02-02

    To develop industry-specific policies for mitigating environmental pressures, previous studies primarily focus on identifying sectors that directly generate large amounts of environmental pressures (a.k.a. the production-based method) or indirectly drive large amounts of environmental pressures through supply chains (e.g., the consumption-based method). In addition to those sectors that are important environmental pressure producers or drivers, there exist sectors that are also important to environmental pressure mitigation as transmission centers. Economy-wide environmental pressure mitigation might be achieved by improving the production efficiency of these key transmission sectors, that is, using fewer upstream inputs to produce a unit of output. We develop a betweenness-based method to measure the importance of transmission sectors, borrowing the betweenness concept from network analysis. We quantify the betweenness of sectors by examining supply chain paths, extracted from structural path analysis, that pass through a particular sector. We take China as an example and find that the critical transmission sectors identified by the betweenness-based method are not always identifiable by existing methods. This indicates that the betweenness-based method can provide additional insights, unobtainable with existing methods, into the roles individual sectors play in generating economy-wide environmental pressures. The betweenness-based method proposed here can therefore complement existing methods for guiding sector-level environmental pressure mitigation strategies.
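
    As a toy illustration of the betweenness idea, the sketch below scores sectors of a small hypothetical supply-chain graph with networkx's standard betweenness centrality; the paper instead derives betweenness from structural path analysis of input-output tables, so this stands in for, rather than reproduces, its measure.

      # Minimal sketch: sectors that many supply-chain paths pass through
      # score high on betweenness centrality.
      import networkx as nx

      G = nx.DiGraph()
      # hypothetical supply-chain links: upstream -> downstream
      G.add_edges_from([
          ("Mining", "Metals"), ("Metals", "Machinery"),
          ("Metals", "Construction"), ("Power", "Metals"),
          ("Machinery", "Vehicles"), ("Power", "Machinery"),
      ])

      bc = nx.betweenness_centrality(G)
      for sector, score in sorted(bc.items(), key=lambda kv: -kv[1]):
          print(f"{sector:12s} {score:.3f}")   # "Metals" emerges on top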

  18. A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum.

    PubMed

    Liu, Pan; Deng, Xiaoyan; Tang, Xin; Shen, Shijian

    2017-05-01

    This paper presents a wavelet-based Gaussian method (WGM) for the peak intensity estimation of energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of a Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, provided the peak position is known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters from the detail wavelet coefficients at the peak point. The proposed method is tested via simulated and measured spectra from an energy X-ray spectrometer and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity of EDXRF free from the background information, and can also effectively distinguish overlapping peaks in the EDXRF spectrum.

  19. Method to produce nanocrystalline powders of oxide-based phosphors for lighting applications

    DOEpatents

    Loureiro, Sergio Paulo Martins; Setlur, Anant Achyut; Williams, Darryl Stephen; Manoharan, Mohan; Srivastava, Alok Mani

    2007-12-25

    Some embodiments of the present invention are directed toward nanocrystalline oxide-based phosphor materials, and methods for making same. Typically, such methods comprise a steric entrapment route for converting precursors into such phosphor material. In some embodiments, the nanocrystalline oxide-based phosphor materials are quantum splitting phosphors. In some or other embodiments, such nanocrystalline oxide based phosphor materials provide reduced scattering, leading to greater efficiency, when used in lighting applications.

  20. Distance-Based Phylogenetic Methods Around a Polytomy.

    PubMed

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
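
    Since UPGMA is average-linkage hierarchical clustering on a dissimilarity map, SciPy reproduces it directly, as sketched below on a toy four-taxon matrix chosen to sit near a polytomy; the matrix and the Newick printer are illustrative.

      # Minimal sketch: UPGMA tree reconstruction via average linkage.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, to_tree
      from scipy.spatial.distance import squareform

      taxa = ["A", "B", "C", "D"]
      D = np.array([
          [0.0, 0.3, 0.5, 0.5],
          [0.3, 0.0, 0.5, 0.5],
          [0.5, 0.5, 0.0, 0.3],
          [0.5, 0.5, 0.3, 0.0],
      ])

      Z = linkage(squareform(D), method="average")  # UPGMA = average linkage
      tree = to_tree(Z)

      def newick(node):
          if node.is_leaf():
              return taxa[node.id]
          return f"({newick(node.left)},{newick(node.right)})"

      print(newick(tree) + ";")                     # e.g. ((A,B),(C,D));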

  1. Bridge Displacement Monitoring Method Based on Laser Projection-Sensing Technology

    PubMed Central

    Zhao, Xuefeng; Liu, Hao; Yu, Yan; Xu, Xiaodong; Hu, Weitong; Li, Mingchu; Ou, Jingping

    2015-01-01

    Bridge displacement is the most basic evaluation index of the health status of a bridge structure. The existing measurement methods for bridge displacement largely fail to realize long-term, real-time dynamic monitoring of bridge structures because of their low degree of automation and insufficient precision, which causes bottlenecks and restrictions. To solve this problem, we proposed a bridge displacement monitoring system based on laser projection-sensing technology. First, the laser spot recognition method was studied. Second, the software for the displacement monitoring system was developed. Finally, a series of experiments using this system were conducted, and the results show that the system has high measurement accuracy and speed. We aim to develop a low-cost, high-accuracy, long-term monitoring method for bridge displacement based on these preliminary efforts. PMID:25871716
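
    The laser spot recognition step can be sketched as thresholding a camera frame and taking the intensity-weighted centroid of the brightest blob, whose drift between frames would track displacement; the synthetic frame, the 50%-of-maximum threshold, and the largest-component rule below are illustrative assumptions, not the paper's algorithm.

      # Minimal sketch: locate a laser spot by thresholding and centroiding.
      import numpy as np
      from scipy import ndimage

      frame = np.zeros((120, 160))
      yy, xx = np.mgrid[0:120, 0:160]
      # synthetic Gaussian laser spot centered at (x=93.4, y=61.2)
      frame += 255 * np.exp(-((xx - 93.4) ** 2 + (yy - 61.2) ** 2) / 18.0)

      mask = frame > 0.5 * frame.max()          # keep only the bright spot
      labels, n = ndimage.label(mask)
      # intensity-weighted centroid of the largest connected component
      sizes = ndimage.sum(mask, labels, range(1, n + 1))
      spot = int(np.argmax(sizes)) + 1
      cy, cx = ndimage.center_of_mass(frame, labels, spot)
      print(f"laser spot at x={cx:.2f}, y={cy:.2f} pixels")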

  2. A degradation-based sorting method for lithium-ion battery reuse.

    PubMed

    Chen, Hao; Shen, Julia

    2017-01-01

    In a world where millions of people are dependent on batteries to provide them with convenient and portable energy, battery recycling is of the utmost importance. In this paper, we developed a new method to sort 18650 Lithium-ion batteries in large quantities and in real time for harvesting used cells with enough capacity for battery reuse. Internal resistance and capacity tests were conducted as a basis for comparison with a novel degradation-based method based on X-ray radiographic scanning and digital image contrast computation. The test results indicate that the sorting accuracy of the test cells is about 79% and the execution time of our algorithm is at a level of 200 milliseconds, making our method a potential real-time solution for reusing the remaining capacity in good used cells.

  3. A degradation-based sorting method for lithium-ion battery reuse

    PubMed Central

    Chen, Hao

    2017-01-01

    In a world where millions of people are dependent on batteries to provide them with convenient and portable energy, battery recycling is of the utmost importance. In this paper, we developed a new method to sort 18650 Lithium-ion batteries in large quantities and in real time for harvesting used cells with enough capacity for battery reuse. Internal resistance and capacity tests were conducted as a basis for comparison with a novel degradation-based method based on X-ray radiographic scanning and digital image contrast computation. The test results indicate that the sorting accuracy of the test cells is about 79% and the execution time of our algorithm is at a level of 200 milliseconds, making our method a potential real-time solution for reusing the remaining capacity in good used cells. PMID:29023485

  4. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  5. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.

  6. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery has gained wide attention for its significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map, finally forming a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopted to reduce the dimensionality of the feature vector. At last, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  7. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery has gained wide attention for its significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map, finally forming a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopted to reduce the dimensionality of the feature vector. At last, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  8. Method for fabricating beryllium-based multilayer structures

    DOEpatents

    Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.

    2003-02-18

    Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but also the industrial hygiene controls for the safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).

  9. Continental-scale Validation of MODIS-based and LEDAPS Landsat ETM+ Atmospheric Correction Methods

    NASA Technical Reports Server (NTRS)

    Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy

    2012-01-01

    The potential of Landsat data processing to provide systematic continental scale products has been demonstrated by several projects including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition and assumes a fixed continental aerosol type and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET corrected ETM+ data for 95 10 km×10 km 30 m subsets, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer wavelength bands.

  10. Development of DNA-based Identification methods to track the ...

    EPA Pesticide Factsheets

    The ability to track the identity and abundance of larval fish, which are ubiquitous during spawning season, may lead to a greater understanding of fish species distributions in Great Lakes nearshore areas including early-detection of invasive fish species before they become established. However, larval fish are notoriously hard to identify using traditional morphological techniques. While DNA-based identification methods could increase the ability of aquatic resource managers to determine larval fish composition, use of these methods in aquatic surveys is still uncommon and presents many challenges. In response to this need, we have been working with the U. S. Fish and Wildlife Service to develop field and laboratory methods to facilitate the identification of larval fish using DNA-meta-barcoding. In 2012, we initiated a pilot-project to develop a workflow for conducting DNA-based identification, and compared the species composition at sites within the St. Louis River Estuary of Lake Superior using traditional identification versus DNA meta-barcoding. In 2013, we extended this research to conduct DNA-identification of fish larvae collected from multiple nearshore areas of the Great Lakes by the USFWS. The species composition of larval fish generally mirrored that of fish species known from the same areas, but was influenced by the timing and intensity of sampling. Results indicate that DNA-based identification needs only very low levels of biomass to detect pre

  11. A low delay transmission method of multi-channel video based on FPGA

    NASA Astrophysics Data System (ADS)

    Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei

    2018-03-01

    To guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed an FPGA-based video format conversion method and a DMA scheduling scheme for video data that reduce the overall video transmission delay. To save time in the conversion process, the parallelism of the FPGA is exploited for video format conversion. To improve the direct memory access (DMA) write transmission rate on the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the proposed low-delay FPGA-based transmission method increases the DMA write transmission rate by 34% compared with the existing method, reducing the overall video delay to 23.6 ms.

  12. Changes in Teaching Efficacy during a Professional Development School-Based Science Methods Course

    ERIC Educational Resources Information Center

    Swars, Susan L.; Dooley, Caitlin McMunn

    2010-01-01

    This mixed methods study offers a theoretically grounded description of a field-based science methods course within a Professional Development School (PDS) model (i.e., PDS-based course). The preservice teachers' (n = 21) experiences within the PDS-based course prompted significant changes in their personal teaching efficacy, with the…

  13. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    PubMed

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate the performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles; if a group contains more than 10% of all breathing cycles in a breathing signal, it is determined to be a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured
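
    The first two steps of the decomposition described above lend themselves to a short sketch. The following is a minimal Python illustration of cycle decomposition and grouping; the peak-detection threshold and grouping tolerances are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of steps 1-2 of the described sorting method. The 1.5 s
# minimum peak spacing and the 20% grouping tolerances are assumed values.
import numpy as np
from scipy.signal import find_peaks

def main_breathing_cycles(signal, fs, amp_tol=0.2, per_tol=0.2, min_frac=0.10):
    """Split a breathing trace (sampled at fs Hz) into cycles, return main groups."""
    peaks, _ = find_peaks(signal, distance=max(1, int(1.5 * fs)))  # one peak per breath
    cycles = []
    for a, b in zip(peaks[:-1], peaks[1:]):                # step 1: decompose
        seg = signal[a:b]
        cycles.append({"amp": seg.max() - seg.min(),
                       "period": (b - a) / fs,
                       "samples": seg})
    groups = []                                            # step 2: group cycles
    for c in cycles:                                       # by amplitude + period
        for g in groups:
            if (abs(c["amp"] - g[0]["amp"]) < amp_tol * g[0]["amp"] and
                    abs(c["period"] - g[0]["period"]) < per_tol * g[0]["period"]):
                g.append(c)
                break
        else:
            groups.append([c])
    # a group is a main breathing pattern if it holds >10% of all cycles
    return [g for g in groups if len(g) > min_frac * len(cycles)]
```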

  14. Near-field radiative heat transfer in scanning thermal microscopy computed with the boundary element method

    NASA Astrophysics Data System (ADS)

    Nguyen, K. L.; Merchiers, O.; Chapuis, P.-O.

    2017-11-01

    We compute the near-field radiative heat transfer between a hot AFM tip and a cold substrate. This contribution to the tip-sample heat transfer in Scanning Thermal Microscopy is often overlooked, despite its leading role when the tip is out of contact. For dielectrics, we provide power levels exchanged as a function of the tip-sample distance in vacuum and spatial maps of the heat flux deposited into the sample, which indicate the near-contact spatial resolution. The results are compared to analytical expressions of the Proximity Flux Approximation. The numerical results are obtained by means of the Boundary Element Method (BEM) implemented in the SCUFF-EM software, and first require a thorough convergence analysis in which the method is progressively applied to thermal emission by a sphere, radiative transfer between two spheres, and radiative exchange between a sphere and a finite substrate.

  15. Constructing financial network based on PMFG and threshold method

    NASA Astrophysics Data System (ADS)

    Nie, Chun-Xiao; Song, Fu-Tie

    2018-04-01

    Based on the planar maximally filtered graph (PMFG) and the threshold method, we introduced a correlation-based network named the PMFG-based threshold network (PTN). We studied the community structure of PTN and applied the ISOMAP algorithm to represent PTN in low-dimensional Euclidean space. The results show that the communities correspond well to the clusters in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on real market data, we found that the volatility of the market can lead to dramatic changes in the community structure, and that the structure is more stable during the financial crisis.

  16. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    NASA Astrophysics Data System (ADS)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake and tsunami related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and includes the possibility to solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve this problem and SVD-analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to

  17. Fast and secure encryption-decryption method based on chaotic dynamics

    DOEpatents

    Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.

    1995-01-01

    A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo-random integer sequence. A system for accomplishing the invention is also provided.
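
    The patent leaves the specific maps and transforms open; the sketch below only illustrates the flow of the claimed steps, using logistic maps as the m independent chaotic maps and an assumed combination rule (summing the iterates modulo 1) with a byte-scaling transform.

```python
# Illustrative sketch of the patented scheme's step sequence. The logistic
# map, the sum-mod-1 combination and the byte scaling are assumptions; the
# patent does not prescribe these particular choices.
def chaotic_keystream(keys, length, r=3.99):
    """keys: m initial conditions in (0, 1), one per chaotic map (shared secret)."""
    x = list(keys)
    stream = []
    for _ in range(length):
        x = [r * xi * (1.0 - xi) for xi in x]      # m chaotic iterates
        initial = sum(x) % 1.0                     # "initial" value from the m iterates
        stream.append(int(initial * 256) % 256)    # transform to a pseudo-random byte
    return stream

def encrypt(message: bytes, keys):
    ks = chaotic_keystream(keys, len(message))
    return bytes(b ^ k for b, k in zip(message, ks))   # XOR -> ciphertext

# decryption is the same XOR with the same m initial conditions
```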

  18. A deep learning-based multi-model ensemble method for cancer prediction.

    PubMed

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods which can successfully distinguish cancer patients from healthy persons is of great current interest. However, among the classification methods applied to cancer prediction so far, no single method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers: Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
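
    As a rough scikit-learn analogue of this pipeline, one can stack five heterogeneous classifiers under a small neural network combiner; the five base models below are placeholders, since the paper's exact classifiers and ensembling network are not reproduced here.

```python
# Five different classifiers whose class probabilities feed a small neural
# network combiner; a generic analogue of the described strategy, not the
# paper's exact architecture.
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

base_models = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("svm", SVC(probability=True)),
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier()),
    ("gb", GradientBoostingClassifier()),
]
ensemble = StackingClassifier(
    estimators=base_models,
    final_estimator=MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000),
    stack_method="predict_proba",   # feed class probabilities to the combiner
)
# X: samples x selected differentially expressed genes, y: cancer / healthy
# ensemble.fit(X_train, y_train); ensemble.score(X_test, y_test)
```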

  19. Development of performance-based evaluation methods and specifications for roadside maintenance.

    DOT National Transportation Integrated Search

    2011-01-01

    This report documents the work performed during Project 0-6387, Performance Based Roadside Maintenance Specifications. Quality assurance methods and specifications for roadside performance-based maintenance contracts (PBMCs) were developed ...

  20. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    Sparse representation can deal with the lack-of-samples problem because it utilizes all the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method. We aim to find a compromise between the discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to the contributions of the samples. Then we further refine the training samples by an iterative procedure, each time excluding the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and the two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the usage principles of the key parameters in the proposed algorithm, which facilitates obtaining high recognition accuracy.
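
    A minimal numpy sketch of the iterative refinement loop reads as follows; the contribution measure used here (coefficient magnitude times column norm) is an assumption standing in for the paper's definition.

```python
# Represent the test sample by least squares over the current training set
# and drop the sample whose contribution is smallest, repeating until the
# desired number of representative samples remains.
import numpy as np

def refine_samples(X_train, y_test, keep=10):
    """X_train: d x n matrix of training samples (columns); y_test: d vector."""
    idx = list(range(X_train.shape[1]))
    while len(idx) > keep:
        A = X_train[:, idx]
        coef, *_ = np.linalg.lstsq(A, y_test, rcond=None)
        contrib = np.abs(coef) * np.linalg.norm(A, axis=0)  # assumed measure
        idx.pop(int(np.argmin(contrib)))   # exclude least-contributing sample
    return idx   # indices of the representative training samples
```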

  1. A supervoxel-based segmentation method for prostate MR images

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a "supervoxel"-based method for prostate segmentation. The prostate segmentation problem is cast as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature. The geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate. A 3D level set is then used to obtain a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated with respect to the manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% ± 3.2%. The segmentation method can be used not only for the prostate but also for other organs.

  2. Level set method for image segmentation based on moment competition

    NASA Astrophysics Data System (ADS)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces moment competition and weakly supervised information into the energy functional construction. Different from region-based level set methods which use force competition, moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. The intensity differences between the three points and the unlabeled pixels are then used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour toward the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods with respect to initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method in segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  3. Systems, methods and apparatus for verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Inventor); Gracinin, Denis (Inventor); Erickson, John D. (Inventor); Rouff, Christopher A. (Inventor); Hinchey, Michael G. (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.

  4. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, George F.; Steindler, Martin J.

    1989-01-01

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith, and of subsequently destroying the toxicity of the substance, is disclosed. Initially, a water-immiscible organic extractant is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be a phosphoryl or a thiophosphoryl compound.

  5. Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods

    NASA Astrophysics Data System (ADS)

    Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.

    2008-12-01

    Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of ongoing debate over the USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs, along with an understanding of data-set relationships among old and new methods. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to or greater than those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.

  6. Missing value imputation in DNA microarrays based on conjugate gradient method.

    PubMed

    Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh

    2012-02-01

    Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. The CG algorithm, with this subset as its input, is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average normalized root mean square error (NRMSE) and relative NRMSE on different data sets with various missing rates shows that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
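
    The core of the described algorithm can be sketched in a few lines with scipy's conjugate gradient solver, assuming a single missing entry and folding the "best similar" subset selection into a plain k-nearest-neighbor choice.

```python
# Pick the k genes most correlated (in absolute value) with the target gene
# over the observed arrays, then solve the least-squares regression via CG
# on the normal equations. Assumes (gene, col) is the only missing entry.
import numpy as np
from scipy.sparse.linalg import cg

def impute_entry(X, gene, col, k=10):
    """X: genes x arrays expression matrix; entry (gene, col) is missing."""
    others = np.delete(np.arange(X.shape[1]), col)       # observed arrays
    target = X[gene, others]
    corr = np.array([np.corrcoef(target, X[g, others])[0, 1]
                     if g != gene else 0.0 for g in range(X.shape[0])])
    nn = np.argsort(-np.abs(corr))[:k]                   # k-nearest neighbor genes
    A = X[nn][:, others].T                               # regress target on neighbors
    AtA, Atb = A.T @ A, A.T @ target
    w, _ = cg(AtA, Atb)                                  # conjugate gradient solve
    return float(X[nn, col] @ w)                         # estimated missing value
```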

  7. Effectiveness of Spray-Based Decontamination Methods for ...

    EPA Pesticide Factsheets

    The objective of this project was to assess the effectiveness of spray-based common decontamination methods for inactivating Bacillus (B.) atrophaeus (surrogate for B. anthracis) spores and bacteriophage MS2 (surrogate for foot and mouth disease virus [FMDV]) on selected test surfaces (with or without a model agricultural soil load). Relocation of viable viruses or spores from the contaminated coupon surfaces into aerosol or liquid fractions during the decontamination methods was investigated. This project was conducted to support jointly held missions of the U.S. Department of Homeland Security (DHS) and the U.S. Environmental Protection Agency (EPA). Within the EPA, the project supports the mission of EPA’s Homeland Security Research Program (HSRP) by providing relevant information pertinent to the decontamination of contaminated areas resulting from a biological incident.

  8. A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.

    2016-12-01

    It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.

  9. Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com

    2016-06-15

    The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of an infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with adaptive processing and support vector regression (SVR). The optimization technique involves setting the parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in the kernel parameter setting of SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.
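
    A compact illustration of the PSO-plus-SVR idea with scikit-learn is given below; the adaptive component of ASVR is omitted and all swarm constants are assumptions, so this shows only plain PSO tuning of SVR hyperparameters.

```python
# A small particle swarm searches (log10 C, log10 gamma) of an SVR to
# minimize cross-validated mean squared error. Swarm size, inertia and
# acceleration constants are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def pso_svr(X, y, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform([-2, -4], [4, 1], size=(n_particles, 2))  # log10(C), log10(gamma)
    vel = np.zeros_like(pos)
    def fitness(p):   # cross-validated MSE of an SVR with these parameters
        svr = SVR(C=10 ** p[0], gamma=10 ** p[1])
        return -cross_val_score(svr, X, y, cv=3,
                                scoring="neg_mean_squared_error").mean()
    best_p = pos.copy()
    best_f = np.array([fitness(p) for p in pos])
    g = best_p[np.argmin(best_f)].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (best_p - pos) + c2 * r2 * (g - pos)
        pos += vel
        f = np.array([fitness(p) for p in pos])
        improved = f < best_f
        best_p[improved], best_f[improved] = pos[improved], f[improved]
        g = best_p[np.argmin(best_f)].copy()
    return SVR(C=10 ** g[0], gamma=10 ** g[1]).fit(X, y)
```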

  10. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.; Shi, Y.

    1990-01-01

    A comprehensive boundary element method is presented for transient thermoelastic analysis of hot section Earth-to-Orbit engine components. This time-domain formulation requires discretization of only the surface of the component, and thus provides an attractive alternative to finite element analysis for this class of problems. In addition, steep thermal gradients, which often occur near the surface, can be captured more readily since with a boundary element approach there are no shape functions to constrain the solution in the direction normal to the surface. For example, the circular disc analysis indicates the high level of accuracy that can be obtained. In fact, on the basis of reduced modeling effort and improved accuracy, it appears that the present boundary element method should be the preferred approach for general problems of transient thermoelasticity.

  11. A novel isoflavone profiling method based on UPLC-PDA-ESI-MS.

    PubMed

    Zhang, Shuang; Zheng, Zong-Ping; Zeng, Mao-Mao; He, Zhi-Yong; Tao, Guan-Jun; Qin, Fang; Chen, Jie

    2017-03-15

    A novel non-targeted isoflavone profiling method was developed using a diagnostic fragment-ion-based extension strategy, based on ultra-high performance liquid chromatography coupled with a photo-diode array detector and electrospray ionization-mass spectrometry (UPLC-PDA-ESI-MS). Sixteen types of isoflavones were obtained in positive mode, but only 12 were obtained in negative mode due to the absence of precursor ions. Malonyldaidzin and malonylgenistin glycosylated at the 4'-O position or malonylated at the 4″-O position of glucose were indicated by their retention behavior and fragmentation pattern. Three possible quantification methods in one run based on UPLC-PDA and UPLC-ESI-MS were validated and compared, suggesting that the methods based on UPLC-ESI-MS possess remarkable selectivity and sensitivity. Impermissible quantitative deviations induced by linearity calibration over a 400-fold dynamic range were observed for the first time and were recalibrated with a 20-fold dynamic range. These results suggest that isoflavones and their stereoisomers can be simultaneously determined by positive-ion UPLC-ESI-MS in soymilk. Copyright © 2016. Published by Elsevier Ltd.

  12. Method for PE Pipes Fusion Jointing Based on TRIZ Contradictions Theory

    NASA Astrophysics Data System (ADS)

    Sun, Jianguang; Tan, Runhua; Gao, Jinyong; Wei, Zihui

    The core of the TRIZ theories is contradiction detection and solution. TRIZ provides various methods for solving contradictions, but they are not systematized. Combined with the conception of the technique system, this paper summarizes an integrated solution method for contradictions based on the TRIZ contradiction theory. According to the method, a flowchart of the integrated solution process is given. As a case study, a method for fusion jointing of PE pipes is analyzed.

  13. Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods

    NASA Astrophysics Data System (ADS)

    Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi

    2018-03-01

    Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated word recognition is one of the key and most difficult problems in TWS. Most existing methods for Tibetan abbreviated word recognition are rule-based approaches, which need vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition, and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective and can be combined easily with the segmentation model. This significantly improves the effectiveness of Tibetan word segmentation.

  14. A method of mobile video transmission based on J2ee

    NASA Astrophysics Data System (ADS)

    Guo, Jian-xin; Zhao, Ji-chun; Gong, Jing; Chun, Yang

    2013-03-01

    As 3G (3rd-generation) networks evolve worldwide, the rising demand for mobile video services and the enormous growth of video on the internet are creating major new revenue opportunities for mobile network operators and application developers. This paper introduces a method of mobile video transmission based on J2ME, presenting the video compression method, the video compression standard, and the software design. The proposed mobile video method based on J2EE is a typical mobile multimedia application, which has high availability and a wide range of applications. Users can access the video through terminal devices such as phones.

  15. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

    In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the importance of the different positions of pixels with the same gray level, obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The novel method not only achieves better segmentation results but also requires less computation time than traditional 2D histogram-based segmentation methods.
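
    The 1D maximum entropy step on its own is the classic Kapur criterion, sketched below with a plain gray-level histogram standing in for the paper's spatial coherence histogram.

```python
# Kapur's maximum-entropy threshold for an 8-bit image: choose the gray
# level that maximizes the summed entropies of background and foreground.
import numpy as np

def max_entropy_threshold(image):
    """image: 2D uint8 array; returns the selected threshold."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1                 # class distributions
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()   # total entropy
        if h > best_h:
            best_t, best_h = t, h
    return best_t   # pixels >= best_t are segmented as target
```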

  16. Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images

    NASA Astrophysics Data System (ADS)

    Liu, J.; Ji, S.; Zhang, C.; Qin, Z.

    2018-05-01

    Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep learning based stereo methods, which emerged in 2016 and spread rapidly, to aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated. One learns the matching cost with a convolutional neural network (known as MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (known as GC-net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse. The models pre-trained on the KITTI 2012, KITTI 2015 and Driving datasets separately are directly applied to three aerial datasets. We also give the results of direct training on the target aerial datasets. Second, the deep learning based methods are compared to the classic stereo matching method, Semi-Global Matching (SGM), and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced to aerial image matching based on the assumption that a few target samples are available for model fine-tuning. The experiments showed that the conventional methods and the deep learning based methods performed similarly, and that the latter have greater potential to be explored.

  17. Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu

    2014-05-01

    An investigation was performed to explore the possibility of enhancing contrast by varying the spectral power distribution (SPD) of the surgical lighting. Illumination scenes with different SPDs were generated by the combination of a self-adaptive white light optimization method and the LED ceiling system; images of biological samples were taken by a CCD camera and then processed by an 'Entropy' based contrast evaluation model proposed specifically for surgical settings. Compared with the neutral white LED based and traditional algorithm based image enhancing methods, the illumination based enhancing method yields better performance in contrast enhancement, improving the average contrast value by about 9% and 6%, respectively. This low-cost method is simple and practicable, and thus may provide an alternative solution for expensive visual-facility medical instruments.
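
    One plausible reading of the 'Entropy' measure is the Shannon entropy of the gray-level histogram, which grows as the captured image spreads over more of the intensity range; the paper's exact evaluation model is not specified here, so the following is an assumption.

```python
# Shannon entropy of an 8-bit grayscale image's histogram, a common
# contrast/detail proxy; higher values indicate richer intensity content.
import numpy as np

def image_entropy(gray):
    """gray: 2D uint8 array; returns entropy in bits."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```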

  18. A Natural Teaching Method Based on Learning Theory.

    ERIC Educational Resources Information Center

    Smilkstein, Rita

    1991-01-01

    The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…

  19. Effective Teaching Methods--Project-based Learning in Physics

    ERIC Educational Resources Information Center

    Holubova, Renata

    2008-01-01

    The paper presents results of research into new effective teaching methods in physics and science. It was found that it is necessary to educate pre-service teachers in approaches stressing the importance of students' own activity, and in competences for creating an interdisciplinary project. Project-based physics teaching and learning…

  20. Experimental cocrystal screening and solution based scale-up cocrystallization methods.

    PubMed

    Malamatari, Maria; Ross, Steven A; Douroumis, Dennis; Velaga, Sitaram P

    2017-08-01

    Cocrystals are crystalline single-phase materials composed of two or more different molecular and/or ionic compounds, generally in a stoichiometric ratio, which are neither solvates nor simple salts. If one of the components is an active pharmaceutical ingredient (API), the term pharmaceutical cocrystal is often used. There is a growing interest among drug development scientists in exploring cocrystals as a means to address physicochemical, biopharmaceutical and mechanical properties and to expand the solid form diversity of the API. Conventionally, coformers are selected based on crystal engineering principles, and equimolar mixtures of API and coformers are subjected to the solution-based crystallization commonly employed in polymorph and salt screening. However, the availability of new knowledge on cocrystal phase behaviour in the solid state and in solutions has spurred the development and implementation of more rational experimental cocrystal screening as well as scale-up methods. This review aims to provide an overview of commonly employed solid form screening techniques in drug development, with an emphasis on cocrystal screening methodologies. The latest developments in understanding and using cocrystal phase diagrams in both screening and solution-based scale-up methods are also presented. The final section is devoted to reviewing state-of-the-art research covering solution-based scale-up cocrystallization processes for different cocrystals, as well as more recent continuous crystallization methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration obtains a high-precision global DEM by estimating the interferometric parameters from ground control points (GCPs). However, the interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi Province as an example and choose 4 TerraDEM-X image pairs to carry out the interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with an accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.

  2. Environment-based pin-power reconstruction method for homogeneous core calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-07-01

    Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)

  3. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve reentry trajectory optimization problems for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.
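
    As background for the transcription underlying any pseudospectral method, the sketch below builds the standard Chebyshev differentiation matrix (Trefethen's classic construction), which turns the dynamics into algebraic constraints at the collocation points; it is general background, not the paper's multistage strategy itself.

```python
# Chebyshev collocation points and differentiation matrix on [-1, 1]:
# D @ f(x) approximates f'(x) at the points, the building block of a
# pseudospectral transcription of trajectory dynamics.
import numpy as np

def cheb(n):
    """Return Chebyshev points x (length n+1) and differentiation matrix D."""
    if n == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative-sum trick for diagonal
    return x, D

# e.g. derivative of f(x) = x**2 at the points: x, D = cheb(16); D @ x**2 ~ 2*x
```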

  4. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve reentry trajectory optimization problems for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  5. A vision-based method for planar position measurement

    NASA Astrophysics Data System (ADS)

    Chen, Zong-Hao; Huang, Peisen S.

    2016-12-01

    In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (X, Y, θZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) method to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and study the factors that influence the accuracy and precision of phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad, respectively.
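
    The measurement principle can be sketched in a few lines of numpy: a shift of the periodic pattern shows up as a phase change of its fundamental DFT component. Windowing and the paper's refined phase estimator are omitted, and the sign convention below is an assumption that depends on the FFT and shift conventions in use.

```python
# Phase of the fundamental frequency component of a periodic pattern, and
# the sub-pixel displacement implied by its change between two frames.
import numpy as np

def pattern_phase(img, kx, ky):
    """Phase at integer DFT frequency (kx columns, ky rows)."""
    F = np.fft.fft2(img)
    return float(np.angle(F[ky, kx]))

def displacement_x(img_ref, img, kx, ky, pitch):
    """Shift along x between frames; pitch = pattern period along x."""
    dphi = pattern_phase(img, kx, ky) - pattern_phase(img_ref, kx, ky)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi     # wrap to (-pi, pi]
    return -dphi / (2 * np.pi) * pitch              # phase -> position (sign assumed)
```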

  6. A method for data base management and analysis for wind tunnel data

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1987-01-01

    To respond to the need for improved data base management and analysis capabilities for wind-tunnel data at the Langley 16-Foot Transonic Tunnel, research was conducted into current methods of managing wind-tunnel data and a method was developed as a solution to this need. This paper describes the development of the data base management and analysis method for wind-tunnel data. The design and implementation of the software system are discussed and examples of its use are shown.

  7. Stable isotope labelling methods in mass spectrometry-based quantitative proteomics.

    PubMed

    Chahrour, Osama; Cobice, Diego; Malone, John

    2015-09-10

    Mass-spectrometry based proteomics has evolved as a promising technology over the last decade and is undergoing dramatic development in a number of different areas, such as mass spectrometric instrumentation, peptide identification algorithms and bioinformatic computational data analysis. The improved methodology allows quantitative measurement of relative or absolute protein amounts, which is essential for gaining insights into their functions and dynamics in biological systems. Several different strategies involving stable isotope labels (ICAT, ICPL, IDBEST, iTRAQ, TMT, IPTL, SILAC), label-free statistical assessment approaches (MRM, SWATH) and absolute quantification methods (AQUA) are possible, each having specific strengths and weaknesses. Inductively coupled plasma mass spectrometry (ICP-MS), which is still widely recognised as an elemental detector, has recently emerged as a complementary technique to the previous methods. The new application area for ICP-MS targets the fast-growing field of proteomics-related research, allowing absolute protein quantification using suitable element-based tags. This document describes the different stable isotope labelling methods which incorporate metabolic labelling in live cells, ICP-MS based detection and post-harvest chemical label tagging for protein quantification, in addition to summarising their pros and cons. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Evaluation method based on the image correlation for laser jamming image

    NASA Astrophysics Data System (ADS)

    Che, Jinxi; Li, Zhongmin; Gao, Bo

    2013-09-01

    The jamming effectiveness evaluation of infrared imaging systems is an important part of electro-optical countermeasures. Infrared imaging devices are widely used in military search, tracking, guidance and many other fields. At the same time, with the continuous development of laser technology, research on laser interference and damage effects has developed continuously, and lasers have been used to disturb infrared imaging devices. Therefore, evaluating the jamming effect of lasers on infrared imaging systems has become a meaningful problem to be solved. The information that the infrared imaging system ultimately presents to the user is an image, so the evaluation of jamming effect can be made from the point of view of image quality assessment. The image contains two kinds of information, light amplitude and light phase, so image correlation can accurately capture the difference between the original image and the disturbed image. In this paper, the evaluation method of digital image correlation, the image quality assessment method based on the Fourier transform, the image quality estimation method based on error statistics, and the evaluation method based on peak signal-to-noise ratio are analysed, along with their advantages and disadvantages. Moreover, the disturbed infrared images from experiments in which a thermal infrared imager was jammed by laser were analysed using these methods. The results show that the methods can well reflect the laser jamming effects on the infrared imaging system. Furthermore, there is good consistency between the evaluation results obtained using these methods and the results of subjective visual evaluation, with good repeatability and convenient quantitative analysis. The feasibility of the methods to evaluate the jamming effect was proved. It has some reference value for studying and developing electro-optical countermeasure equipment and
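
    The two quantitative measures this comparison rests on, normalized cross-correlation and peak signal-to-noise ratio, come down to a few lines of numpy for 8-bit grayscale frames:

```python
# Normalized cross-correlation and PSNR between an original and a
# disturbed image; both drop as jamming becomes more severe.
import numpy as np

def correlation(a, b):
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    a -= a.mean(); b -= b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def psnr(ref, img, peak=255.0):
    """Assumes ref and img differ somewhere (nonzero MSE)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))   # dB; lower = more disturbed
```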

  9. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low Earth orbit objects mainly relies on ground-based radar; owing to the capability limitations of existing radar facilities, a large number of ground-based radars need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling of a ground-based radar surveillance network is a problem that needs to be solved. The traditional method for embattling optimization of a ground-based radar surveillance network is to run detection simulations of all possible stations with cataloged data, make a comprehensive comparative analysis of the various simulation results with a combinational method, and then select an optimal result as the station layout scheme. This method is time-consuming for a single simulation and has high computational complexity for the combinational analysis; when the number of stations increases, the complexity of the optimization problem increases exponentially and cannot be handled with the traditional method. No better way to solve this problem has been available until now. In this paper, the target detection procedure is simplified. Firstly, the space coverage of ground-based radar is simplified and a space coverage projection model of radar facilities at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space object orbital motion. After these two simplification steps, the computational complexity of target detection is greatly reduced, and simulation results show the correctness of the simplified results. In addition, the detection areas of the ground-based radar network can be easily computed with the

  10. Comparative analysis of ROS-based monocular SLAM methods for indoor navigation

    NASA Astrophysics Data System (ADS)

    Buyval, Alexander; Afanasyev, Ilya; Magid, Evgeni

    2017-03-01

    This paper presents a comparison of the four most recent ROS-based monocular SLAM-related methods: ORB-SLAM, REMODE, LSD-SLAM, and DPPTAM, and analyzes their feasibility for a mobile robot application in an indoor environment. We tested these methods using video data recorded from a conventional wide-angle full HD webcam with a rolling shutter. The camera was mounted on a human-operated prototype of an unmanned ground vehicle, which followed a closed-loop trajectory. Both feature-based methods (ORB-SLAM, REMODE) and direct SLAM-related algorithms (LSD-SLAM, DPPTAM) demonstrated reasonably good results in detection of volumetric objects, corners, obstacles and other local features. However, we met difficulties recovering the homogeneously colored walls typical of offices, since all of these methods created empty spaces in the reconstructed sparse 3D scene. This may cause collisions of an autonomously guided robot with unfeatured walls and thus limits the applicability of maps obtained by the considered monocular SLAM-related methods for indoor robot navigation.

  11. A new rational-based optimal design strategy of ship structure based on multi-level analysis and super-element modeling method

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Deyu

    2011-09-01

    A new multi-level analysis method introducing the super-element modeling method, derived from the multi-level analysis method first proposed by O. F. Hughes, is proposed in this paper to address the high time cost of adopting a rational-based optimal design method for ship structural design. The method was verified by its effective application to the optimization of the mid-ship section of a container ship. A full 3-D FEM model of a ship subjected to static and quasi-static loads was used as the analysis object for evaluating the structural performance of the mid-ship module, including static strength and buckling performance. The research results reveal that this new method can substantially reduce the computational cost of the rational-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rational-based optimal design method in ship structural design.

  12. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. This method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore easily distinguish them in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also works significantly better than the spectral fitting based method. Our method can obtain results similar to those of the Wp smoothing method, which is also a non-parametric method, but consumes much less computing time.
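
    As an illustrative stand-in for the scale separation described above, the sketch below uses PyWavelets' discrete transform rather than the paper's continuous one: the smooth foreground is reconstructed from the coarse coefficients and subtracted, leaving the fine-scale residual as the candidate signal. The wavelet family and the number of retained coarse levels are assumptions.

```python
# Discrete-wavelet analogue of the described foreground/signal split: the
# smooth foreground lives in the coarse approximation (and coarsest detail)
# coefficients, the saw-tooth-like 21 cm structure in the fine scales.
import numpy as np
import pywt

def subtract_smooth_foreground(spectrum, wavelet="db8", keep_levels=2):
    coeffs = pywt.wavedec(spectrum, wavelet, mode="smooth")
    # zero every detail scale finer than keep_levels; what remains
    # reconstructs the smooth (foreground) part of the spectrum
    fg = [coeffs[0]] + [c if i < keep_levels else np.zeros_like(c)
                        for i, c in enumerate(coeffs[1:])]
    foreground = pywt.waverec(fg, wavelet, mode="smooth")[:len(spectrum)]
    return spectrum - foreground   # residual: candidate 21 cm signal + noise
```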

  13. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for the detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two-dimensional (2D) mel- and Mellin-cepstrum computation from popcorn kernel images. The cepstral features extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using the 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41%, and the recognition rate for damaged kernels was found to be 89.43%.
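
    The feature pipeline in miniature might look as follows: the plain 2D cepstrum of each kernel image (inverse FFT of the log magnitude spectrum) flattened into a feature vector for an SVM; the mel/Mellin frequency warping used in the paper is omitted, and the low-quefrency block size is an assumption.

```python
# Plain 2D cepstrum features fed to a support vector machine; the paper's
# mel/Mellin warpings are not reproduced here.
import numpy as np
from sklearn.svm import SVC

def cepstrum_2d(img):
    spec = np.abs(np.fft.fft2(img.astype(float))) + 1e-9   # avoid log(0)
    ceps = np.real(np.fft.ifft2(np.log(spec)))
    return ceps[:8, :8].ravel()     # low-quefrency block as the feature vector

# X = np.array([cepstrum_2d(im) for im in kernel_images]); y = labels
# clf = SVC(kernel="rbf").fit(X, y)
```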

  14. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments: some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, thus maintenance free, and based on Wi-Fi only. We have employed two well-known propagation models, free space path loss and the ITU model, which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are Wi-Fi access point positions, and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements.
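
    Of the two propagation models named, free space path loss has a one-line standard dB form; the wall and calibration parameters the authors add on top of it are not shown.

```python
# Standard free space path loss in dB for distance in meters and frequency
# in Hz; the constant -147.55 is 20*log10(4*pi/c).
import math

def fspl_db(distance_m, freq_hz):
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# e.g. a 2.4 GHz access point seen from 10 m: fspl_db(10, 2.4e9) -> ~60 dB
```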

  15. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments: some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, thus maintenance free, and based on Wi-Fi only. We have employed two well-known propagation models, free space path loss and the ITU model, which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are Wi-Fi access point positions, and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements. PMID:27929453

  16. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    PubMed

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for the segmentation of nuclear arrangements in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate histology breast image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with a texton representation to form the spatio-colour-texture map of a breast histology image. A new weighted-distance-based similarity measure is then used for generation of the graph, and the final segmentation is obtained using the normalized cuts method. The extensive experiments carried out show that the proposed algorithm can segment nuclear arrangements in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. It shows that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
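
    A generic scikit-image analogue of this pipeline, with SLIC superpixels and mean-color similarity standing in for the paper's similarity-based superpixels, texton maps and weighted distance, might read:

```python
# Superpixels -> region adjacency graph -> normalized-cuts partition.
# In scikit-image releases before the graph module was promoted, these
# functions live under skimage.future.graph instead of skimage.graph.
from skimage import segmentation, graph

def segment_histology(rgb_image):
    superpixels = segmentation.slic(rgb_image, n_segments=400, compactness=10)
    rag = graph.rag_mean_color(rgb_image, superpixels, mode="similarity")
    return graph.cut_normalized(superpixels, rag)   # label image of regions
```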

  17. Membership determination of open clusters based on a spectral clustering method

    NASA Astrophysics Data System (ADS)

    Gao, Xin-Hua

    2018-06-01

    We present a spectral clustering (SC) method aimed at segregating reliable members of open clusters in multi-dimensional space. The SC method is a non-parametric clustering technique that performs cluster division using eigenvectors of the similarity matrix; no prior knowledge of the clusters is required. This method is more flexible in dealing with multi-dimensional data compared to other methods of membership determination. We use this method to segregate the cluster members of five open clusters (Hyades, Coma Ber, Pleiades, Praesepe, and NGC 188) in five-dimensional space; fairly clean cluster members are obtained. We find that the SC method can capture a small number of cluster members (weak signal) from a large number of field stars (heavy noise). Based on these cluster members, we compute the mean proper motions and distances for the Hyades, Coma Ber, Pleiades, and Praesepe clusters, and our results are in general quite consistent with the results derived by other authors. The test results indicate that the SC method is highly suitable for segregating cluster members of open clusters based on high-precision multi-dimensional astrometric data such as Gaia data.
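
    As a hedged illustration of the idea (not the author's pipeline), the sketch below applies off-the-shelf spectral clustering to synthetic five-dimensional data in which a compact overdensity of "members" sits inside a broad field-star background; the sample sizes and the RBF affinity width are arbitrary assumptions.

    ```python
    # Hedged sketch: spectral clustering on synthetic 5-D "astrometric" data
    # (e.g. two sky coordinates, two proper motions, one parallax).
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    field = rng.uniform(-1.0, 1.0, size=(900, 5))       # broad field-star background
    members = rng.normal(0.3, 0.02, size=(100, 5))      # compact cluster overdensity
    X = np.vstack([field, members])

    # Affinity matrix from an RBF kernel; no prior cluster model is assumed.
    sc = SpectralClustering(n_clusters=2, affinity='rbf', gamma=50.0,
                            random_state=0)
    labels = sc.fit_predict(X)                          # 0/1 membership labels
    ```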

  18. Hybrid charge division multiplexing method for silicon photomultiplier based PET detectors

    NASA Astrophysics Data System (ADS)

    Park, Haewook; Ko, Guen Bae; Lee, Jae Sung

    2017-06-01

    Silicon photomultiplier (SiPM) is widely utilized in various positron emission tomography (PET) detectors and systems. However, the individual recording of SiPM output signals is still challenging owing to the high granularity of the SiPM; thus, charge division multiplexing is commonly used in PET detectors. Resistive charge division method is well established for reducing the number of output channels in conventional multi-channel photosensors, but it degrades the timing performance of SiPM-based PET detectors by yielding a large resistor-capacitor (RC) constant. Capacitive charge division method, on the other hand, yields a small RC constant and provides a faster timing response than the resistive method, but it suffers from an output signal undershoot. Therefore, in this study, we propose a hybrid charge division method which can be implemented by cascading the parallel combination of a resistor and a capacitor throughout the multiplexing network. In order to compare the performance of the proposed method with the conventional methods, a 16-channel Hamamatsu SiPM (S11064-050P) was coupled with a 4 × 4 LGSO crystal block (3 × 3 × 20 mm³) and a 9 × 9 LYSO crystal block (1.2 × 1.2 × 10 mm³). In addition, we tested a time-over-threshold (TOT) readout using the digitized position signals to further demonstrate the feasibility of the time-based readout of multiplexed signals based on the proposed method. The results indicated that the proposed method exhibited good energy and timing performance, thus inheriting only the advantages of conventional resistive and capacitive methods. Moreover, the proposed method showed excellent pulse shape uniformity that does not depend on the position of the interacted crystal. Accordingly, we can conclude that the hybrid charge division method is useful for effectively reducing the number of output channels of the SiPM array.
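
    For context on what any charge-division network ultimately delivers, the sketch below shows standard Anger-type position decoding from four multiplexed corner outputs; this is a textbook scheme, not the paper's hybrid network, and the channel names a..d are illustrative placeholders.

    ```python
    # Hedged sketch of Anger-type position decoding for a 4-channel
    # charge-division readout (generic scheme, not the paper's circuit).
    def decode_position(a, b, c, d):
        """a..d: integrated charges from the four multiplexed corner outputs."""
        e = a + b + c + d                 # total energy signal
        x = ((a + b) - (c + d)) / e       # normalized x coordinate in [-1, 1]
        y = ((a + c) - (b + d)) / e       # normalized y coordinate in [-1, 1]
        return x, y, e

    # Example: an event depositing most charge on channel "a" decodes to the
    # corresponding corner of the crystal block.
    print(decode_position(0.6, 0.2, 0.15, 0.05))
    ```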

  19. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain because, in ground-based space object surveillance, such satellites appear only as non-resolved images. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly non-linear relationship between photometric data and satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The method improves particle selection by using the UKF to redesign the importance density function, and it uses the RMS-UKF to partially correct the prediction covariance matrix, which addresses both the limited applicability of the UKF to strongly non-linear attitude inversion and the particle degradation and dilution of PF-based attitude inversion. The paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the particle degradation and depletion problem of PF-based attitude inversion, as well as the unsuitability of the UKF for strongly non-linear inversion; its inversion accuracy is clearly superior to that of the UKF and PF, and even with large initial attitude errors it can invert the attitude with few particles and high precision.
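
    Since the abstract contrasts PF particle degeneracy with the UPF's UKF-derived proposals, a minimal bootstrap particle-filter step is sketched below (Python; not the paper's algorithm). The transition and likelihood callables and the process-noise level are hypothetical placeholders; a UPF would instead draw each particle's proposal from a per-particle UKF estimate.

    ```python
    # Hedged sketch: one bootstrap particle-filter update step.
    import numpy as np

    def pf_step(particles, weights, transition, likelihood, rng):
        """transition: state propagation function; likelihood: p(measurement | state).
        Both are user-supplied placeholders in this sketch."""
        # Propagate particles through the dynamics, with assumed process noise.
        particles = transition(particles) + rng.normal(0.0, 0.05, particles.shape)
        # Reweight by the measurement likelihood and renormalize.
        weights = weights * likelihood(particles)
        weights = weights / weights.sum()
        # Resample when the effective sample size signals degeneracy.
        n_eff = 1.0 / np.sum(weights ** 2)
        if n_eff < 0.5 * len(weights):
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights
    ```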

  20. Advanced image based methods for structural integrity monitoring: Review and prospects

    NASA Astrophysics Data System (ADS)

    Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.

    2018-02-01

    There is a growing trend in engineering to develop methods for structural integrity monitoring and for characterizing the in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics has brought about a paradigm change in how phenomena are sensed, and several widely applicable optical approaches now play a significant role in supporting experiments. The current review describes advanced image-based methods for structural integrity monitoring, focusing on Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact, full-field techniques rely on intensive image processing to measure mechanical behaviour, and they evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.
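
    As an illustration of the core operation behind DIC (a hedged sketch, not any of the reviewed implementations), the function below tracks one square subset from a reference image to a deformed image by normalized cross-correlation; the subpixel refinement that practical DIC codes add is omitted, and the subset size is an arbitrary assumption.

    ```python
    # Hedged sketch: integer-pixel subset tracking, the core of DIC.
    import numpy as np
    from skimage.feature import match_template

    def track_subset(ref, deformed, center, half=15):
        """Track the (2*half+1)-pixel square subset of `ref` centred at
        `center` = (row, col) into `deformed`; returns (dx, dy) in pixels."""
        y, x = center
        subset = ref[y - half:y + half + 1, x - half:x + half + 1]
        # With pad_input=True the correlation peak sits at the matched
        # subset centre in the deformed image.
        corr = match_template(deformed, subset, pad_input=True)
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        return dx - x, dy - y
    ```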