Sample records for conventional numerical methods

  1. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, which sacrifices numerical precision at larger wavenumbers. In this paper, we modify the TE method using minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method's guarantee of high accuracy at small wavenumbers, while retaining the MA method's property of keeping the numerical error within a limited bound. Thus, it yields high accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum error. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
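
    As an illustrative sketch (not the authors' implementation), minimax-style optimization of staggered-grid finite-difference coefficients can be prototyped by minimizing the worst-case dispersion error over a wavenumber band; here a generic Nelder-Mead search stands in for the Remez exchange algorithm, and the stencil length and wavenumber band are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

# Dispersion relation of a staggered-grid first-derivative stencil:
# an M-term operator approximates k by  D(k) = 2 * sum_n c_n * sin((n - 1/2) k)
def dispersion(c, k):
    n = np.arange(1, len(c) + 1)
    return 2.0 * np.sin(np.outer(k, n - 0.5)) @ c

k = np.linspace(1e-3, 0.85 * np.pi, 400)   # wavenumber band to optimize over

# Conventional Taylor-series (TE) coefficients for a 2-term (4th-order) stencil
c_te = np.array([9.0 / 8.0, -1.0 / 24.0])
err_te = np.max(np.abs(dispersion(c_te, k) - k))

# Minimax-style optimization: minimize the worst-case dispersion error
res = minimize(lambda c: np.max(np.abs(dispersion(c, k) - k)),
               c_te, method="Nelder-Mead")
err_ma = np.max(np.abs(dispersion(res.x, k) - k))
print(err_te, err_ma)   # the optimized maximum error is no larger than the TE one
```

    The TE coefficients are exact at small k but their error grows toward the band edge; the optimized coefficients trade a little small-k accuracy for a much smaller maximum error over the whole band.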

  2. Evaluation of radiation loading on finite cylindrical shells using the fast Fourier transform: A comparison with direct numerical integration.

    PubMed

    Liu, S X; Zou, M S

    2018-03-01

    The radiation loading on a vibrating finite cylindrical shell is conventionally evaluated through the direct numerical integration (DNI) method. An alternative strategy based on the fast Fourier transform algorithm is put forward in this work, starting from the general expression for the radiation impedance. To check the feasibility and efficiency of the proposed method, a comparison with DNI is presented through numerical cases. The results obtained using the present method agree well with those calculated by DNI. More importantly, the proposed strategy significantly reduces the time cost compared with the conventional approach of straightforward numerical integration.
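
    The efficiency gap described here comes from the convolution theorem: a convolution-type integral evaluated point by point costs O(n^2), while the FFT route costs O(n log n). A minimal, generic illustration (not the paper's radiation-impedance code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Direct numerical evaluation of the circular convolution, O(n^2)
direct = np.array([sum(f[m] * g[(j - m) % n] for m in range(n)) for j in range(n)])

# The same quantity via the FFT convolution theorem, O(n log n)
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

print(np.max(np.abs(direct - via_fft)))  # agreement to round-off
```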

  3. Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.

    PubMed

    Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing

    2016-10-01

    The method based on rotating the angular spectrum in the frequency domain is generally used for diffraction simulation between tilted planes. Because of the rotation of the angular spectrum, the sampling points in the Fourier domain are not evenly spaced. Conventional fast Fourier transform (FFT)-based methods therefore require a spectrum interpolation to approximate the values at equidistant sampling points. However, owing to the numerical error introduced by this interpolation, the calculation accuracy degrades rapidly as the rotation angle increases. Here, the diffraction propagation between tilted planes is recast as a discrete Fourier transform on unevenly spaced sampling points, which can be evaluated efficiently and precisely through the nonuniform fast Fourier transform (NUFFT) method. The most important advantage of this method is that the conventional spectrum interpolation is avoided, so high calculation accuracy is maintained for different rotation angles, even when the rotation angle is close to π/2. Its calculation efficiency is also comparable with that of conventional FFT-based methods. Numerical examples are presented, along with a discussion of the calculation accuracy and the sampling method.
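
    The core idea can be illustrated with a brute-force nonuniform DFT: evaluating the spectrum directly at an off-grid frequency avoids the error incurred by reading off the nearest uniform FFT bin. A toy sketch (NUFFT libraries such as FINUFFT accelerate this direct sum; all values here are illustrative):

```python
import numpy as np

n = 64
f0 = 10.3                       # non-integer frequency: falls between FFT bins
t = np.arange(n)
x = np.exp(2j * np.pi * f0 * t / n)

# Nonuniform evaluation: a direct DFT sum at an arbitrary frequency
def nudft(x, freq):
    return np.sum(x * np.exp(-2j * np.pi * freq * t / n))

exact = nudft(x, f0)            # |exact| == n: the tone is recovered fully
nearest_bin = np.fft.fft(x)[10] # the nearest uniform FFT bin underestimates it

print(abs(exact), abs(nearest_bin))
```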

  4. Novel Method for Superposing 3D Digital Models for Monitoring Orthodontic Tooth Movement.

    PubMed

    Schmidt, Falko; Kilic, Fatih; Piro, Neltje Emma; Geiger, Martin Eberhard; Lapatki, Bernd Georg

    2018-04-18

    Quantitative three-dimensional analysis of orthodontic tooth movement (OTM) is possible by superposition of digital jaw models made at different times during treatment. Conventional methods rely on surface alignment at palatal soft-tissue areas, which is applicable to the maxilla only. We introduce two novel numerical methods applicable to both the maxilla and the mandible. The OTMs from the initial phase of multi-bracket appliance treatment in ten pairs of maxillary models were evaluated and compared with four conventional methods. The median range of deviation of OTM for three users was 13-72% smaller for the novel methods than for the conventional methods, indicating greater inter-observer agreement. Total tooth translation and rotation differed significantly (ANOVA, p < 0.01) between OTM determined by the two numerical methods and by the four conventional methods. Directional decomposition of OTM from the novel methods showed clinically acceptable agreement with reference results, except for vertical translations (deviations of medians greater than 0.6 mm). The difference in vertical translational OTM can be explained by maxillary vertical growth during the observation period, which is additionally recorded by conventional methods. The novel approaches are thus particularly suitable for evaluating pure treatment effects, because growth-related changes are ignored.
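
    Surface superposition of this kind is commonly built on a rigid least-squares alignment such as the Kabsch algorithm; the following sketch (an assumption about the general approach, not the authors' method) recovers a known rotation and translation between two point sets:

```python
import numpy as np

def superpose(P, Q):
    """Rigid (Kabsch) superposition: rotation R and translation t with R@p + t ~ q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(1)
P = rng.standard_normal((20, 3))                 # "model at time 1"
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])        # rotated + translated copy

R, t = superpose(P, Q)
print(np.max(np.abs((P @ R.T + t) - Q)))         # ~0: exact recovery
```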

  5. Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation

    NASA Astrophysics Data System (ADS)

    Su, Bo; Tuo, Xianguo; Xu, Ling

    2017-08-01

    Two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but different in nature. After spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation is presented, and the PRK scheme is applied for time integration. An additional term associated with the spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. Because the two schemes are independent of the spatial discretization technique used, a finite-difference method is used to approximate the spatial derivatives. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK method; the results verify the new methods and are superior to those generated by conventional methods in seismic wave modeling.
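
    The qualitative benefit of symplectic partitioned RK time integration can be seen on a harmonic oscillator: a non-symplectic explicit Euler step lets the discrete energy grow without bound, while the Störmer-Verlet scheme, a simple symplectic PRK method (not the modified schemes proposed in the paper), keeps it bounded:

```python
import numpy as np

# Harmonic oscillator q'' = -q, with Hamiltonian H = (p^2 + q^2)/2.
h, steps = 0.1, 1000

def energy(q, p):
    return 0.5 * (p * p + q * q)

# Explicit Euler (non-symplectic): energy grows by (1 + h^2) every step
q, p = 1.0, 0.0
for _ in range(steps):
    q, p = q + h * p, p - h * q
drift_euler = abs(energy(q, p) - 0.5)

# Stormer-Verlet "kick-drift-kick" (a symplectic partitioned RK method):
# the energy error stays bounded, oscillating at O(h^2)
q, p = 1.0, 0.0
for _ in range(steps):
    p -= 0.5 * h * q
    q += h * p
    p -= 0.5 * h * q
drift_verlet = abs(energy(q, p) - 0.5)

print(drift_euler, drift_verlet)  # Euler drifts by orders of magnitude; Verlet stays near 0.5
```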

  6. Numerical study of rotating detonation engine with an array of injection holes

    NASA Astrophysics Data System (ADS)

    Yao, S.; Han, X.; Liu, Y.; Wang, J.

    2017-05-01

    This paper adopts injection via an array of holes in three-dimensional numerical simulations of a rotating detonation engine (RDE). The calculation is based on the Euler equations coupled with a one-step Arrhenius chemistry model, using a premixed stoichiometric hydrogen-air mixture. The present study uses a more practical fuel injection method for RDE simulations than the relatively simple full-injection approximation usually adopted in previous conventional simulations. The computational results capture some important experimental observations as well as a transient period after initiation, phenomena that are usually absent in conventional RDE simulations due to the idealistic injection approximation. The results are compared with those obtained from other numerical studies and from experiments with RDEs.

  7. Development of a numerical model for vehicle-bridge interaction analysis of railway bridges

    NASA Astrophysics Data System (ADS)

    Kim, Hee Ju; Cho, Eun Sang; Ham, Jun Su; Park, Ki Tae; Kim, Tae Heon

    2016-04-01

    In civil engineering, the analysis of dynamic response has long been a main concern. The analysis methods can be divided into moving-load and moving-mass approaches, and formulating separate equations of motion for the vehicles and the bridge has recently been studied. In this study, a numerical method is presented that can consider various train types and solves the equations of motion for vehicle-bridge interaction analysis by a non-iterative procedure, through formulation of the coupled equations of motion. An accurate three-dimensional numerical model of the KTX vehicle was also developed in order to analyze its dynamic response characteristics. The equations of motion for the conventional trains are derived, and the numerical models of the trains are idealized as sets of linear springs and dashpots with 18 degrees of freedom. The bridge models are simplified using three-dimensional space frame elements based on Euler-Bernoulli theory. Rail irregularities in the vertical and lateral directions are generated from the PSD functions of the Federal Railroad Administration (FRA).

  8. Elimination of numerical diffusion in 1 - phase and 2 - phase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajamaeki, M.

    1997-07-01

    The new hydraulics solution method PLIM (Piecewise Linear Interpolation Method) avoids excessive numerical diffusion and numerical dispersion errors. The hydraulics solver CFDPLIM uses PLIM to solve the time-dependent one-dimensional flow equations in network geometry. An example is given for 1-phase flow in a case where thermal-hydraulics and reactor kinetics are strongly coupled; another example concerns oscillations in 2-phase flow. Neither example computation is possible with conventional methods.
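
    The numerical diffusion that PLIM is designed to avoid is easy to reproduce: a first-order upwind scheme smears a sharp front as it advects it. A minimal illustration (generic, unrelated to CFDPLIM itself; a characteristics-based interpolation in the spirit of PLIM would transport the profile without this smearing):

```python
import numpy as np

# Advect a step profile with the first-order upwind scheme (CFL = 0.5, periodic BCs).
# The scheme is stable but numerically diffusive: the front spreads over many cells.
n, cfl, steps = 200, 0.5, 200
u = np.where(np.arange(n) < 50, 1.0, 0.0)

width0 = np.count_nonzero((u > 0.05) & (u < 0.95))   # cells inside the front: 0 initially
for _ in range(steps):
    u = u - cfl * (u - np.roll(u, 1))                # upwind update
width = np.count_nonzero((u > 0.05) & (u < 0.95))

print(width0, width)   # the transition zone has smeared across many cells
```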

  9. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed, based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) it avoids the double-loop iteration algorithm, which generally has large computational complexity, and (2) it accounts for the local concentration of nonlinear deformation observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Several numerical tests, including weak scaling tests, were performed with the conventional and proposed domain decomposition methods. The convergence performance of the proposed method is comparable to that of the conventional method; in elastic-plastic analysis in particular, the proposed method exhibits better convergence performance than the conventional method.

  10. Method for thermal and structural evaluation of shallow intense-beam deposition in matter

    NASA Astrophysics Data System (ADS)

    Pilan Zanoni, André

    2018-05-01

    The projected range in matter of high-intensity proton and heavy-ion beams at energies below a few tens of MeV/A can be as short as a few micrometers. To evaluate the temperature and stresses caused by such shallow beam energy deposition, conventional 3D numerical models require minuscule element sizes to maintain an acceptable element aspect ratio, as well as extremely short time steps for numerical convergence. In order to simulate the energy deposition using a manageable number of elements, this article presents a method using layered elements. The method is applied to beam stoppers and to accidental intense-beam impact onto UHV sector valves. In these cases the thermal results from the new method agree with those from conventional solid-element and adiabatic models.

  11. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  12. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE PAGES

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...

    2018-04-20

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
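
    The Kramers–Kronig relation underlying the non-iterative method links the real and imaginary parts of a causal response through a principal-value integral. A small sketch (a generic textbook check, not the crystal-truncation-rod analysis itself) verifies it numerically for a known transform pair:

```python
import numpy as np

# Kramers-Kronig relation for a causal response, convention H(w) = ∫ h(t) e^{iwt} dt:
#   Im H(w0) = -(1/pi) P.V. ∫ Re H(w) / (w - w0) dw
# Test pair: h(t) = e^{-t} (t > 0)  =>  H(w) = 1/(1 - iw),
#   Re H = 1/(1 + w^2),  Im H = w/(1 + w^2).
dw = 0.01
w = np.arange(-100.0, 100.0, dw) + 0.5 * dw   # offset grid: w0 falls between nodes,
re_h = 1.0 / (1.0 + w * w)                     # so the P.V. sum needs no special care

w0 = 1.0
im_kk = -(1.0 / np.pi) * np.sum(re_h / (w - w0)) * dw
im_exact = w0 / (1.0 + w0 * w0)                # 0.5

print(im_kk, im_exact)
```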

  13. A developed nearly analytic discrete method for forward modeling in the frequency domain

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai

    2018-02-01

    High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling. We first discretize the frequency-domain wave equations with numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of the numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of the DNAD method with that of the conventional NAD method, and the results demonstrate the superiority of the proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inversion results are obtained.

  14. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation within a load increment can produce large variations in strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  15. Runge-Kutta Methods for Linear Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Zingg, David W.; Chisholm, Todd T.

    1997-01-01

    Three new Runge-Kutta methods are presented for numerical integration of systems of linear inhomogeneous ordinary differential equations (ODEs) with constant coefficients. Such ODEs arise in the numerical solution of the partial differential equations governing linear wave phenomena. The restriction to linear ODEs with constant coefficients reduces the number of conditions which the coefficients of the Runge-Kutta method must satisfy. This freedom is used to develop methods which are more efficient than conventional Runge-Kutta methods. A fourth-order method is presented which uses only two memory locations per dependent variable, while the classical fourth-order Runge-Kutta method uses three. This method is an excellent choice for simulations of linear wave phenomena if memory is a primary concern. In addition, fifth- and sixth-order methods are presented which require five and six stages, respectively, one fewer than their conventional counterparts, and are therefore more efficient. These methods are an excellent option for use with high-order spatial discretizations.
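
    The memory saving described above comes from 2N-storage RK formulations, which carry only the solution and one accumulator array. A sketch using Williamson's classic three-stage, third-order low-storage coefficients (a standard published scheme, not the new methods of this paper) confirms the design order on a linear ODE:

```python
import numpy as np

# Williamson-style 2N-storage Runge-Kutta (3 stages, 3rd order): only two arrays
# (solution u and accumulator s) are kept, versus three registers for classical RK4.
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]

def lsrk3_step(f, u, h):
    s = np.zeros_like(u)
    for a, b in zip(A, B):
        s = a * s + h * f(u)   # accumulator register
        u = u + b * s          # solution register
    return u

# Linear constant-coefficient test problem u' = -u, u(0) = 1, solved to t = 1
def solve(h):
    u = np.array([1.0])
    for _ in range(round(1.0 / h)):
        u = lsrk3_step(lambda v: -v, u, h)
    return u[0]

e1 = abs(solve(0.1) - np.exp(-1.0))
e2 = abs(solve(0.05) - np.exp(-1.0))
print(e1 / e2)   # ~8: halving h cuts the error 2^3-fold, confirming 3rd order
```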

  16. The Accuracy of Shock Capturing in Two Spatial Dimensions

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Casper, Jay H.

    1997-01-01

    An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.

  17. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods are adapted to concurrent processing. The algorithm solves a set of coupled partial differential equations by numerical integration. Adapted to run on a hypercube computer, the algorithm separates the problem into smaller problems that are solved concurrently. The increase in computing speed with concurrent processing over that achievable with conventional sequential processing is appreciable, especially for large problems.

  18. Numerical Manifold Method for the Forced Vibration of Thin Plates during Bending

    PubMed Central

    Jun, Ding; Song, Chen; Wei-Bin, Wen; Shao-Ming, Luo; Xia, Huang

    2014-01-01

    A novel numerical manifold method was derived from the cubic B-spline basis function. The new interpolation function is characterized by high-order coordination at the boundary of a manifold element. The linear elastic-dynamic equation used to solve the bending vibration of thin plates was derived according to the principle of minimum instantaneous potential energy. The method for the initialization of the dynamic equation and its solution process were provided. Moreover, the analysis showed that the calculated stiffness matrix exhibited favorable performance. Numerical results showed that the generalized degrees of freedom were significantly fewer and that the calculation accuracy was higher for the manifold method than for the conventional finite element method. PMID:24883403

  19. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can extract unknown source signals using only the received signals. This is accomplished by exploiting the statistical independence of the signal mixtures and has been successfully applied in myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibration source signal identification in complex structures. In this study, a simple iterative algorithm based on conventional ICA is proposed to mitigate these problems. To extract more stable source signals with a valid order, the proposed method iterates and reorders the extracted mixing matrix to reconstruct the finally converged source signals, guided by the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. To review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses were carried out for a virtual response model and a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to real problems in complex structures, an experiment was carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
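
    The reordering idea, matching separated signals to reference measurements by the magnitude of their correlation coefficients, can be sketched in a few lines (an illustrative reconstruction of the general step, not the authors' code):

```python
import numpy as np

# Match each reference signal to the separated signal it correlates with most
# strongly (in absolute value), fixing the arbitrary permutation and sign
# that any ICA-style separation leaves behind.
def reorder(separated, references):
    order, signs = [], []
    for ref in references:
        c = np.array([np.corrcoef(ref, s)[0, 1] for s in separated])
        j = int(np.argmax(np.abs(c)))
        order.append(j)
        signs.append(np.sign(c[j]))
    return [signs[i] * separated[order[i]] for i in range(len(references))]

t = np.linspace(0.0, 1.0, 500)
src = [np.sin(2 * np.pi * 5 * t), np.sign(np.sin(2 * np.pi * 3 * t))]
# Pretend the separation returned the sources permuted and sign-flipped:
mixedup = [-src[1], src[0]]

recovered = reorder(mixedup, src)
print(np.allclose(recovered[0], src[0]), np.allclose(recovered[1], src[1]))
```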

  20. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Wu, Zedong; Alkhalifah, Tariq

    2018-07-01

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion, and it represents the core computational cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing the numerical dispersion with the exact form, finding the optimal coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.

  1. Spectral methods on arbitrary grids

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David

    1995-01-01

    Stable and spectrally accurate numerical methods are constructed on arbitrary grids for partial differential equations. These new methods are equivalent to conventional spectral methods but do not rely on specific grid distributions. Specifically, we show how to implement Legendre Galerkin, Legendre collocation, and Laguerre Galerkin methodology on arbitrary grids.

  2. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs very slowly, so we focus on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA), with an NVIDIA Tesla C1060 as the GPGPU board. The performance of the GPU is evaluated in comparison with that of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations on a GPU can significantly reduce the run time relative to a conventional CPU, even for a naive GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and the thread block size.
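
    The FDTD updates that map so well onto GPU threads are simple stencil sweeps; a one-dimensional NumPy sketch (CPU-only, purely illustrative of the update structure, with arbitrary normalized parameters) looks like this:

```python
import numpy as np

# Minimal 1D FDTD (Yee) leapfrog update in normalized units.
# On a GPU, each array element's update would be handled by one CUDA thread.
n, steps, c = 400, 300, 0.5          # grid size, time steps, Courant number (<= 1: stable)
ez = np.zeros(n)
hy = np.zeros(n - 1)

for it in range(steps):
    hy += c * (ez[1:] - ez[:-1])                  # update H from the curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])            # update E from the curl of H
    ez[50] += np.exp(-((it - 30.0) / 10.0) ** 2)  # soft Gaussian source

print(np.max(np.abs(ez)))   # bounded: c = 0.5 satisfies the 1D stability limit
```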

  3. Multiscale solutions of radiative heat transfer by the discrete unified gas kinetic scheme

    NASA Astrophysics Data System (ADS)

    Luo, Xiao-Ping; Wang, Cun-Hai; Zhang, Yong; Yi, Hong-Liang; Tan, He-Ping

    2018-06-01

    The radiative transfer equation (RTE) has two asymptotic regimes characterized by the optical thickness, namely, optically thin and optically thick regimes. In the optically thin regime, a ballistic or kinetic transport is dominant. In the optically thick regime, energy transport is totally dominated by multiple collisions between photons; that is, the photons propagate by means of diffusion. To obtain convergent solutions to the RTE, conventional numerical schemes have a strong dependence on the number of spatial grids, which leads to a serious computational inefficiency in the regime where the diffusion is predominant. In this work, a discrete unified gas kinetic scheme (DUGKS) is developed to predict radiative heat transfer in participating media. Numerical performances of the DUGKS are compared in detail with conventional methods through three cases including one-dimensional transient radiative heat transfer, two-dimensional steady radiative heat transfer, and three-dimensional multiscale radiative heat transfer. Due to the asymptotic preserving property, the present method with relatively coarse grids gives accurate and reliable numerical solutions for large, small, and in-between values of optical thickness, and, especially in the optically thick regime, the DUGKS demonstrates a pronounced computational efficiency advantage over the conventional numerical models. In addition, the DUGKS has a promising potential in the study of multiscale radiative heat transfer inside the participating medium with a transition from optically thin to optically thick regimes.

  4. System Simulation by Recursive Feedback: Coupling a Set of Stand-Alone Subsystem Simulations

    NASA Technical Reports Server (NTRS)

    Nixon, D. D.

    2001-01-01

    Conventional construction of digital dynamic system simulations often involves collecting differential equations that model each subsystem, arranging them into a standard form, and obtaining their numerical solution as a single coupled, total-system simultaneous set. Simulation by numerical coupling of independent stand-alone subsimulations is a fundamentally different approach that is attractive because, among other things, the architecture naturally facilitates high fidelity, broad scope, and discipline independence. Recursive feedback is defined and discussed as a candidate approach to multidiscipline dynamic system simulation by numerical coupling of self-contained, single-discipline subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Distributed and centralized implementations of coupling have been considered. Numerical results are evaluated by direct comparison with a standard total-system, simultaneous-solution approach.

  5. Advantages of multigrid methods for certifying the accuracy of PDE modeling

    NASA Technical Reports Server (NTRS)

    Forester, C. K.

    1981-01-01

    Numerical techniques are analyzed for assessing and certifying that the modeling of partial differential equations (PDE) meets the user's accuracy specifications. Examples of the certification process with conventional techniques are summarized for the three-dimensional steady-state full potential equation and the two-dimensional steady Navier-Stokes equations using fixed-grid (FG) methods. The advantages of the Full Approximation Storage (FAS) scheme of the multigrid (MG) technique of A. Brandt over the conventional certification process are illustrated in one dimension with the transformed potential equation. Inferences are drawn on how MG will improve the certification process for the numerical modeling of two- and three-dimensional PDE systems. Elements of the error assessment process that are common to FG and MG are analyzed.

  6. Application of a quick-freezing and deep-etching method to pathological diagnosis: a case of elastofibroma.

    PubMed

    Hemmi, Akihiro; Tabata, Masahiko; Homma, Taku; Ohno, Nobuhiko; Terada, Nobuo; Fujii, Yasuhisa; Ohno, Shinichi; Nemoto, Norimichi

    2006-04-01

    A case of elastofibroma in a middle-aged Japanese woman was examined by the quick-freezing and deep-etching (QF-DE) method, as well as by immunohistochemistry and conventional electron microscopy. The slowly growing tumor developed at the right scapular region and was composed of fibrous connective tissue with unique elastic materials called elastofibroma fibers. A normal elastic fiber consists of a central core and peripheral zone, in which the latter has small aggregates of 10 nm microfibrils. By the QF-DE method, globular structures consisting of numerous fibrils (5-20 nm in width) were observed between the collagen bundles. We could confirm that they were microfibril-rich peripheral zones of elastofibroma fibers by comparing the replica membrane and conventional electron microscopy. One of the characteristics of elastofibroma fibers is that they are assumed to contain numerous microfibrils. Immunohistochemically, spindle tumor cells showed positive immunoreaction for vimentin, whereas alpha-smooth muscle actin, desmin, S-100 protein and CD34 showed negative immunoreaction. By conventional electron microscopy, the tumor cell had thin cytoplasmic processes, pinocytotic vesicles and prominent rough endoplasmic reticulum. Abundant intracytoplasmic filaments were observed in some tumor cells. Thick lamina-like structures along with their inner nuclear membrane were often observed in the tumor cell nuclei. The whole image of the tumor cell was considered to be a periosteal-derived cell, which would produce numerous microfibrils in the peripheral zone of elastofibroma fibers. This study indicated that the QF-DE method could be applied to the pathological diagnosis and analysis of pathomechanism, even for surgical specimens obtained from a patient.

  7. Introduction to 2009 Symposium on Alternative Methods of Controlling Pests and Diseases

    USDA-ARS?s Scientific Manuscript database

    Numerous pests and diseases limit potato productivity, and control of weeds, insects and pathogens remains a costly part of potato production. Although conventional agrichemical pest control is amazingly effective, interest in non-synthetic chemical and integrated methods of pest management is drive...

  8. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified, and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, controls the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
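
    Underneath the FFT/FHT machinery, the approximation is a trapezoidal-style discretization of the Bromwich integral along a vertical contour; a direct-sum sketch at a single time point (illustrative parameters and transform pair, not the paper's formulation, which evaluates many time samples at once via the FFT) recovers a known result:

```python
import numpy as np

# Numerical inverse Laplace transform via the Bromwich integral,
#   f(t) = e^{sigma t}/(2 pi) * ∫ F(sigma + i w) e^{i w t} dw,
# discretized with a uniform step dw (rectangle sum; endpoint terms are negligible).
F = lambda s: 1.0 / (s + 1.0)        # known pair: F(s) = 1/(s+1)  <->  f(t) = e^{-t}
sigma, t, dw = 1.0, 1.0, 0.01        # contour to the right of all poles
w = np.arange(-200.0, 200.0, dw)
vals = F(sigma + 1j * w) * np.exp(1j * w * t)
f_num = np.exp(sigma * t) / (2.0 * np.pi) * np.real(np.sum(vals)) * dw

print(f_num, np.exp(-t))             # close to e^{-1} ~ 0.3679
```

    The truncation of the contour at finite frequency limits the accuracy here; the paper's refinement of the step length (and the FFT evaluation) addresses exactly this kind of discretization trade-off.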

  9. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex-geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
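
    Two of the properties cited above, compact support and partition of unity, can be illustrated with the classical Cox-de Boor recursion. This is a generic sketch of B-spline basis evaluation, not code from the reviewed methods:

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return np.where((knots[i] <= x) & (x < knots[i + 1]), 1.0, 0.0)
    out = np.zeros_like(x, dtype=float)
    left = knots[i + p] - knots[i]
    if left > 0:
        out += (x - knots[i]) / left * bspline_basis(i, p - 1, knots, x)
    right = knots[i + p + 1] - knots[i + 1]
    if right > 0:
        out += (knots[i + p + 1] - x) / right * bspline_basis(i + 1, p - 1, knots, x)
    return out

# Cubic basis functions on a clamped knot vector
knots = np.array([0., 0., 0., 0., 1., 2., 3., 4., 4., 4., 4.])
x = np.linspace(0.0, 3.999, 200)
B = np.array([bspline_basis(i, 3, knots, x) for i in range(len(knots) - 4)])
```

    Each basis function is non-negative and vanishes outside a few knot spans (compact support), while at every point in the knot range the basis functions sum to one (partition of unity), which is what makes boundary-condition handling straightforward.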

  10. Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems

    NASA Technical Reports Server (NTRS)

    Cerro, J. A.; Scotti, S. J.

    1991-01-01

    Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently, attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.

  11. Fractal analysis of GPS time series for early detection of disastrous seismic events

    NASA Astrophysics Data System (ADS)

    Filatov, Denis M.; Lyubushin, Alexey A.

    2017-03-01

    A new method of fractal analysis of time series for estimating the chaoticity of the behaviour of open stochastic dynamical systems is developed. The method is a modification of the conventional detrended fluctuation analysis (DFA) technique. We start by analysing both methods from the physical point of view and demonstrate the difference between them, which results in a higher accuracy of the new method compared to the conventional DFA. Then, applying the developed method to estimate the measure of chaoticity of a real dynamical system - the Earth's crust - we reveal that the latter exhibits two distinct mechanisms of transition to a critical state: while the first mechanism has already been known from numerous studies of other dynamical systems, the second one is new and has not previously been described. Using GPS time series, we demonstrate the efficiency of the developed method in identifying critical states of the Earth's crust. Finally, we employ the method to solve a practically important task: we show how the developed measure of chaoticity can be used for early detection of disastrous seismic events, and provide a detailed discussion of the numerical results, which are shown to be consistent with the outcomes of other research on the topic.
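
    The conventional DFA that the new method modifies can be summarized in a short sketch: integrate the demeaned series into a profile, remove a linear trend in windows of varying size n, and read the scaling exponent off the log-log slope of the fluctuation function F(n). This generic implementation is illustrative only; the paper's modification is not reproduced here.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Conventional DFA: slope of log F(n) vs log n with linear detrending per window."""
    profile = np.cumsum(x - np.mean(x))          # integrated (profile) series
    F = []
    for n in scales:
        m = len(profile) // n
        segments = profile[:m * n].reshape(m, n)
        t = np.arange(n)
        # mean squared residual after removing a linear trend in each window
        var = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2) for s in segments]
        F.append(np.sqrt(np.mean(var)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope
```

    White noise yields an exponent near 0.5, while persistent (correlated) signals push it toward 1; shifts of this kind are the sort of behaviour that chaoticity measures built on DFA track.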

  12. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
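
    The conservative Galerkin projection described above can be illustrated in one dimension for piecewise-constant control-volume fields, where the L2 projection reduces to overlap-weighted cell averaging. This is a simplified sketch under that assumption, not the paper's general higher-order framework:

```python
import numpy as np

def project_cv_field(x_src, u_src, x_dst):
    """L2 (Galerkin) projection of a piecewise-constant CV field onto a new 1D mesh.
    For piecewise-constant bases this reduces to overlap-weighted cell averaging,
    which conserves the integral of the field exactly."""
    u_dst = np.zeros(len(x_dst) - 1)
    for j in range(len(x_dst) - 1):
        a, b = x_dst[j], x_dst[j + 1]
        acc = 0.0
        for i in range(len(x_src) - 1):
            overlap = min(b, x_src[i + 1]) - max(a, x_src[i])
            if overlap > 0:
                acc += overlap * u_src[i]
        u_dst[j] = acc / (b - a)
    return u_dst
```

    The integral of the field is preserved exactly under the mapping, which is precisely the conservation property that consistent interpolation lacks.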

  13. Equivalent orthotropic elastic moduli identification method for laminated electrical steel sheets

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Nishikawa, Yasunari; Yamasaki, Shintaro; Fujita, Kikuo; Kawamoto, Atsushi; Kuroishi, Masakatsu; Nakai, Hideo

    2016-05-01

    In this paper, a combined numerical-experimental methodology for the identification of the elastic moduli of orthotropic media is presented. Special attention is given to laminated electrical steel sheets, which are modeled as orthotropic media with nine independent engineering elastic moduli. The elastic moduli are determined specifically for use with finite element vibration analyses. We propose a three-step methodology based on a conventional nonlinear least squares fit between measured and computed natural frequencies. The methodology consists of: (1) successive augmentations of the objective function by increasing the number of modes, (2) initial condition updates, and (3) appropriate selection of the natural frequencies based on their sensitivities to the elastic moduli. Using the results of numerical experiments, it is shown that the proposed method achieves a more accurate converged solution than a conventional approach. Finally, the proposed method is applied to measured natural frequencies and mode shapes of laminated electrical steel sheets. It is shown that the method can successfully identify orthotropic elastic moduli that reproduce the measured natural frequencies and frequency response functions in finite element analyses with reasonable accuracy.

  14. Multi-domain boundary element method for axi-symmetric layered linear acoustic systems

    NASA Astrophysics Data System (ADS)

    Reiter, Paul; Ziegelwanger, Harald

    2017-12-01

    Homogeneous porous materials like rock wool or synthetic foam are the main tool for acoustic absorption. The conventional absorbing structure for sound-proofing consists of one or multiple absorbers placed in front of a rigid wall, with or without air gaps in between. Various models exist to describe these so-called multi-layered acoustic systems mathematically for incoming plane waves. However, there is no efficient method to calculate the sound field in a half space above a multi-layered acoustic system for an incoming spherical wave. In this work, an axi-symmetric multi-domain boundary element method (BEM) for absorbing multi-layered acoustic systems and incoming spherical waves is introduced. In the proposed BEM formulation, a complex wave number is used to model absorbing materials as a fluid, and a coordinate transformation is introduced which simplifies the singular integrals of the conventional BEM to non-singular radial and angular integrals. The radial and angular parts are integrated analytically and numerically, respectively. The output of the method can be interpreted as a numerical half-space Green's function for grounds consisting of layered materials.

  15. Holographic particle size extraction by using Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki

    2014-06-01

    A new method for measuring object size from in-line holograms by using the Wigner-Ville distribution (WVD) is proposed. The proposed method has advantages over conventional numerical reconstruction in that it requires no iterative process and can extract the object size and position with only a single computation of the WVD. Experimental verification of the proposed method is presented.
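
    The way the WVD concentrates signal energy at the local frequency, which is what makes single-computation extraction possible, can be seen in a minimal discrete sketch; the hologram-specific processing of the paper is not reproduced here.

```python
import numpy as np

def wvd_slice(x, n, L):
    """One time slice of the discrete Wigner-Ville distribution:
    the FFT over the symmetric lag product x[n+tau] * conj(x[n-tau])."""
    tau = np.arange(-(L // 2), L // 2)
    r = x[n + tau] * np.conj(x[n - tau])
    return np.abs(np.fft.fft(r))    # magnitude is invariant to the lag-index shift

# A pure tone at normalized frequency f0 concentrates at bin 2*f0*L
f0 = 0.125
x = np.exp(2j * np.pi * f0 * np.arange(512))
spectrum = wvd_slice(x, 256, 64)
```

    For a pure tone the slice peaks at bin 2*f0*L; the factor of two is the well-known frequency doubling introduced by the WVD lag product.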

  16. Generalized multiscale finite-element method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Gibson, Richard L.

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as finite-difference method and finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with conventional finite-element method for wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulation of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model the elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.

  17. Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai, E-mail: kaigao87@gmail.com; Fu, Shubin, E-mail: shubinfu89@gmail.com; Gibson, Richard L., E-mail: gibson@tamu.edu

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as finite-difference method and finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with conventional finite-element method for wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulation of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model the elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.

  18. Generalized multiscale finite-element method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE PAGES

    Gao, Kai; Fu, Shubin; Gibson, Richard L.; ...

    2015-04-14

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as finite-difference method and finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with conventional finite-element method for wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulation of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model the elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.

  19. Experimental validation of spatial Fourier transform-based multiple sound zone generation with a linear loudspeaker array.

    PubMed

    Okamoto, Takuma; Sakaguchi, Atsushi

    2017-03-01

    Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness over conventional acoustic energy difference maximization has been demonstrated in computer simulations. To establish the effectiveness of the proposal in actual environments, this paper presents an experimental validation of the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least squares-based pressure matching, using an actually implemented linear array of 64 loudspeakers in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.
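
    The simplest of the conventional baselines above, delay-and-sum beamforming, can be sketched for a narrowband uniform linear array. The array geometry and frequency below are illustrative assumptions, not the experimental setup of the paper:

```python
import numpy as np

def array_response(theta_steer, thetas, n_mics=64, d=0.05, freq=1000.0, c=343.0):
    """Far-field response of a delay-and-sum beamformer on a uniform linear array."""
    k = 2.0 * np.pi * freq / c                                   # wavenumber
    n = np.arange(n_mics)
    w = np.exp(-1j * k * d * n * np.sin(theta_steer)) / n_mics   # steering weights
    # response magnitude for a plane wave arriving from each candidate direction
    resp = [np.abs(np.sum(w * np.exp(1j * k * d * n * np.sin(th)))) for th in thetas]
    return np.array(resp)
```

    The response is maximal in the steered direction; with the element spacing well below half a wavelength, no grating lobes appear.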

  20. Fast focus estimation using frequency analysis in digital holography.

    PubMed

    Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung

    2014-11-17

    A novel fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial frequency spectrum based on the windowed Fourier transform. Our method thus uses only the intrinsic frequency information of the optical field on the hologram and therefore requires neither sequential numerical reconstructions nor the focus detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.

  1. Local unitary transformation method for large-scale two-component relativistic calculations. II. Extension to two-electron Coulomb interaction.

    PubMed

    Seino, Junji; Nakai, Hiromi

    2012-10-14

    The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to a two-particle IODKH Hamiltonian as well as the one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculations. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X(2) and hydrogen halide molecules, (HX)(n) (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear-scaling with respect to the system size and a small prefactor.

  2. Space-based optical image encryption.

    PubMed

    Chen, Wen; Chen, Xudong

    2010-12-20

    In this paper, we propose a new method based on a three-dimensional (3D) space-based strategy for optical image encryption. The two-dimensional (2D) processing of a plaintext in conventional optical encryption methods is extended to 3D space-based processing. Each pixel of the plaintext is considered as one particle in the proposed space-based optical image encryption, and the diffraction of all particles forms an object wave in phase-shifting digital holography. The effectiveness and advantages of the proposed method are demonstrated by numerical results. The proposed method provides a new optical encryption strategy in place of conventional 2D processing, and may open up a new research perspective for optical image encryption.

  3. Computer-based self-organized tectonic zoning: a tentative pattern recognition for Iran

    NASA Astrophysics Data System (ADS)

    Zamani, Ahmad; Hashemi, Naser

    2004-08-01

    Conventional methods of tectonic zoning are frequently characterized by two deficiencies. The first is the large uncertainty involved in tectonic zoning based on non-quantitative and subjective analysis. Failure to interpret accurately a large amount of data "by eye" is the second. To alleviate these deficiencies, the multivariate statistical method of cluster analysis has been utilized to seek and separate zones with similar tectonic patterns and to construct automated, self-organized multivariate tectonic zoning maps. This analytical method of tectonic regionalization is particularly useful for showing trends in the tectonic evolution of a region that could not be discovered by other means. To illustrate, the method has been applied to produce a general-purpose numerical tectonic zoning map of Iran. While there are some similarities between the self-organized multivariate numerical maps and the conventional maps, the cluster solution maps reveal some remarkable features that cannot be observed on current tectonic maps. The following specific examples should be noted: (1) The much-disputed extent and rigidity of the Lut Rigid Block, described as the microplate of east Iran, is clearly revealed on the self-organized numerical maps. (2) The cluster solution maps reveal a striking similarity between this microplate and northern Central Iran, including the Great Kavir region. (3) Contrary to the conventional map, the cluster solution maps make a clear distinction between the East Iranian Ranges and the Makran Mountains. (4) Moreover, interesting similarities between the Azarbaijan region in the northwest and the Makran Mountains in the southeast, and between the Kopet Dagh Ranges in the northeast and the Zagros Folded Belt in the southwest of Iran, are revealed in the clustering process. This new approach to tectonic zoning is a starting point and is expected to be improved and refined by the collection of new data.
The method is also a useful tool in studying neotectonics, seismotectonics, seismic zoning, and hazard estimation of the seismogenic regions.
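
    The multivariate clustering step behind such zoning maps can be illustrated with a minimal k-means over feature vectors (one vector per grid cell). The record does not specify which clustering variant was used, so k-means here is only a stand-in sketch:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: assign each sample to the nearest center, then update centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d2, axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

    Cells with similar feature vectors receive the same label, which is the mechanism by which a self-organized zoning map groups regions of similar tectonic pattern.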

  4. Finite Element Modelling and Analysis of Conventional Pultrusion Processes

    NASA Astrophysics Data System (ADS)

    Akishin, P.; Barkanov, E.; Bondarchuk, A.

    2015-11-01

    Pultrusion is one of many composite manufacturing techniques and one of the most efficient methods for producing fiber-reinforced polymer composite parts with a constant cross-section. Numerical simulation is helpful for understanding the manufacturing process and for developing scientific means for pultrusion tooling design. A numerical technique based on the finite element method has been developed for the simulation of pultrusion processes. It uses the general-purpose finite element software ANSYS Mechanical. It is shown that the developed technique predicts temperature and cure profiles that are in good agreement with those published in the open literature.

  5. Comparison of performance of shell-and-tube heat exchangers with conventional segmental baffles and continuous helical baffle

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif; Ferdous, Imam Ul.; Saha, Sumon

    2017-06-01

    In the present study, three-dimensional numerical simulation of two shell-and-tube heat exchangers (STHXs) with conventional segmental baffles (STHXsSB) and a continuous helical baffle (STHXsHB) is carried out and a comparative study is performed based on the simulation results. Both STHXs contain 37 tubes inside a 500 mm long and 200 mm diameter shell, and the mass flow rate of the shell-side fluid is varied from 0.5 kg/s to 2 kg/s. First, physical and mathematical models are developed and numerically simulated using the finite element method (FEM). For validation of the computational model, the shell-side average Nusselt number (Nus) is calculated from the simulation results and compared with available experimental results. The comparative study shows that STHXsHB has a 72-127% higher heat transfer coefficient per unit pressure drop compared to the conventional STHXsSB for the same shell-side mass flow rate. Moreover, STHXsHB has a 59-63% lower shell-side pressure drop than STHXsSB.

  6. Self-learning Monte Carlo method and cumulative update in fermion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Junwei; Shen, Huitao; Qi, Yang

    2017-06-07

    In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub “cumulative update”, to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation, while remaining statistically exact. Remarkably, its computational complexity is far less than the conventional algorithm with local updates.

  7. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to conventional design methods to assess the strengths and weaknesses of the algorithm. PMID:25202717

  8. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to conventional design methods to assess the strengths and weaknesses of the algorithm.

  9. Free and forced vibrations of a tyre using a wave/finite element approach

    NASA Astrophysics Data System (ADS)

    Waki, Y.; Mace, B. R.; Brennan, M. J.

    2009-06-01

    Free and forced vibrations of a tyre are predicted using a wave/finite element (WFE) approach. A short circumferential segment of the tyre is modelled using conventional finite element (FE) methods, a periodicity condition applied and the mass and stiffness matrices post-processed to yield wave properties. Since conventional FE methods are used, commercial FE packages and existing element libraries can be utilised. An eigenvalue problem is formulated in terms of the transfer matrix of the segment. Zhong's method is used to improve numerical conditioning. The eigenvalues and eigenvectors give the wavenumbers and wave mode shapes, which in turn define transformations between the physical and wave domains. A method is described by which the frequency dependent material properties of the rubber components of the tyre can be included without the need to remesh the structure. Expressions for the forced response are developed which are numerically well-conditioned. Numerical results for a smooth tyre are presented. Dispersion curves for real, imaginary and complex wavenumbers are shown. The propagating waves are associated with various forms of motion of the tread supported by the stiffness of the side wall. Various dispersion phenomena are observed, including curve veering, non-zero cut-off and waves for which the phase velocity and the group velocity have opposite signs. Results for the forced response are compared with experimental measurements and good agreement is seen. The forced response is numerically determined for both finite area and point excitations. It is seen that the size of area of the excitation is particularly important at high frequencies. When the size of the excitation area is small enough compared to the tread thickness, the response at high frequencies becomes stiffness-like (reactive) and the effect of shear stiffness becomes important.

  10. New numerical approximation of fractional derivative with non-local and non-singular kernel: Application to chaotic models

    NASA Astrophysics Data System (ADS)

    Toufik, Mekkaoui; Atangana, Abdon

    2017-10-01

    Recently, a new concept of fractional differentiation with a non-local and non-singular kernel was introduced in order to overcome the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme has been developed for the newly established fractional differentiation. We present the error analysis in general. The new numerical scheme was applied to solve linear and non-linear fractional differential equations. In this method, no predictor-corrector is needed to obtain an efficient algorithm. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges toward the exact solution very rapidly.
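
    For contrast with the new kernel, the classical L1 discretization of the conventional Caputo derivative (0 < alpha < 1) is sketched below. This is the standard textbook scheme, not the scheme developed in the paper:

```python
import math
import numpy as np

def caputo_l1(f_vals, dt, alpha):
    """L1 finite-difference approximation of the Caputo derivative of order alpha
    (0 < alpha < 1) at every grid point t_n = n*dt."""
    n_pts = len(f_vals)
    j = np.arange(n_pts)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)      # L1 weights
    df = np.diff(f_vals)
    out = np.zeros(n_pts)
    for n in range(1, n_pts):
        # sum_{j=0}^{n-1} b_j * (f_{n-j} - f_{n-j-1})
        out[n] = np.dot(b[:n], df[n - 1::-1])
    return out * dt ** (-alpha) / math.gamma(2 - alpha)
```

    The L1 scheme is exact for linear functions: for f(t) = t it reproduces the analytical Caputo derivative t^(1-alpha)/Gamma(2-alpha).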

  11. Low cost and efficient kurtosis-based deflationary ICA method: application to MRS sources separation problem.

    PubMed

    Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L

    2016-08-01

    Improving the execution time and numerical complexity of the well-known kurtosis-based maximization method, RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation using the nonlinear Conjugate Gradient method is also investigated. Regarding the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the initial RobustICA method, for which a better convergence speed for a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches, notably the quasi-Newton one using the BFGS method.
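
    The kurtosis-based deflationary extraction being accelerated here can be sketched as prewhitening plus the classical cubic fixed-point update. This is a FastICA-style iteration shown for illustration; RobustICA's exact global plane search is not reproduced:

```python
import numpy as np

def whiten(X):
    """Center and whiten mixtures so the sources are only an orthogonal rotation away."""
    Xc = X - X.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(Xc))
    return vecs @ np.diag(vals ** -0.5) @ vecs.T @ Xc

def extract_one(Z, iters=200, seed=0):
    """Deflationary extraction of one component by driving kurtosis to an extremum
    with the classical cubic fixed-point update w <- E[z (w'z)^3] - 3w."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        y = w @ Z
        w_new = (Z * y ** 3).mean(axis=1) - 3.0 * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-12:   # converged up to sign
            break
        w = w_new
    return w
```

    On whitened sub-Gaussian mixtures the iteration converges to a row of the unmixing rotation, recovering one source up to sign and scale.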

  12. Flow and Turbulence Modeling and Computation of Shock Buffet Onset for Conventional and Supercritical Airfoils

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    1998-01-01

    Flow and turbulence models applied to the problem of shock buffet onset are studied. The accuracy of the interactive boundary layer and the thin-layer Navier-Stokes equations solved with recent upwind techniques using similar transport field equation turbulence models is assessed for standard steady test cases, including conditions having significant shock separation. The two methods are found to compare well in the shock buffet onset region of a supercritical airfoil that involves strong trailing-edge separation. A computational analysis using the interactive boundary layer has revealed a Reynolds scaling effect in the shock buffet onset of the supercritical airfoil, which compares well with experiment. The methods are next applied to a conventional airfoil. Steady shock-separated computations of the conventional airfoil with the two methods compare well with experiment. Although the interactive boundary layer computations in the shock buffet region compare well with experiment for the conventional airfoil, the thin-layer Navier-Stokes computations do not. These findings are discussed in connection with possible mechanisms important in the onset of shock buffet and the constraints imposed by current numerical modeling techniques.

  13. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  14. Local unitary transformation method for large-scale two-component relativistic calculations: case for a one-electron Dirac Hamiltonian.

    PubMed

    Seino, Junji; Nakai, Hiromi

    2012-06-28

    An accurate and efficient scheme for two-component relativistic calculations at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level is presented. The present scheme, termed local unitary transformation (LUT), is based on the locality of the relativistic effect. Numerical assessments of the LUT scheme were performed on diatomic molecules such as HX and X2 (X = F, Cl, Br, I, and At) and hydrogen halide clusters (HX)n (X = F, Cl, Br, and I). Total energies obtained by the LUT method agree well with conventional IODKH results. The computational costs of the LUT method are drastically lower than those of conventional methods, since the LUT method scales linearly with system size and has a small prefactor.

  15. Numerical simulation of artificial hip joint motion based on human age factor

    NASA Astrophysics Data System (ADS)

    Ramdhani, Safarudin; Saputra, Eko; Jamari, J.

    2018-05-01

    An artificial hip joint is a prosthesis (synthetic body part) that usually consists of two or more components. Hip joint replacement is commonly required because of arthritis, ordinarily in aged or older patients. Numerical simulation models are used to observe the range of motion in the artificial hip joint, with the range of motion of the joint taken as a function of the patient's age. Finite-element analysis (FEA) is used to calculate the von Mises stress during motion and to observe the probability of prosthetic impingement. The FEA uses a three-dimensional nonlinear model and considers variations in the position of the acetabular liner cup. The results of the numerical simulation show that the FEA method can be used to analyze the performance of the artificial hip joint more accurately than the conventional method.

  16. Optical frequency-domain chromatic dispersion measurement method for higher-order modes in an optical fiber.

    PubMed

    Ahn, Tae-Jung; Jung, Yongmin; Oh, Kyunghwan; Kim, Dug Young

    2005-12-12

    We propose a new chromatic dispersion measurement method for the higher-order modes of an optical fiber using optical frequency-modulated continuous-wave (FMCW) interferometry. An optical fiber that supports a few excited modes was prepared for our experiments. Three different guided modes of the fiber were identified by far-field spatial beam profile measurements and confirmed with numerical mode analysis. Using the principle of conventional FMCW interferometry with a tunable external-cavity laser, we have demonstrated that the chromatic dispersion of a few-mode optical fiber can be obtained directly and quantitatively as well as qualitatively. We have also compared our measurement results with those of the conventional modulation phase-shift method.

  17. Modified Involute Helical Gears: Computerized Design, Simulation of Meshing, and Stress Analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert (Technical Monitor); Litvin, Faydor L.; Gonzalez-Perez, Ignacio; Carnevali, Luca; Kawasaki, Kazumasa; Fuentes-Aznar, Alfonso

    2003-01-01

    The computerized design, methods for generation, simulation of meshing, and enhanced stress analysis of modified involute helical gears are presented. The approaches proposed for the modification of conventional involute helical gears are based on the conjugation of a double-crowned pinion with a conventional involute helical gear. Double-crowning of the pinion means deviation of the cross-profile from an involute one and deviation in the longitudinal direction from a helicoid surface. Using the method developed, the pinion-gear tooth surfaces are in point contact, the bearing contact is localized and oriented longitudinally, and edge contact is avoided. Also, the influence of alignment errors on the shift of the bearing contact, vibration, and noise is reduced substantially. The theory developed is illustrated with numerical examples that confirm the advantages of the gear drives of the modified geometry in comparison with conventional involute helical gears.

  19. A general numerical analysis of the superconducting quasiparticle mixer

    NASA Technical Reports Server (NTRS)

    Hicks, R. G.; Feldman, M. J.; Kerr, A. R.

    1985-01-01

    For very low noise millimeter-wave receivers, the superconductor-insulator-superconductor (SIS) quasiparticle mixer is now competitive with conventional Schottky mixers. Tucker (1979, 1980) has developed a quantum theory of mixing which has provided a basis for the rapid improvement in SIS mixer performance. The present paper is concerned with a general method of numerical analysis for SIS mixers which allows arbitrary terminating impedances for all the harmonic frequencies. This analysis provides an approach for an examination of the range of validity of the three-frequency results of the quantum mixer theory. The new method has been implemented with the aid of a Fortran computer program.

  20. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of temporal and spatial scales. These difficulties are exacerbated by the fact that typical chemical kinetic mechanisms involve hundreds of chemical species and thousands of chemical reactions. Such computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It has been shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will only grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution where fine scales develop and removing them where the solution behaves smoothly. The algorithm is based on mathematically well-established wavelet theory, which provides error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid.
The method has been tested on a variety of benchmark problems, including the numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order-of-magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
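
The refinement criterion described above — add resolution only where wavelet detail coefficients are large — can be sketched in one dimension. This is an illustrative sketch (one level of Haar details on a synthetic field with a single steep front), not the WAMR solver itself; the field, grid size, and threshold are assumptions.

```python
import numpy as np

# Wavelet-based refinement flagging: one level of Haar detail
# coefficients is large only where the field varies rapidly, so a
# simple threshold marks exactly the cells around the steep front.
x = np.linspace(0.0, 1.0, 256)
u = np.tanh((x - 0.5) / 0.01)          # smooth field with one sharp front

avg = 0.5 * (u[0::2] + u[1::2])        # coarse-level averages
det = 0.5 * (u[0::2] - u[1::2])        # Haar detail coefficients

flag = np.abs(det) > 0.01              # cells flagged for refinement
```

Only a handful of the 128 coarse cells get flagged, which is the mechanism behind the order-of-magnitude reduction in grid points reported in the abstract.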

  1. Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide

    2017-04-01

    Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system, and the classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods, and two modified time-advancing symplectic methods, both with all-positive symplectic coefficients, are constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly-analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods in numerical accuracy, computational cost, stability, and long-term calculation capability.
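
    The long-term calculation capability of symplectic time advance can be seen on a toy Hamiltonian. This is not the MSNAD scheme; it contrasts the simplest symplectic partitioned update (Störmer-Verlet) with explicit Euler on the harmonic oscillator H = p²/2 + q²/2, where the symplectic method keeps the energy bounded over long runs while Euler drifts.

    ```python
    # Symplectic (Stormer-Verlet) vs non-symplectic (explicit Euler)
    # time advance on H = p^2/2 + q^2/2: the symplectic scheme's energy
    # stays bounded over 20,000 steps, Euler's grows without bound.
    def verlet(q, p, dt, steps):
        for _ in range(steps):
            p -= 0.5 * dt * q          # half kick (dH/dq = q)
            q += dt * p                # drift
            p -= 0.5 * dt * q          # half kick
        return q, p

    def euler(q, p, dt, steps):
        for _ in range(steps):
            q, p = q + dt * p, p - dt * q
        return q, p

    H0 = 0.5                           # initial energy for q=1, p=0
    qv, pv = verlet(1.0, 0.0, 0.05, 20000)
    qe, pe = euler(1.0, 0.0, 0.05, 20000)
    Hv = 0.5 * (pv**2 + qv**2)         # stays near 0.5
    He = 0.5 * (pe**2 + qe**2)         # blows up
    ```

    The same structure-preservation argument is what motivates constructing PRK-based schemes with positive symplectic coefficients for the wave equation.
    
    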

  2. Reflection full-waveform inversion using a modified phase misfit function

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe

    2017-09-01

    Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models than conventional full-waveform inversion (FWI). However, conventional RFWI is strongly nonlinear because of the lack of low-frequency data and the complexity of the amplitude. Separating the phase and amplitude information makes RFWI more linear, but traditional phase-calculation methods face severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain the pseudo-phase information. We then establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. The application to a portion of the Sigsbee2A model, and comparison with the inversion results of the improved RFWI and conventional FWI methods, verify that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.
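
    Envelope data of the kind such misfits build on is commonly extracted via the analytic signal (an FFT-based Hilbert transform). The sketch below shows that construction on a synthetic trace; it is an illustration of envelope extraction, not the paper's pseudo-phase objective, and the signal parameters are assumptions.

    ```python
    import numpy as np

    # Envelope via the analytic signal: zero out negative frequencies,
    # double the positive ones, and take the magnitude of the inverse
    # FFT. For a band-limited amplitude-modulated carrier the result
    # recovers the modulation envelope.
    def envelope(sig):
        n = sig.size
        spec = np.fft.fft(sig)
        h = np.zeros(n)
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            h[n // 2] = 1.0
        return np.abs(np.fft.ifft(spec * h))

    t = np.linspace(0.0, 1.0, 1024, endpoint=False)
    amp = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)   # slow amplitude
    sig = amp * np.cos(2 * np.pi * 60 * t)        # fast carrier
    env = envelope(sig)                           # recovers amp
    ```

    The recovered envelope varies slowly and unwraps no phase, which is why envelope-derived quantities sidestep the phase-wrapping problem described above.
    
    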

  3. Numerical method to optimize the polar-azimuthal orientation of infrared superconducting-nanowire single-photon detectors.

    PubMed

    Csete, Mária; Sipos, Áron; Najafi, Faraz; Hu, Xiaolong; Berggren, Karl K

    2011-11-01

    A finite-element method for calculating the illumination-dependence of absorption in three-dimensional nanostructures is presented based on the radio frequency module of the Comsol Multiphysics software package (Comsol AB). This method is capable of numerically determining the optical response and near-field distribution of subwavelength periodic structures as a function of illumination orientations specified by polar angle, φ, and azimuthal angle, γ. The method was applied to determine the illumination-angle-dependent absorptance in cavity-based superconducting-nanowire single-photon detector (SNSPD) designs. Niobium-nitride stripes based on dimensions of conventional SNSPDs and integrated with ~ quarter-wavelength hydrogen-silsesquioxane-filled nano-optical cavity and covered by a thin gold film acting as a reflector were illuminated from below by p-polarized light in this study. The numerical results were compared to results from complementary transfer-matrix-method calculations on composite layers made of analogous film-stacks. This comparison helped to uncover the optical phenomena contributing to the appearance of extrema in the optical response. This paper presents an approach to optimizing the absorptance of different sensing and detecting devices via simultaneous numerical optimization of the polar and azimuthal illumination angles. © 2011 Optical Society of America
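
    The complementary transfer-matrix-method check mentioned above has a compact classical form for a film stack at normal incidence. The sketch below is a textbook characteristic-matrix calculation with illustrative layer data, not the paper's SNSPD stack or its oblique-incidence Comsol model.

    ```python
    import numpy as np

    # Characteristic-matrix (transfer-matrix) reflectance of a thin-film
    # stack at normal incidence with non-absorbing layers. Each layer
    # contributes a 2x2 matrix; the stack matrix gives the amplitude
    # reflection coefficient r.
    def reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / wavelength
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        r = (n_in * B - C) / (n_in * B + C)
        return float(np.abs(r) ** 2)

    # Bare air-glass interface: R = ((1 - 1.5)/(1 + 1.5))^2 = 0.04
    R_bare = reflectance([], [], 1.0, 1.5, 550e-9)
    # Ideal quarter-wave antireflection layer with n = sqrt(1.5): R -> 0
    R_ar = reflectance([np.sqrt(1.5)], [550e-9 / (4 * np.sqrt(1.5))],
                       1.0, 1.5, 550e-9)
    ```

    Cross-checking a full-wave FEM result against such a matrix calculation on an equivalent layer stack is the kind of validation the abstract describes.
    
    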

  4. An efficient impedance method for induced field evaluation based on a stabilized Bi-conjugate gradient algorithm.

    PubMed

    Wang, Hua; Liu, Feng; Xia, Ling; Crozier, Stuart

    2008-11-21

    This paper presents a stabilized bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field-induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device are evaluated. The simulation results show the numerical accuracy and superior performance of the method.
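
    The solver at the heart of the scheme has a compact textbook form. The sketch below is the standard BiCGSTAB recurrence (van der Vorst) on an assumed dense diagonally dominant test matrix, not the authors' voxel-phantom implementation.

    ```python
    import numpy as np

    # Textbook BiCGSTAB for a nonsymmetric linear system A x = b.
    def bicgstab(A, b, tol=1e-10, maxiter=500):
        x = np.zeros_like(b)
        r = b - A @ x
        r_hat = r.copy()                     # shadow residual
        rho = alpha = omega = 1.0
        v = np.zeros_like(b)
        p = np.zeros_like(b)
        for _ in range(maxiter):
            rho_new = r_hat @ r
            beta = (rho_new / rho) * (alpha / omega)
            rho = rho_new
            p = r + beta * (p - omega * v)
            v = A @ p
            alpha = rho / (r_hat @ v)
            s = r - alpha * v                # intermediate residual
            t = A @ s
            omega = (t @ s) / (t @ t)        # stabilization step
            x = x + alpha * p + omega * s
            r = s - omega * t
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
        return x

    rng = np.random.default_rng(0)
    n = 50
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant
    b = rng.standard_normal(n)
    x = bicgstab(A, b)
    ```

    Unlike SOR, the method needs no relaxation parameter and handles nonsymmetric operators, which is the convergence advantage claimed above.
    
    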

  5. Multicritical points for spin-glass models on hierarchical lattices.

    PubMed

    Ohzeki, Masayuki; Nishimori, Hidetoshi; Berker, A Nihat

    2008-06-01

    The locations of multicritical points on many hierarchical lattices are numerically investigated by renormalization group analysis. The results are compared with an analytical conjecture derived using duality, gauge symmetry, and the replica method. We find that the conjecture does not give the exact answer but leads to locations slightly away from the numerically reliable data. We propose an improved conjecture that gives more precise predictions of the multicritical points than the conventional one. This improvement is inspired by a different point of view coming from the renormalization group and yields answers highly consistent with many numerical data.

  6. Inventory Management for Irregular Shipment of Goods in Distribution Centre

    NASA Astrophysics Data System (ADS)

    Takeda, Hitoshi; Kitaoka, Masatoshi; Usuki, Jun

    2016-01-01

    The shipping amounts of commodity goods (foods, confectionery, dairy products, and over-the-counter cosmetic and pharmaceutical products) change irregularly at distribution centers dealing with general consumer goods. Because shipment times and shipment amounts are irregular, demand forecasting becomes very difficult, and so does inventory control; conventional inventory control methods cannot be applied to the shipment of such commodities. This paper proposes an inventory control method based on the cumulative flow curve, using the curve to decide order quantities. Three methods are proposed: 1) a power method, 2) a polynomial method, and 3) a revised Holt's linear method, a kind of exponential smoothing that forecasts data with trends. The paper compares the economics of the conventional method, which relies on experienced staff, with the three proposed methods, and the effectiveness of the proposed methods is verified through numerical calculations.
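
    Of the three forecasters named, Holt's linear method has a standard closed recurrence. The sketch below is the textbook form of Holt's linear-trend exponential smoothing; the smoothing constants and the demand series are illustrative, not from the paper.

    ```python
    # Holt's linear-trend exponential smoothing: maintain a smoothed
    # level and a smoothed trend, then extrapolate `horizon` steps ahead.
    def holt_forecast(y, alpha=0.5, beta=0.3, horizon=1):
        level, trend = y[0], y[1] - y[0]
        for x in y[1:]:
            prev_level = level
            level = alpha * x + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return level + horizon * trend

    demand = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]   # steady upward trend
    forecast = holt_forecast(demand, horizon=1)      # 22.0 for this series
    ```

    On a perfectly linear series the method reproduces the trend exactly; on irregular shipment data the smoothing constants trade responsiveness against noise rejection.
    
    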

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, calibration methods based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend these studies to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  8. Comparison of updated Lagrangian FEM with arbitrary Lagrangian Eulerian method for 3D thermo-mechanical extrusion of a tube profile

    NASA Astrophysics Data System (ADS)

    Kronsteiner, J.; Horwatitsch, D.; Zeman, K.

    2017-10-01

    Thermo-mechanical numerical modelling and simulation of extrusion processes face several serious challenges. Large plastic deformations, in combination with a strong coupling of thermal and mechanical effects, lead to high numerical demands for the solution as well as for the handling of mesh distortions. The two numerical methods presented in this paper also reflect two different ways of dealing with mesh distortions: Lagrangian Finite Element Methods (FEM) tackle distorted elements by building a new mesh (re-meshing), whereas Arbitrary Lagrangian Eulerian (ALE) methods use an "advection" step to remap the solution from the distorted to the undistorted mesh. Another difference between conventional Lagrangian and ALE methods is the separate treatment of material and mesh in ALE, allowing the definition of individual velocity fields. In theory, an ALE formulation contains both the Eulerian formulation and the Lagrangian description of the material as special cases. The investigations presented in this paper dealt with the direct extrusion of a tube profile using EN-AW 6082 aluminium alloy and a comparison of experimental with Lagrangian and ALE results. The numerical simulations cover the billet upsetting and last until one third of the billet length is extruded. A good qualitative correlation of experimental and numerical results could be found; however, major differences between the Lagrangian and ALE methods concerning thermo-mechanical coupling lead to deviations in the thermal results.

  9. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  10. Contour integral method for obtaining the self-energy matrices of electrodes in electron transport calculations

    NASA Astrophysics Data System (ADS)

    Iwase, Shigeru; Futamura, Yasunori; Imakura, Akira; Sakurai, Tetsuya; Tsukamoto, Shigeru; Ono, Tomoya

    2018-05-01

    We propose an efficient computational method for evaluating the self-energy matrices of electrodes to study ballistic electron transport properties in nanoscale systems. To reduce the high computational cost incurred in large systems, a contour integral eigensolver based on the Sakurai-Sugiura method combined with the shifted biconjugate gradient method is developed to solve an exponential-type eigenvalue problem for complex wave vectors. A remarkable feature of the proposed algorithm is that the numerical procedure is very similar to that of conventional band structure calculations. We implement the developed method in the framework of the real-space higher-order finite-difference scheme with nonlocal pseudopotentials. Numerical tests for a wide variety of materials validate the robustness, accuracy, and efficiency of the proposed method. As an illustration of the method, we present the electron transport property of the freestanding silicene with the line defect originating from the reversed buckled phases.

  11. Numerical and Experimental Validation of the Optimization Methodologies for a Wing-Tip Structure Equipped with Conventional and Morphing Ailerons =

    NASA Astrophysics Data System (ADS)

    Koreanschi, Andreea

    In order to answer the question of how to reduce the aerospace industry's environmental footprint, new morphing technologies were developed. These technologies were aimed at reducing the aircraft's fuel consumption through reduction of wing drag. The morphing concept used in the present research consists of replacing the conventional aluminium upper surface of the wing with a flexible composite skin for morphing abilities. For the ATR-42 'Morphing wing' project, the wing models were manufactured entirely from composite materials and the morphing region was optimized for flexibility. In this project, two rigid wing models and an active morphing wing model were designed, manufactured and wind tunnel tested. For the CRIAQ MDO 505 project, a full-scale wing-tip equipped with two types of ailerons, conventional and morphing, was designed, optimized, manufactured, bench tested and wind tunnel tested. The morphing concept was applied on a real wing internal structure and incorporated aerodynamic, structural and control constraints specific to a multidisciplinary approach. Numerical optimization, aerodynamic analysis and experimental validation were performed for both the CRIAQ MDO 505 full-scale wing-tip demonstrator and the ATR-42 reduced-scale wing models. In order to improve the aerodynamic performance of the ATR-42 and CRIAQ MDO 505 wing airfoils, three global optimization algorithms were developed, tested and compared: a genetic algorithm, an artificial bee colony algorithm and a gradient descent algorithm. The algorithms were coupled with the two-dimensional aerodynamic solver XFoil, which is known for its rapid convergence, robustness and use of the semi-empirical e^N method for determining the position of the flow transition from laminar to turbulent. Based on the performance comparison between the algorithms, the genetic algorithm was chosen for the optimization of the ATR-42 and CRIAQ MDO 505 wing airfoils. 
The optimization algorithm was improved during the CRIAQ MDO 505 project for convergence speed by introducing a two-step cross-over function. Structural constraints were introduced in the algorithm at each aero-structural optimization iteration, allowing better manipulation of the algorithm and giving it more morphing-combination capabilities. The CRIAQ MDO 505 project envisioned a morphing aileron concept for the morphing upper-surface wing. For this morphing aileron concept, two optimization methods were developed. Both methods used the already developed genetic algorithm, but each had a different design concept. The first method was based on the morphing upper-surface concept, using actuation points to achieve the desired shape. The second method was based on the hinge-rotation concept of the conventional aileron, but applied at multiple nodes along the aileron camber to achieve the desired shape. Both methods were constrained by manufacturing and aerodynamic requirements. The purpose of the morphing aileron methods was to obtain an aileron shape with a smoother pressure distribution gradient during deflection than the conventional aileron. The aerodynamic optimization results were used for the structural optimization and design of the wing, particularly the flexible composite skin. Due to the structural changes performed on the initial wing-tip structure, an aeroelastic behaviour analysis, focused specifically on the flutter phenomenon, was performed. The analyses were done to ensure the structural integrity of the wing-tip demonstrator during wind tunnel tests. Three wind tunnel tests were performed for the CRIAQ MDO 505 wing-tip demonstrator at the IAR-NRC subsonic wind tunnel facility in Ottawa. The first two tests were performed for the wing-tip equipped with the conventional aileron. 
The purpose of these tests was to validate the control system designed for the morphing upper surface, to validate the numerical optimization and aerodynamic analysis, and to evaluate the optimization efficiency on the boundary layer behaviour and the wing drag. The third set of wind tunnel tests was performed on the wing-tip equipped with a morphing aileron. The purpose of this test was to evaluate the performance of the morphing aileron, in conjunction with the active morphing upper surface, and their effect on the lift, drag and boundary layer behaviour. Transition data, obtained from infrared thermography, and pressure data, extracted from Kulite and pressure tap recordings, were used to validate the numerical optimization and aerodynamic performance of the wing-tip demonstrator. A set of wind tunnel tests was performed on the ATR-42 rigid wing models in the Price-Paidoussis subsonic wind tunnel at École de technologie supérieure. The results from the pressure tap recordings were used to validate the numerical optimization. A second-derivative-of-pressure-distribution method was applied to evaluate the transition region on the upper surface of the wing models for comparison with the numerical transition values. (Abstract shortened by ProQuest.)
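
The selection-crossover-mutation loop of the genetic algorithm chosen above can be sketched compactly. This is a minimal real-coded GA minimizing a stand-in objective, not the thesis implementation coupled to XFoil; the operators, population size and test function are assumptions for the example.

```python
import random

# Minimal real-coded genetic algorithm: truncation selection keeps the
# best half (elitism), arithmetic crossover blends two parents, and
# Gaussian mutation perturbs the child within the bounds.
def genetic_minimize(f, bounds, pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                  # arithmetic crossover
            child += rng.gauss(0.0, 0.02 * (hi - lo))  # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=f)

# Stand-in objective in place of an XFoil drag evaluation.
best = genetic_minimize(lambda x: (x - 1.3) ** 2, (-5.0, 5.0))
```

In the thesis the design variables are airfoil-shape parameters and each fitness evaluation is an XFoil run; the loop structure is the same.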

  12. A wavefront orientation method for precise numerical determination of tsunami travel time

    NASA Astrophysics Data System (ADS)

    Fine, I. V.; Thomson, R. E.

    2013-04-01

    We present a highly accurate and computationally efficient method (herein, the "wavefront orientation method") for determining the travel time of oceanic tsunamis. Based on Huygens' principle, the method uses an eight-point grid-point pattern and the most recent information on the orientation of the advancing wavefront to determine the time for a tsunami to travel to a specific oceanic location. The method is shown to provide improved accuracy and reduced anisotropy compared with the conventional multiple grid-point method presently in widespread use.
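
    The conventional baseline the method improves on is a shortest-path travel-time computation over an eight-neighbour grid stencil. The sketch below is that baseline (Dijkstra on a uniform-speed grid), shown to exhibit the directional anisotropy the abstract refers to; it is not the authors' wavefront-orientation scheme, and the grid and speed are assumptions.

    ```python
    import heapq
    import math

    # Conventional 8-neighbour travel time on a uniform grid: Dijkstra
    # with step costs |step| / speed. Exact along axes and diagonals,
    # but it overestimates travel time in intermediate directions
    # (the anisotropy the wavefront orientation method reduces).
    def travel_time(speed, src, nx, ny):
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, (i, j) = heapq.heappop(pq)
            if d > dist.get((i, j), math.inf):
                continue
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < nx and 0 <= nj < ny:
                        nd = d + math.hypot(di, dj) / speed
                        if nd < dist.get((ni, nj), math.inf):
                            dist[(ni, nj)] = nd
                            heapq.heappush(pq, (nd, (ni, nj)))
        return dist

    times = travel_time(1.0, (0, 0), 20, 20)   # unit speed, unit spacing
    ```

    For the point (5, 10) the eight-point path gives 5·√2 + 5 ≈ 12.07 against a true straight-line time of √125 ≈ 11.18, which is the kind of directional error the wavefront-orientation correction targets.
    
    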

  13. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  14. Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme

    NASA Astrophysics Data System (ADS)

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus; Stocks, G. Malcolm

    2018-03-01

    The Green function plays an essential role in the Korringa-Kohn-Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn-Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study relativistic effects.

  15. Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus

    The Green function plays an essential role in the Korringa–Kohn–Rostoker(KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn–Sham equation and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculation by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism Liu et al. (2016). Themore » pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. Here, by using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrated that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.« less

  16. Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme

    DOE PAGES

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus; ...

    2017-10-28

    The Green function plays an essential role in the Korringa-Kohn-Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn-Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behavior of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method that performs ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while being free of the charge-density pathology. As an application, this method is used to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study relativistic effects.
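    The pole-searching step described above can be sketched generically: scan the real energy axis for sign changes of a determinant-like function and refine each bracket by bisection. The scalar "Jost determinant" below is entirely hypothetical, a stand-in for the well-behaved Jost-matrix determinant, not the authors' implementation.

```python
import numpy as np

def find_poles(f, e_min, e_max, n_grid=2000, tol=1e-12):
    """Locate zeros of a real function on [e_min, e_max] by sign-change
    bracketing on a grid followed by bisection refinement."""
    e = np.linspace(e_min, e_max, n_grid)
    v = f(e)
    poles = []
    for i in np.where(np.sign(v[:-1]) * np.sign(v[1:]) < 0)[0]:
        lo, hi = e[i], e[i + 1]
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        poles.append(0.5 * (lo + hi))
    return poles

# Toy "Jost determinant" with known zeros at E = -0.5 and E = -0.1
jost = lambda e: (e + 0.5) * (e + 0.1)
poles = find_poles(jost, -1.0, 0.0)
```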

  17. Symplectic exponential Runge-Kutta methods for solving nonlinear Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Mei, Lijie; Wu, Xinyuan

    2017-06-01

    Symplecticity is an important structure-preservation property for exponential Runge-Kutta (ERK) methods whenever the underlying problem is a Hamiltonian system, even though ERK methods already offer higher accuracy and better efficiency than classical Runge-Kutta (RK) methods in dealing with stiff problems of the form y'(t) = My + f(y). On account of this observation, the main theme of this paper is to derive and analyze symplecticity conditions for ERK methods. Using the fundamental analysis of geometric integrators, we first establish one class of sufficient conditions for symplectic ERK methods. It is shown that these conditions reduce to the conventional ones as M → 0, which means that these symplecticity conditions extend the conventional ones in the literature. Furthermore, we present a new class of structure-preserving ERK methods possessing the remarkable property of symplecticity. Meanwhile, revised stiff order conditions are proposed and investigated in detail. Since symplectic ERK methods are implicit and iterative solutions are required in practice, we also investigate the convergence of the corresponding fixed-point iterative procedure. Finally, numerical experiments, including a nonlinear Schrödinger equation, a sine-Gordon equation, a nonlinear Klein-Gordon equation, and the well-known Fermi-Pasta-Ulam problem, are carried out in comparison with the corresponding symplectic RK methods, and the numerical results clearly support the theory and conclusions of this paper.
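    The fixed-point iteration for an implicit symplectic integrator that the record mentions can be illustrated with the implicit midpoint rule, the simplest symplectic implicit RK method. This is a classical-RK sketch, not the paper's exponential ERK scheme; the harmonic-oscillator test problem is invented for illustration.

```python
import numpy as np

def implicit_midpoint(f, y0, h, steps, iters=50, tol=1e-12):
    """Implicit midpoint rule. The stage equation k = f(y + (h/2) k)
    is solved by fixed-point iteration at every step."""
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for _ in range(steps):
        k = f(y)                      # initial guess for the stage value
        for _ in range(iters):
            k_new = f(y + 0.5 * h * k)
            if np.linalg.norm(k_new - k) < tol:
                k = k_new
                break
            k = k_new
        y = y + h * k
        out.append(y.copy())
    return np.array(out)

# Harmonic oscillator: H(q, p) = (p^2 + q^2)/2, state y = (q, p)
f = lambda y: np.array([y[1], -y[0]])
traj = implicit_midpoint(f, [1.0, 0.0], h=0.1, steps=1000)
energy = 0.5 * (traj[:, 0] ** 2 + traj[:, 1] ** 2)
drift = np.abs(energy - energy[0]).max()  # stays tiny for a symplectic method
```

The fixed-point iteration converges here because (h/2) times the Lipschitz constant of f is well below one, mirroring the convergence question the paper studies.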

  18. Implementation of an optimized microfluidic mixer in alumina employing femtosecond laser ablation

    NASA Astrophysics Data System (ADS)

    Juodėnas, M.; Tamulevičius, T.; Ulčinas, O.; Tamulevičius, S.

    2018-01-01

    Manipulation of liquids at the lowest levels of volume and dimension is at the forefront of materials science, chemistry and medicine, offering important time- and resource-saving applications. However, manipulation by mixing is troublesome at the microliter scale and below. One approach to overcome this problem is to use passive mixers, which exploit structural obstacles within microfluidic channels, or the geometry of the channels themselves, to enforce and enhance fluid mixing. Some applications require the manipulation and mixing of aggressive substances, which makes conventional microfluidic materials, along with their fabrication methods, inappropriate. In this work, the implementation of an optimized full-scale, three-port microfluidic mixer is presented in a slide of alumina, a material that is very hard to process but possesses extreme chemical and physical resistance. The viability of the selected femtosecond laser fabrication method as an alternative to conventional lithography methods, which are unable to process this material, is demonstrated. For the validation and optimization of the microfluidic mixer, finite element method (FEM) based numerical modeling of the influence of the mixer geometry on its mixing performance is performed. Experimental investigation of the laminar flow geometry demonstrated very good agreement with the numerical simulation results. Such a laser-ablation-microfabricated passive mixer structure is intended for use in a capillary force assisted nanoparticle assembly (CAPA) setup.

  19. A Locally Modal B-Spline Based Full-Vector Finite-Element Method with PML for Nonlinear and Lossy Plasmonic Waveguide

    NASA Astrophysics Data System (ADS)

    Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan

    2016-09-01

    In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, wave propagation in various linear, nonlinear, lossless, and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and with the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only computationally more accurate and efficient than the conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.
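    The B-spline basis underlying such a discretization can be evaluated with the standard Cox-de Boor recursion. The following sketch (illustrative only, unrelated to the authors' code; the knot vector is invented) checks the partition-of-unity property of a clamped cubic basis:

```python
import numpy as np

def bspline_basis(i, p, t, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline of
    degree p on the knot vector t."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + p] > t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
    right = 0.0
    if t[i + p + 1] > t[i + 1]:
        right = ((t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1])
                 * bspline_basis(i + 1, p - 1, t, x))
    return left + right

# Cubic basis on an open (clamped) knot vector over [0, 1]
p = 3
t = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], dtype=float)
n = len(t) - p - 1            # number of basis functions (7 here)
x = 0.4
vals = [bspline_basis(i, p, t, x) for i in range(n)]
total = sum(vals)             # partition of unity: sums to 1 inside [0, 1)
```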

  20. Recent developments of nano-structured materials as the catalysts for oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Kang, SungYeon; Kim, HuiJung; Chung, Yong-Ho

    2018-04-01

    The development of highly efficient electrocatalyst materials has been a significant research topic for decades. Recent global interest in energy conversion and storage has expanded efforts to find cost-effective catalysts that can substitute for conventional catalytic materials. In the field of fuel cells in particular, novel materials for the oxygen reduction reaction (ORR) have attracted attention as a way to overcome the disadvantages of conventional platinum-based catalysts. Various approaches have been attempted to achieve low cost and electrochemical activity comparable with Pt-based catalysts, including reducing Pt consumption through the formation of hybrid materials, Pt-based alloys, and non-Pt metal or carbon-based materials. To enhance catalytic performance and stability, numerous methods such as structural modification and complex formation with other functional materials have been proposed; they are fundamentally based on well-defined, well-ordered catalytic active sites obtained by exquisite control at the nanoscale. In this review, we highlight the development of nano-structured catalytic materials for ORR based on recent findings, and discuss an outlook for future research directions.

  1. TU-CD-207-05: A Novel Digital Tomosynthesis System Using Orthogonal Scanning Technique: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Park, C; Kauweloa, K

    2015-06-15

    Purpose: As an alternative to full tomographic imaging techniques such as cone-beam computed tomography (CBCT), there is growing interest in adopting digital tomosynthesis (DTS) for diagnostic as well as therapeutic applications. The aim of this study is to propose a new DTS system using a novel orthogonal scanning technique, which can provide DTS images of superior quality compared to the conventional DTS scanning system. Methods: Unlike the conventional DTS scanning system, the proposed DTS is reconstructed from two sets of orthogonal patient scans: 1) X-ray projections acquired along a transverse trajectory, and 2) an additional set of X-ray projections acquired along the vertical direction at the mid-angle of the transverse scan. To reconstruct DTS, we used a modified filtered backprojection technique to account for the different scanning directions of each projection set. We evaluated the performance of our method using numerical planning CT data of a liver cancer patient and a physical pelvis phantom experiment. The results were compared with conventional DTS techniques with single transverse and vertical scanning. Results: Both the numerical simulation and the physical experiment showed that the resolution as well as the contrast of anatomical structures was much clearer using our method. Specifically, compared with transversely scanned DTS, the edge and contrast of anatomical structures along the left-right (LR) direction was comparable; however, considerable differences and enhancement could be observed along the superior-inferior (SI) direction using our method. The opposite was observed in comparison with vertically scanned DTS. Conclusion: In this study, we propose a novel DTS system using an orthogonal scanning technique. The results indicate that the image quality of our novel DTS system is superior to that of the conventional DTS system. This makes our DTS system potentially useful in various on-line clinical applications.

  2. Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbabi, Amir; Horie, Yu; Ball, Alexander J.

    2015-05-07

    Flat optical devices thinner than a wavelength promise to replace conventional free-space components for wavefront and polarization control. Transmissive flat lenses are particularly interesting for applications in imaging and on-chip optoelectronic integration. Several designs based on plasmonic metasurfaces, high-contrast transmitarrays and gratings have recently been implemented but have not provided performance comparable to conventional curved lenses. Here we report polarization-insensitive, micron-thick, high-contrast transmitarray micro-lenses with focal spots as small as 0.57 λ. The measured focusing efficiency is up to 82%. A rigorous method for ultrathin lens design, and the trade-off between high efficiency and small spot size (or large numerical aperture), are discussed. The micro-lenses, composed of silicon nano-posts on glass, are fabricated in one lithographic step that could be performed with high-throughput photo or nanoimprint lithography, thus enabling widespread adoption.

  3. Analytical, Numerical, and Experimental Investigation on a Non-Contact Method for the Measurements of Creep Properties of Ultra-High-Temperature Materials

    NASA Technical Reports Server (NTRS)

    Lee, Jonghyun; Hyers, Robert W.; Rogers, Jan R.; Rathz, Thomas J.; Choo, Hahn; Liaw, Peter

    2006-01-01

    Responsive access to space requires re-use of components, such as rocket nozzles, that operate at extremely high temperatures. Such applications require new ultra-high-temperature materials that can operate above 2,000 °C. At temperatures above fifty percent of the melting temperature, characterization of creep properties is indispensable. Since conventional methods for measuring creep are limited to below 1,700 °C, a technique that can be applied at higher temperatures is strongly needed. This research develops a non-contact method for measuring creep at temperatures above 2,300 °C. Using the electrostatic levitator at NASA MSFC, a spherical sample was rotated to induce creep deformation through centrifugal acceleration. The deforming sample was captured with a digital camera and analyzed to measure the creep deformation. Numerical and analytical analyses were also conducted for comparison with the experimental results. Analytical, numerical, and experimental results showed good agreement with one another.

  4. A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface

    NASA Astrophysics Data System (ADS)

    Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo

    2016-09-01

    The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and the enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied at the curved free surface to remove the staircasing error; meanwhile, to achieve the same stability as the FDTD method without reducing the time step, the ECT preserves solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. The method is verified by several 3-D numerical examples. Results show that it is stable at the Courant stability limit for a regular FDTD grid, and has much higher accuracy than the conventional FDTD method.
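    The conventional staggered-grid velocity-stress FDTD scheme that this record improves on can be sketched in one dimension (the enlarged cell technique and the curved free surface are beyond this illustration; all material values below are assumed):

```python
import numpy as np

# Minimal 1-D staggered-grid velocity-stress FDTD update for elastic waves.
nx, nt = 400, 300
dx = 1.0                      # grid spacing, m
rho, mu = 1000.0, 4.0e9       # density and shear modulus (assumed values)
c = np.sqrt(mu / rho)         # shear-wave speed
dt = 0.5 * dx / c             # half the 1-D Courant limit -> stable

v = np.zeros(nx)              # particle velocity at integer nodes
s = np.zeros(nx - 1)          # stress at half nodes (staggered)
v[nx // 2] = 1.0              # impulsive initial condition

for _ in range(nt):
    s += dt * mu * np.diff(v) / dx            # stress (leapfrog) update
    v[1:-1] += dt * np.diff(s) / (rho * dx)   # velocity update

# The impulse splits into two outward-travelling pulses; the scheme
# remains bounded because dt satisfies the Courant condition.
```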

  5. Development of Extended Ray-tracing method including diffraction, polarization and wave decay effects

    NASA Astrophysics Data System (ADS)

    Yanagihara, Kota; Kubo, Shin; Dodin, Ilya; Nakamura, Hiroaki; Tsujimura, Toru

    2017-10-01

    Geometrical-optics ray tracing is a reasonable numerical approach for describing the electron cyclotron resonance wave (ECW) in slowly varying, spatially inhomogeneous plasma. The results of this conventional method are adequate in most cases. However, in helical fusion plasmas with complicated magnetic structure, strong magnetic shear combined with a large density scale length can cause mode coupling of waves outside the last closed flux surface, and the complicated absorption structure requires a strongly focused wave for ECH. Since the conventional ray equations for the ECW contain no terms describing diffraction, polarization, or wave decay, we cannot accurately describe mode coupling, strongly focused waves, the behavior of waves in an inhomogeneous absorption region, and so on. As a fundamental solution to these problems, we consider an extension of the ray-tracing method. The specific procedure is planned as follows: first, calculate the reference ray by the conventional method and define a local ray-based coordinate system along the reference ray; then calculate the evolution of the amplitude and phase distributions on the ray-based coordinates step by step. The progress of our extended method will be presented.
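    Conventional geometrical-optics ray tracing of the kind being extended here amounts to integrating Hamilton's equations for a dispersion function. A minimal sketch for an isotropic medium with the invented Hamiltonian H(x, k) = |k|^2 - n(x)^2, using forward Euler for brevity:

```python
import numpy as np

def trace_ray(x0, k0, grad_n2, steps=1000, ds=1e-3):
    """Integrate the ray equations for H(x, k) = |k|^2 - n(x)^2:
        dx/ds = dH/dk = 2 k,    dk/ds = -dH/dx = grad(n^2)(x)
    with simple forward-Euler steps (illustrative only)."""
    x, k = np.array(x0, float), np.array(k0, float)
    for _ in range(steps):
        x = x + ds * 2.0 * k
        k = k + ds * grad_n2(x)
    return x, k

# Homogeneous medium: grad(n^2) = 0, so the ray must be a straight line.
x_end, k_end = trace_ray([0.0, 0.0], [1.0, 0.0], lambda x: np.zeros(2))
# x advances by 2 * |k| * ds * steps = 2.0 along the initial direction
```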

  6. Exploring a potential energy surface by machine learning for characterizing atomic transport

    NASA Astrophysics Data System (ADS)

    Kanamori, Kenta; Toyoura, Kazuaki; Honda, Junya; Hattori, Kazuki; Seko, Atsuto; Karasuyama, Masayuki; Shitara, Kazuki; Shiga, Motoki; Kuwabara, Akihide; Takeuchi, Ichiro

    2018-03-01

    We propose a machine-learning method for evaluating the potential barrier governing atomic transport, based on the preferential selection of dominant points for atomic transport. The proposed method generates numerous random samples of the entire potential energy surface (PES) from a probabilistic Gaussian process model of the PES, which enables defining the likelihood of the dominant points. The robustness and efficiency of the method are demonstrated on a dozen model cases of proton diffusion in oxides, in comparison with a conventional nudged elastic band method.
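    The core idea, drawing random samples of the whole PES from a Gaussian-process posterior and reading barrier statistics from them, can be sketched in one dimension. The kernel, observation points, and energies below are all invented for illustration; this is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.3, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical 1-D PES observed at a few points (barrier near x = 0.5)
x_obs = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
e_obs = np.array([0.0, 0.3, 0.9, 0.3, 0.0])   # energies in eV (invented)
x_new = np.linspace(0, 1, 101)

# GP posterior conditioned on the observations (noise-free, jittered)
K = rbf(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))
Ks = rbf(x_new, x_obs)
mean = Ks @ np.linalg.solve(K, e_obs)
cov = rbf(x_new, x_new) - Ks @ np.linalg.solve(K, Ks.T)

# Random PES samples from the posterior; each yields a barrier estimate,
# so the barrier itself gets a probability distribution.
samples = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(x_new)),
                                  size=200)
barriers = samples.max(axis=1) - samples[:, 0]
```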

  7. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, calibration based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this calibration framework to problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
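    The contrast between ordinary-least-squares calibration and an L2-type calibration can be sketched as follows. The model, the discrepancy, and the smoother are all invented; the actual L2 calibration uses a proper nonparametric regression estimate rather than this crude moving average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Imperfect computer model f(x, theta) of a hypothetical physical process p(x)
f = lambda x, theta: theta * x                        # model: line through 0
p = lambda x: 1.3 * x + 0.2 * np.sin(2 * np.pi * x)   # "truth" (discrepancy)

x = np.linspace(0, 1, 200)
y = p(x) + 0.05 * rng.standard_normal(x.size)         # noisy observations

# OLS calibration: fit theta directly to the noisy data
theta_ols = (x @ y) / (x @ x)

# L2-style calibration: estimate the true process nonparametrically first,
# then minimize the L2 distance between that estimate and the model
p_hat = np.convolve(y, np.ones(9) / 9, mode='same')
thetas = np.linspace(0.5, 2.0, 301)
l2 = [np.mean((p_hat - f(x, t)) ** 2) for t in thetas]
theta_l2 = thetas[int(np.argmin(l2))]
```

Both estimators target the same L2 projection of the truth onto the model class (about 1.2 here); the paper's point is about their statistical efficiency, which this sketch does not demonstrate.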

  8. Investigation into photostability of soybean oils by thermal lens spectroscopy

    NASA Astrophysics Data System (ADS)

    Savi, E. L.; Malacarne, L. C.; Baesso, M. L.; Pintro, P. T. M.; Croge, C.; Shen, J.; Astrath, N. G. C.

    2015-06-01

    Assessment of photochemical stability is essential for evaluating the quality and shelf life of vegetable oils, which are very important aspects for marketing and human health. Most conventional methods used to investigate oxidative stability require lengthy experimental procedures with high consumption of chemical inputs for the preparation or extraction of sample compounds. In this work, we propose a time-resolved thermal lens method to analyze the photostability of edible oils through quantitative measurement of the photoreaction cross-section. An all-numerical routine is employed to solve a complex theoretical problem involving photochemical reaction, the thermal lens effect, and mass diffusion during local laser excitation. The photostability of pure oil and of oils with natural and synthetic antioxidants is investigated. The thermal lens results are compared with those obtained by conventional methods, and a complete set of physical properties of the samples is presented.

  9. Proper Generalized Decomposition (PGD) for the numerical simulation of polycrystalline aggregates under cyclic loading

    NASA Astrophysics Data System (ADS)

    Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck

    2018-02-01

    The numerical modelling of the behaviour of materials at the microstructural scale has developed greatly over the last two decades. Unfortunately, conventional solution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles, and they lose quantitative accuracy because of the plastic behaviour. This work presents the development of a numerical solver for finite element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts: the first consists in maintaining a constant stiffness matrix; the second uses a space-time model reduction method. In order to analyse the applicability and performance of a space-time separated representation, simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two time-increment counts per cycle are investigated. The results show significant CPU time savings while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes faster than the standard solver.
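    The first ingredient, keeping the stiffness matrix constant, is the classical modified-Newton idea: factorize the initial stiffness once and reuse it in every iteration instead of reassembling the tangent. A one-degree-of-freedom sketch with an invented nonlinear internal force:

```python
# Modified-Newton iteration with a constant stiffness "matrix".
# Hypothetical 1-DOF material: internal force F(u) = k*u + a*u**3.
k, a, f_ext = 10.0, 2.0, 5.0
F = lambda u: k * u + a * u ** 3

K0 = k                       # constant (initial) stiffness, "factorized" once
u = 0.0
for _ in range(200):
    r = f_ext - F(u)         # residual force
    u = u + r / K0           # reuse the same factorization every iteration
    if abs(r) < 1e-12:
        break
# Converges (more slowly than full Newton) to F(u) = f_ext.
```

The trade-off matches the abstract's motivation: each iteration is cheap because no new factorization is needed, at the price of more iterations than a full Newton scheme.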

  10. Lead-lag cross-sectional structure and detection of correlated anticorrelated regime shifts: Application to the volatilities of inflation and economic growth rates

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    2007-07-01

    We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists in searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. This generalization, which tracks changes of correlation sign, is able to identify transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.
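    The optimal-path construction at the heart of the TOP method can be sketched with plain dynamic programming over a distance matrix; the sign parameter below implements the generalized distance that distinguishes correlated from anticorrelated regimes. This is a deterministic sketch: the actual method performs a thermal (noise) averaging over paths that is omitted here.

```python
import numpy as np

def top_path_cost(x, y, sign=+1):
    """Minimum cost of a monotone lattice path through the
    sign-generalized distance matrix d[i, j] = |x_i - sign * y_j|."""
    n, m = len(x), len(y)
    d = np.abs(x[:, None] - sign * y[None, :])
    cost = np.full((n, m), np.inf)
    cost[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            prev = min(cost[i - 1, j] if i else np.inf,
                       cost[i, j - 1] if j else np.inf,
                       cost[i - 1, j - 1] if i and j else np.inf)
            cost[i, j] = d[i, j] + prev
    return cost[-1, -1]

t = np.linspace(0, 2 * np.pi, 50)
x = np.sin(t)
# For an anticorrelated partner, the sign = -1 cost collapses to ~0 while
# the sign = +1 cost stays large, flagging the anticorrelated regime.
c_pos = top_path_cost(x, -x, +1)
c_neg = top_path_cost(x, -x, -1)
```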

  11. Wave propagation simulation in the upper core of sodium-cooled fast reactors using a spectral-element method for heterogeneous media

    NASA Astrophysics Data System (ADS)

    Nagaso, Masaru; Komatitsch, Dimitri; Moysan, Joseph; Lhuillier, Christian

    2018-01-01

    The ASTRID project, a French fourth-generation sodium-cooled nuclear reactor, is currently under development by the French Alternative Energies and Atomic Energy Commission (CEA). In this project, the development of techniques for monitoring the reactor during operation has been identified as a major issue for improving plant safety. Because liquid sodium is opaque, ultrasonic measurement techniques (e.g. thermometry, visualization of internal objects) are regarded as powerful inspection tools for sodium-cooled fast reactors (SFRs), including ASTRID. Inside the sodium cooling circuit, the medium becomes heterogeneous because of the complex flow state, especially during operation, and the effect of this heterogeneity on acoustic propagation is not negligible. Verification experiments are therefore necessary for the development of component technologies, but such experiments with liquid sodium tend to be relatively large-scale. This is why numerical simulation methods are essential to precede real experiments or to complement the limited number of experimental results. Although various numerical methods have been applied to wave propagation in liquid sodium, a method verified for three-dimensional heterogeneity is still lacking. Moreover, because a reactor core is a complex coupled acousto-elastic region, it has also been difficult to simulate such problems with conventional methods. The objective of this study is to address these two points by applying a three-dimensional spectral-element method. In this paper, our initial results of three-dimensional simulation on a heterogeneous medium (the first point) are shown. To represent the heterogeneity of the liquid sodium, a four-dimensional temperature field (three spatial dimensions and one temporal) computed by computational fluid dynamics (CFD) with large-eddy simulation was used instead of the conventional approach (a Gaussian random field). This three-dimensional numerical experiment shows that we could verify the effects of the heterogeneity of the propagation medium on waves in liquid sodium.

  12. A free energy-based surface tension force model for simulation of multiphase flows by level-set method

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.

    2017-09-01

    In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulation of multiphase flows by the level-set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method together with the free energy principle, the FESF model offers an explicit, analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level-set method. On the one hand, compared to the conventional continuum surface force (CSF) model in the level-set method, the FESF model introduces no regularized delta function, so it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, compared to the phase-field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model outperforms the CSF and PFSF models in terms of accuracy, stability, convergence speed and mass conservation. Numerical tests also show that the FESF model can effectively simulate problems with high density/viscosity ratios, high Reynolds numbers and severe topological interfacial changes.

  13. The space-time solution element method: A new numerical approach for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Chang, Sin-Chung

    1995-01-01

    This paper is one of a series of papers describing the development of a new numerical method for the Navier-Stokes equations. Unlike conventional numerical methods, the current method concentrates on the discrete simulation of both the integral and differential forms of the Navier-Stokes equations. Conservation of mass, momentum, and energy in space-time is explicitly provided for through a rigorous enforcement of both the integral and differential forms of the governing conservation laws. Using local polynomial expansions to represent the discrete primitive variables on each cell, fluxes at cell interfaces are evaluated and balanced using exact functional expressions. No interpolation or flux limiters are required. Because of the generality of the current method, it applies equally to the steady and unsteady Navier-Stokes equations. In this paper, we generalize and extend the authors' 2-D, steady state implicit scheme. A general closure methodology is presented so that all terms up through a given order in the local expansions may be retained. The scheme is also extended to nonorthogonal Cartesian grids. Numerous flow fields are computed and results are compared with known solutions. The high accuracy of the scheme is demonstrated through its ability to accurately resolve developing boundary layers on coarse grids. Finally, we discuss applications of the current method to the unsteady Navier-Stokes equations.

  14. High throughput screening of active pharmaceutical ingredients by UPLC.

    PubMed

    Al-Sayah, Mohammad A; Rizos, Panagiota; Antonucci, Vincent; Wu, Naijun

    2008-07-01

    Ultra performance LC (UPLC) was evaluated as an efficient screening approach to facilitate method development for drug candidates. Three stationary phases were screened: C-18, phenyl, and Shield RP 18 with column dimensions of 150 mm x 2.1 mm, 1.7 microm, which should theoretically generate 35,000 plates or 175% of the typical column plate count of a conventional 250 mm x 4.6 mm, 5 microm particle column. Thirteen different active pharmaceutical ingredients (APIs) were screened using this column set with a standardized mobile-phase gradient. The UPLC method selectivity results were compared to those obtained for these compounds via methods developed through laborious trial and error screening experiments using numerous conventional HPLC mobile and stationary phases. Peak capacity was compared for columns packed with 5 microm particles and columns packed with 1.7 microm particles. The impurities screened by UPLC were confirmed by LC/MS. The results demonstrate that simple, high efficiency UPLC gradients are a feasible and productive alternative to more conventional multiparametric chromatographic screening approaches for many compounds in the early stages of drug development.
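    The quoted plate-count ratio can be checked with the textbook rule of thumb N ≈ L / (2 dp), i.e. a minimum plate height of roughly two particle diameters. This is an idealized estimate, not a figure from the paper:

```python
# Rule-of-thumb plate count N ~ L / (2 * dp), i.e. plate height H ~ 2 * dp
# for a well-packed column (a textbook best-case estimate).
def plates(length_mm, dp_um):
    return (length_mm * 1000.0) / (2.0 * dp_um)

n_uplc = plates(150, 1.7)   # 150 mm x 1.7 um UPLC column
n_hplc = plates(250, 5.0)   # 250 mm x 5 um conventional column
ratio = n_uplc / n_hplc     # ~1.76, consistent with the quoted "175%"
```

The absolute numbers from this rule are optimistic compared with the 35,000 plates quoted in the abstract, but the length-to-particle-size ratio reproduces the 175% comparison.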

  15. Reinforcing mechanism of anchors in slopes: a numerical comparison of results of LEM and FEM

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Ugai, Keizo

    2003-06-01

    This paper reports the limitation of the conventional Bishop's simplified method to calculate the safety factor of slopes stabilized with anchors, and proposes a new approach to considering the reinforcing effect of anchors on the safety factor. The reinforcing effect of anchors can be explained using an additional shearing resistance on the slip surface. A three-dimensional shear strength reduction finite element method (SSRFEM), where soil-anchor interactions were simulated by three-dimensional zero-thickness elasto-plastic interface elements, was used to calculate the safety factor of slopes stabilized with anchors to verify the reinforcing mechanism of anchors. The results of SSRFEM were compared with those of the conventional and proposed approaches for Bishop's simplified method for various orientations, positions, and spacings of anchors, and shear strengths of soil-grouted body interfaces. For the safety factor, the proposed approach compared better with SSRFEM than the conventional approach. The additional shearing resistance can explain the influence of the orientation, position, and spacing of anchors, and the shear strength of soil-grouted body interfaces on the safety factor of slopes stabilized with anchors.
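    Bishop's simplified method, and the idea of adding the anchors' shearing resistance on the slip surface, can be sketched as a fixed-point iteration for the safety factor. The slice geometry, soil parameters, and anchor force below are all invented for illustration; this is not the paper's model.

```python
import numpy as np

# Bishop's simplified method for a circular slip surface, with an optional
# additional shearing resistance T from anchors added on the slip surface.
phi = np.radians(25.0)          # soil friction angle
c, gamma = 10.0, 18.0           # cohesion (kPa) and unit weight (kN/m^3)
b = 2.0                         # slice width (m)
alphas = np.radians(np.linspace(-10, 50, 10))   # slice base inclinations
heights = 6.0 * np.cos(np.linspace(-1, 1, 10))  # slice heights (m), invented
W = gamma * b * heights                          # slice weights

def bishop_fs(T_anchor=0.0, iters=50):
    fs = 1.0                    # initial guess; FS appears on both sides
    for _ in range(iters):
        m = np.cos(alphas) * (1 + np.tan(alphas) * np.tan(phi) / fs)
        resisting = np.sum((c * b + W * np.tan(phi)) / m) + T_anchor
        fs = resisting / np.sum(W * np.sin(alphas))
    return fs

fs_plain = bishop_fs()
fs_anchored = bishop_fs(T_anchor=100.0)  # anchors raise the safety factor
```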

  16. Computational methods for the identification of spatially varying stiffness and damping in beams

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rosen, I. G.

    1986-01-01

    A numerical approximation scheme for the estimation of functional parameters in Euler-Bernoulli models for the transverse vibration of flexible beams with tip bodies is developed. The method permits the identification of spatially varying flexural stiffness and Voigt-Kelvin viscoelastic damping coefficients which appear in the hybrid system of ordinary and partial differential equations and boundary conditions describing the dynamics of such structures. An inverse problem is formulated as a least squares fit to data subject to constraints in the form of a vector system of abstract first order evolution equations. Spline-based finite element approximations are used to finite dimensionalize the problem. Theoretical convergence results are given and numerical studies carried out on both conventional (serial) and vector computers are discussed.

  17. Improvement to the Convergence-Confinement Method: Inclusion of Support Installation Proximity and Stiffness

    NASA Astrophysics Data System (ADS)

    Oke, Jeffrey; Vlachopoulos, Nicholas; Diederichs, Mark

    2018-05-01

    The convergence-confinement method (CCM) is a method introduced in tunnel construction that considers the ground response to the advancing tunnel face and the interaction with the installed support. One limitation of the CCM stems from the numerically or empirically derived nature of the longitudinal displacement profile and from the incomplete consideration of the longitudinal arching effect that occurs during tunnelling operations as part of the face effect. In this paper, the authors address the issues that arise when the CCM is used in squeezing ground conditions at depth. Based on numerical analysis, the authors propose a methodology and solution that improve the CCM, allowing more accurate results in squeezing ground conditions for three different excavation cases involving various excavation-support increments and distances from the face to the supported front. The tunnelling methods considered include tunnel boring machine, mechanical (conventional), and drill and blast.
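    The core of the CCM is intersecting a ground reaction curve with a support confinement line that activates at the installation point behind the face. A minimal sketch with invented numbers (a linear elastic ground reaction curve; real curves are nonlinear, especially in squeezing ground):

```python
import numpy as np

# Ground reaction curve: support pressure required vs. wall convergence.
p0, u_free = 1.0, 30.0                 # in-situ stress (MPa), free convergence (mm)
ground = lambda u: np.maximum(p0 * (1 - u / u_free), 0.0)

# Support confinement curve: activates once support is installed at u_inst.
k_s, u_inst = 0.2, 10.0                # support stiffness (MPa/mm), install at 10 mm
support = lambda u: np.where(u > u_inst, k_s * (u - u_inst), 0.0)

# Equilibrium: intersection of the two curves.
u = np.linspace(0, u_free, 100001)
i_eq = int(np.argmin(np.abs(ground(u) - support(u))))
u_eq, p_eq = u[i_eq], float(ground(u)[i_eq])
# Installing the support closer to the face (smaller u_inst) shifts the
# equilibrium to less convergence but a higher support load.
```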

  18. Implicit methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Yoon, S.; Kwak, D.

    1990-01-01

    Numerical solutions of the Navier-Stokes equations using explicit schemes can be obtained at the expense of efficiency. Conventional implicit methods which often achieve fast convergence rates suffer high cost per iteration. A new implicit scheme based on lower-upper factorization and symmetric Gauss-Seidel relaxation offers very low cost per iteration as well as fast convergence. High efficiency is achieved by accomplishing the complete vectorizability of the algorithm on oblique planes of sweep in three dimensions.
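
The symmetric Gauss-Seidel relaxation at the core of such lower-upper factored schemes can be illustrated on a small dense linear system; this is a generic sketch of the smoother, not the paper's LU-SGS flow solver:

```python
import numpy as np

def sgs_solve(A, b, sweeps=50):
    """Symmetric Gauss-Seidel relaxation: one forward sweep followed by one
    backward sweep per iteration (dense, illustrative version)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in list(range(n)) + list(reversed(range(n))):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])   # diagonally dominant test matrix
b = np.array([2.0, 4.0, 10.0])
x = sgs_solve(A, b)
```

Each sweep costs only triangular solves, which is why the per-iteration cost of such schemes stays low.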

  19. Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.

    Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.

  20. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program together with the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that the calculations can be carried out outside existing finite element codes, using postprocessing data only; that is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the numerical uncertainty associated with the choice of a finite difference perturbation.
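
For the compliance functional, the postprocessing-only idea can be sketched on a tiny model: the sensitivity comes from the already-solved displacement field, with no re-analysis. The two-spring stiffness matrix and the design variable a scaling it are invented for illustration:

```python
import numpy as np

K0 = np.array([[300.0, -200.0],      # stiffness for unit design variable a = 1
               [-200.0, 200.0]])     # (two springs in series, hypothetical)
f = np.array([0.0, 1.0])             # applied load

def compliance(a):
    return f @ np.linalg.solve(a * K0, f)   # C(a) = f . u(a),  K(a) u = f

a = 1.5
u = np.linalg.solve(a * K0, f)       # the one analysis the FE code performs

# Adjoint (here self-adjoint) sensitivity: dC/da = -u^T (dK/da) u, with
# dK/da = K0 -- computed purely from postprocessing the displacement field.
dC_adjoint = -u @ K0 @ u

# Finite-difference check (the perturbation route the abstract warns about)
eps = 1e-6
dC_fd = (compliance(a + eps) - compliance(a - eps)) / (2.0 * eps)
```

The adjoint value is exact for this linear model, whereas the finite-difference value depends on the perturbation size eps.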

  1. System Simulation by Recursive Feedback: Coupling A Set of Stand-Alone Subsystem Simulations

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.; Hanson, John M. (Technical Monitor)

    2002-01-01

    Recursive feedback is defined and discussed as a framework for development of specific algorithms and procedures that propagate the time-domain solution for a dynamical system simulation consisting of multiple numerically coupled self-contained stand-alone subsystem simulations. A satellite motion example containing three subsystems (orbital dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Centralized and distributed versions of coupling structure have been addressed. Numerical results are evaluated by direct comparison with a standard total-system simultaneous-solution approach.
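
A minimal sketch of recursive feedback between two stand-alone subsystem updates, checked against the simultaneous (monolithic) solution of the same coupled update; the linear two-state model and step size are assumptions for illustration:

```python
import numpy as np

def coupled_step(x, y, dt, iters=20):
    """One time step via recursive feedback: each stand-alone subsystem is
    advanced using the other's latest output, and the exchange is repeated
    until the iterates settle (hypothetical coupled model x' = -y, y' = x)."""
    x_new, y_new = x, y
    for _ in range(iters):
        # subsystem 1 uses subsystem 2's latest output, and vice versa
        x_new, y_new = x + dt * (-y_new), y + dt * x_new
    return x_new, y_new

dt, x0, y0 = 0.01, 1.0, 0.0
xr, yr = coupled_step(x0, y0, dt)

# Reference: simultaneous (total-system) solution of the same implicit update
M = np.array([[1.0, dt], [-dt, 1.0]])
xs, ys = np.linalg.solve(M, np.array([x0, y0]))
```

The fixed point of the feedback iteration is exactly the simultaneous solution, which is the comparison the abstract describes.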

  2. A new analytical method for characterizing nonlinear visual processes with stimuli of arbitrary distribution: Theory and applications.

    PubMed

    Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya

    2017-06-01

    Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its computational cost becomes very high when estimating the nonlinear parameters of a large-scale system from large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system that neither relies on iterative calculations nor requires any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
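
The zero-mean Gaussian requirement of conventional reverse correlation can be illustrated with a toy spike-triggered average; the linear filter, nonlinearity, and data sizes below are all made-up:

```python
import numpy as np

rng = np.random.default_rng(0)
true_filter = np.array([0.2, -0.5, 1.0, -0.5, 0.2])   # hypothetical linear stage
n, d = 20000, len(true_filter)

stim = rng.normal(size=(n, d))          # zero-mean Gaussian stimuli (the key requirement)
rate = 1.0 / (1.0 + np.exp(-(stim @ true_filter)))    # static nonlinearity
spikes = rng.random(n) < rate           # Bernoulli spike generation

# Spike-triggered average: the mean stimulus that preceded a spike.
# For Gaussian stimuli it recovers the filter direction (up to scale).
sta = stim[spikes].mean(axis=0)
cosine = sta @ true_filter / (np.linalg.norm(sta) * np.linalg.norm(true_filter))
```

With non-Gaussian or nonzero-mean stimuli this estimate becomes biased, which is the limitation the paper's method removes.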

  3. Thermal lattice BGK models for fluid dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Jian

    1998-11-01

    As an alternative in modeling fluid dynamics, the Lattice Boltzmann method has attracted considerable attention. In this thesis, we present a general form of thermal Lattice BGK that can handle large differences in density and temperature, and high Mach numbers. This generalized method can easily model gases with different adiabatic index values. The numerical transport coefficients of this model are estimated both theoretically and numerically. Their dependence on the sizes of the integration steps in time and space, and on the flow velocity and temperature, is studied and compared with other established CFD methods. This study shows that the numerical viscosity of the Lattice Boltzmann method depends linearly on the space interval, and on the flow velocity as well for supersonic flow, which indicates the method's limitation in modeling high-Reynolds-number compressible thermal flow. On the other hand, the Lattice Boltzmann method shows promise in modeling micro-flows, i.e., gas flows in micron-sized devices. A two-dimensional code has been developed based on the conventional thermal lattice BGK model, with some modifications and extensions for micro-flows and wall-fluid interactions. Pressure-driven micro-channel flow has been simulated. Results are compared with experiments and with simulations using other methods, such as a spectral element Navier-Stokes code with slip boundary conditions and a Direct Simulation Monte Carlo (DSMC) method.
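
For the standard athermal D2Q9 lattice BGK model (a simpler relative of the thermal model in this thesis), the link between the relaxation time and the physical viscosity is explicit, which is what makes transport-coefficient analyses like the one above possible:

```python
def bgk_relaxation_time(nu, dx=1.0, dt=1.0):
    """Relaxation time tau for a target kinematic viscosity nu in a standard
    athermal D2Q9 lattice BGK model: nu = cs^2 * (tau - 1/2) * dt, with
    lattice sound speed cs = dx / (sqrt(3) * dt).  Illustrative only; the
    thesis's thermal model has its own transport coefficients."""
    cs2 = (dx / dt) ** 2 / 3.0
    return nu / (cs2 * dt) + 0.5

tau = bgk_relaxation_time(nu=1.0e-3)   # lattice units
```

The tau - 1/2 offset is the discrete-lattice correction; tau -> 1/2 gives vanishing viscosity, which is why low-viscosity (high Reynolds number) runs become marginally stable.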

  4. Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases

    PubMed Central

    Dinu, Valentin; Nadkarni, Prakash

    2007-01-01

    Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467

  5. Co-citation Network Analysis of Religious Texts

    NASA Astrophysics Data System (ADS)

    Murai, Hajime; Tokosumi, Akifumi

    This paper introduces a method for representing the thoughts of individual authors of dogmatic texts numerically and objectively as a network, by means of co-citation analysis, together with a method for distinguishing the thoughts of different authors through clustering and analysis of the resulting clustered elements. Using these methods, this paper creates and analyzes co-citation networks for five authoritative Christian theologians through history (Augustine, Thomas Aquinas, Jean Calvin, Karl Barth, John Paul II). These analyses were able to extract the core element of Christian thought (Jn 1:14, Ph 2:6, Ph 2:7, Ph 2:8, Ga 4:4), as well as distinctions between the individual theologians in terms of their sect (Catholic or Protestant) and era (thinking about the importance of God's creation and the necessity of spreading the Gospel). By supplementing conventional literary methods in areas such as philosophy and theology with these numerical and objective methods, it should be possible to compare the characteristics of various doctrines. The ability to represent the characteristics of various thoughts numerically and objectively opens up the possibility of utilizing new information technology, such as web ontologies and artificial intelligence, to process information about ideological thought in the future.
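
Co-citation counting itself is straightforward to sketch: two passages are co-cited whenever the same citing document references both. The toy corpus below is hypothetical:

```python
from collections import Counter
from itertools import combinations

def cocitation_network(citing_docs):
    """Count how often each pair of passages is cited together across a
    corpus of citing documents (these counts are the edge weights of a
    co-citation network)."""
    pairs = Counter()
    for cited in citing_docs:
        for a, b in combinations(sorted(set(cited)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical toy corpus: each list holds the verses one text cites
corpus = [["Jn 1:14", "Ph 2:6", "Ph 2:7"],
          ["Jn 1:14", "Ph 2:6", "Ga 4:4"],
          ["Ph 2:6", "Ph 2:7", "Ph 2:8"]]
net = cocitation_network(corpus)
```

Clustering is then run on the resulting weighted graph, with heavily co-cited verse pairs forming the cores of the clusters.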

  6. Effective way of reducing coupling loss between rectangular microwaveguide and fiber.

    PubMed

    Zhou, Hang; Chen, Zilun; Xi, Xiaoming; Hou, Jing; Chen, Jinbao

    2012-01-20

    We introduce an anamorphic photonic crystal fiber (PCF) produced by postprocessing techniques to reduce the coupling loss between a conventional single-mode fiber and a rectangular microwaveguide. One end, with a round core, is connected to the conventional fiber; the other end, with a rectangular core, is connected to the rectangular microwaveguide; the PCF is then tapered pro rata. In this way, the mode mismatch between the output of the conventional fiber and the input of the waveguide is reduced, which results in enhanced coupling efficiency. This conclusion was confirmed by numerical simulation: the new method outperforms straight coupling between the optical fiber and the rectangular microwaveguide, achieving more than 2.8 dB improvement in coupling efficiency. © 2012 Optical Society of America

  7. Using Formal Methods to Assist in the Requirements Analysis of the Space Shuttle GPS Change Request

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.; Roberts, Larry W.

    1996-01-01

    We describe a recent NASA-sponsored pilot project intended to gauge the effectiveness of using formal methods in Space Shuttle software requirements analysis. Several Change Requests (CR's) were selected as promising targets to demonstrate the utility of formal methods in this application domain. A CR to add new navigation capabilities to the Shuttle, based on Global Positioning System (GPS) technology, is the focus of this report. Carried out in parallel with the Shuttle program's conventional requirements analysis process was a limited form of analysis based on formalized requirements. Portions of the GPS CR were modeled using the language of SRI's Prototype Verification System (PVS). During the formal methods-based analysis, numerous requirements issues were discovered and submitted as official issues through the normal requirements inspection process. Shuttle analysts felt that many of these issues were uncovered earlier than would have occurred with conventional methods. We present a summary of these encouraging results and conclusions we have drawn from the pilot project.

  8. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.

  9. Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2016-01-01

    Communication systems are described that use geometrically shaped PSK constellations with increased capacity compared to conventional PSK constellations operating within a similar SNR band. The geometrically shaped PSK constellation is optimized based upon parallel decoding capacity. In many embodiments, a capacity-optimized geometrically shaped constellation can replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an additive white Gaussian noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding, and the location of points within the geometrically shaped constellation changes as the code rate changes.

  10. Optimization of finite difference forward modeling for elastic waves based on optimum combined window functions

    NASA Astrophysics Data System (ADS)

    Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang

    2017-03-01

    Full waveform inversion and reverse time migration are active research areas in seismic exploration. Forward modeling in the time domain determines the precision of the results, and finite-difference numerical solutions have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimal combination of window functions is designed for the finite difference operator, based on a truncated approximation of the spatial convolution series in pseudo-spectral space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the individual window functions, providing better truncation results, but also allow the truncation error of the finite difference operator to be controlled manually and visually by adjusting the combination and analyzing the characteristics of the main and side lobes of the amplitude response. The error level and elastic forward modeling under the proposed combined scheme are compared with outcomes from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with both the modified binomial window and conventional finite differences. Numerical simulation verifies the reliability of the proposed method.
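
The core idea, tapering a truncated pseudo-spectral (ideal differentiator) series with a window to suppress the truncation ripples, can be sketched with a single Hann window standing in for the paper's optimized combination:

```python
import numpy as np

M = 8
m = np.arange(1, M + 1)
ideal = (-1.0) ** (m + 1) / m            # truncated ideal-differentiator series
hann = 0.5 * (1.0 + np.cos(np.pi * m / (M + 1)))
windowed = ideal * hann                  # window-tapered FD coefficients

# Spectral response of the antisymmetric operator: k_eff(k) = 2 * sum c_m sin(m k).
# An exact operator would give k_eff = k for all k below Nyquist (pi, with h = 1).
kappa = np.linspace(0.2, 2.0, 200)       # wavenumber band of interest
response = lambda c: 2.0 * np.sin(np.outer(kappa, m)) @ c
err_trunc = np.max(np.abs(response(ideal) - kappa))
err_win = np.max(np.abs(response(windowed) - kappa))
```

The bare truncation exhibits Gibbs-type ripples in k_eff, while the windowed coefficients trade a little main-lobe sharpness for much smaller ripple, which is exactly the main-lobe/side-lobe trade-off the paper tunes with combined windows.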

  11. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. To accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
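
A minimal FFT-accelerated spectral representation simulation for a scalar process, with an assumed toy one-sided PSD; this sketches plain SRM, not the paper's dimension-reduction variants:

```python
import numpy as np

rng = np.random.default_rng(1)

def srm_fft(S, domega, N):
    """Spectral representation method (SRM): synthesize one sample path of a
    zero-mean stationary process from one-sided PSD values S[k-1] at
    frequencies k*domega (k = 1..K), evaluated at N time points via the FFT.
    Time grid: t_n = n*dt with dt = 2*pi/(N*domega)."""
    K = len(S)
    phases = rng.uniform(0.0, 2.0 * np.pi, K)
    spec = np.zeros(N, dtype=complex)
    spec[1:K + 1] = np.sqrt(2.0 * S * domega) * np.exp(1j * phases)
    # Real part of the FFT is sum_k sqrt(2 S_k domega) cos(omega_k t_n + phi_k)
    return np.real(np.fft.fft(spec))

K, domega, N = 128, 0.1, 4096
omega = np.arange(1, K + 1) * domega
S = np.exp(-0.5 * omega ** 2)            # toy one-sided target PSD
x = srm_fft(S, domega, N)
var_target = np.sum(S) * domega          # discrete integral of the PSD
```

Evaluating all N time points via one FFT replaces the O(N*K) cosine summation, which is the acceleration the abstract refers to.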

  12. Mitigating cutting-induced plasticity in the contour method, Part 2: Numerical analysis

    DOE PAGES

    Muránsky, O.; Hamelin, C. J.; Hosseinzadeh, F.; ...

    2016-02-10

    Cutting-induced plasticity can have a significant effect on the measurement accuracy of the contour method. The present study examines the benefit of a double-embedded cutting configuration that relies on self-restraint of the specimen, relative to conventional edge-crack cutting configurations. A series of finite element analyses are used to simulate the planar sectioning performed during double-embedded and conventional edge-crack contour cutting configurations. The results of the numerical analyses are first compared to measured results to validate the cutting simulations. The simulations are then used to compare the efficacy of the different cutting configurations by predicting the deviation of the residual stress profile from an original (pre-cutting) reference stress field, and the extent of cutting-induced plasticity. Comparisons reveal that while the double-embedded cutting configuration produces the most accurate residual stress measurements, the highest levels of plastic flow are generated in this process. This cutting-induced plastic deformation is, however, largely confined to small ligaments formed as a consequence of the sample sectioning process, and as such it does not significantly affect the back-calculated residual stress field.

  13. Mitigating cutting-induced plasticity in the contour method, Part 2: Numerical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muránsky, O.; Hamelin, C. J.; Hosseinzadeh, F.

    Cutting-induced plasticity can have a significant effect on the measurement accuracy of the contour method. The present study examines the benefit of a double-embedded cutting configuration that relies on self-restraint of the specimen, relative to conventional edge-crack cutting configurations. A series of finite element analyses are used to simulate the planar sectioning performed during double-embedded and conventional edge-crack contour cutting configurations. The results of the numerical analyses are first compared to measured results to validate the cutting simulations. The simulations are then used to compare the efficacy of the different cutting configurations by predicting the deviation of the residual stress profile from an original (pre-cutting) reference stress field, and the extent of cutting-induced plasticity. Comparisons reveal that while the double-embedded cutting configuration produces the most accurate residual stress measurements, the highest levels of plastic flow are generated in this process. This cutting-induced plastic deformation is, however, largely confined to small ligaments formed as a consequence of the sample sectioning process, and as such it does not significantly affect the back-calculated residual stress field.

  14. Numerical approach to reference identification of Staphylococcus, Stomatococcus, and Micrococcus spp.

    PubMed

    Rhoden, D L; Hancock, G A; Miller, J M

    1993-03-01

    A numerical-code system for the reference identification of Staphylococcus species, Stomatococcus mucilaginosus, and Micrococcus species was established by using a selected panel of conventional biochemicals. Results from 824 cultures (289 eye isolate cultures, 147 reference strains, and 388 known control strains) were used to generate a list of 354 identification code numbers. Each six-digit code number was based on results from 18 conventional biochemical reactions. Seven milliliters of purple agar base with 1% sterile carbohydrate solution added was poured into 60-mm-diameter agar plates. All biochemical tests were inoculated with 1 drop of a heavy broth suspension, incubated at 35 degrees C, and read daily for 3 days. All reactions were read and interpreted by the method of Kloos et al. (G. A. Hebert, C. G. Crowder, G. A. Hancock, W. R. Jarvis, and C. Thornsberry, J. Clin. Microbiol. 26:1939-1949, 1988; W. E. Kloos and D. W. Lambe, Jr., P. 222-237, in A. Balows, W. J. Hansler, Jr., K. L. Herrmann, H. D. Isenberg, and H. J. Shadomy, ed., Manual of Clinical Microbiology, 5th ed., 1991). This modified reference identification method was 96 to 98% accurate and could have value in reference and public health laboratory settings.
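
Collapsing 18 +/- reactions into a six-digit code follows the usual octal profile-number convention: each consecutive triplet of results becomes one digit 0-7. The within-triplet weight order (1, 2, 4 here) is an assumed convention for illustration, not necessarily the exact scheme of this paper:

```python
def profile_code(reactions, weights=(1, 2, 4)):
    """Collapse 18 positive/negative biochemical results into a six-digit
    identification code: each consecutive triplet of results maps to one
    octal digit.  The (1, 2, 4) weight order is an illustrative assumption."""
    assert len(reactions) == 18
    digits = [sum(w for w, r in zip(weights, reactions[i:i + 3]) if r)
              for i in range(0, 18, 3)]
    return "".join(str(d) for d in digits)

code = profile_code([True, False, True] * 6)   # -> "555555"
```

Each distinct reaction pattern maps to one of 8^6 possible codes, which is how a lookup list like the paper's 354 code numbers can be built.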

  15. Performance Enhancement of Pharmacokinetic Diffuse Fluorescence Tomography by Use of Adaptive Extended Kalman Filtering.

    PubMed

    Wang, Xin; Wu, Linhui; Yi, Xi; Zhang, Yanqi; Zhang, Limin; Zhao, Huijuan; Gao, Feng

    2015-01-01

    Due to both the physiological and morphological differences in the vascularization between healthy and diseased tissues, pharmacokinetic diffuse fluorescence tomography (DFT) can provide contrast-enhanced and comprehensive information for tumor diagnosis and staging. In this regime, the extended Kalman filtering (EKF) based method shows numerous advantages including accurate modeling, online estimation of multiple parameters, and universal applicability to any optical fluorophore. Nevertheless, the performance of the conventional EKF hinges on exact but inaccessible prior knowledge of the initial values. To address this issue, an adaptive-EKF scheme based on a two-compartmental model is proposed, which utilizes a variable forgetting factor to compensate for inaccuracy in the initial states and to emphasize the effect of the current data. Two-dimensional simulative investigations on a circular domain demonstrate that the proposed adaptive-EKF yields better estimates of the pharmacokinetic rates than the conventional EKF and the enhanced EKF in terms of quantitativeness, noise robustness, and initialization independence. Further three-dimensional numerical experiments on a digital mouse model validate the efficacy of the method as applied in realistic biological systems.
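
The role of a forgetting factor in discounting a wrong, overconfident initial state can be shown with a scalar Kalman filter, a much simpler stand-in for the paper's adaptive EKF; the model, noise levels, and factor value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def ff_kalman(y, lam=0.98, x0=0.0, p0=1e-4, r=0.04):
    """Scalar Kalman filter for a constant state observed in noise, with a
    forgetting factor lam < 1 that keeps inflating the prior covariance so a
    wrong initial guess (x0) with an overconfident covariance (p0) is
    progressively discounted in favour of the current data."""
    x, p = x0, p0
    for yk in y:
        p = p / lam                  # forgetting: down-weight old information
        k = p / (p + r)              # Kalman gain
        x = x + k * (yk - x)
        p = (1.0 - k) * p
    return x

true_val = 1.0
y = true_val + 0.2 * rng.normal(size=400)
est = ff_kalman(y)
```

With lam = 1 (no forgetting) the tiny p0 would pin the estimate near the wrong x0 for a very long time; the forgetting factor restores initialization independence, the property highlighted in the abstract.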

  16. Dynamic balancing of dual-rotor system with very little rotating speed difference.

    PubMed

    Yang, Jian; He, Shi-zheng; Wang, Le-qin

    2003-01-01

    Unbalanced vibration in dual-rotor rotating machinery was studied with numerical simulations and experiments. A new method is proposed to separate the vibration signals of the inner and outer rotors for a system with very little difference in rotating speeds. Magnitudes and phase values of unbalance defects can be obtained directly by sampling the vibration signal synchronized with a reference signal. The balancing process is completed by the reciprocity influence-coefficient method for the inner and outer rotors. Results showed the advantage of this method for a dual-rotor system compared with conventional balancing.
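
The influence-coefficient step reduces to a small complex linear system for the correction masses once the per-rotor vibration phasors have been separated; the coefficients and vibration readings below are hypothetical:

```python
import numpy as np

# Hypothetical complex influence coefficients: amplitude/phase response of
# each sensor to a unit trial mass on the inner- and outer-rotor planes.
alpha = np.array([[0.8 * np.exp(1j * 0.3), 0.2 * np.exp(1j * 1.1)],
                  [0.3 * np.exp(1j * 0.9), 0.7 * np.exp(1j * 0.2)]])

# Initial vibration phasors (magnitude and phase of the unbalance response)
v0 = np.array([1.5 * np.exp(1j * 0.7), 1.1 * np.exp(1j * 2.0)])

# Influence-coefficient balancing: choose correction masses w so that the
# predicted residual vibration v0 + alpha @ w vanishes.
w = np.linalg.solve(alpha, -v0)
residual = v0 + alpha @ w
```

The magnitude and angle of each entry of w give the size and angular placement of the correction mass on that balancing plane.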

  17. Unconditionally stable WLP-FDTD method for the modeling of electromagnetic wave propagation in gyrotropic materials.

    PubMed

    Li, Zheng-Wei; Xi, Xiao-Li; Zhang, Jin-Sheng; Liu, Jiang-fan

    2015-12-14

    The unconditional stable finite-difference time-domain (FDTD) method based on field expansion with weighted Laguerre polynomials (WLPs) is applied to model electromagnetic wave propagation in gyrotropic materials. The conventional Yee cell is modified to have the tightly coupled current density components located at the same spatial position. The perfectly matched layer (PML) is formulated in a stretched-coordinate (SC) system with the complex-frequency-shifted (CFS) factor to achieve good absorption performance. Numerical examples are shown to validate the accuracy and efficiency of the proposed method.

  18. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
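
A generalised cross-correlation delay estimate with a regularised weighting can be sketched on synthetic signals; the simple magnitude prefilter below is a stand-in for the paper's modified ML prefilter, with eps playing the role of the regularisation factor:

```python
import numpy as np

rng = np.random.default_rng(3)

def gcc_delay(x, y, eps=0.01):
    """Generalised cross-correlation estimate of the delay of y relative to
    x, using a regularised magnitude weighting (PHAT-like; a simple stand-in
    for a modified ML prefilter)."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    csd = np.conj(X) * Y                                # cross-spectral density
    w = 1.0 / (np.abs(csd) + eps * np.abs(csd).max())   # regularised prefilter
    cc = np.fft.irfft(csd * w, n=len(x))
    lag = int(np.argmax(cc))
    return lag if lag <= len(x) // 2 else lag - len(x)  # unwrap negative lags

n, d_true = 1024, 37
s = rng.normal(size=n)
x = s + 0.05 * rng.normal(size=n)                  # sensor 1
y = np.roll(s, d_true) + 0.05 * rng.normal(size=n)  # sensor 2: delayed, noisy copy
d_est = gcc_delay(x, y)
```

The regularisation keeps the weighting from amplifying frequency bins where the estimated CSD is dominated by estimation error, which is the failure mode of an unregularised prefilter.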

  19. Bidirectional composition on lie groups for gradient-based image alignment.

    PubMed

    Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick

    2010-09-01

    In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to conventional approaches, the BCL method takes advantage of the gradients of both the template and the current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient-based approaches is fully discussed. The first one, the BCL method, relies on the compositional framework to minimize the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), is a close approximation of the BCL approach. A comparative study is carried out covering computational complexity, convergence rate, and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.

  20. Adaptive-numerical-bias metadynamics.

    PubMed

    Khanjari, Neda; Eslami, Hossein; Müller-Plathe, Florian

    2017-12-05

    A metadynamics scheme is presented in which the free energy surface is filled by progressively adding adaptive biasing potentials obtained from the accumulated probability distribution of the collective variables. Instead of adding Gaussians of assigned height and width, as in the conventional metadynamics method, we add a more realistic adaptive biasing potential to the Hamiltonian of the system. The shape of the adaptive biasing potential is adjusted on the fly by sampling over the visited states. As the top of the barrier is approached, the biasing potentials become wider. This reduces the problem of trapping the system in niches, introduced by the addition of Gaussians of fixed height in metadynamics. Our results for the free energy profiles of three test systems show that this method is more accurate and converges more quickly than conventional metadynamics, and is quite comparable (in accuracy and convergence rate) with the well-tempered metadynamics method. © 2017 Wiley Periodicals, Inc.
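
The fixed point that such histogram-derived biasing approaches can be checked directly: a bias of kT times the log of the accumulated probability distribution exactly flattens the underlying surface. The double-well surface below is an illustrative assumption, not one of the paper's test systems:

```python
import numpy as np

kT = 1.0
s = np.linspace(-2.0, 2.0, 401)
U = (s ** 2 - 1.0) ** 2                 # illustrative double-well free energy

# Accumulated probability distribution of the collective variable; here the
# exact Boltzmann weights stand in for a converged sampling histogram.
p = np.exp(-U / kT)
p /= p.sum() * (s[1] - s[0])

# Adaptive bias built from the accumulated distribution: V_bias = kT * ln p(s)
V_bias = kT * np.log(p)

# At the fixed point the biased surface U + V_bias is flat up to a constant,
# so sampling becomes uniform over the collective variable.
flatness = np.ptp(U + V_bias)
```

In a real run the histogram is noisy and updated on the fly, so the bias only approaches this flat fixed point as sampling accumulates.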

  1. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  2. Pain control in a randomized, controlled, clinical trial comparing moist exposed burn ointment and conventional methods in patients with partial-thickness burns.

    PubMed

    Ang, Erik; Lee, S-T; Gan, Christine S-G; Chan, Y-H; Cheung, Y-B; Machin, D

    2003-01-01

    Conventional management of partial-thickness burn wounds includes the use of paraffin gauze dressing, frequently with topical silver-based antibacterial creams. Some creams form an overlying slough that renders wound assessment difficult and are painful upon application. An alternative to conventional management, moist exposed burn ointment (MEBO), has been proposed as a topical agent that may accelerate wound healing and have antibacterial and analgesic properties. One hundred fifteen patients with partial-thickness burns were randomly assigned to conventional (n = 58) or MEBO treatment (n = 57). A verbal numerical rating score of pain was recorded in the morning, after burn dressing, and some 8 hours later. Patient pain profiles were summarized by locally weighted regression smoothing curves, and the difference between treatments was estimated using multilevel regression techniques. Mean verbal numerical rating scale pain levels (cm) in week 1 for all patients were highest at 3.2 for the after-dressing assessment, lowest in the evening at 2.6, and intermediate in the morning at 3.0. This pattern continued at similar levels in week 2 and then declined by a mean of 0.5 in all groups in week 3. There was little evidence of a difference in pain levels by treatment group, with the exception of the post-dressing pain levels in the first week, when those receiving MEBO had a mean level 0.7 cm (95% confidence interval, 0.2 to 1.1) lower than those on conventional therapy. MEBO appeared to bring greater pain relief for the post-dressing assessment during the first week after burns. This initial relief, together with comparable pain levels on other occasions, indicates that MEBO could be an alternative to conventional burns management.

  3. Methods for Combining Payload Parameter Variations with Input Environment

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.; Straayer, J. W.

    1975-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
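    The extreme-value approach described above can be sketched numerically: simulate per-event loads, take mission maxima, fit a Gumbel distribution by the method of moments, and read off a percentile as the design limit load. All numbers below (per-event load distribution, mission length, percentile) are illustrative assumptions, not values from the report.

```python
import math
import random

random.seed(1)

# Per-event loads within one mission; the mission maximum is the random
# "limit load" whose distribution extreme-value theory approximates.
EVENTS_PER_MISSION = 200
N_MISSIONS = 5000
limit_loads = [max(random.gauss(100.0, 10.0) for _ in range(EVENTS_PER_MISSION))
               for _ in range(N_MISSIONS)]

# Fit a Gumbel distribution by the method of moments:
#   mean = mu + 0.5772*beta,  std = pi*beta/sqrt(6)
n = len(limit_loads)
mean = sum(limit_loads) / n
var = sum((x - mean) ** 2 for x in limit_loads) / (n - 1)
beta = math.sqrt(6.0 * var) / math.pi
mu = mean - 0.5772 * beta

# Design limit load: the load not exceeded in 99% of missions.
p = 0.99
design_limit_load = mu - beta * math.log(-math.log(p))
print(f"design limit load (99th percentile): {design_limit_load:.1f}")
```

    In practice the percentile (and the per-event load model) would come from the mission's structural design criteria rather than being chosen ad hoc.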

  4. Absorbing boundaries in numerical solutions of the time-dependent Schroedinger equation on a grid using exterior complex scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, F.; Ruiz, C.; Becker, A.

    We study the suppression of reflections in the numerical simulation of the time-dependent Schroedinger equation for strong-field problems on a grid using exterior complex scaling (ECS) as an absorbing boundary condition. It is shown that the ECS method can be applied in both the length and the velocity gauge as long as appropriate approximations are applied in the ECS transformation of the electron-field coupling. It is found that the ECS method improves the suppression of reflections as compared to the conventional masking function technique in typical simulations of atoms exposed to an intense laser pulse. Finally, we demonstrate the advantage of the ECS technique in avoiding unphysical artifacts in the evaluation of high-harmonic spectra.

  5. A novel finite element analysis of three-dimensional circular crack

    NASA Astrophysics Data System (ADS)

    Ping, X. C.; Wang, C. G.; Cheng, L. P.

    2018-06-01

    A novel singular element containing a part of the circular crack front is established to solve the singular stress fields of circular cracks by using numerical series eigensolutions of the singular stress fields. The element is derived from the Hellinger-Reissner variational principle and can be directly incorporated into existing 3D brick elements. The singular stress fields are determined as system unknowns appearing as displacement nodal values. Numerical studies are conducted to demonstrate the simplicity of the proposed technique in handling fracture problems of circular cracks. The use of the novel singular element avoids mesh refinement near the crack front domain without loss of calculation accuracy or speed of convergence. Compared with conventional finite element methods and existing analytical methods, the present method is more suitable for dealing with complicated structures with a large number of elements.

  6. Acoustic coupled fluid-structure interactions using a unified fast multipole boundary element method.

    PubMed

    Wilkes, Daniel R; Duncan, Alec J

    2015-04-01

    This paper presents a numerical model for the acoustic coupled fluid-structure interaction (FSI) of a submerged finite elastic body using the fast multipole boundary element method (FMBEM). The Helmholtz and elastodynamic boundary integral equations (BIEs) are, respectively, employed to model the exterior fluid and interior solid domains, and the pressure and displacement unknowns are coupled between conforming meshes at the shared boundary interface to achieve the acoustic FSI. The low frequency FMBEM is applied to both BIEs to reduce the algorithmic complexity of the iterative solution from O(N^2) to O(N^1.5) operations per matrix-vector product for N boundary unknowns. Numerical examples are presented to demonstrate the algorithmic and memory complexity of the method, which are shown to be in good agreement with the theoretical estimates, while the solution accuracy is comparable to that achieved by a conventional finite element-boundary element FSI model.

  7. Mathematical modeling of methyl ester concentration distribution in a continuous membrane tubular reactor and comparison with conventional tubular reactor

    NASA Astrophysics Data System (ADS)

    Talaghat, M. R.; Jokar, S. M.; Modarres, E.

    2017-10-01

    The depletion of fossil fuel resources and environmental concerns have led researchers to seek alternative fuels such as biodiesel. One of the most widely used methods for producing biodiesel on a commercial scale is transesterification. In this work, biodiesel production by transesterification was modeled. Sodium hydroxide was considered as the catalyst for producing biodiesel from canola oil and methanol in a continuous tubular ceramic membrane reactor. Because biodiesel production from triglycerides is an equilibrium reaction, the reaction rate constants depend on temperature and are related linearly to the catalyst concentration. By writing the mass balance for a membrane tubular reactor and accounting for the variation of raw material and product concentrations with time, the set of governing equations was solved by numerical methods. The results clearly show the superiority of the membrane reactor over conventional tubular reactors. Afterward, the influences of the molar ratio of alcohol to oil, the weight percentage of the catalyst, and the residence time on the performance of the biodiesel production reactor were investigated.
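    As a rough illustration of the kind of mass balance involved (not the authors' model), the sketch below integrates a lumped reversible transesterification reaction, TG + 3A <-> 3E + G, with forward Euler; the rate constants, feed ratio, and step size are made-up values.

```python
# Illustrative reversible lumped transesterification: TG + 3A <-> 3E + G.
# Rate constants and feed composition are made-up values, not data from the paper.
kf, kr = 0.05, 0.005               # forward/reverse rate constants (assumed)
TG, A, E, G = 1.0, 6.0, 0.0, 0.0   # mol/L, 6:1 alcohol-to-oil feed
dt, t_end = 0.01, 60.0             # min

t = 0.0
while t < t_end:
    r = kf * TG * A - kr * E * G   # net rate of the lumped equilibrium reaction
    TG -= r * dt
    A  -= 3 * r * dt
    E  += 3 * r * dt
    G  += r * dt
    t  += dt

conversion = 1.0 - TG              # initial TG was 1.0 mol/L
print(f"TG conversion after {t_end:.0f} min: {conversion:.3f}")
```

    A membrane reactor model would add a permeation term removing product through the tube wall, which shifts the equilibrium and raises the attainable conversion.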

  8. Theory of viscous transonic flow over airfoils at high Reynolds number

    NASA Technical Reports Server (NTRS)

    Melnik, R. E.; Chow, R.; Mead, H. R.

    1977-01-01

    This paper considers viscous flows with unseparated turbulent boundary layers over two-dimensional airfoils at transonic speeds. Conventional theoretical methods are based on boundary layer formulations which do not account for the effect of the curved wake and static pressure variations across the boundary layer in the trailing edge region. In this investigation an extended viscous theory is developed that accounts for both effects. The theory is based on a rational analysis of the strong turbulent interaction at airfoil trailing edges. The method of matched asymptotic expansions is employed to develop formal series solutions of the full Reynolds equations in the limit of Reynolds numbers tending to infinity. Procedures are developed for combining the local trailing edge solution with numerical methods for solving the full potential flow and boundary layer equations. Theoretical results indicate that conventional boundary layer methods account for only about 50% of the viscous effect on lift, the remaining contribution arising from wake curvature and normal pressure gradient effects.

  9. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced, but they still match or outperform the best of the conventional methods tested.
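    A minimal sketch of why exponential integrators help: on the stiff linear test problem y' = lam*y, the integrating-factor/ETD step propagates the linear part exactly, while explicit Euler is unstable at the same step size. (Real ETD schemes also treat a nonlinear term through phi-functions; that part is omitted here.)

```python
import math

# Toy comparison on y' = lam*y (stiff), y(0) = 1, exact solution exp(lam*t).
lam, h, steps = -50.0, 0.1, 10     # |lam*h| = 5, beyond Euler's stability limit
y_euler, y_etd = 1.0, 1.0
for _ in range(steps):
    y_euler = y_euler + h * lam * y_euler   # explicit Euler: blows up here
    y_etd   = math.exp(lam * h) * y_etd     # exponential step: linear part exact

exact = math.exp(lam * h * steps)
print(f"Euler error: {abs(y_euler - exact):.3e}, ETD error: {abs(y_etd - exact):.3e}")
```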

  10. Geometric Integration of Weakly Dissipative Systems

    NASA Astrophysics Data System (ADS)

    Modin, K.; Führer, C.; Söderlind, G.

    2009-09-01

    Some problems in mechanics, e.g. in bearing simulation, contain both conservative and weakly dissipative subsystems. Our experience is that geometric integration methods are often superior for such systems, as long as the dissipation is weak. Here we develop adaptive methods for dissipative perturbations of Hamiltonian systems. The methods are "geometric" in the sense that the form of the dissipative perturbation is preserved. The methods are linearly explicit, i.e., they require the solution of a linear subsystem. We sketch a backward error analysis, and numerical comparisons with a conventional RK method of the same order are given.
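    A toy version of the point being made, under assumed parameters: for a weakly damped oscillator, explicit Euler injects spurious energy while the symplectic (semi-implicit) Euler variant reproduces the physical slow decay.

```python
# Weakly damped oscillator x'' = -x - c*x', integrated two ways.
# A sketch of why near-conservative problems favor geometric integrators.
c, h, steps = 0.01, 0.1, 1000

def energy(x, v):
    return 0.5 * (v * v + x * x)

xe, ve = 1.0, 0.0     # explicit Euler state
xs, vs = 1.0, 0.0     # symplectic (semi-implicit) Euler state
for _ in range(steps):
    # explicit Euler: both updates use the old state
    xe, ve = xe + h * ve, ve + h * (-xe - c * ve)
    # symplectic Euler: position update uses the *new* velocity
    vs = vs + h * (-xs - c * vs)
    xs = xs + h * vs

E0 = energy(1.0, 0.0)
print(f"explicit Euler energy:  {energy(xe, ve):.2f}  (spurious growth)")
print(f"symplectic Euler energy: {energy(xs, vs):.4f} (physical slow decay)")
```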

  11. Novel permanent magnet linear motor with isolated movers: analytical, numerical and experimental study.

    PubMed

    Yan, Liang; Peng, Juanjuan; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-01

    This paper proposes a novel permanent magnet linear motor possessing two movers and one stator. The two movers are isolated and can interact with the stator poles to generate independent forces and motions. Compared with a conventional multiple-motor driving system, it helps to increase the system compactness, and thus improve the power density and working efficiency. The magnetic field distribution is obtained by using the equivalent magnetic circuit method. Following that, the formulation of the force output considering armature reaction is carried out. The inductances are then analyzed with the finite element method to investigate the relationship between the two movers. It is found that the mutual inductances are nearly equal to zero, and thus the interaction between the two movers is negligible. A research prototype of the linear motor and an apparatus for measuring thrust force have been developed. Both numerical computation and experimental measurement are conducted to validate the analytical model of thrust force. Comparison shows that the analytical model matches the numerical and experimental results well.

  12. Numerical Simulation of Transient Liquid Phase Bonding under Temperature Gradient

    NASA Astrophysics Data System (ADS)

    Ghobadi Bigvand, Arian

    Transient Liquid Phase bonding under Temperature Gradient (TG-TLP bonding) is a relatively new process of TLP diffusion bonding family for joining difficult-to-weld aerospace materials. Earlier studies have suggested that in contrast to the conventional TLP bonding process, liquid state diffusion drives joint solidification in TG-TLP bonding process. In the present work, a mass conservative numerical model that considers asymmetry in joint solidification is developed using finite element method to properly study the TG-TLP bonding process. The numerical results, which are experimentally verified, show that unlike what has been previously reported, solid state diffusion plays a major role in controlling the solidification behavior during TG-TLP bonding process. The newly developed model provides a vital tool for further elucidation of the TG-TLP bonding process.

  13. MR Vascular Fingerprinting: A New Approach to Compute Cerebral Blood Volume, Mean Vessel Radius, and Oxygenation Maps in the Human Brain

    PubMed Central

    Christen, T.; Pannetier, NA.; Ni, W.; Qiu, D.; Moseley, M.; Schuff, N.; Zaharchuk, G.

    2014-01-01

    In the present study, we describe a fingerprinting approach to analyze the time evolution of the MR signal and retrieve quantitative information about the microvascular network. We used a Gradient Echo Sampling of the Free Induction Decay and Spin Echo (GESFIDE) sequence and defined a fingerprint as the ratio of signals acquired pre- and post-injection of an iron-based contrast agent. We then simulated the same experiment with an advanced numerical tool that takes a virtual voxel containing blood vessels as input, computes microscopic magnetic fields and water diffusion effects, and derives the expected MR signal evolution. The input parameters of the simulations (cerebral blood volume [CBV], mean vessel radius [R], and blood oxygen saturation [SO2]) were varied to obtain a dictionary of all possible signal evolutions. The best fit between the observed fingerprint and the dictionary was then determined using least-squares minimization. This approach was evaluated in 5 normal subjects and the results were compared to those obtained using more conventional MR methods: steady-state contrast imaging for CBV and R, and a global measure of oxygenation obtained from the superior sagittal sinus for SO2. The fingerprinting method enabled the creation of high-resolution parametric maps of the microvascular network showing expected contrast and fine details. Numerical values in gray matter (CBV=3.1±0.7%, R=12.6±2.4µm, SO2=59.5±4.7%) are consistent with literature reports and correlated with conventional MR approaches. SO2 values in white matter (53.0±4.0%) were slightly lower than expected. Numerous improvements can easily be made, and the method should be useful for studying brain pathologies. PMID:24321559
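    The dictionary-matching step can be sketched generically. The signal model below is a hypothetical stand-in (not the GESFIDE simulation), but the workflow is the same: precompute a dictionary over a parameter grid, then pick the entry minimizing the squared error against the measured fingerprint.

```python
import math
import random

random.seed(0)

# Hypothetical signal model: a fingerprint is a sampled curve whose shape
# depends on two tissue parameters (stand-ins for quantities like CBV and SO2).
times = [0.1 * i for i in range(30)]

def signal(p1, p2):
    return [math.exp(-p1 * t) * math.cos(p2 * t) for t in times]

# Build the dictionary over a coarse parameter grid.
grid1 = [0.5 + 0.1 * i for i in range(11)]   # p1 in [0.5, 1.5]
grid2 = [1.0 + 0.2 * i for i in range(11)]   # p2 in [1.0, 3.0]
dictionary = {(a, b): signal(a, b) for a in grid1 for b in grid2}

# Simulate a measured fingerprint with noise; ground truth lies on the grid.
truth = (1.0, 2.0)
measured = [s + random.gauss(0.0, 0.01) for s in signal(*truth)]

# Least-squares match against every dictionary entry.
def sse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

best = min(dictionary, key=lambda k: sse(measured, dictionary[k]))
print("estimated parameters:", best)
```

    The grid spacing sets the quantization error of the estimate, which is why fingerprinting dictionaries are typically much finer than this toy example.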

  14. Numerical investigation & comparison of a tandem-bladed turbocharger centrifugal compressor stage with conventional design

    NASA Astrophysics Data System (ADS)

    Danish, Syed Noman; Qureshi, Shafiq Rehman; EL-Leathy, Abdelrahman; Khan, Salah Ud-Din; Umer, Usama; Ma, Chaochen

    2014-12-01

    Extensive numerical investigations of the performance and flow structure in an unshrouded tandem-bladed centrifugal compressor are presented in comparison to a conventional compressor. Stage characteristics are explored for various tip clearance levels, axial spacings and circumferential clockings. The conventional impeller was modified to a tandem-bladed design with no modifications in backsweep angle, meridional gas passage and camber distributions in order to have a true comparison with the conventional design. Performance degradation is observed for both the conventional and tandem designs with increasing tip clearance. Linear-equation models for correlating stage characteristics with tip clearance are proposed. Comparing the two designs, it is clearly evident that the conventional design shows better performance at moderate flow rates. However, near choke flow, the tandem design gives better results, primarily because of the increase in throat area. The surge point flow rate also seems to drop for the tandem compressor, resulting in an increased range of operation.
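    The proposed linear-equation models amount to least-squares fits of a stage characteristic against tip clearance; a minimal sketch with made-up data points (not values from the study):

```python
# Ordinary least-squares fit of stage efficiency vs tip-clearance ratio.
# The data points are illustrative placeholders, not results from the paper.
clearance = [0.02, 0.05, 0.08, 0.11]     # tip clearance / blade height
efficiency = [0.86, 0.83, 0.81, 0.78]    # illustrative stage efficiencies

n = len(clearance)
mx = sum(clearance) / n
my = sum(efficiency) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(clearance, efficiency))
         / sum((x - mx) ** 2 for x in clearance))
intercept = my - slope * mx
print(f"efficiency ~ {intercept:.3f} + ({slope:.3f}) * clearance")
```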

  15. Research on ionospheric tomography based on variable pixel height

    NASA Astrophysics Data System (ADS)

    Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui

    2016-05-01

    A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation. A careful validation of the reliability and superiority of VHCIT was performed. According to the results of the statistical analysis of the average root mean square errors, the proposed model offers an improvement by 15% compared with conventional CIT models.

  16. Energy-efficient ovens for unpolluted balady bread

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gadalla, M.A.; Mansour, M.S.; Mahdy, E.

    A new bread oven for local balady bread has been developed, tested and presented in this work. The design has the advantage of being efficient and producing unpolluted bread. An extensive study of the conventional and available designs has been carried out in order to help develop the new design. Evaluation of the conventional design is based on numerous tests and measurements. A computer code utilizing the indirect method has been developed to evaluate the thermal performance of the tested ovens. The present design achieves a higher thermal efficiency (about 50%) than the conventional ones. In addition, its capital cost is much lower than that of other imported designs. Thus, the present design achieves higher efficiency, pollutant-free products and lower cost. Moreover, it may be modified for different types of bread baking systems.

  17. Numerical observer for atherosclerotic plaque classification in spectral computed tomography

    PubMed Central

    Lorsakul, Auranuch; Fakhri, Georges El; Worstell, William; Ouyang, Jinsong; Rakvongthai, Yothin; Laine, Andrew F.; Li, Quanzheng

    2016-01-01

    Spectral computed tomography (SCT) generates better image quality than conventional computed tomography (CT) and has overcome several limitations for imaging atherosclerotic plaque. However, the literature evaluating the performance of SCT on the basis of objective image assessment is very limited for the task of discriminating plaques. We developed a numerical-observer method, used it to assess performance in discriminating vulnerable-plaque features, and compared the performance among multienergy CT (MECT), dual-energy CT (DECT), and conventional CT methods. Our numerical observer was designed to incorporate all spectral information and comprised two processing stages. First, each energy-window domain was preprocessed by a set of localized channelized Hotelling observers (CHO). In this step, the spectral image in each energy bin was decorrelated using localized prewhitening and matched filtering with a set of Laguerre–Gaussian channel functions. Second, the series of intermediate scores computed from all the CHOs was integrated by a Hotelling observer with an additional prewhitening and matched filter. The overall signal-to-noise ratio (SNR) and the area under the receiver operating characteristic curve (AUC) were obtained, yielding an overall discrimination performance metric. The performance of our new observer was evaluated for the particular binary classification task of differentiating between alternative plaque characterizations in carotid arteries. A clinically realistic model of signal variability was also included in our simulation of the discrimination tasks; the inclusion of signal variation is key to applying the proposed observer method to spectral CT data. Hence, task-based approaches based on the signal-known-exactly/background-known-exactly (SKE/BKE) framework and the clinically relevant signal-known-statistically/background-known-exactly (SKS/BKE) framework were applied for analytical computation of figures of merit (FOM). Simulated data of a carotid-atherosclerosis patient were used to validate our methods. We used an extended cardiac-torso anthropomorphic digital phantom and three simulated plaque types (i.e., calcified plaque, fatty-mixed plaque, and iodine-mixed blood). The images were reconstructed using a standard filtered backprojection (FBP) algorithm for all the acquisition methods and were used to perform two discrimination tasks: (1) calcified plaque versus fatty-mixed plaque, and (2) calcified plaque versus iodine-mixed blood. MECT outperformed DECT and conventional CT systems for all cases of the SKE/BKE and SKS/BKE tasks (all p<0.01). On average over signal variability, MECT yielded SNR improvements over the other acquisition methods in the range of 46.8% to 65.3% (all p<0.01) for FBP-Ramp images and 53.2% to 67.7% (all p<0.01) for FBP-Hanning images for both identification tasks. The proposed numerical observer combined with our signal variability framework is promising for assessing the material characterization obtained through the additional energy-dependent attenuation information of SCT. These methods can be further extended to other clinical tasks such as kidney or urinary stone identification. PMID:27429999
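    For the idealized uncorrelated-noise case, the Hotelling observer reduces to the matched filter w = K^{-1} ds with SNR^2 = ds^T K^{-1} ds; the sketch below checks the empirical observer SNR against that formula on synthetic 16-pixel images. This is a bare-bones illustration, not the paper's localized, channelized two-stage observer.

```python
import math
import random

random.seed(7)

# Two classes of 16-pixel "images": independent Gaussian noise per pixel,
# differing only in mean. A minimal (diagonal-covariance) Hotelling observer.
n_pix, n_samp = 16, 5000
delta_s = [0.4 if 5 <= i < 8 else 0.0 for i in range(n_pix)]   # class difference
noise_var = 0.1

def draw(mu_vec):
    return [random.gauss(m, math.sqrt(noise_var)) for m in mu_vec]

class_a = [draw([0.0] * n_pix) for _ in range(n_samp)]
class_b = [draw(delta_s) for _ in range(n_samp)]

# Hotelling template w = K^{-1} ds; with diagonal K this is elementwise.
w = [ds / noise_var for ds in delta_s]

def score(img):
    return sum(wi * xi for wi, xi in zip(w, img))

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

sa = [score(img) for img in class_a]
sb = [score(img) for img in class_b]
snr = (mean(sb) - mean(sa)) / math.sqrt(0.5 * (var(sa) + var(sb)))

# Analytic SNR for the same diagonal covariance.
snr_analytic = math.sqrt(sum(ds * ds / noise_var for ds in delta_s))
print(f"empirical SNR {snr:.2f} vs analytic {snr_analytic:.2f}")
```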

  18. Numerical modeling of zero-offset laboratory data in a strong topographic environment: results for a spectral-element method and a discretized Kirchhoff integral method

    NASA Astrophysics Data System (ADS)

    Favretto-Cristini, Nathalie; Tantsereva, Anastasiya; Cristini, Paul; Ursin, Bjørn; Komatitsch, Dimitri; Aizenberg, Arkady M.

    2014-08-01

    Accurate simulation of seismic wave propagation in complex geological structures is of particular interest nowadays. However conventional methods may fail to simulate realistic wavefields in environments with great and rapid structural changes, due for instance to the presence of shadow zones, diffractions and/or edge effects. Different methods, developed to improve seismic modeling, are typically tested on synthetic configurations against analytical solutions for simple canonical problems or reference methods, or via direct comparison with real data acquired in situ. Such approaches have limitations, especially if the propagation occurs in a complex environment with strong-contrast reflectors and surface irregularities, as it can be difficult to determine the method which gives the best approximation of the "real" solution, or to interpret the results obtained without an a priori knowledge of the geologic environment. An alternative approach for seismics consists in comparing the synthetic data with high-quality data collected in laboratory experiments under controlled conditions for a known configuration. In contrast with numerical experiments, laboratory data possess many of the characteristics of field data, as real waves propagate through models with no numerical approximations. We thus present a comparison of laboratory-scaled measurements of 3D zero-offset wave reflection of broadband pulses from a strong topographic environment immersed in a water tank with numerical data simulated by means of a spectral-element method and a discretized Kirchhoff integral method. The results indicate a good quantitative fit in terms of time arrivals and acceptable fit in amplitudes for all datasets.

  19. Comparison of Factorization-Based Filtering for Landing Navigation

    NASA Technical Reports Server (NTRS)

    McCabe, James S.; Brown, Aaron J.; DeMars, Kyle J.; Carson, John M., III

    2017-01-01

    This paper develops and analyzes methods for fusing inertial navigation data with external data, such as data obtained from an altimeter and a star camera. The particular filtering techniques are based upon factorized forms of the Kalman filter, specifically the UDU and Cholesky factorizations. The factorized Kalman filters are utilized to ensure numerical stability of the navigation solution. Simulations are carried out to compare the performance of the different approaches along a lunar descent trajectory using inertial and external data sources. It is found that the factorized forms improve upon conventional filtering techniques in terms of ensuring numerical stability for the investigated landing navigation scenario.
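    As a flavor of square-root filtering (a close relative of the UDU and Cholesky factorizations used here, though not necessarily the authors' exact algorithm), the sketch below applies Potter's square-root measurement update for a scalar measurement and verifies it against the conventional covariance update.

```python
import math

# Potter's square-root measurement update (scalar measurement), 2x2 case.
# P = S S^T is propagated via S, keeping P symmetric positive definite.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

S = [[2.0, 0.0], [0.5, 1.0]]    # a priori square root of covariance
H = [1.0, 0.0]                  # scalar measurement z = H x + v
R = 0.25                        # measurement noise variance

phi = mat_vec(transpose(S), H)                      # phi = S^T H^T
a = 1.0 / (sum(p * p for p in phi) + R)
gamma = 1.0 / (1.0 + math.sqrt(a * R))
K = [a * s for s in mat_vec(S, phi)]                # Kalman gain
S_new = [[S[i][j] - gamma * K[i] * phi[j] for j in range(2)] for i in range(2)]

# Conventional update for comparison: P_new = (I - K H) P.
P = [[sum(S[i][k] * S[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
P_new_conv = [[P[i][j] - K[i] * sum(H[k] * P[k][j] for k in range(2))
               for j in range(2)] for i in range(2)]
P_new_sqrt = [[sum(S_new[i][k] * S_new[j][k] for k in range(2)) for j in range(2)]
              for i in range(2)]
disc = max(abs(P_new_sqrt[i][j] - P_new_conv[i][j])
           for i in range(2) for j in range(2))
print("max discrepancy between square-root and conventional update:", disc)
```

    The payoff is numerical: the factored form cannot produce an indefinite covariance in finite precision, which is exactly the failure mode the conventional update risks.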

  20. Energy shadowing correction of ultrasonic pulse-echo records by digital signal processing

    NASA Technical Reports Server (NTRS)

    Kishoni, D.; Heyman, J. S.

    1985-01-01

    A numerical algorithm is described that enables the correction of energy shadowing during the ultrasonic testing of bulk materials. In the conventional method, an ultrasonic transducer transmits sound waves into a material that is immersed in water so that discontinuities such as defects can be revealed when the waves are reflected and then detected and displayed graphically. Since a defect that lies behind another defect is shadowed in that it receives less energy, the conventional method has a major drawback. The algorithm normalizes the energy of the incoming wave by measuring the energy of the waves reflected off the water/air interface. The algorithm is fast and simple enough to be adopted for real time applications in industry. Images of material defects with the shadowing corrections permit more quantitative interpretation of the material state.
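    The normalization idea can be illustrated with a toy one-dimensional model in which each reflector returns a fraction R of the incident amplitude and transmits sqrt(1 - R^2); dividing each echo by the estimated two-way transmission through the shallower reflectors undoes the shadowing. The numbers are illustrative, and the actual algorithm normalizes against the measured water/air interface reflection rather than assuming this loss model.

```python
# Toy model of energy shadowing in pulse-echo testing: each defect reflects
# a fraction R of the incident amplitude and transmits sqrt(1 - R^2).
# Values are illustrative, not from the paper.
true_R = [0.5, 0.4, 0.3]          # reflection coefficients, front to back

# Forward model: recorded echo amplitudes, shadowed by shallower defects.
recorded, transmitted = [], 1.0
for R in true_R:
    recorded.append(transmitted * R * transmitted)   # two-way shadowing
    transmitted *= (1.0 - R * R) ** 0.5

# Correction: divide each echo by the two-way transmission through all
# shallower reflectors, estimated from the already-corrected echoes.
corrected, trans_est = [], 1.0
for echo in recorded:
    R_est = echo / (trans_est * trans_est)
    corrected.append(R_est)
    trans_est *= (1.0 - R_est * R_est) ** 0.5

print("recovered reflection coefficients:", [round(r, 6) for r in corrected])
```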

  1. Genome engineering in ornamental plants: Current status and future prospects.

    PubMed

    Kishi-Kaboshi, Mitsuko; Aida, Ryutaro; Sasaki, Katsutomo

    2018-03-13

    Ornamental plants, like roses, carnations, and chrysanthemums, are economically important and are sold all over the world. In addition, numerous cut and garden flowers add colors to homes and gardens. Various strategies of plant breeding have been employed to improve the traits of many ornamental plants. These approaches span from conventional techniques, such as crossbreeding and mutation breeding, to genetically modified plants. Recently, genome editing has become available as an efficient means for modifying traits in plant species. Genome editing technology is useful for genetic analysis and is poised to become a common breeding method for ornamental plants. In this review, we summarize the benefits and limitations of conventional breeding techniques and genome editing methods and discuss their future potential to accelerate breeding programs in ornamental plants. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  2. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

  3. Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy

    NASA Astrophysics Data System (ADS)

    Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.

    2007-04-01

    Cosmetic late effects of radiotherapy such as tissue fibrosis are increasingly regarded as being of importance. It is generally considered that the complication probability of a radiotherapy plan is dependent on the dose uniformity, and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed reducing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared to conventional planning methods, as a result of increased dose to areas receiving sub-prescription doses using conventional techniques.
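    A compact sketch of the modelling chain described above: reduce a DVH to a generalized EUD and feed it to the Lyman probit formula. The parameter values below are placeholders chosen for illustration (the small volume-effect parameter implies a fairly serial response, whereas the paper's fits favoured a parallel architecture), so only the qualitative behaviour matters here.

```python
import math

# Lyman NTCP with a generalized-EUD DVH reduction.
# TD50, m, n are illustrative placeholders, not the paper's fitted values.
TD50, m, n = 50.0, 0.3, 0.25      # Gy, slope, volume-effect parameter

def eud(dvh, n):
    """Generalized EUD from (dose_Gy, fractional_volume) DVH bins."""
    return sum(v * d ** (1.0 / n) for d, v in dvh) ** n

def ntcp(dvh):
    t = (eud(dvh, n) - TD50) / (m * TD50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))   # probit model

uniform = [(50.0, 1.0)]               # whole volume uniformly at TD50
hotspot = [(45.0, 0.8), (70.0, 0.2)]  # same mean dose, but with a hot spot
print(f"NTCP uniform: {ntcp(uniform):.3f}, with hotspot: {ntcp(hotspot):.3f}")
```

    With this serial-type parameter the hot spot raises the EUD above the mean dose and so raises the NTCP; a parallel architecture (large volume effect) weights the low-dose volume more and can invert that ranking, which is the behaviour the study reports for extremely homogeneous plans.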

  4. The preparation of liposomes using compressed carbon dioxide: strategies, important considerations and comparison with conventional techniques.

    PubMed

    Bridson, R H; Santos, R C D; Al-Duri, B; McAllister, S M; Robertson, J; Alpar, H O

    2006-06-01

    Numerous strategies are currently available for preparing liposomes, although no single method is ideal in every respect. Two methods for producing liposomes using compressed carbon dioxide in either its liquid or supercritical state were therefore investigated as possible alternatives to the conventional techniques currently used. The first technique used modified compressed carbon dioxide as a solvent system. The way in which changes in pressure, temperature, apparatus geometry and solvent flow rate affected the size distributions of the formulations was examined. In general, liposomes in the nano-size range with an average diameter of 200 nm could be produced, although some micron-sized vesicles were also present. Liposomes were characterized according to their hydrophobic drug-loading capacity and encapsulated aqueous volumes. The latter were found to be higher than in conventional techniques such as high-pressure homogenization. The second method used compressed carbon dioxide as an anti-solvent to promote uniform precipitation of phospholipids from concentrated ethanolic solutions. Finely divided solvent-free phospholipid powders of saturated lipids could be prepared that were subsequently hydrated to produce liposomes with mean volume diameters of around 5 μm.

  5. General framework for dynamic large deformation contact problems based on phantom-node X-FEM

    NASA Astrophysics Data System (ADS)

    Broumand, P.; Khoei, A. R.

    2018-04-01

    This paper presents a general framework for modeling dynamic large deformation contact-impact problems based on the phantom-node extended finite element method. The large sliding penalty contact formulation is presented based on a master-slave approach which is implemented within the phantom-node X-FEM and an explicit central difference scheme is used to model the inertial effects. The method is compared with conventional contact X-FEM; advantages, limitations and implementational aspects are also addressed. Several numerical examples are presented to show the robustness and accuracy of the proposed method.

  6. Hybrid method for determining the parameters of condenser microphones from measured membrane velocities and numerical calculations.

    PubMed

    Barrera-Figueroa, Salvador; Rasmussen, Knud; Jacobsen, Finn

    2009-10-01

    Typically, numerical calculations of the pressure, free-field, and random-incidence response of a condenser microphone are carried out on the basis of an assumed displacement distribution of the diaphragm of the microphone; the conventional assumption is that the displacement follows a Bessel function. This assumption is probably valid at frequencies below the resonance frequency. However, at higher frequencies the movement of the membrane is heavily coupled with the damping of the air film between membrane and backplate and with resonances in the back chamber of the microphone. A solution to this problem is to measure the velocity distribution of the membrane by means of a non-contact method, such as laser vibrometry. The measured velocity distribution can be used together with a numerical formulation such as the boundary element method for estimating the microphone response and other parameters, e.g., the acoustic center. In this work, such a hybrid method is presented and examined. The velocity distributions of a number of condenser microphones have been determined using a laser vibrometer, and these measured velocity distributions have been used for estimating microphone responses and other parameters. The agreement with experimental data is generally good. The method can be used as an alternative for validating the parameters of the microphones determined by classical calibration techniques.

  7. Evaluation of magnetic nanoparticle samples made from biocompatible ferucarbotran by time-correlation magnetic particle imaging reconstruction method

    PubMed Central

    2013-01-01

    Background: Molecular imaging using magnetic nanoparticles (MNPs)—magnetic particle imaging (MPI)—has attracted interest for the early diagnosis of cancer and cardiovascular disease. However, because a steep local magnetic field distribution is required to obtain a defined image, sophisticated hardware is required. Therefore, it is desirable to realize excellent image quality even with low-performance hardware. In this study, the spatial resolution of MPI was evaluated using an image reconstruction method based on the correlation information of the magnetization signal in the time domain, applied to MNP samples made from biocompatible ferucarbotran with adjusted particle diameters. Methods: The magnetization characteristics and particle diameters of four types of MNP samples made from ferucarbotran were evaluated. A numerical analysis based on our proposed method, which calculates the image intensity from correlation information between the magnetization signal generated from MNPs and the system function, was performed, and the obtained image quality was compared with that of the prototype in terms of image resolution and image artifacts. Results: MNP samples obtained by adjusting ferucarbotran showed properties superior to conventional ferucarbotran samples, and numerical analysis showed that the same image quality could be obtained using a gradient magnetic field generator with 0.6 times the performance. However, because the proposed method theoretically introduces some image blurring, an additional algorithm will be required to improve performance. Conclusions: MNP samples obtained by adjusting ferucarbotran showed magnetizing properties superior to conventional ferucarbotran samples, and by using such samples, comparable image quality (spatial resolution) could be obtained with a lower gradient magnetic field intensity. PMID:23734917

  8. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE PAGES

    Gao, Kai; Huang, Lianjie

    2017-08-31

    The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme, and is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.

  9. Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis - Calif. Univ.

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.

    1976-01-01

    An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formulation. Moreover, this method is easily implemented and requires no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive real-time filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
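
    The factorization at the heart of the U-D filter is easy to demonstrate. A minimal Python sketch (the function name is mine, not from the thesis) that computes the unit-upper-triangular U and diagonal D such that P = UDU^T:

```python
import numpy as np

def udu_factor(P):
    """Factor a symmetric positive-definite P as P = U @ D @ U.T,
    with U unit upper triangular and D diagonal (the U-D form)."""
    n = P.shape[0]
    P = P.astype(float).copy()
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
        # subtract the rank-one contribution d_j * u_j u_j^T from the leading block
        for i in range(j):
            for k in range(i + 1):
                P[k, i] -= U[k, j] * d[j] * U[i, j]
    return U, np.diag(d)
```

Because D carries the "squared" scale, propagating U and D behaves numerically like a square-root filter while avoiding explicit square roots.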

  10. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Huang, Lianjie

    The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step sizemore » for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. In conclusion, our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.« less

  11. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception of a product and typically consist of a large number of adjectives. Reducing the dimensional complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers. Accordingly, this study employs a numerical design structure matrix (NDSM), built by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign the link weight of every pair of Kansei adjectives as the value of the corresponding NDSM cell. Genetic algorithms are then used to cluster the Kansei NDSM and find optimum clusters, and the process of the proposed method is presented. The details of the approach are illustrated using the example of an electronic scooter. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
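
    The NDSM construction can be illustrated in a few lines. The adjectives and four-point-scale link weights below are invented for illustration (the paper derives them from consumer evaluations), and the fitness function is a simplified stand-in for the genetic algorithm's clustering objective:

```python
import numpy as np

# Sketch of the numerical DSM (NDSM): cell (i, j) holds the four-point-scale
# link weight (0-3) between Kansei adjectives i and j. All values are invented.
adjectives = ["sporty", "dynamic", "elegant", "refined"]
weights = {("sporty", "dynamic"): 3, ("sporty", "elegant"): 0,
           ("sporty", "refined"): 1, ("dynamic", "elegant"): 1,
           ("dynamic", "refined"): 0, ("elegant", "refined"): 3}

n = len(adjectives)
ndsm = np.zeros((n, n), dtype=int)
for (a, b), w in weights.items():
    i, j = adjectives.index(a), adjectives.index(b)
    ndsm[i, j] = ndsm[j, i] = w          # symmetric link strengths

def cluster_score(clusters):
    """Fitness sketch: total link weight captured inside clusters
    (a GA would search over cluster assignments to maximize this)."""
    return sum(ndsm[i, j] for c in clusters for i in c for j in c if i < j)

# grouping {sporty, dynamic} and {elegant, refined} captures weight 3 + 3
print(cluster_score([[0, 1], [2, 3]]))   # 6
```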

  12. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception of a product and typically consist of a large number of adjectives. Reducing the dimensional complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers. Accordingly, this study employs a numerical design structure matrix (NDSM), built by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign the link weight of every pair of Kansei adjectives as the value of the corresponding NDSM cell. Genetic algorithms are then used to cluster the Kansei NDSM and find optimum clusters, and the process of the proposed method is presented. The details of the approach are illustrated using the example of an electronic scooter. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  13. Numerical simulation of groundwater flow in strongly anisotropic aquifers using multiple-point flux approximation method

    NASA Astrophysics Data System (ADS)

    Lin, S. T.; Liou, T. S.

    2017-12-01

    Numerical simulation of groundwater flow in anisotropic aquifers usually suffers from the lack of accuracy of calculating groundwater flux across grid blocks. Conventional two-point flux approximation (TPFA) can only obtain the flux normal to the grid interface but completely neglects the one parallel to it. Furthermore, the hydraulic gradient in a grid block estimated from TPFA can only poorly represent the hydraulic condition near the intersection of grid blocks. These disadvantages are further exacerbated when the principal axes of hydraulic conductivity, global coordinate system, and grid boundary are not parallel to one another. In order to refine the estimation of the in-grid hydraulic gradient, several multiple-point flux approximation (MPFA) methods have been developed for two-dimensional groundwater flow simulations. For example, the MPFA-O method uses the hydraulic head at the junction node as an auxiliary variable which is then eliminated using the head and flux continuity conditions. In this study, a three-dimensional MPFA method will be developed for numerical simulation of groundwater flow in three-dimensional and strongly anisotropic aquifers. This new MPFA method first discretizes the simulation domain into hexahedrons. Each hexahedron is further decomposed into a certain number of tetrahedrons. The 2D MPFA-O method is then extended to these tetrahedrons, using the unknown head at the intersection of hexahedrons as an auxiliary variable along with the head and flux continuity conditions to solve for the head at the center of each hexahedron. Numerical simulations using this new MPFA method have been successfully compared with those obtained from a modified version of TOUGH2.
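
    For contrast with MPFA, the conventional TPFA flux is simple to write down in 1D: each face flux uses only the two adjacent cell heads and a harmonic-mean transmissibility, which is exactly the "two-point" limitation the abstract describes. A hedged sketch with unit cell spacing and unit cross-section (the function name is mine):

```python
import numpy as np

def tpfa_heads(K, h_left, h_right):
    """Solve steady 1D flow on n unit cells with Dirichlet boundary heads,
    using two-point fluxes q = T*(h_i - h_j) with harmonic-mean T."""
    n = len(K)
    # interior face transmissibilities between cells i and i+1 (harmonic mean)
    T = 2.0 * K[:-1] * K[1:] / (K[:-1] + K[1:])
    # boundary transmissibilities (half-cell distance to the boundary)
    Tl, Tr = 2.0 * K[0], 2.0 * K[-1]
    A = np.zeros((n, n)); b = np.zeros(n)
    for i in range(n - 1):
        A[i, i] += T[i]; A[i + 1, i + 1] += T[i]
        A[i, i + 1] -= T[i]; A[i + 1, i] -= T[i]
    A[0, 0] += Tl; b[0] += Tl * h_left
    A[-1, -1] += Tr; b[-1] += Tr * h_right
    return np.linalg.solve(A, b)
```

In 3D anisotropic media the flux also has components parallel to the face, which is what the multi-point (MPFA) stencils recover.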

  14. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    NASA Astrophysics Data System (ADS)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation, using the Dykstra-Parsons coefficient (VDP) and autocorrelation lengths to generate 2D stochastic permeability fields, from which porosity fields were also generated through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). Many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities in stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory (WENO) scheme, and the monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-resolution scheme results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure, explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method's, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors, and the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. The finite number of lines used in the TBM resulted in a visual banding artifact, unlike the proposed method and USRM. In all, the proposed permeability and porosity field generation, coupled with the numerical simulator developed, will aid in developing efficient mobility control schemes to improve poor volumetric sweep efficiency in porous media.
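
    The SUPERBEE limiter named above, and its use in a flux-limited (MUSCL-type) update, can be sketched on the simplest model problem. This is an illustrative 1D linear-advection scheme, not the paper's reservoir simulator:

```python
import numpy as np

def superbee(r):
    # SUPERBEE limiter: phi(r) = max(0, min(2r, 1), min(r, 2))
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def muscl_step(u, c):
    """One step of flux-limited advection u_t + u_x = 0 (periodic, CFL number c).
    The limiter suppresses the spurious oscillations a second-order scheme
    would otherwise produce at shocks, without first-order smearing."""
    dp = np.roll(u, -1) - u                 # forward difference
    dm = u - np.roll(u, 1)                  # backward difference
    r = np.where(dp != 0.0, dm / np.where(dp == 0.0, 1.0, dp), 0.0)
    flux = u + 0.5 * (1.0 - c) * superbee(r) * dp   # limited face value at i+1/2
    return u - c * (flux - np.roll(flux, 1))
```

Advecting a step profile with this scheme keeps the solution within its initial bounds (the TVD property) while staying sharper than first-order upwinding.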

  15. A numerical homogenization method for heterogeneous, anisotropic elastic media based on multiscale theory

    DOE PAGES

    Gao, Kai; Chung, Eric T.; Gibson, Richard L.; ...

    2015-06-05

    The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method uses multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters are then computed using these basis functions, and the approach applies a numerical discretization similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media show that the homogenization reliably estimates elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity, where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.

  16. The challenges and promises of genetic approaches for ballast water management

    NASA Astrophysics Data System (ADS)

    Rey, Anaïs; Basurko, Oihane C.; Rodríguez-Ezpeleta, Naiara

    2018-03-01

    Ballast water is a main vector of introduction of Harmful Aquatic Organisms and Pathogens, which include Non-Indigenous Species. Numerous and diverse organisms are transferred daily from a donor to a recipient port. Developed to prevent these introductions, the International Convention for the Control and Management of Ships' Ballast Water and Sediments will enter into force in 2017 and requires the monitoring of Harmful Aquatic Organisms and Pathogens. In this review, we highlight the urgent need to develop cost-effective methods to: (1) perform the biological analyses required by the convention; and (2) assess the effectiveness of the two main ballast water management strategies, i.e. ballast water exchange and the use of ballast water treatment systems. We have compiled the biological analyses required by the convention and performed a comprehensive evaluation of the potential and challenges of genetic tools in this context. Following an overview of the studies applying genetic tools to ballast water related research, we present metabarcoding as a relevant approach for early detection of Harmful Aquatic Organisms and Pathogens in general, and for ballast water monitoring and port risk assessment in particular. Nonetheless, before genetic tools are implemented in the context of the ballast water management convention, benchmarked tests against traditional methods should be performed, and standard, reproducible and easy-to-apply protocols should be developed.

  17. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    PubMed

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Delta(vap)H(T(b)) has been established, and numerical values of the relative group parameters have been obtained. The average percent deviation of the Delta(vap)H(T(b)) estimates is 1.16, which shows that the present method significantly improves on conventional group-contribution methods for predicting the enthalpy of vaporization at the normal boiling point.
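
    The group-contribution idea behind the method can be illustrated with a toy calculation: the enthalpy of vaporization is estimated as a weighted sum of group increments, Delta(vap)H = sum_i n_i * g_i. The numerical increments below are invented for illustration and are not the paper's fitted vector-space parameters (which additionally encode each group's position in the molecule):

```python
def vap_enthalpy(groups, params):
    """Group-contribution sum: groups maps group name -> count,
    params maps group name -> increment in kJ/mol (illustrative values only)."""
    return sum(n * params[g] for g, n in groups.items())

# hypothetical increments, NOT the paper's fitted values
params = {"CH3": 2.2, "CH2": 4.9, "OH": 24.0}

# e.g. an alcohol built from CH3 + CH2 + OH
print(vap_enthalpy({"CH3": 1, "CH2": 1, "OH": 1}, params))  # ~31.1 kJ/mol (illustrative)
```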

  18. An efficient method for the computation of Legendre moments.

    PubMed

    Yap, Pew-Thian; Paramesran, Raveendran

    2005-12-01

    Legendre moments are continuous moments; hence, when they are applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and the images reconstructed from these moments have lower error than those of conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time is much longer than that of the proposed method.
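
    The exact-integration idea is straightforward to implement: instead of sampling each Legendre polynomial at pixel centers, evaluate its antiderivative at the pixel edges and difference, so each pixel contributes its exact integral. A Python sketch for an NxM image mapped to [-1, 1] x [-1, 1] (the function name is mine):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def exact_legendre_moment(img, n, m):
    """Order-(n, m) Legendre moment of a piecewise-constant image, with the
    polynomials integrated exactly over each pixel's interval (the error the
    paper removes comes from sampling at pixel centers instead)."""
    N, M = img.shape
    xe = np.linspace(-1.0, 1.0, N + 1)   # pixel edges along each axis
    ye = np.linspace(-1.0, 1.0, M + 1)
    Pn = Legendre.basis(n).integ()       # antiderivative of P_n
    Pm = Legendre.basis(m).integ()
    Ix = np.diff(Pn(xe))                 # exact integral of P_n over each pixel
    Iy = np.diff(Pm(ye))
    norm = (2 * n + 1) * (2 * m + 1) / 4.0
    return norm * (Ix @ img @ Iy)
```

For a constant image the (0, 0) moment is exactly the mean intensity and all higher moments vanish, which the center-sampling approximation only achieves approximately.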

  19. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.

  20. A conservative scheme for electromagnetic simulation of magnetized plasmas with kinetic electrons

    NASA Astrophysics Data System (ADS)

    Bao, J.; Lin, Z.; Lu, Z. X.

    2018-02-01

    A conservative scheme has been formulated and verified for gyrokinetic particle simulations of electromagnetic waves and instabilities in magnetized plasmas. An electron continuity equation derived from the drift kinetic equation is used to time advance the electron density perturbation by using the perturbed mechanical flow calculated from the parallel vector potential, and the parallel vector potential is solved by using the perturbed canonical flow from the perturbed distribution function. In gyrokinetic particle simulations using this new scheme, the shear Alfvén wave dispersion relation in the shearless slab and continuum damping in the sheared cylinder have been recovered. The new scheme overcomes the stringent requirement in the conventional perturbative simulation method that perpendicular grid size needs to be as small as electron collisionless skin depth even for the long wavelength Alfvén waves. The new scheme also avoids the problem in the conventional method that an unphysically large parallel electric field arises due to the inconsistency between electrostatic potential calculated from the perturbed density and vector potential calculated from the perturbed canonical flow. Finally, the gyrokinetic particle simulations of the Alfvén waves in sheared cylinder have superior numerical properties compared with the fluid simulations, which suffer from numerical difficulties associated with singular mode structures.

  1. Improved methods of vibration analysis of pretwisted, airfoil blades

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1984-01-01

    Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.

  2. Contrastive Numerical Investigations on Thermo-Structural Behaviors in Mass Concrete with Various Cements

    PubMed Central

    Zhou, Wei; Feng, Chuqiao; Liu, Xinghong; Liu, Shuhua; Zhang, Chao; Yuan, Wei

    2016-01-01

    This work is a contrastive investigation of numerical simulations to improve the comprehension of thermo-structural coupled phenomena of mass concrete structures during construction. The finite element (FE) analysis of thermo-structural behaviors is used to investigate the applicability of supersulfated cement (SSC) in mass concrete structures. A multi-scale framework based on a homogenization scheme is adopted in the parameter studies to describe the nonlinear concrete behaviors. Based on the experimental data of hydration heat evolution rate and quantity of SSC and fly ash Portland cement, the hydration properties of various cements are studied. Simulations are run on a concrete dam section with a conventional method and a chemo-thermo-mechanical coupled method. The results show that SSC is more suitable for mass concrete structures from the standpoint of temperature control and crack prevention. PMID:28773517

  3. Contrastive Numerical Investigations on Thermo-Structural Behaviors in Mass Concrete with Various Cements.

    PubMed

    Zhou, Wei; Feng, Chuqiao; Liu, Xinghong; Liu, Shuhua; Zhang, Chao; Yuan, Wei

    2016-05-20

    This work is a contrastive investigation of numerical simulations to improve the comprehension of thermo-structural coupled phenomena of mass concrete structures during construction. The finite element (FE) analysis of thermo-structural behaviors is used to investigate the applicability of supersulfated cement (SSC) in mass concrete structures. A multi-scale framework based on a homogenization scheme is adopted in the parameter studies to describe the nonlinear concrete behaviors. Based on the experimental data of hydration heat evolution rate and quantity of SSC and fly ash Portland cement, the hydration properties of various cements are studied. Simulations are run on a concrete dam section with a conventional method and a chemo-thermo-mechanical coupled method. The results show that SSC is more suitable for mass concrete structures from the standpoint of temperature control and crack prevention.

  4. Meshless Method for Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical, and many sophisticated and accurate numerical schemes exist for performing them. The finite difference method (FDM) has been used to solve differential equation systems for decades, and additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these are mesh-based techniques, and mesh generation is an essential preprocessing step that discretizes the computational domain for these conventional methods. However, for complex geometries these mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to simulate even complex problems in an easier manner. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years; its biggest advantage is that it circumvents mesh generation. Many algorithms have now been developed to make this method more accessible, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as the shocks that frequently occur in high-speed compressible flow problems. To address this, this research implements a conservative meshless method and applies it in computational fluid dynamics (CFD). One of the most common types of collocating meshless methods, the RBF-DQ, is used to approximate the spatial derivatives. The difficulty with meshless methods in highly convective cases is that they cannot distinguish the influence of fluid flow from upstream or downstream, and some methodology is needed to make the scheme stable. Therefore, an upwinding scheme similar to the one used in the finite volume method is added to capture steep gradients or shocks. This scheme creates a flexible algorithm within which a wide range of numerical flux schemes, such as those commonly used in the finite volume method, can be employed. In addition, a blended RBF is used to decrease the dissipation ensuing from the use of a low shape parameter. All of these steps are formulated for the Euler equations, and a series of test problems is used to confirm convergence of the algorithm. The present scheme was first employed on several incompressible benchmarks to validate the framework, illustrated by solving a set of incompressible Navier-Stokes problems. Results for the compressible case of flow over a ramp are compared with the exact solution and with solutions from finite volume discretization and the discontinuous Galerkin method, both of which require a mesh. The applicability and robustness of the algorithm for complex problems are thus demonstrated.

  5. Nanocrystal synthesis in microfluidic reactors: where next?

    PubMed

    Phillips, Thomas W; Lignos, Ioannis G; Maceiczyk, Richard M; deMello, Andrew J; deMello, John C

    2014-09-07

    The past decade has seen a steady rise in the use of microfluidic reactors for nanocrystal synthesis, with numerous studies reporting improved reaction control relative to conventional batch chemistry. However, flow synthesis procedures continue to lag behind batch methods in terms of chemical sophistication and the range of accessible materials, with most reports having involved simple one- or two-step chemical procedures directly adapted from proven batch protocols. Here we examine the current status of microscale methods for nanocrystal synthesis, and consider what role microreactors might ultimately play in laboratory-scale research and industrial production.

  6. An analysis method for two-dimensional transonic viscous flow

    NASA Technical Reports Server (NTRS)

    Bavitz, P. C.

    1975-01-01

    A method for the approximate calculation of transonic flow over airfoils, including shock waves and viscous effects, is described. Numerical solutions are obtained by use of a computer program which is discussed in the appendix. The importance of including the boundary layer in the analysis is clearly demonstrated, as well as the need to improve on existing procedures near the trailing edge. Comparisons between calculations and experimental data are presented for both conventional and supercritical airfoils, emphasis being on the surface pressure distribution, and good agreement is indicated.

  7. Slat Noise Predictions Using Higher-Order Finite-Difference Methods on Overset Grids

    NASA Technical Reports Server (NTRS)

    Housman, Jeffrey A.; Kiris, Cetin

    2016-01-01

    Computational aeroacoustic simulations using the structured overset grid approach and higher-order finite difference methods within the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for slat noise predictions. The simulations are part of a collaborative study comparing noise generation mechanisms between a conventional slat and a Krueger leading edge flap. Simulation results are compared with experimental data acquired during an aeroacoustic test in the NASA Langley Quiet Flow Facility. Details of the structured overset grid, numerical discretization, and turbulence model are provided.

  8. An adaptive sparse deconvolution method for distinguishing the overlapping echoes of ultrasonic guided waves for pipeline crack inspection

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang

    2017-03-01

    In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Second, the convolution matrix is constructed from the estimated results. Third, the split augmented Lagrangian shrinkage algorithm (SALSA) is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in guided wave signals.
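
    The l1-norm sparse deconvolution baseline can be sketched with ISTA, a simple iterative soft-thresholding solver for the same objective min ||y - Hx||^2/2 + lam*||x||_1 (SALSA, used in the paper, solves it faster). The pulse and parameters below are invented for illustration:

```python
import numpy as np

def ista_deconv(y, h, lam=0.05, n_iter=500):
    """l1-norm sparse deconvolution by ISTA. H is the convolution matrix whose
    columns are shifted copies of the prototype h; in the paper's ASD method
    the prototype is instead estimated adaptively from a Gaussian echo model."""
    n = len(y)
    H = np.zeros((n, n))
    for j in range(n):                     # column j holds h shifted to sample j
        for k in range(len(h)):
            if j + k < n:
                H[j + k, j] = h[k]
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        g = H.T @ (H @ x - y)              # gradient of the quadratic data term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

Applied to two overlapping-echo spikes, the sparse solution concentrates its largest entries at the true echo arrival samples.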

  9. Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.

    PubMed

    Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori

    2016-07-10

    A Shack-Hartmann wavefront sensor (SHWFS) that consists of a microlens array and an image sensor has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has a finite dynamic range that depends on the diameter of each microlens, and the dynamic range cannot easily be expanded without a decrease in spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched with the help of their approximate displacements, measured with low spatial resolution and large dynamic range. With the proposed method, a wavefront can be correctly measured even if a spot moves beyond its detection area. The adaptive spot search method is realized by using a special microlens array that generates both spots and discriminable patterns. The proposed method enables expanding the dynamic range of an SHWFS with a single shot and short processing time. The performance of the proposed method is compared with that of a conventional SHWFS by optical experiments. Furthermore, the dynamic range of the proposed method is quantitatively evaluated by numerical simulations.
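The baseline processing that the adaptive search extends can be sketched as follows: each lenslet's subaperture is searched for its spot centroid, and the centroid displacement from the subaperture center gives the local wavefront slope. The grid size, spot width, and imposed tilt below are illustrative, not the paper's values.

```python
import numpy as np

def spot_centroids(img, n_sub):
    """Intensity-weighted spot centroid in each of n_sub x n_sub subapertures."""
    sp = img.shape[0] // n_sub
    yy, xx = np.mgrid[0:sp, 0:sp]
    cents = np.zeros((n_sub, n_sub, 2))
    for i in range(n_sub):
        for j in range(n_sub):
            sub = img[i * sp:(i + 1) * sp, j * sp:(j + 1) * sp]
            m = sub.sum()
            cents[i, j] = (np.sum(yy * sub) / m, np.sum(xx * sub) / m)
    return cents

# Synthetic sensor image: one Gaussian spot per subaperture, all displaced
# by a known amount in x (a uniform wavefront tilt).
n_sub, sp, shift = 4, 16, 2.5
yy, xx = np.mgrid[0:sp, 0:sp]
spot = np.exp(-((yy - sp / 2) ** 2 + (xx - sp / 2 - shift) ** 2) / 4.0)
img = np.tile(spot, (n_sub, n_sub))

cents = spot_centroids(img, n_sub)
slopes_x = cents[..., 1] - sp / 2   # displacement in pixels, proportional to slope
```

If the true displacement exceeds half the subaperture pitch, this naive per-subaperture search assigns the spot to the wrong lenslet; that ambiguity is exactly what the adaptive search with discriminable patterns resolves.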

  10. Numerical Simulation of Hydro-mechanical Deep Drawing — A Study on the Effect of Process Parameters on Drawability and Thickness Variation

    NASA Astrophysics Data System (ADS)

    Singh, Swadesh Kumar; Kumar, D. Ravi

    2005-08-01

    Hydro-mechanical deep drawing is a process for producing cup-shaped parts with the assistance of a pressurized fluid. In the present work, numerical simulation of the conventional and counter-pressure deep drawing processes has been carried out using finite-element-based software. Simulation results were analyzed to study the improvement in drawability obtained by using hydro-mechanical processes. The thickness variations in the drawn cups were analyzed, and the effect of counter pressure and oil gap on the thickness distribution was also studied. Numerical simulations were also used for the die design, which combines both drawing and ironing processes in a single operation. This modification of the die provides high drawability, facilitates smooth material flow, gives a more uniform thickness distribution and corrects the shape distortion.

  11. Numerical modelling of bifurcation and localisation in cohesive-frictional materials

    NASA Astrophysics Data System (ADS)

    de Borst, René

    1991-12-01

    Methods are reviewed for analysing highly localised failure and bifurcation modes in discretised mechanical systems as typically arise in numerical simulations of failure in soils, rocks, metals and concrete. By the example of a plane-strain biaxial test it is shown that strain softening and lack of normality in elasto-plastic constitutive equations and the ensuing loss of ellipticity of the governing field equations cause a pathological mesh dependence of numerical solutions for such problems, thus rendering the results effectively meaningless. The need for introduction of higher-order continuum models is emphasised to remedy this shortcoming of the conventional approach. For one such continuum model, namely the unconstrained Cosserat continuum, it is demonstrated that meaningful and convergent solutions (in the sense that a finite width of the localisation zone is computed upon mesh refinement) can be obtained.

  12. Numerical solution of differential equations by artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1995-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANN's) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time-consuming and unpredictable process. A general method is being developed by the author to mate the adaptability of the ANN with the speed and precision of the digital computer. This method has been successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
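As a flavor of the idea, the sketch below solves y' = -y, y(0) = 1 with a trial solution y = 1 + x·N(x), where N is a single-hidden-layer network with fixed random weights; the boundary condition holds by construction, no training examples are needed, and the output weights come from one linear least-squares solve (echoing the "single iteration" training). This is an illustration in the spirit of the abstract, not the author's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Trial solution y(x) = 1 + x * N(x) satisfies y(0) = 1 by construction.
# With fixed random hidden weights, the collocation residual of y' = -y
# is linear in the output weights w, so one least-squares solve suffices.
m, n = 40, 200                        # hidden units, collocation points
a, b = rng.normal(size=m), rng.normal(size=m)
x = np.linspace(0.0, 1.0, n)[:, None]

h = np.tanh(a * x + b)                # hidden activations, shape (n, m)
dh = (1.0 - h ** 2) * a               # their x-derivatives
# residual(w) = (N + x N') + (1 + x N),  N = h @ w,  N' = dh @ w
A = h + x * dh + x * h
w, *_ = np.linalg.lstsq(A, -np.ones(n), rcond=None)

y = 1.0 + x[:, 0] * (h @ w)
err = np.max(np.abs(y - np.exp(-x[:, 0])))
```

Because the residual is linear in the output weights, no iterative training loop is needed, and the recovered y tracks exp(-x) closely on the collocation interval.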

  13. Health care using high-bandwidth communication to overcome distance and time barriers for the Department of Defense

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Freedman, Matthew T.; Gelish, Anthony; de Treville, Robert E.; Sheehy, Monet R.; Hansen, Mark; Hill, Mac; Zacharia, Elisabeth; Sullivan, Michael J.; Sebera, C. Wayne

    1993-01-01

    An image management and communications (IMAC) network, also known as a picture archiving and communication system (PACS), consists of (1) digital image acquisition, (2) image review stations, (3) image storage devices, (4) image reading workstations, and (5) communication capability. When these subsystems are integrated over a high-speed communication technology, numerous possibilities arise for improving the timeliness and quality of diagnostic services within a hospital or at remote clinical sites. A teleradiology system uses essentially the same hardware configuration together with a long-distance communication capability. Functional characteristics of the components are highlighted. Many medical imaging systems are already in digital form; these digital images constitute approximately 30% of the total volume of images produced in a radiology department. The remaining 70% of images include conventional x-ray films of the chest, skeleton, abdomen, and GI tract. Unless one develops a method of handling these conventional film images, a global improvement in productivity in image management and radiology service throughout a hospital cannot be achieved. Currently, there are two methods of producing digital information representing these conventional analog images for IMAC: film digitizers that scan the conventional films, and computed radiography (CR), which captures x-ray images using a storage phosphor plate that is subsequently scanned by a laser beam.

  14. The potential impact of scatterometry on oceanography - A wave forecasting case

    NASA Technical Reports Server (NTRS)

    Cane, M. A.; Cardone, V. J.

    1981-01-01

    A series of observing system simulation experiments have been performed in order to assess the potential impact of marine surface wind data on numerical weather prediction. In addition to conventional data, the experiments simulated the time-continuous assimilation of remotely sensed marine surface wind or temperature sounding data. The wind data were fabricated directly for model grid points intercepted by a Seasat-1 scatterometer swath and were assimilated into the lowest active level (945 mb) of the model using a localized successive correction method. It is shown that Seasat wind data can greatly improve numerical weather forecasts due to better definition of specific features. The case of the QE II storm is examined.

  15. Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures

    NASA Astrophysics Data System (ADS)

    Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan

    2016-10-01

    We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by iterative interaction pictures (multiple Schrödinger dynamics). As an application example, we use the deduced iterative-picture-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative picture method are physically feasible and that the shortcut scheme performs much better than one using conventional adiabatic passage techniques. The influences of various decoherence processes are also discussed by numerical simulation, and the results show that the scheme is fast and robust against decoherence and operational imperfection.

  16. Time-reversal transcranial ultrasound beam focusing using a k-space method

    PubMed Central

    Jing, Yun; Meral, F. Can; Clement, Greg. T.

    2012-01-01

    This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The data received outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, under the assumption of quasi-plane transmission, where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it is shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
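The homogeneous-medium core of a k-space scheme can be sketched in one dimension: a temporal-correction factor turns the usual second-order time stepping into a dispersion-free update, so a pulse propagates at exactly the sound speed even with a large time step. Grid and pulse parameters are illustrative; the paper's method additionally handles heterogeneous skull properties in three dimensions.

```python
import numpy as np

n, dx, c, dt = 256, 1.0, 1.0, 0.5
x = np.arange(n) * dx
k = 2 * np.pi * np.fft.fftfreq(n, dx)

# k-space operator: -4 sin^2(c k dt / 2) replaces the usual -(c k dt)^2
# Laplacian term, which makes the second-order time stepping exact
# (dispersion-free) in a homogeneous medium.
op = -4.0 * np.sin(c * k * dt / 2.0) ** 2

p_hat_prev = np.fft.fft(np.exp(-((x - n * dx / 2) ** 2) / 20.0))  # pulse at rest
p_hat = p_hat_prev * np.cos(c * k * dt)   # exact first step for zero velocity

steps = 200
for _ in range(steps - 1):
    p_hat, p_hat_prev = 2.0 * p_hat + op * p_hat - p_hat_prev, p_hat

p = np.real(np.fft.ifft(p_hat))
```

After 200 steps of size dt = 0.5, the initial pulse has split into two half-amplitude pulses centered at x0 ± c·t, with no numerical dispersion; a second-order finite-difference stencil at this resolution would visibly distort the pulse.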

  17. Multi-scale signed envelope inversion

    NASA Astrophysics Data System (ADS)

    Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang

    2018-06-01

    Envelope inversion based on the modulation signal model was proposed to reconstruct the large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from the envelope data. However, amplitude demodulation alone discards the polarity information of the wavefield, increasing the possibility that the inversion yields multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In the numerical tests, we applied the new inversion method to a salt layer model and the SEG/EAGE 2-D salt model using a low-cut source (frequency components below 4 Hz were truncated). The results demonstrate the effectiveness of the method.
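The difference between amplitude-only and signed demodulation can be illustrated with the analytic signal (Hilbert transform): for a trace m(t)·cos(2πf_c t), the envelope |a(t)| loses the sign of m, while demodulating the analytic signal against the carrier keeps it. This is a generic sketch with an assumed known carrier, not the paper's exact demodulation operator.

```python
import numpy as np

def analytic_signal(s):
    """Analytic signal via a one-sided FFT spectrum (discrete Hilbert transform)."""
    n = len(s)
    spec = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

fc = 40.0
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
m = np.sin(2 * np.pi * 2 * t)             # low-frequency, sign-changing modulation
s = m * np.cos(2 * np.pi * fc * t)        # modulated trace

a = analytic_signal(s)
envelope = np.abs(a)                                       # amplitude only: |m(t)|
signed = np.real(a * np.exp(-1j * 2 * np.pi * fc * t))     # keeps polarity: ~ m(t)
```

The signed quantity recovers the low-frequency modulation including its sign, which is the information that a plain envelope misfit cannot see.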

  18. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    NASA Astrophysics Data System (ADS)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-05-01

    Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.

  19. Sound Power Estimation for Beam and Plate Structures Using Polyvinylidene Fluoride Films as Sensors

    PubMed Central

    Mao, Qibo; Zhong, Haibing

    2017-01-01

    The theory for calculation and/or measurement of sound power based on the classical velocity-based radiation mode (V-mode) approach is well established for planar structures. However, current V-mode theory is limited in scope in that it can only be applied with conventional motion sensors (i.e., accelerometers). In this study, in order to estimate the sound power of vibrating beam and plate structures using polyvinylidene fluoride (PVDF) films as sensors, a PVDF-based radiation mode (C-mode) approach is introduced to determine the sound power radiated from the output signals of PVDF films on the vibrating structure. The proposed method is a hybrid of vibration measurement and numerical calculation of C-modes. The C-mode approach has the following advantages: (1) compared to conventional motion sensors, PVDF films are lightweight, flexible, and low-cost; (2) there is no need for a special measuring environment, since the proposed method does not require the measurement of sound fields; and (3) in the low-frequency range (typically dimensionless frequency kl < 4), the radiation efficiencies of the C-modes fall off very rapidly with increasing mode order while the shapes of the C-modes remain almost unchanged, so the computational load can be significantly reduced because only the first few dominant C-modes are involved. Numerical simulations and experimental investigations were carried out to verify the accuracy and efficiency of the proposed method. PMID:28509870

  20. Numerical stabilization of entanglement computation in auxiliary-field quantum Monte Carlo simulations of interacting many-fermion systems.

    PubMed

    Broecker, Peter; Trebst, Simon

    2016-12-01

    In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.

  1. Solving the hypersingular boundary integral equation in three-dimensional acoustics using a regularization relationship.

    PubMed

    Yan, Zai You; Hung, Kin Chew; Zheng, Hui

    2003-05-01

    Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of a discretized operator matrix, evaluation of the double surface integrals is reduced to calculating the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.

  2. Numerical modeling of the 2017 active seismic infrasound balloon experiment

    NASA Astrophysics Data System (ADS)

    Brissaud, Q.; Komjathy, A.; Garcia, R.; Cutts, J. A.; Pauken, M.; Krishnamoorthy, S.; Mimoun, D.; Jackson, J. M.; Lai, V. H.; Kedar, S.; Levillain, E.

    2017-12-01

    We have developed a numerical tool to propagate acoustic and gravity waves in a coupled solid-fluid medium with topography. It is a hybrid between a continuous Galerkin and a discontinuous Galerkin method that accounts for non-linear atmospheric waves, visco-elastic waves and topography. We apply this method to a recent experiment that took place in the Nevada desert to study acoustic waves from seismic events. This experiment, developed by JPL and its partners, aims to demonstrate the viability of a new approach to probing seismic-induced acoustic waves from a balloon platform. To the best of our knowledge, this could be the only way for planetary missions to perform tomography when surface conditions are too challenging, with high pressures and temperatures (e.g. Venus), for the conventional electronics routinely employed on Earth. To fully demonstrate the effectiveness of such a technique, one should also be able to reconstruct the observed signals from numerical modeling. To model the seismic hammer experiment and the subsequent acoustic wave propagation, we rely on a subsurface seismic model constructed from the seismometer measurements made during the 2017 Nevada experiment and an atmospheric model built from meteorological data. The source is treated as a Gaussian point source located at the surface. Comparison between the numerical modeling and the experimental data could help future mission designs and provide great insight into a planet's interior structure.

  3. A new approach for the calculation of response spectral density of a linear stationary random multidegree of freedom system

    NASA Astrophysics Data System (ADS)

    Sharan, A. M.; Sankar, S.; Sankar, T. S.

    1982-08-01

    A new approach for the calculation of response spectral density for a linear stationary random multidegree of freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained by using this new approach contains the spectral densities and the cross-spectral densities of the system generalized displacements and velocities. The new method requires significantly less computation time as compared to the conventional method for calculating response spectral densities. Two numerical examples are presented to compare quantitatively the computation time.
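For reference, the conventional frequency-domain computation that the paper accelerates forms S_xx(ω) = H(ω) S_ff H(ω)^H at every frequency, with H the receptance matrix of the system; the 2-DOF system and excitation below are illustrative.

```python
import numpy as np

# 2-DOF system  M x'' + C x' + K x = f(t),  f stationary random excitation.
M = np.diag([1.0, 2.0])
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])
C = 0.01 * K                         # stiffness-proportional damping
Sff = np.eye(2)                      # white-noise input spectral density matrix

def response_psd(w):
    """Displacement spectral density matrix S_xx(w) = H(w) Sff H(w)^H."""
    H = np.linalg.inv(-w ** 2 * M + 1j * w * C + K)
    return H @ Sff @ H.conj().T

freqs = np.linspace(0.1, 30.0, 500)
S11 = np.array([response_psd(w)[0, 0].real for w in freqs])
wn = np.sqrt(np.abs(np.linalg.eigvals(np.linalg.inv(M) @ K)))  # natural freqs
```

Each frequency point requires a complex matrix inversion, which is what makes the conventional approach costly when many frequencies or many degrees of freedom are involved; the auxiliary-variable formulation in the paper is aimed at reducing exactly this cost.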

  4. Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.

    PubMed

    Tanaka, Takashi

    2017-04-15

    A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.

  5. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    PubMed

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.

  6. Methods for combining payload parameter variations with input environment. [calculating design limit loads compatible with probabilistic structural design criteria

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1976-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
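The idea can be sketched numerically: simulate the per-mission maximum load, fit a Gumbel (extreme-value) distribution by moments, and read the design limit load off as a chosen percentile. The load model (i.i.d. Gaussian load cycles) and the 99th-percentile choice are illustrative assumptions, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "mission" load history is reduced to its largest load; that
# per-mission maximum is the random limit load, whose distribution
# extreme-value theory approximates by a Gumbel law.
missions = rng.standard_normal((5000, 1000)).max(axis=1)

# Design limit load: a chosen percentile (here the 99th) of the
# limit-load distribution, taken empirically...
design_limit_load = np.quantile(missions, 0.99)

# ...or from a moment-fitted Gumbel distribution.
beta = np.sqrt(6.0) * missions.std() / np.pi     # Gumbel scale
mu = missions.mean() - 0.5772 * beta             # Gumbel location
gumbel_99 = mu - beta * np.log(-np.log(0.99))
```

The moment-fitted Gumbel percentile and the empirical percentile agree closely here, which is the practical payoff: the fitted distribution lets one extrapolate to percentiles beyond the simulated sample.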

  7. Real-Space Analysis of Scanning Tunneling Microscopy Topography Datasets Using Sparse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Miyama, Masamichi J.; Hukushima, Koji

    2018-04-01

    A sparse modeling approach is proposed for analyzing scanning tunneling microscopy topography data, which contain numerous peaks originating from the electron density of surface atoms and/or impurities. The method, based on the relevance vector machine with L1 regularization and k-means clustering, enables separation of the peaks and peak-center positioning with accuracy beyond the resolution of the measurement grid. The validity and efficiency of the proposed method are demonstrated using synthetic data in comparison with the conventional least-squares method. An application of the proposed method to experimental data from a metallic-oxide thin film clearly indicates the existence of defects and the corresponding local lattice distortions.

  8. Equivalent linearization for fatigue life estimates of a nonlinear structure

    NASA Technical Reports Server (NTRS)

    Miles, R. N.

    1989-01-01

    An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other, more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
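Conventional (Gaussian) equivalent linearization can be sketched on a Duffing oscillator, the usual single-mode surrogate for a nonlinear plate: the cubic stiffness is replaced by an equivalent linear one chosen self-consistently, and the result can be checked against the exact stationary variance from the Fokker-Planck solution. All coefficients are illustrative.

```python
import numpy as np

# Duffing oscillator  x'' + c x' + k (x + eps x^3) = w(t),
# w(t) Gaussian white noise with two-sided spectral density S0.
c, k, eps, S0 = 0.2, 1.0, 0.5, 0.05

# Gaussian equivalent linearization: replace k(x + eps x^3) by k_eq x,
# k_eq = k (1 + 3 eps sigma^2), sigma^2 = pi S0 / (c k_eq) the linear
# response variance; iterate to the self-consistent fixed point.
k_eq = k
for _ in range(200):
    sigma2 = np.pi * S0 / (c * k_eq)
    k_eq = k * (1.0 + 3.0 * eps * sigma2)
sigma2 = np.pi * S0 / (c * k_eq)

# Exact stationary variance from the Fokker-Planck density
# p(x) ~ exp(-c U(x) / (pi S0)),  U = k (x^2/2 + eps x^4/4).
x = np.linspace(-6.0, 6.0, 4001)
U = k * (x ** 2 / 2.0 + eps * x ** 4 / 4.0)
p = np.exp(-c * U / (np.pi * S0))
p /= p.sum()
sigma2_exact = np.sum(x ** 2 * p)
```

For these coefficients the linearized variance lands within a few percent of the exact one, consistent with the close agreement in fatigue life reported in the abstract; the linearization does, however, force a Gaussian response, which is the assumption the modified method above relaxes.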

  9. Robust phase retrieval of complex-valued object in phase modulation by hybrid Wirtinger flow method

    NASA Astrophysics Data System (ADS)

    Wei, Zhun; Chen, Wen; Yin, Tiantian; Chen, Xudong

    2017-09-01

    This paper presents a robust iterative algorithm, known as hybrid Wirtinger flow (HWF), for phase retrieval (PR) of complex objects from noisy diffraction intensities. Numerical simulations indicate that the HWF method consistently outperforms conventional PR methods in terms of both accuracy and convergence rate in multiple phase modulations. The proposed algorithm is also more robust to low oversampling ratios, loose constraints, and noisy environments. Furthermore, compared with traditional Wirtinger flow, sample complexity is largely reduced. It is expected that the proposed HWF method will find applications in the rapidly growing coherent diffractive imaging field for high-quality image reconstruction with multiple modulations, as well as other disciplines where PR is needed.

  10. Boundary condition at a two-phase interface in the lattice Boltzmann method for the convection-diffusion equation.

    PubMed

    Yoshida, Hiroaki; Kobayashi, Takayuki; Hayashi, Hidemitsu; Kinjo, Tomoyuki; Washizu, Hitoshi; Fukuzawa, Kenji

    2014-07-01

    A boundary scheme in the lattice Boltzmann method (LBM) for the convection-diffusion equation, which correctly realizes the internal boundary condition at the interface between two phases with different transport properties, is presented. The difficulty in satisfying the continuity of flux at the interface in a transient analysis, which is inherent in the conventional LBM, is overcome by modifying the collision operator and the streaming process of the LBM. An asymptotic analysis of the scheme is carried out in order to clarify the role played by the adjustable parameters involved in the scheme. As a result, the internal boundary condition is shown to be satisfied with second-order accuracy with respect to the lattice interval, if we assign appropriate values to the adjustable parameters. In addition, two specific problems are numerically analyzed, and comparison with the analytical solutions of the problems numerically validates the proposed scheme.
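A minimal single-phase version of an LBM for the convection-diffusion equation (here pure diffusion on a D1Q2 lattice) shows the collision-streaming structure that the paper's interface scheme modifies; the relaxation time and initial condition are illustrative, and the two-phase interface treatment itself is not reproduced.

```python
import numpy as np

# D1Q2 lattice Boltzmann sketch for 1-D diffusion (dx = dt = 1).
n, tau = 200, 0.8
D = tau - 0.5                                    # D1Q2 diffusivity

x = np.arange(n, dtype=float)
C0 = np.exp(-((x - n / 2) ** 2) / (2 * 25.0))    # Gaussian, variance 25
f = np.array([C0 / 2, C0 / 2])                   # equilibrium init, weights 1/2

steps = 400
for _ in range(steps):
    C = f[0] + f[1]
    f += (np.array([C / 2, C / 2]) - f) / tau    # BGK collision toward feq = C/2
    f[0] = np.roll(f[0], 1)                      # stream right-moving population
    f[1] = np.roll(f[1], -1)                     # stream left-moving population

C = f[0] + f[1]
variance = np.sum(C * (x - n / 2) ** 2) / np.sum(C)  # theory: 25 + 2*D*steps
```

The concentration spreads with the variance growth 2Dt predicted by diffusion theory while total mass is conserved exactly; at a two-phase interface with different D on each side, this plain collision-streaming pair fails to keep the flux continuous, which is the defect the paper's modified scheme corrects.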

  11. In-line phase contrast micro-CT reconstruction for biomedical specimens.

    PubMed

    Fu, Jian; Tan, Renbo

    2014-01-01

    X-ray phase contrast micro computed tomography (micro-CT) can non-destructively provide the internal structural information of soft tissues and low-atomic-number materials. It has become an invaluable analysis tool for biomedical specimens. Here an in-line phase contrast micro-CT reconstruction technique is reported, which consists of a projection extraction method and the conventional filtered back-projection (FBP) reconstruction algorithm. The projection extraction is implemented by applying the Fourier transform to the forward projections of in-line phase contrast micro-CT. This work comprises a numerical study of the method and its experimental verification using a biomedical specimen dataset measured at an X-ray tube source micro-CT setup. The numerical and experimental results demonstrate that the presented technique can improve the imaging contrast of biomedical specimens. It will be of interest for a wide range of in-line phase contrast micro-CT applications in medicine and biology.

  12. Shack-Hartmann reflective micro profilometer

    NASA Astrophysics Data System (ADS)

    Gong, Hai; Soloviev, Oleg; Verhaegen, Michel; Vdovin, Gleb

    2018-01-01

    We present a quantitative phase imaging microscope based on a Shack-Hartmann sensor that directly reconstructs the optical path difference (OPD) in reflective mode. Compared with holographic or interferometric methods, the SH technique needs no reference beam in the setup, which simplifies the system. With a preregistered reference, the OPD image can be reconstructed from a single shot. The method also has a rather relaxed requirement on illumination coherence, so a cheap light source such as an LED is feasible in the setup. In our previous research, we verified that a conventional transmissive microscope can be transformed into an optical-path-difference microscope by using a Shack-Hartmann wavefront sensor under incoherent illumination. The key condition is that the numerical aperture of the illumination should be smaller than the numerical aperture of the imaging lens. This approach is also applicable to the characterization of reflective and slightly scattering surfaces.

  13. Time-domain full waveform inversion using instantaneous phase information with damping

    NASA Astrophysics Data System (ADS)

    Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun

    2018-06-01

    In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. Instantaneous phase information has great potential for overcoming the local-minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which arises in the numerical calculation, prevents its direct application. To avoid phase wrapping, we choose to use the exponential phase combined with a damping method, which gives an instantaneous phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase and derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared in numerical examples, which indicate that, when low-frequency information is absent from the seismic data, our method is an effective and efficient approach for constructing an initial model for full waveform inversion.
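The phase-wrapping problem and the exponential-phase workaround can be illustrated with the analytic signal: the wrapped instantaneous phase jumps by 2π, while the complex exponential e^{iφ} evolves smoothly and is therefore a better-behaved quantity to build a misfit on. The trace below is synthetic, and the paper's damped multi-stage misfit is not reproduced.

```python
import numpy as np

def analytic(s):
    """Analytic signal of a real trace via a one-sided FFT spectrum."""
    n = len(s)
    spec = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

t = np.linspace(0.0, 1.0, 2000, endpoint=False)
trace = np.exp(-((t - 0.5) ** 2) / 0.02) * np.sin(2 * np.pi * 30 * t)

a = analytic(trace)
wrapped = np.angle(a)           # instantaneous phase, wrapped to (-pi, pi]
exp_phase = a / np.abs(a)       # exponential phase e^{i phi}: no wrapping
```

A sample-to-sample difference of the wrapped phase shows spurious jumps of nearly 2π, whereas the exponential phase changes only by the small true phase increment; this is what makes the exponential form usable in a gradient-based misfit.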

  14. Methodology and Method and Apparatus for Signaling With Capacity Optimized Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2014-01-01

    Communication systems are described that use geometrically shaped constellations that have increased capacity compared to conventional constellations operating within a similar SNR band. In several embodiments, the geometrically shaped constellation is optimized based upon a capacity measure such as parallel decoding capacity or joint capacity. In many embodiments, a capacity-optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication system uses adaptive rate encoding, and the location of points within the geometrically shaped constellation changes as the code rate changes.

  15. Reinforcement learning for resource allocation in LEO satellite networks.

    PubMed

    Usaha, Wipawee; Barria, Javier A

    2007-06-01

In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms improve on the DP solution in terms of storage requirements, computational complexity, and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.
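The temporal-difference update underlying both proposed RL methods can be illustrated on a toy problem. This is a generic tabular TD(0) sketch on an assumed two-state chain, not the paper's actor-critic formulation for satellite routing:

```python
def td0_chain(episodes=5000, alpha=0.05, gamma=1.0):
    """Tabular TD(0) on a deterministic two-state chain
    s0 -(r=0)-> s1 -(r=1)-> terminal; true values V(s0) = V(s1) = 1."""
    V = [0.0, 0.0]
    for _ in range(episodes):
        # TD(0) update: V(s) += alpha * (r + gamma * V(s') - V(s))
        V[0] += alpha * (0.0 + gamma * V[1] - V[0])   # s0 -> s1, reward 0
        V[1] += alpha * (1.0 + gamma * 0.0 - V[1])    # s1 -> terminal, reward 1
    return V

V = td0_chain()
```

The appeal over DP, as in the abstract, is that such updates need only sampled transitions, not the full transition model.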

  16. Value-Engineering Review for Numerical Control

    NASA Technical Reports Server (NTRS)

    Warner, J. L.

    1984-01-01

When selecting parts for conversion from conventional machining to numerical control, a value-engineering review is performed for every part to identify potential changes to the part design that would result in increased production efficiency.

  17. Numerical simulation of drop impact on a thin film: the origin of the droplets in the splashing regime

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Che, Zhizhao; Ismail, Renad; Pain, Chris; Matar, Omar

    2015-11-01

Drop impact on a liquid layer is a feature of many multiphase flow problems and has been the subject of numerous theoretical, experimental and numerical investigations. In the splashing regime, however, little attention has been focused on the origin of the droplets that are formed during the splashing process. The objective of this study is to investigate this issue numerically in order to improve our understanding of the mechanisms underlying splashing as a function of the relevant system parameters. In contrast to the conventional two-phase flow approach commonly used to simulate splashing, here a three-dimensional, three-phase flow model with adaptive, unstructured meshing is employed to study the liquid (droplet) - gas (surrounding air) - liquid (thin film) system. In the cases presented, both liquid phases have the same fluid properties, although our method can clearly be used in the more general case of two different liquids. Numerical results of droplet impact on a thin film are analysed to determine whether the droplets formed after impact originate from the mother drop, the thin film, or both. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  18. The terminal area simulation system. Volume 1: Theoretical formulation

    NASA Technical Reports Server (NTRS)

    Proctor, F. H.

    1987-01-01

A three-dimensional numerical cloud model was developed for the general purpose of studying convective phenomena. The model utilizes a time splitting integration procedure in the numerical solution of the compressible nonhydrostatic primitive equations. Turbulence closure is achieved by a conventional first-order diagnostic approximation. Open lateral boundaries are incorporated which minimize wave reflection and which do not induce domain-wide mass trends. Microphysical processes are governed by prognostic equations for potential temperature, water vapor, cloud droplets, ice crystals, rain, snow, and hail. Microphysical interactions are computed by numerous Orville-type parameterizations. A diagnostic surface boundary layer is parameterized assuming Monin-Obukhov similarity theory. The governing equation set is approximated on a staggered three-dimensional grid with quadratic-conservative central space differencing. Time differencing is approximated by the second-order Adams-Bashforth method. The vertical grid spacing may be either linear or stretched. The model domain may translate along with a convective cell, even at variable speeds.
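The second-order Adams-Bashforth time differencing mentioned above can be sketched for a scalar ODE dy/dt = f(y); the forward-Euler start-up step is an assumption for illustration (the model itself integrates the full primitive equations):

```python
def adams_bashforth2(f, y0, dt, nsteps):
    """Second-order Adams-Bashforth: y_{n+1} = y_n + dt*(3/2 f_n - 1/2 f_{n-1}).
    The first step is bootstrapped with forward Euler."""
    y = y0
    f_prev = f(y)
    y = y + dt * f_prev                      # Euler start-up step
    for _ in range(nsteps - 1):
        f_curr = f(y)
        y = y + dt * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
    return y

# Exponential decay dy/dt = -y over t in [0, 1]; exact answer is e^{-1}
y_end = adams_bashforth2(lambda y: -y, 1.0, 1e-3, 1000)
```

The scheme needs only one function evaluation per step, which is part of its appeal for large atmospheric models.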

  19. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on the initial conditions and risk falling into a local solution. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that the performance of the method is sufficient for practical use.

  20. Biointervention makes leather processing greener: an integrated cleansing and tanning system.

    PubMed

    Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari

    2003-06-01

The do-undo methods adopted in conventional leather processing generate huge amounts of pollutants. In other words, conventional methods employed in leather processing subject the skin/hide to wide variations in pH. Pretanning and tanning processes alone contribute more than 90% of the total pollution from leather processing. Included in this is a great deal of solid waste, such as lime and chrome sludge. In the approach described here, the hair and flesh removal as well as fiber opening have been achieved using biocatalysts at pH 8.0 for cow hides. This was followed by a pickle-free chrome tanning, which does not require a basification step. Hence, this tanning technique involves primarily three steps, namely, dehairing, fiber opening, and tanning. It has been found that the extent of hair removal, opening up of fiber bundles, and penetration and distribution of chromium are comparable to those produced by traditional methods. This has been substantiated through scanning electron microscopy, stratigraphic chrome distribution analysis, and softness measurements. Performance of the leathers is shown to be on par with conventionally processed leathers through physical and hand evaluation. Importantly, softness of the leathers is numerically proven to be comparable with that of the control. The process also demonstrates reduction in chemical oxygen demand load by 80%, total solids load by 85%, and chromium load by 80% as compared to the conventional process, thereby leading toward zero discharge. The input-output audit shows that the biocatalytic three-step tanning process employs a very low amount of chemicals, thereby reducing the discharge by 90% as compared to the conventional multistep processing. Furthermore, it is also demonstrated that the process is technoeconomically viable.

  1. Enhanced kidney stone fragmentation by short delay tandem conventional and modified lithotriptor shock waves: a numerical analysis.

    PubMed

    Tham, Leung-Mun; Lee, Heow Pueh; Lu, Chun

    2007-07-01

We evaluated the effectiveness of modified lithotriptor shock waves using computer models. Finite element models were used to simulate the propagation of lithotriptor shock waves in human renal calculi in vivo. Kidney stones were assumed to be spherical, homogeneous, isotropic and linearly elastic, and immersed in a continuum fluid. Single and tandem shock wave pulses, modified to intensify the collapse of cavitation bubbles near the stone surface to increase fragmentation efficiency and to suppress the expansion of intraluminal bubbles for decreased vascular injury, were analyzed. The effectiveness of the modified shock waves was assessed by comparing the states of loading induced in the renal calculi by these shock waves to those produced by conventional shock waves. Our numerical simulations revealed that modified shock waves produced marginally lower stresses in spherical renal calculi than conventional shock waves. Tandem pulses of conventional or modified shock waves produced peak stresses in both the front and back halves of the renal calculi, whereas single shock wave pulses generated significant peak stresses in only the back halves. Our numerical simulations suggest that, for direct stress wave induced fragmentation, modified shock waves should be as effective as conventional shock waves for fragmenting kidney stones. Also, with a small interval of 20 microseconds between the pulses, tandem pulse lithotripsy using modified or conventional shock waves could be considerably more effective than single pulse lithotripsy for fragmenting kidney stones.

  2. Parameter analysis of a photonic crystal fiber with raised-core index profile based on effective index method

    NASA Astrophysics Data System (ADS)

    Seraji, Faramarz E.; Rashidi, Mahnaz; Khasheie, Vajieh

    2006-08-01

Photonic crystal fibers (PCFs) with a stepped raised-core profile and one layer of equally spaced holes in the cladding are analyzed. Using the effective index method and considering a raised step refractive index difference between the index of the core and the effective index of the cladding, we improve characteristic parameters such as the numerical aperture and the V-parameter, and reduce the bending loss to about one tenth of that of a conventional PCF. Implementing such a structure in PCFs may be one step toward achieving low-loss PCFs for communication applications.
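Once the effective index method supplies a cladding effective index, the characteristic parameters mentioned follow from the standard step-index formulas. The index and geometry values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def na_and_v(n_core, n_eff_clad, core_radius_um, wavelength_um):
    """Numerical aperture and normalized frequency (V-parameter) of the
    step-index equivalent fiber used in the effective index method."""
    na = np.sqrt(n_core**2 - n_eff_clad**2)
    v = 2 * np.pi * core_radius_um / wavelength_um * na
    return na, v

# Illustrative values: silica core with a slightly raised index against a
# hole-lowered effective cladding index, at the 1.55 um telecom wavelength
na, v = na_and_v(n_core=1.46, n_eff_clad=1.44,
                 core_radius_um=2.0, wavelength_um=1.55)
```

Raising the core index (or lowering the effective cladding index through the hole geometry) increases both NA and V, which is the lever the paper's raised-core profile exploits.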

  3. SNR Improvement of QEPAS System by Preamplifier Circuit Optimization and Frequency Locked Technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Zongliang; Wang, Fupeng; Jiang, Fengting; Wang, Mengyao

    2018-06-01

Preamplifier circuit noise is of great importance in a quartz-enhanced photoacoustic spectroscopy (QEPAS) system. In this paper, several noise sources are evaluated and discussed in detail. Based on the noise characteristics, a corresponding noise reduction method is proposed. In addition, a frequency-locked technique is introduced to further reduce the QEPAS system noise and improve the signal, achieving better performance than the conventional frequency scan method. As a result, the signal-to-noise ratio (SNR) could be increased 14-fold by utilizing the frequency-locked and numerical averaging techniques in the QEPAS system for water vapor detection.

  4. Finite-time synchronization control of a class of memristor-based recurrent neural networks.

    PubMed

    Jiang, Minghui; Wang, Shuangtao; Mei, Jun; Shen, Yanjun

    2015-03-01

This paper presents a global and local finite-time synchronization control law for memristor neural networks. By utilizing the drive-response concept, differential inclusions theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and the corresponding slave memristor-based neural network with the designed controller. In comparison with the existing results, the proposed stability conditions are new, and the obtained results extend some previous works on conventional recurrent neural networks. Two numerical examples are provided to illustrate the effectiveness of the design method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    NASA Astrophysics Data System (ADS)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions, compatible with the QBFs, for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow reducing the error of computing the exchange field by increasing the mesh density, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.

  6. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable: if the gradients are treated as additional unknowns, the size of the matrix equation increases greatly, and if numerical differentiation is used to approximate the gradients, numerical error is introduced into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple to implement numerically. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flow, and the sound propagation over a hump of irregular shape in uniform flow. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

  7. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud lies precisely at the coordinates of one of the depth layers, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
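The per-layer FFT diffraction step can be sketched with the angular spectrum method, a common choice for layer-based CGH. The grid size, wavelength, pixel pitch, and propagation distance below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular spectrum
    method: one FFT, a transfer-function multiply, one inverse FFT."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent part clamped
    H = np.exp(1j * kz * z)                          # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# One depth layer of a gridded point cloud: a single on-axis point source
layer = np.zeros((256, 256), dtype=complex)
layer[128, 128] = 1.0
hologram_layer = angular_spectrum(layer, wavelength=633e-9, dx=8e-6, z=0.05)
```

Summing the propagated fields of all depth layers then yields the hologram plane field, one FFT pair per grid rather than one computation per point.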

  8. NOTE: Solving the ECG forward problem by means of a meshless finite element method

    NASA Astrophysics Data System (ADS)

    Li, Z. S.; Zhu, S. A.; He, Bin

    2007-07-01

Conventional numerical computational techniques such as the finite element method (FEM) and the boundary element method (BEM) require laborious and time-consuming model meshing. The new meshless FEM uses only the boundary description and the node distribution; no meshing of the model is required. This paper presents the fundamentals and implementation of the meshless FEM, which is adapted to solve the electrocardiography (ECG) forward problem. The method is evaluated on a single-layer torso model, for which an analytical solution exists, and tested on a homogeneous torso model with realistic geometry, with satisfactory results being obtained. The present results suggest that the meshless FEM may provide an alternative for ECG forward solutions.

  9. On shifted Jacobi spectral method for high-order multi-point boundary value problems

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Hafez, R. M.

    2012-10-01

This paper reports a spectral tau method for numerically solving multi-point boundary value problems (BVPs) of linear high-order ordinary differential equations. The construction of the shifted Jacobi tau approximation is based on conventional differentiation. This use of differentiation allows the imposition of the governing equation at the whole set of grid points and the straightforward implementation of multiple boundary conditions. Extension of the tau method to high-order multi-point BVPs with variable coefficients is treated using the shifted Jacobi Gauss-Lobatto quadrature. A shifted Jacobi collocation method is developed for solving nonlinear high-order multi-point BVPs. The performance of the proposed methods is investigated by considering several examples. Accurate results and high convergence rates are achieved.

  10. MUSTA fluxes for systems of conservation laws

    NASA Astrophysics Data System (ADS)

    Toro, E. F.; Titarev, V. A.

    2006-08-01

    This paper is about numerical fluxes for hyperbolic systems and we first present a numerical flux, called GFORCE, that is a weighted average of the Lax-Friedrichs and Lax-Wendroff fluxes. For the linear advection equation with constant coefficient, the new flux reduces identically to that of the Godunov first-order upwind method. Then we incorporate GFORCE in the framework of the MUSTA approach [E.F. Toro, Multi-Stage Predictor-Corrector Fluxes for Hyperbolic Equations. Technical Report NI03037-NPA, Isaac Newton Institute for Mathematical Sciences, University of Cambridge, UK, 17th June, 2003], resulting in a version that we call GMUSTA. For non-linear systems this gives results that are comparable to those of the Godunov method in conjunction with the exact Riemann solver or complete approximate Riemann solvers, noting however that in our approach, the solution of the Riemann problem in the conventional sense is avoided. Both the GFORCE and GMUSTA fluxes are extended to multi-dimensional non-linear systems in a straightforward unsplit manner, resulting in linearly stable schemes that have the same stability regions as the straightforward multi-dimensional extension of Godunov's method. The methods are applicable to general meshes. The schemes of this paper share with the family of centred methods the common properties of being simple and applicable to a large class of hyperbolic systems, but the schemes of this paper are distinctly more accurate. Finally, we proceed to the practical implementation of our numerical fluxes in the framework of high-order finite volume WENO methods for multi-dimensional non-linear hyperbolic systems. Numerical results are presented for the Euler equations and for the equations of magnetohydrodynamics.
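For linear advection f(u) = au, the GFORCE construction described above is short enough to write out, using the weight ω = 1/(1 + CFL) from Toro and Titarev. The sketch below also exhibits the claimed reduction to the Godunov upwind flux a·uL for a > 0:

```python
def gforce_flux(uL, uR, a, dt, dx):
    """GFORCE flux for linear advection f(u) = a*u: a weighted average of
    the Lax-Friedrichs and Lax-Wendroff fluxes with weight 1/(1 + CFL)."""
    c = abs(a) * dt / dx                              # local CFL number
    omega = 1.0 / (1.0 + c)
    # Lax-Friedrichs flux
    f_lf = 0.5 * a * (uL + uR) - 0.5 * (dx / dt) * (uR - uL)
    # Lax-Wendroff flux: flux of the intermediate (half-step) state
    u_lw = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (a * uR - a * uL)
    f_lw = a * u_lw
    return omega * f_lw + (1.0 - omega) * f_lf

# For a > 0 this reduces identically to the upwind flux a*uL = 2.0
flux = gforce_flux(uL=2.0, uR=5.0, a=1.0, dt=0.5, dx=1.0)
```

In the MUSTA framework the same centred flux is applied repeatedly on a local grid, which is how the Riemann problem is avoided in the conventional sense.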

  11. Numerical modeling of carrier gas flow in atomic layer deposition vacuum reactor: A comparative study of lattice Boltzmann models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Dongqing; Chien Jen, Tien; Li, Tao

    2014-01-15

This paper characterizes the carrier gas flow in an atomic layer deposition (ALD) vacuum reactor by introducing the Lattice Boltzmann Method (LBM) to ALD simulation through a comparative study of two LBM models. Numerical models of the gas flow are constructed and implemented in two-dimensional geometry based on the lattice Bhatnagar–Gross–Krook (LBGK)-D2Q9 model and the two-relaxation-time (TRT) model. Both incompressible and compressible scenarios are simulated, and the two models are compared in terms of flow features, stability, and efficiency. Our simulation outcome reveals that, for our specific ALD vacuum reactor, the TRT model generates better steady laminar flow features over the whole domain, with better stability and reliability than the LBGK-D2Q9 model, especially when the compressible effects of the gas flow are considered. The LBM-TRT is verified indirectly by comparing the numerical results with conventional continuum-based computational fluid dynamics solvers, and it shows very good agreement with these conventional methods. Finally, the velocity field of the carrier gas flow through the ALD vacuum reactor was characterized with the LBM-TRT model. The flow in ALD is in a laminar steady state, with velocity concentrated at the corners and around the wafer. The effects of the flow field on precursor distribution, surface adsorption, and surface reactions are discussed in detail. A steady and evenly distributed velocity field contributes to higher precursor concentration near the wafer, and relatively lower particle velocities help to achieve better surface adsorption and deposition. The ALD reactor geometry needs to be considered carefully if a steady, laminar flow field around the wafer and better surface deposition are desired.

  12. A numerical method for measuring capacitive soft sensors through one channel

    NASA Astrophysics Data System (ADS)

    Tairych, Andreas; Anderson, Iain A.

    2018-03-01

Soft capacitive stretch sensors are well suited for unobtrusive wearable body motion capture. Conventional sensing methods measure sensor capacitances through separate channels. In sensing garments with many sensors, this results in high wiring complexity and a large footprint of rigid sensing circuit boards. We have developed a more efficient sensing method that detects multiple sensors through only one channel and one set of wires. It is based on an R-C transmission line assembled from capacitive conductive fabric stretch sensors and external resistors. The unknown capacitances are identified by solving a system of nonlinear equations. These equations are established by modelling and continuously measuring the transmission line reactances at different frequencies. Solving these equations numerically with a Newton-Raphson solver for the unknown capacitances enables real-time reading of all sensors. The method was verified with a prototype comprising three sensors that is capable of detecting both individually and simultaneously stretched sensors. Instead of using three channels and six wires to detect the sensors, the task was achieved with only one channel and two wires.
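The identification step can be sketched for an assumed two-section ladder (the prototype uses three sensors; the resistor values, probe frequencies, and the use of `scipy.optimize.fsolve` in place of a hand-written Newton-Raphson solver are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import fsolve

R = [1000.0, 1000.0]     # known external resistors (ohms); assumed values

def input_reactance(C, omega):
    """Imaginary part of the input impedance of a two-section R-C ladder:
    series R[0], shunt C[0], series R[1], shunt C[1] terminating the line."""
    Z2 = R[1] + 1.0 / (1j * omega * C[1])            # last section
    Zc1 = 1.0 / (1j * omega * C[0])
    Z1 = R[0] + (Zc1 * Z2) / (Zc1 + Z2)              # C[0] parallel with the rest
    return Z1.imag

omegas = 2 * np.pi * np.array([1e3, 5e3])            # two probe frequencies (rad/s)

# Synthetic "measurements" generated from assumed true sensor capacitances
C_true = np.array([100e-9, 220e-9])
measured = np.array([input_reactance(C_true, w) for w in omegas])

# Solve for the capacitances (working in nF keeps the problem well scaled)
def residual(C_nF):
    C = np.asarray(C_nF) * 1e-9
    return [input_reactance(C, w) - m for w, m in zip(omegas, measured)]

C_est_nF = fsolve(residual, x0=[80.0, 150.0])        # rough initial guess
```

One reactance measurement per frequency gives one equation, so N sensors require reactances at (at least) N frequencies, exactly the structure the abstract describes.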

  13. Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.

    PubMed

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner

    2011-09-26

Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been realized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of numerical phase retrieval from experimental diffraction patterns, a fact which has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes. Constraints are applied in both the object and the detector plane. While the detector-plane constraint employed in most phase retrieval methods requires the amplitude of the complex wave to equal the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
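For contrast with the proposed holography-based constraint, the conventional scheme the abstract describes (detector-plane modulus constraint plus an object-plane constraint) can be sketched as a standard error-reduction iteration. The toy object, support region, and iteration count are assumptions for illustration:

```python
import numpy as np

def error_reduction(measured_amplitude, support, n_iter=200, seed=0):
    """Conventional iterative phase retrieval (error reduction): enforce
    the measured Fourier amplitude in the detector plane and a support
    plus non-negativity constraint in the object plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, measured_amplitude.shape)
    G = measured_amplitude * np.exp(1j * phase)          # random starting phase
    for _ in range(n_iter):
        g = np.fft.ifft2(G)
        g = np.where(support, g.real.clip(min=0), 0.0)   # object-plane constraint
        G = np.fft.fft2(g)
        G = measured_amplitude * np.exp(1j * np.angle(G))  # modulus constraint
    return g

# Toy object: a bright square inside a known support region
obj = np.zeros((64, 64))
obj[28:36, 28:36] = 1.0
support = np.zeros((64, 64), dtype=bool)
support[20:44, 20:44] = True
rec = error_reduction(np.abs(np.fft.fft2(obj)), support)
```

The paper's contribution replaces the detector-plane step of such loops; this sketch shows only the baseline it improves upon.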

  14. Revealing retroperitoneal liposarcoma morphology using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Carbajal, Esteban F.; Baranov, Stepan A.; Manne, Venu G. R.; Young, Eric D.; Lazar, Alexander J.; Lev, Dina C.; Pollock, Raphael E.; Larin, Kirill V.

    2011-02-01

    A new approach to distinguish normal fat, well-differentiated (WD), and dedifferentiated liposarcoma (LS) tumors is demonstrated, based on the use of optical coherence tomography (OCT). OCT images show the same structures seen with conventional histological methods. Our visual grading analysis is supported by numerical analysis of observed structures for normal fat and WDLS samples. Further development could apply the real-time and high resolution advantages of OCT for use in liposarcoma diagnosis and clinical procedures.

  15. Feasibility of motion laws for planar one degree of freedom linkage mechanisms at dead point configurations

    NASA Astrophysics Data System (ADS)

    Lores García, E.; Veciana Fontanet, J. M.; Jordi Nebot, L.

    2018-01-01

This paper proposes an analytical solution of the Inverse Kinematics (IK) problem at dead point configurations for any planar one-degree-of-freedom linkage mechanism, with regard to the continuity Cn of the motion law. The systems analyzed are those whose elements are linked with lower pairs and do not present redundancies. The study aims to provide the user with rules that facilitate the design of feasible motion profiles to be reproduced by conventional electrical actuators at these configurations. During the last decades, several methods and techniques have been developed to study this specific configuration. However, these techniques are mainly focused on solving the IK indeterminacy numerically, rather than analyzing the motion laws that the mechanisms are able to perform at these particular configurations. The analysis presented in this paper has been carried out by differentiating and applying l'Hôpital's rule to the system of constraint equations ϕ (q) of the mechanism. The study also considers the feasibility of reproducing the time-domain profiles with conventional electrical actuators (i.e. AC/DC motors, linear actuators, etc.). To show the usefulness and effectiveness of the method, the development includes the analytical application and numerical simulations for two common one-degree-of-freedom systems: a slider-crank and a four-bar linkage mechanism. Finally, experimental results are presented on a four-bar linkage mechanism test bed.

  16. On estimating gravity anomalies: A comparison of least squares collocation with least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1976-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.
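The regression (conditional-mean) formula the abstract refers to can be written out in a few lines: for jointly Gaussian, zero-mean anomalies s and data d, the collocation estimate is C_sd C_dd^{-1} d. The covariances, linear model, and data values below are illustrative assumptions:

```python
import numpy as np

# Gravity anomalies s and geodetic data d modeled as jointly Gaussian, zero mean.
C_ss = np.array([[4.0, 1.0], [1.0, 3.0]])            # anomaly covariance (assumed)
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])   # linear model: d = A s + n
C_nn = 0.1 * np.eye(3)                                # observation noise covariance

C_dd = A @ C_ss @ A.T + C_nn                          # data covariance
C_sd = C_ss @ A.T                                     # cross covariance

d = np.array([1.2, -0.4, 0.7])                        # one realization of the data
s_hat = C_sd @ np.linalg.solve(C_dd, d)               # collocation estimate

# Error (posterior) covariance of the estimate
C_post = C_ss - C_sd @ np.linalg.solve(C_dd, C_sd.T)
```

With zero a priori estimates and these covariances as weights, this is exactly the properly weighted least-squares solution the abstract says the collocation estimator is equivalent to.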

  17. Structure-preserving spectral element method in attenuating seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Cai, Wenjun; Zhang, Huai

    2016-04-01

This work describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variation of conventional symplectic methods for treating non-conservative time evolution problems, with superior behavior in long-time stability and dissipation preservation. To construct the conformal symplectic method, we first reformulate the damped acoustic wave equation and the elastic wave equations in their equivalent conformal multi-symplectic structures, which naturally reveal the intrinsic properties of the original systems, especially the dissipation laws. We thereafter separate each structure into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on this splitting methodology, we solve the two subsystems separately. The dissipative one is cheaply solved by its analytic solution, while for the conservative system we combine a fourth-order symplectic Nyström method in time with the spectral element method in space to cover circumstances in realistic geological structures involving complex free-surface topography. The Strang composition method is then adopted to concatenate the corresponding two parts of the solution and generate the complete numerical scheme, which is conformal symplectic and can therefore guarantee numerical stability and dissipation preservation over long-time modeling. Additionally, a relatively larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments, in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of our proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability to model Rayleigh-wave propagation. More comprehensive numerical experiments are presented to investigate the long-time simulation, low dispersion and energy conservation properties of the conformal symplectic method in both attenuating homogeneous and heterogeneous media.
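The splitting strategy (dissipative part solved exactly, conservative part solved symplectically, Strang composition of the two) can be illustrated on a damped harmonic oscillator rather than the wave equations. This toy sketch is an assumption-level analogy, not the paper's spectral element scheme:

```python
import numpy as np

def strang_step(q, p, dt, omega=1.0, gamma=0.05):
    """One Strang-split step for q'' + 2*gamma*q' + omega^2 * q = 0:
    half a step of the exactly solvable dissipative part (p' = -2*gamma*p),
    a full symplectic leapfrog step of the conservative part, then
    another dissipative half step."""
    p *= np.exp(-2 * gamma * dt / 2)       # dissipative half step (exact)
    p -= 0.5 * dt * omega**2 * q           # conservative kick
    q += dt * p                            # conservative drift
    p -= 0.5 * dt * omega**2 * q           # conservative kick
    p *= np.exp(-2 * gamma * dt / 2)       # dissipative half step (exact)
    return q, p

q, p = 1.0, 0.0
dt = 0.01
for _ in range(1000):                      # integrate to t = 10
    q, p = strang_step(q, p, dt)

energy = 0.5 * (p * p + q * q)             # decays like exp(-2*gamma*t)
```

As in the paper, the dissipation rate is reproduced by the exact dissipative flow, while the symplectic substep preserves the conservative structure over long times.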

  18. Obtaining high-resolution velocity spectra using weighted semblance

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Saleh; Kahoo, Amin Roshandel; Porsani, Milton J.; Kalateh, Ali Nejati

    2017-02-01

Velocity analysis employs a coherency measurement along a hyperbolic or non-hyperbolic trajectory time window to build velocity spectra. Accuracy and resolution are strictly related to the method of coherency measurement. Semblance, the most common coherence measure, has poor velocity resolution, which affects one's ability to distinguish and pick distinct peaks. Increasing the resolution of the semblance velocity spectra improves the accuracy of the velocities estimated for normal moveout correction and stacking. The low resolution of semblance spectra stems from its low sensitivity to velocity changes. In this paper, we present a new weighted semblance method that ensures high-resolution velocity spectra. To increase the resolution of the semblance spectra, we introduce into the semblance equation two weighting functions, based on the ratio of the first to second singular values of the time window and on the position of the seismic wavelet in the time window. We test the method on both synthetic and real field data to compare the resolution of the weighted and conventional semblance methods. Numerical examples with synthetic and real seismic data indicate that the new weighted semblance method provides higher resolution than conventional semblance and can separate reflectors which are mixed in the conventional semblance spectrum.
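Conventional semblance, the baseline the weighting functions modify, can be sketched in a few lines (extraction of the window along the moveout trajectory is omitted; the coherent test gather is an illustrative assumption):

```python
import numpy as np

def semblance(gather):
    """Conventional semblance of an (n_samples, n_traces) windowed gather:
    energy of the stacked trace divided by N times the total trace energy.
    Equals 1 for perfectly coherent traces, and tends toward 1/N for noise."""
    n_traces = gather.shape[1]
    stacked = gather.sum(axis=1)
    num = np.sum(stacked**2)
    den = n_traces * np.sum(gather**2)
    return num / den if den > 0 else 0.0

# Perfectly coherent window: 8 identical traces give semblance exactly 1
coherent = np.tile(np.sin(np.linspace(0, 2 * np.pi, 50))[:, None], (1, 8))
s = semblance(coherent)
```

The paper's weighted variant multiplies this measure by window-dependent weights (a singular-value ratio and a wavelet-position term) to sharpen the peaks.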

  19. Simulation of cryolipolysis as a novel method for noninvasive fat layer reduction.

    PubMed

    Majdabadi, Abbas; Abazari, Mohammad

    2016-12-20

    Given the known problems of conventional liposuction methods, the need for new fat-removal procedures has been recognized. In this study we simulate one such novel method, cryolipolysis, which aims to address those drawbacks. We believe that simulation of clinical procedures contributes considerably to their efficacious performance. To this end we have simulated the temperature distribution in a sample of human body fat. Using Abaqus software, we present a graphical display of temperature-time variations within the medium. The findings of our simulation indicate that tissue temperature decreases over a cold exposure of about 30 min. The minimum temperature occurs in the shallow layers of the sample, while the temperature in deeper layers remains nearly unchanged. Cold exposure beyond this specific time (t > 30 min) does not produce considerable changes. Numerous clinical studies have proved the efficacy of cryolipolysis. This noninvasive technique has eliminated some of the drawbacks of conventional methods. The findings of our simulation clearly demonstrate the efficiency of this method, especially for superficial fat layers.

  20. Taguchi optimization of bismuth-telluride based thermoelectric cooler

    NASA Astrophysics Data System (ADS)

    Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank

    2017-07-01

    In the last few decades, considerable effort has been made to enhance the figure-of-merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices still remains low because the module figure-of-merit depends not only on the material ZT, but also on the operating conditions and the configuration of the TE modules. This study takes into account a comprehensive set of parameters to conduct a numerical performance analysis of the thermoelectric cooler (TEC) using a Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. The Taguchi results are also compared with the optimized parameters obtained by a full factorial optimization method, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments against the 3125 experiments needed by the conventional optimization method. This study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for simultaneous optimization of cooling capacity and coefficient of performance is also provided.
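    The run-count saving can be illustrated with a toy design problem. The response function, factors, and levels below are hypothetical, and the 9-run L9 orthogonal array stands in for the 25-run L25 array used in the paper:

```python
import itertools
import numpy as np

# Hypothetical separable response (smaller is better); minimum at (2, 1, 3).
def response(a, b, c):
    return (a - 2)**2 + 0.5 * (b - 1)**2 + 2 * (c - 3)**2

levels = [1, 2, 3]

# L9 orthogonal array: 9 runs covering 3 factors at 3 levels (level indices);
# every pair of columns contains each level combination exactly once.
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

# Taguchi main-effects analysis: average the response at each level of each
# factor over the orthogonal runs, then pick the best level per factor.
best = []
for f in range(3):
    means = [np.mean([response(*[levels[i] for i in run])
                      for run in L9 if run[f] == lv]) for lv in range(3)]
    best.append(levels[int(np.argmin(means))])

# full factorial reference needs 3**3 = 27 runs
full_best = min(itertools.product(levels, repeat=3), key=lambda s: response(*s))
```

    For this additive response the 9-run analysis recovers the same optimum as the 27-run full factorial, mirroring the 25-versus-3125 comparison reported above (where 5 factors at 5 levels give 5^5 = 3125 full-factorial runs).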

  1. Spectral element method for elastic and acoustic waves in frequency domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Linlin; Zhou, Yuanguo; Wang, Jia-Min

    Numerical techniques in the time domain are widespread in seismic and acoustic modeling. In some applications, however, frequency-domain techniques can be advantageous over the time-domain approach when narrow-band results are desired, especially if multiple sources can be handled more conveniently in the frequency domain. Moreover, medium attenuation effects can be modeled more accurately and conveniently in the frequency domain. In this paper, we present a spectral-element method (SEM) in the frequency domain to simulate elastic and acoustic waves in anisotropic, heterogeneous, and lossy media. The SEM is based upon the finite-element framework and has exponential convergence because of the use of Gauss-Lobatto-Legendre (GLL) basis functions. The anisotropic perfectly matched layer is employed to truncate the boundary for unbounded problems. Compared with the conventional finite-element method, the number of unknowns in the SEM is significantly reduced, and higher-order accuracy is obtained due to its spectral accuracy. To account for the acoustic-solid interaction, a domain decomposition method (DDM) based upon the discontinuous Galerkin spectral-element method is proposed. Numerical experiments show the proposed method can be an efficient alternative for accurate calculation of elastic and acoustic waves in the frequency domain.

  2. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is why we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates their surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element method (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.

  3. MR vascular fingerprinting: A new approach to compute cerebral blood volume, mean vessel radius, and oxygenation maps in the human brain.

    PubMed

    Christen, T; Pannetier, N A; Ni, W W; Qiu, D; Moseley, M E; Schuff, N; Zaharchuk, G

    2014-04-01

    In the present study, we describe a fingerprinting approach to analyze the time evolution of the MR signal and retrieve quantitative information about the microvascular network. We used a Gradient Echo Sampling of the Free Induction Decay and Spin Echo (GESFIDE) sequence and defined a fingerprint as the ratio of signals acquired pre- and post-injection of an iron-based contrast agent. We then simulated the same experiment with an advanced numerical tool that takes a virtual voxel containing blood vessels as input, computes microscopic magnetic fields and water diffusion effects, and derives the expected MR signal evolution. The parameter inputs of the simulations (cerebral blood volume [CBV], mean vessel radius [R], and blood oxygen saturation [SO2]) were varied to obtain a dictionary of all possible signal evolutions. The best fit between the observed fingerprint and the dictionary was then determined by least-squares minimization. This approach was evaluated in 5 normal subjects, and the results were compared to those obtained with more conventional MR methods: steady-state contrast imaging for CBV and R, and a global measure of oxygenation obtained from the superior sagittal sinus for SO2. The fingerprinting method enabled the creation of high-resolution parametric maps of the microvascular network showing the expected contrast and fine details. Numerical values in gray matter (CBV=3.1±0.7%, R=12.6±2.4μm, SO2=59.5±4.7%) are consistent with literature reports and correlated with the conventional MR approaches. SO2 values in white matter (53.0±4.0%) were slightly lower than expected. Numerous improvements can easily be made, and the method should be useful for studying brain pathologies. Copyright © 2013 Elsevier Inc. All rights reserved.
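    The dictionary-matching step can be sketched with a deliberately simplified one-parameter signal model: a plain exponential decay standing in for the simulated GESFIDE signal ratio. All values below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.05, 40)            # sampling times (s), illustrative

# hypothetical one-parameter signal model standing in for the full
# vascular simulation: a mono-exponential decay S(t) = exp(-R t)
def model(R):
    return np.exp(-R * t)

R_grid = np.linspace(5.0, 50.0, 451)      # dictionary of candidate parameters
dictionary = np.stack([model(R) for R in R_grid])

R_true = 23.7                             # "unknown" tissue parameter
fingerprint = model(R_true) + 0.01 * rng.standard_normal(t.size)

# least-squares matching: pick the dictionary atom with the smallest residual
residuals = ((dictionary - fingerprint)**2).sum(axis=1)
R_est = R_grid[np.argmin(residuals)]
```

    In the actual method the dictionary axes are (CBV, R, SO2) rather than a single decay rate, but the matching principle, exhaustive least-squares comparison against precomputed signal evolutions, is the same.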

  4. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
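    The matrix Newton variation mentioned above can be sketched in a few lines. This is a generic sketch of the iteration with the standard norm-based starting guess, not the Cray-2 Strassen-accelerated implementation:

```python
import numpy as np

def newton_inverse(A, tol=1e-12, max_iter=100):
    """Matrix Newton iteration X_{k+1} = X_k (2I - A X_k).

    The classical starting guess X_0 = A^T / (||A||_1 ||A||_inf)
    guarantees convergence; each sweep roughly doubles the number of
    correct digits, and the dominant cost is matrix multiplication,
    exactly the operation that parallelizes (and Strassen-accelerates)
    well."""
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(max_iter):
        R = I - A @ X
        if np.linalg.norm(R) < tol:
            break
        X = X @ (I + R)        # algebraically equal to X (2I - A X)
    return X
```

    The residual-based form `X @ (I + R)` reuses the convergence-test product, so each iteration costs two matrix multiplications.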

  5. A Hermite WENO reconstruction for fourth order temporal accurate schemes based on the GRP solver for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Du, Zhifang; Li, Jiequan

    2018-02-01

    This paper develops a new fifth-order accurate Hermite WENO (HWENO) reconstruction method for hyperbolic conservation laws in the framework of the two-stage fourth-order accurate temporal discretization in Li and Du (2016) [13]. Instead of additionally computing the first moment of the solution, as in the conventional HWENO or DG approach, we directly take the interface values, which are already available from the numerical flux construction using the generalized Riemann problem (GRP) solver, to approximate the first moment. The resulting scheme is fourth-order temporally accurate by invoking the HWENO reconstruction only twice, making it more compact. Numerical experiments show that such compactness has a significant impact on the resolution of nonlinear waves.

  6. Application of a trigonometric finite difference procedure to numerical analysis of compressive and shear buckling of orthotropic panels

    NASA Technical Reports Server (NTRS)

    Stein, M.; Housner, J. D.

    1978-01-01

    A numerical analysis developed for the buckling of rectangular orthotropic layered panels under combined shear and compression is described. This analysis uses a central finite difference procedure based on trigonometric functions instead of using the conventional finite differences which are based on polynomial functions. Inasmuch as the buckle mode shape is usually trigonometric in nature, the analysis using trigonometric finite differences can be made to exhibit a much faster convergence rate than that using conventional differences. Also, the trigonometric finite difference procedure leads to difference equations having the same form as conventional finite differences; thereby allowing available conventional finite difference formulations to be converted readily to trigonometric form. For two-dimensional problems, the procedure introduces two numerical parameters into the analysis. Engineering approaches for the selection of these parameters are presented and the analysis procedure is demonstrated by application to several isotropic and orthotropic panel buckling problems. Among these problems is the shear buckling of stiffened isotropic and filamentary composite panels in which the stiffener is broken. Results indicate that a break may degrade the effect of the stiffener to the extent that the panel will not carry much more load than if the stiffener were absent.
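    The core idea, replacing the polynomial-based denominator of the central difference with a trigonometric one so that harmonic mode shapes are differentiated exactly, can be shown in one dimension. The panel analysis itself is two-dimensional; here `k` plays the role of the numerical parameters mentioned above, and the function and sample point are illustrative:

```python
import numpy as np

def d2_conventional(f, x, h):
    # polynomial-based central difference for f''
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d2_trigonometric(f, x, h, k):
    # same three-point stencil, but the denominator is rescaled so that
    # the modes sin(kx) and cos(kx) are differentiated exactly
    return (f(x + h) - 2 * f(x) + f(x - h)) / (2 * (1 - np.cos(k * h)) / k**2)
```

    For f(x) = sin(kx) the stencil numerator equals 2 sin(kx)(cos(kh) - 1), so dividing by 2(1 - cos(kh))/k^2 returns -k^2 sin(kx) with no truncation error; as h -> 0 the trigonometric denominator reduces to h^2, recovering the conventional formula.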

  7. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
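    The role of a preconditioner in CG can be sketched generically. The code below uses a plain Jacobi (diagonal) preconditioner on a synthetic badly scaled SPD system, as a stand-in for the authors' sensitivity-based diagonal preconditioner:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for an SPD matrix A, with a
    diagonal preconditioner M supplied as its inverse diagonal."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # direction update with new residual
        rz = rz_new
    return x

# badly scaled SPD test system: diagonal spanning three orders of magnitude,
# the kind of scaling disparity a parameter/sensitivity preconditioner fixes
n = 50
A = np.diag(np.linspace(1.0, 1000.0, n))
A += np.diag(0.1 * np.ones(n - 1), 1) + np.diag(0.1 * np.ones(n - 1), -1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

    Rescaling the residual by `M_inv_diag` equalizes the widely differing curvatures along each coordinate, which is why a well-chosen diagonal preconditioner can cut the iteration count from hundreds to tens, as reported above.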

  8. A method of determining bending properties of poultry long bones using beam analysis and micro-CT data.

    PubMed

    Vaughan, Patrick E; Orth, Michael W; Haut, Roger C; Karcher, Darrin M

    2016-01-01

    While conventional mechanical testing has been regarded as a gold standard for the evaluation of bone health in numerous studies, with recent advances in medical imaging, virtual methods of biomechanics are rapidly evolving in the human literature. The objective of the current study was to evaluate the feasibility of determining the elastic and failure properties of poultry long bones using established methods of analysis from the human literature. In order to incorporate a large range of bone sizes and densities, a small number of specimens were utilized from an ongoing study of Regmi et al. (2016) that involved humeri and tibiae from 3 groups of animals (10 from each), including aviary, enriched, and conventional housing systems. Half the animals from each group were used for 'training', which involved the development of a regression equation relating bone density and geometry to bending properties from conventional mechanical tests. The remaining specimens from each group were used for 'testing', in which the mechanical properties from conventional tests were compared to those predicted by the regression equations. Based on the regression equations, the coefficients of determination for the 'test' set of data were 0.798 for bending bone stiffness and 0.901 for the yield (or failure) moment of the bones. All regression slope and intercept values for the test-versus-predicted plots were not significantly different from 1 and 0, respectively. The study showed the feasibility of developing future methods of virtual biomechanics for the evaluation of poultry long bones. With further development, virtual biomechanics may have utility in future in vivo studies to assess laying hen bone health over time without the need to sacrifice large groups of animals at each time point. © 2016 Poultry Science Association Inc.

  9. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on initial conditions and risk converging to local solutions. In this paper, we propose a new optimization method based on the concept of the path integral used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is unaffected by initial conditions and needs no experience-based tuning. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical results showed that the method has sufficient performance.
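    The idea of obtaining a solution as a stochastic average, rather than by following a single deterministic trajectory from an initial guess, can be illustrated on a double-well toy function. This is a schematic Boltzmann-weighted average, not the authors' path-integral formulation; the objective and the annealing schedule are made up for illustration:

```python
import numpy as np

def f(x):
    # double-well objective whose global minimum is near x = -1.04;
    # a descent method started near x = +1 would be trapped in the
    # shallower right-hand well
    return (x**2 - 1)**2 + 0.3 * x

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, 200_000)   # blanket sampling: no initial guess
fx = f(x)

for beta in (1.0, 10.0, 100.0):       # sharpen the weighting (annealing-like)
    w = np.exp(-beta * (fx - fx.min()))
    x_est = (w * x).sum() / w.sum()   # solution as a stochastic average
```

    As `beta` grows, the weight concentrates on the deepest well, so the expected value converges to the global minimizer regardless of where any individual sample started, which is the property the abstract attributes to the stochastic-average formulation.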

  10. Aerodynamic shape optimization using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James

    1996-01-01

    Aerodynamic shape design has long persisted as a difficult scientific challenge due to its highly nonlinear flow physics and daunting geometric complexity. However, with the emergence of Computational Fluid Dynamics (CFD) it has become possible to make accurate predictions of flows which are not dominated by viscous effects. It is thus worthwhile to explore the extension of CFD methods for flow analysis to the treatment of aerodynamic shape design. Two new aerodynamic shape design methods are developed which combine existing CFD technology, optimal control theory, and numerical optimization techniques. Flow analysis methods for the potential flow equation and the Euler equations form the basis of the two respective design methods. In each case, optimal control theory is used to derive the adjoint differential equations, the solution of which provides the necessary gradient information to a numerical optimization method much more efficiently than by conventional finite differencing. Each technique uses a quasi-Newton numerical optimization algorithm to drive an aerodynamic objective function toward a minimum. An analytic grid perturbation method is developed to modify body-fitted meshes to accommodate shape changes during the design process. Both Hicks-Henne perturbation functions and B-spline control points are explored as suitable design variables. The new methods prove to be computationally efficient and robust, and can be used for practical airfoil design including geometric and aerodynamic constraints. Objective functions are chosen to allow both inverse design to a target pressure distribution and wave drag minimization. Several design cases are presented for each method, illustrating its practicality and efficiency. These include non-lifting and lifting airfoils operating at both subsonic and transonic conditions.
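    The efficiency claim, one adjoint solve yielding the full gradient versus one pair of flow solves per design variable for finite differencing, can be sketched on a linear toy "flow" model. The matrices below are random stand-ins, not a discretized flow equation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 6                              # state size, number of design variables
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in "flow" operator
B = rng.standard_normal((n, m))           # source term b(alpha) = B @ alpha
Q = np.eye(n)

def J(alpha):
    u = np.linalg.solve(A, B @ alpha)     # one "flow solve"
    return 0.5 * u @ Q @ u                # objective

alpha = rng.standard_normal(m)

# adjoint route: ONE extra solve with A^T yields the whole gradient,
# since dJ/dalpha = B^T lambda with A^T lambda = Q u
u = np.linalg.solve(A, B @ alpha)
lam = np.linalg.solve(A.T, Q @ u)         # adjoint equation
grad_adj = B.T @ lam

# conventional finite differencing: one pair of flow solves PER variable
eps = 1e-6
grad_fd = np.array([(J(alpha + eps * e) - J(alpha - eps * e)) / (2 * eps)
                    for e in np.eye(m)])
```

    The two gradients agree, but the adjoint cost is independent of the number of design variables, which is what makes adjoint-based shape optimization tractable for the dozens of Hicks-Henne or B-spline variables mentioned above.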

  11. Nonlinear phase noise tolerance for coherent optical systems using soft-decision-aided ML carrier phase estimation enhanced with constellation partitioning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen

    2018-02-01

    A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) methods are tested in a nonlinear phase noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.

  12. Cryopreservation of boar semen. III: Ultrastructure of boar spermatozoa frozen ultra-rapidly at various stages of conventional freezing and thawing.

    PubMed

    Bwanga, C O; Ekwall, H; Rodriguez-Martinez, H

    1991-01-01

    Ejaculated boar spermatozoa subjected to a conventional freezing and thawing process, were ultra-rapidly fixed, freeze-substituted and examined by electron microscopy to monitor the presence of real or potential intracellular ice and the degree of cell protection attained with the different extenders used during the process. Numerous ice crystal marks representing the degree of hydration of the cells were located in the perinuclear space of those spermatozoa not in proper contact with the extender containing glycerol (i.e. prior to freezing). The spermatozoa which were in proper contact with the extenders presented a high degree of preservation of the acrosomes, plasma membranes as well as the nuclear envelopes. No ice marks were detected in acrosomes before thawing, indicating that the conventional assayed cryopreservation method provided a good protection against cryoinjury. The presence of acrosomal changes (internal vesiculization, hydration and swelling) in thawed samples however, raises serious questions about the thawing procedure employed.

  13. Wideband Motion Control by Position and Acceleration Input Based Disturbance Observer

    NASA Astrophysics Data System (ADS)

    Irie, Kouhei; Katsura, Seiichiro; Ohishi, Kiyoshi

    The disturbance observer can observe and suppress the disturbance torque within its bandwidth. Motion systems are becoming widespread in society and are required to be able to contact unknown environments. Such haptic motion requires a much wider bandwidth. However, since the conventional disturbance observer attains the acceleration response by taking the second-order derivative of the position response, its bandwidth is limited by derivative noise. This paper proposes a novel structure for the disturbance observer. The proposed disturbance observer uses an acceleration sensor to enlarge the bandwidth. Generally, the bandwidth of an acceleration sensor extends from 1 Hz to more than 1 kHz. To cover the DC range, the conventional position-sensor-based disturbance observer is integrated. Thus, the performance of the proposed Position and Acceleration input based Disturbance Observer (PADO) is superior to the conventional one. The PADO is applied to position control (infinite stiffness) and force control (zero stiffness). Numerical and experimental results show the viability of the proposed method.
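    The frequency-division idea behind PADO can be sketched with a first-order complementary filter: the position-derived path (drift-free at DC but noisy at high frequency due to double differentiation) passes through a low-pass filter, and the accelerometer path through the complementary high-pass. This is a schematic of the fusion concept only, not the authors' observer structure, and the crossover frequency `wc` is an assumed value:

```python
import numpy as np

def fuse(acc_from_pos, acc_sensor, dt, wc=2 * np.pi * 5):
    """Blend two acceleration estimates: low-pass the position-derived one,
    high-pass (1 - LPF) the accelerometer one, sharing a first-order pole."""
    a = np.exp(-wc * dt)                  # discrete pole of the LPF
    low_p = low_s = 0.0
    fused = np.zeros_like(acc_sensor)
    for k in range(len(acc_sensor)):
        low_p = a * low_p + (1 - a) * acc_from_pos[k]
        low_s = a * low_s + (1 - a) * acc_sensor[k]
        fused[k] = low_p + (acc_sensor[k] - low_s)   # LPF·pos + HPF·accel
    return fused

# synthetic check: true acceleration is a constant 1; the position-derived
# path carries high-frequency differentiation noise, the accelerometer is clean
dt = 0.001
t = np.arange(1000) * dt
acc_pos = 1.0 + 0.5 * np.sin(2 * np.pi * 100 * t)    # noisy double-derivative
acc_imu = np.ones_like(t)
out = fuse(acc_pos, acc_imu, dt)
```

    Because the two filters share the same pole, their transfer functions sum to unity, so the fused signal tracks DC through the position path while the 100 Hz differentiation noise is strongly attenuated.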

  14. Use of photovoltaic detector for photocatalytic activity estimation

    NASA Astrophysics Data System (ADS)

    Das, Susanta Kumar; Satapathy, Pravakar; Rao, P. Sai Shruti; Sabar, Bilu; Panda, Rudrashish; Khatua, Lizina

    2018-05-01

    Photocatalysis is a very important process and has numerous applications. Generally, to estimate the photocatalytic activity of a newly grown material, its reaction rate constant with respect to some standard commercial TiO2 nanoparticles, such as Degussa P25, is evaluated. Here a photovoltaic detector in conjunction with a laser is used to determine this rate constant. The method is tested using zinc orthotitanate (Zn2TiO4) nanoparticles prepared by solid-state reaction, and their reaction rate constant is found to be six times higher than that of P25. The value is close to that found with a conventional system. Our proposed system is much more cost-effective than the conventional one and has the potential to perform real-time monitoring of photocatalytic activity.
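    The rate-constant comparison works the same way regardless of the detector: photocatalytic degradation is commonly modeled as pseudo-first-order, C(t) = C0·exp(-k t), so k is the slope of ln(C0/C) versus time, and activity is reported relative to the reference constant. The sketch below uses synthetic noise-free data with the six-fold ratio reported above; the sampling times and constants are illustrative:

```python
import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # minutes (synthetic)
C_ref = np.exp(-0.02 * t)                            # reference photocatalyst
C_new = np.exp(-0.12 * t)                            # candidate material

def rate_constant(t, C):
    # least-squares slope of ln(C0/C) against t, forced through the origin
    y = np.log(C[0] / C)
    return (t @ y) / (t @ t)

# relative photocatalytic activity of the candidate material
ratio = rate_constant(t, C_new) / rate_constant(t, C_ref)
```

    With real measurements, `C` would come from the detector signal (here, the photovoltaic reading) rather than a known exponential, and the same fit applies.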

  15. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    PubMed Central

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-01-01

    Abstract. Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120

  16. Nanophotonic particle simulation and inverse design using artificial neural networks.

    PubMed

    Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin

    2018-06-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.

  17. Comments on “A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man

    2015-06-01

    Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 − σ in order to unify the parameterization for the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 − σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.

  18. Numerical and Experimental Study on Hydrodynamic Performance of A Novel Semi-Submersible Concept

    NASA Astrophysics Data System (ADS)

    Gao, Song; Tao, Long-bin; Kou, Yu-feng; Lu, Chao; Sun, Jiang-long

    2018-04-01

    The Multiple Column Platform (MCP) semi-submersible is a newly proposed concept that differs from conventional semi-submersibles in featuring a centre column and a middle pontoon. It is paramount to ensure its structural reliability and safe operation at sea, and a rigorous investigation is conducted to examine the hydrodynamic and structural performance of the novel structural concept. In this paper, numerical and experimental studies on the hydrodynamic performance of the MCP are performed. Numerical simulations are conducted in both the frequency and time domains based on 3D potential theory. The numerical models are validated by experimental measurements obtained from extensive sets of model tests under both regular- and irregular-wave conditions. Moreover, a comparative study of the MCP and two conventional semi-submersibles is carried out using numerical simulation. Specifically, the hydrodynamic characteristics, including hydrodynamic coefficients, natural periods, motion response amplitude operators (RAOs), and mooring line tension, are fully examined. The present study proves the feasibility of the novel MCP and demonstrates the potential for optimization in future studies.

  19. Effects of mesh type on a non-premixed model in a flameless combustion simulation

    NASA Astrophysics Data System (ADS)

    Komonhirun, Seekharin; Yongyingsakthavorn, Pisit; Nontakeaw, Udomkiat

    2018-01-01

    Flameless combustion is a recently developed combustion regime that provides near-zero emission of pollutants. The phenomenon requires auto-ignition sustained by supplying high-temperature air with low oxygen concentration; the flame becomes invisible and colorless. The temperature of flameless combustion is lower than that of the conventional case, so NOx-forming reactions are well suppressed. To design a flameless combustor, computational fluid dynamics (CFD) is employed. The designed air-and-fuel injection method can be applied with turbulent and non-premixed models. Because turbulent non-premixed combustion is governed by molecular-scale randomness, an inappropriate mesh type can lead to significant numerical errors. Therefore, this research numerically investigates the effects of mesh type on flameless combustion characteristics, which is a primary step of the design process. Different mesh types, i.e., tetrahedral and hexahedral, are selected. Boundary conditions are 5% oxygen and 900 K air-inlet temperature for the flameless case, and 21% oxygen and 300 K air-inlet temperature for the conventional case. The results are presented and discussed in terms of velocity streamlines and contours of turbulent kinetic energy and viscosity, temperature, and combustion products.

  20. A multivariate variational objective analysis-assimilation method. Part 2: Case study results with and without satellite data

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.; Kidder, Stanley Q.; Scott, Robert W.

    1988-01-01

    The variational multivariate assimilation method described in a companion paper by Achtemeier and Ochs is applied to conventional and conventional plus satellite data. Ground-based and space-based meteorological data are weighted according to the respective measurement errors and blended into a data set that is a solution of numerical forms of the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation for a dry atmosphere. The analyses serve first, to evaluate the accuracy of the model, and second to contrast the analyses with and without satellite data. Evaluation criteria measure the extent to which: (1) the assimilated fields satisfy the dynamical constraints, (2) the assimilated fields depart from the observations, and (3) the assimilated fields are judged to be realistic through pattern analysis. The last criterion requires that the signs, magnitudes, and patterns of the hypersensitive vertical velocity and local tendencies of the horizontal velocity components be physically consistent with respect to the larger scale weather systems.

  1. A finite element formulation preserving symmetric and banded diffusion stiffness matrix characteristics for fractional differential equations

    NASA Astrophysics Data System (ADS)

    Lin, Zeng; Wang, Dongdong

    2017-10-01

    Due to the nonlocal property of the fractional derivative, the finite element analysis of fractional diffusion equation often leads to a dense and non-symmetric stiffness matrix, in contrast to the conventional finite element formulation with a particularly desirable symmetric and banded stiffness matrix structure for the typical diffusion equation. This work first proposes a finite element formulation that preserves the symmetry and banded stiffness matrix characteristics for the fractional diffusion equation. The key point of the proposed formulation is the symmetric weak form construction through introducing a fractional weight function. It turns out that the stiffness part of the present formulation is identical to its counterpart of the finite element method for the conventional diffusion equation and thus the stiffness matrix formulation becomes trivial. Meanwhile, the fractional derivative effect in the discrete formulation is completely transferred to the force vector, which is obviously much easier and more efficient to compute than the dense fractional derivative stiffness matrix. Subsequently, it is further shown that for the general fractional advection-diffusion-reaction equation, the symmetric and banded structure can also be maintained for the diffusion stiffness matrix, although the total stiffness matrix is not symmetric in this case. More importantly, it is demonstrated that under certain conditions this symmetric diffusion stiffness matrix formulation is capable of producing very favorable numerical solutions in comparison with the conventional non-symmetric diffusion stiffness matrix finite element formulation. The effectiveness of the proposed methodology is illustrated through a series of numerical examples.
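    For reference, the symmetric, banded structure preserved by the formulation is the one produced by standard finite elements for the integer-order diffusion equation. A minimal sketch (not the paper's fractional formulation; element count and coefficient are arbitrary) of assembling 1D linear elements for -κu″ = f shows the symmetric tridiagonal pattern; in the proposed method the fractional-derivative effect would instead enter the force vector:

```python
import numpy as np

def diffusion_stiffness_1d(n_elems, length=1.0, kappa=1.0):
    """Assemble the 1D linear-element stiffness matrix for -kappa*u'' = f.

    The result is symmetric and tridiagonal (banded) -- the structure the
    proposed fractional formulation preserves by shifting the fractional
    derivative effect into the force vector instead.
    """
    h = length / n_elems
    K = np.zeros((n_elems + 1, n_elems + 1))
    ke = (kappa / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element matrix
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke   # scatter into global matrix
    return K

K = diffusion_stiffness_1d(4)
assert np.allclose(K, K.T)                   # symmetric
assert np.count_nonzero(np.triu(K, 2)) == 0  # bandwidth 1: tridiagonal
```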

  2. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals that processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop that combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.
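    A hedged sketch of the idea (not the paper's loop implementation): for white Gaussian noise, the ML estimate of amplitude, Doppler frequency and carrier phase from complex coherent-integration outputs is a nonlinear least-squares fit, which a Levenberg-Marquardt search solves efficiently. All signal values below are simulated placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

# Simulated coherent integration outputs (I/Q) for a carrier with
# amplitude A, Doppler f (Hz) and phase phi -- illustrative values only.
rng = np.random.default_rng(0)
T = 1e-3                                    # coherent integration time (s)
t = np.arange(20) * T
A_true, f_true, phi_true = 1.0, 20.0, 0.3
z = A_true * np.exp(1j * (2 * np.pi * f_true * t + phi_true))
z += 0.01 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

def residuals(p):
    A, f, phi = p
    model = A * np.exp(1j * (2 * np.pi * f * t + phi))
    r = z - model
    return np.concatenate([r.real, r.imag])  # LM needs real residuals

# Under white Gaussian noise, least squares on these residuals is the MLE.
fit = least_squares(residuals, x0=[0.5, 0.0, 0.0], method='lm')
A_hat, f_hat, phi_hat = fit.x
```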

  3. Detection of admittivity anomaly on high-contrast heterogeneous backgrounds using frequency difference EIT.

    PubMed

    Jang, J; Seo, J K

    2015-06-01

    This paper describes a multiple background subtraction method in frequency difference electrical impedance tomography (fdEIT) to detect an admittivity anomaly in a high-contrast background conductivity distribution. The proposed method expands the use of the conventional weighted frequency-difference EIT method, which has so far been limited to detecting admittivity anomalies in a roughly homogeneous background. The proposed method can be viewed as multiple weighted difference imaging in fdEIT. Although the spatial resolution of the output images of fdEIT is very low due to the inherent ill-posedness, numerical simulations and phantom experiments demonstrate the feasibility of the proposed method for detecting anomalies. It has potential application in stroke detection in a head model, which is highly heterogeneous due to the skull.

  4. Computational and experimental model of transdermal iontophoretic drug delivery system.

    PubMed

    Filipovic, Nenad; Saveljic, Igor; Rac, Vladislav; Graells, Beatriz Olalde; Bijelic, Goran

    2017-11-30

    The concept of iontophoresis is often applied to increase the transdermal transport of drugs and other bioactive agents into the skin or other tissues. It is a non-invasive drug delivery method which involves electromigration and electroosmosis in addition to diffusion, and it has been shown to be a viable alternative to conventional administration routes such as oral, hypodermic and intravenous injection. In this study we investigated, experimentally and numerically, in vitro drug delivery of dexamethasone sodium phosphate to porcine skin. Different current densities, delivery durations and drug loads were investigated experimentally and introduced as boundary conditions for numerical simulations. The Nernst-Planck equation was used to calculate the flux of the active substance through an equivalent model of homogeneous hydrogel and skin layers. The obtained numerical results were in good agreement with experimental observations. A comprehensive in-silico platform, including appropriate numerical tools for fitting, could contribute to the design of iontophoretic drug-delivery devices and to correct dosage and drug-clearance profiles, as well as enable much faster in-silico experiments to better determine the parameters and performance criteria of iontophoretic drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.
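    As a rough illustration of the transport model class involved (a sketch only, not the paper's fitted model), a 1D Nernst-Planck balance with diffusion plus a constant electromigration drift can be advanced with an explicit finite-difference scheme; all parameter values below are placeholders, not skin or hydrogel data:

```python
import numpy as np

# 1D diffusion + electromigration (constant drift) through a membrane.
D = 1e-10          # diffusivity (m^2/s), placeholder
v = 2e-7           # drift velocity z*F*D*E/(R*T) (m/s), placeholder
L, nx = 1e-3, 101  # domain length (m), grid points
dx = L / (nx - 1)
dt = 0.25 * dx * dx / D        # within the explicit stability limit
c = np.zeros(nx)
c[0] = 1.0                     # donor side held at normalized concentration 1

for _ in range(2000):
    # upwind for the drift term (v > 0), central for diffusion
    adv = -v * (c[1:-1] - c[:-2]) / dx
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (adv + dif)
    c[0], c[-1] = 1.0, 0.0     # boundaries: donor source, receptor sink
```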

  5. Multi-mounted X-ray cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Wang, Jingzheng; Guo, Wei; Peng, Peng

    2018-04-01

    As a powerful nondestructive inspection technique, X-ray computed tomography (X-CT) has been widely applied to clinical diagnosis, industrial production and cutting-edge research. Imaging efficiency is currently one of the major obstacles for the applications of X-CT. In this paper, a multi-mounted three-dimensional cone-beam X-CT (MM-CBCT) method is reported. It consists of a novel multi-mounted cone-beam scanning geometry and the corresponding three-dimensional statistical iterative reconstruction algorithm. The scanning geometry is the most distinctive part of the design and differs significantly from current CBCT systems. By permitting the cone-beam scanning of multiple objects simultaneously, the proposed approach has the potential to achieve an imaging efficiency orders of magnitude greater than the conventional methods. Although multiple objects can also be bundled together and scanned simultaneously by conventional CBCT methods, this leads to increased penetration thickness and signal crosstalk. In contrast, MM-CBCT substantially avoids these problems. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed MM-CBCT prototype system. This technique may provide a solution for large-scale CT inspection.

  6. Quantum tomography for collider physics: illustrations with lepton-pair production

    NASA Astrophysics Data System (ADS)

    Martens, John C.; Ralston, John P.; Takaki, J. D. Tapia

    2018-01-01

    Quantum tomography is a method to experimentally extract all that is observable about a quantum mechanical system. We introduce quantum tomography to collider physics with the illustration of the angular distribution of lepton pairs. The tomographic method bypasses much of the field-theoretic formalism to concentrate on what can be observed with experimental data. We provide a practical, experimentally driven guide to model-independent analysis using density matrices at every step. Comparison with traditional methods of analyzing angular correlations of inclusive reactions finds many advantages in the tomographic method, which include manifest Lorentz covariance, direct incorporation of positivity constraints, exhaustively complete polarization information, and new invariants free from frame conventions. For example, experimental data can determine the entanglement entropy of the production process. We give reproducible numerical examples and provide a supplemental standalone computer code that implements the procedure. We also highlight a property of complex positivity that guarantees in a least-squares type fit that a local minimum of a χ² statistic will be a global minimum: There are no isolated local minima. This property, with an automated implementation of positivity, promises to mitigate issues relating to multiple minima and convention dependence that have been problematic in previous work on angular distributions.

  7. Efficient C1-continuous phase-potential upwind (C1-PPU) schemes for coupled multiphase flow and transport with gravity

    NASA Astrophysics Data System (ADS)

    Jiang, Jiamin; Younis, Rami M.

    2017-10-01

    In the presence of counter-current flow, nonlinear convergence problems may arise in implicit time-stepping when the popular phase-potential upwinding (PPU) scheme is used. The PPU numerical flux is non-differentiable across the co-current/counter-current flow regimes. This may lead to cycles or divergence in the Newton iterations. Recently proposed methods address improved smoothness of the numerical flux. The objective of this work is to devise and analyze an alternative numerical flux scheme called C1-PPU that, in addition to improving smoothness with respect to saturations and phase potentials, also improves the level of scalar nonlinearity and accuracy. C1-PPU involves a novel use of the flux limiter concept from the context of high-resolution methods, and allows a smooth variation between the co-current/counter-current flow regimes. The scheme is general and applies to fully coupled flow and transport formulations with an arbitrary number of phases. We analyze the consistency property of the C1-PPU scheme, and derive saturation and pressure estimates, which are used to prove the solution existence. Several numerical examples for two- and three-phase flows in heterogeneous and multi-dimensional reservoirs are presented. The proposed scheme is compared to the conventional PPU and the recently proposed Hybrid Upwinding schemes. We investigate three properties of these numerical fluxes: smoothness, nonlinearity, and accuracy. The results indicate that in addition to smoothness, nonlinearity may also be critical for convergence behavior and thus needs to be considered in the design of an efficient numerical flux scheme. Moreover, the numerical examples show that the C1-PPU scheme exhibits superior convergence properties for large time steps compared to the other alternatives.
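    The smoothness issue can be seen in a toy scalar version of the flux (an illustration only; the actual C1-PPU construction in the paper uses flux limiters, not the tanh blend below, which is just a generic C1 switch):

```python
import numpy as np

def ppu_flux(pot_diff, mob_left, mob_right):
    """Conventional phase-potential upwinding: take the mobility from the
    upwind cell based on the sign of the phase-potential difference.
    Non-differentiable exactly at pot_diff = 0 (the co-/counter-current
    switch), which is what can stall Newton iterations."""
    mob = mob_left if pot_diff > 0.0 else mob_right
    return mob * pot_diff

def smoothed_flux(pot_diff, mob_left, mob_right, eps=1e-3):
    """A C1 blend between the two upwind states near pot_diff = 0
    (a toy smoothing, not the paper's C1-PPU scheme)."""
    w = 0.5 * (1.0 + np.tanh(pot_diff / eps))   # smooth switch in [0, 1]
    return (w * mob_left + (1.0 - w) * mob_right) * pot_diff
```

    Away from the switching point the smoothed flux coincides with PPU; only in an O(eps) neighborhood of zero potential difference do the two branches blend.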

  8. A Study of the Effects of Seafloor Topography on Tsunami Propagation

    NASA Astrophysics Data System (ADS)

    Ohata, T.; Mikada, H.; Goto, T.; Takekawa, J.

    2011-12-01

    For tsunami disaster mitigation, we consider the phenomena related to tsunami in terms of generation, propagation, and run-up to the coast. Of these three phenomena, propagation must be modeled to predict the arrival time and the run-up height of a tsunami. Numerical simulations of tsunamis propagating from the source location to the coast have been widely used to estimate these important parameters. As a tsunami propagates, however, reflected and scattered waves arrive as later phases. These waves are generated by changes of water depth and can influence the height estimation, especially in the later phases. The maximum height of a tsunami may be observed not in the first arrivals but in the later phases; it is therefore necessary to consider the effects of seafloor topography on tsunami propagation. Since many simulations mainly focus on predicting the first arrival times and the initial height of a tsunami, it is difficult with the conventional methods to simulate the later phases that are important for tsunami disaster mitigation. In this study, we investigate the effects of seafloor topography on tsunami propagation after accommodating a tsunami simulation to the superposition of reflected and refracted waves caused by smooth changes of water depth. Developing a new numerical code, we examine how seafloor topography affects tsunami propagation, in comparison with a tsunami simulated by the conventional method based on linear long-wave theory. Our simulation employs three-dimensional, unequally spaced grids in a finite difference method (FDM) to introduce the real seafloor topography. In the simulation, we import the seafloor topography from real bathymetry data near Sendai Bay, off the northeast Tohoku region, Japan, and simulate tsunami propagation over the varying seafloor topography there.
Compared with the tsunami simulated by the conventional method based on linear long-wave theory, the amplitudes of the tsunamis differ between the two simulations. The amplification of the tsunami height in our method is larger than in the conventional one, and the heights of the later phases show a clear discrepancy between the two results. We conclude that the real changes of water depth affect the prediction of tsunami propagation and the maximum height. Because of the effects of seafloor topography, the amplitude of the later phases is sometimes larger than that of the earlier ones. By including such effects of the real topography, we believe our method leads to more accurate prediction of tsunami later phases, which would be effective for tsunami disaster mitigation.
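    For context, the conventional baseline referred to here is linear long-wave (shallow-water) theory. A minimal 1D staggered-grid finite-difference sketch of that model (illustrative bathymetry and initial hump, not the Sendai Bay data) looks like:

```python
import numpy as np

g = 9.81
nx, dx = 400, 1000.0                          # 400 km domain, 1 km grid
h = 4000.0 - 3000.0 * np.linspace(0, 1, nx)   # depth shoaling 4000 m -> 1000 m
dt = 0.5 * dx / np.sqrt(g * h.max())          # CFL-limited time step

x = (np.arange(nx) + 0.5) * dx
eta = np.exp(-((x - 100e3) / 20e3) ** 2)      # initial 1 m Gaussian hump
u = np.zeros(nx + 1)                          # velocities on staggered faces

for _ in range(300):
    # momentum: du/dt = -g * d(eta)/dx on interior faces (walls stay 0)
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # continuity: d(eta)/dt = -d(h*u)/dx, depth averaged to faces
    hf = np.concatenate(([h[0]], 0.5 * (h[1:] + h[:-1]), [h[-1]]))
    eta -= dt * (hf[1:] * u[1:] - hf[:-1] * u[:-1]) / dx
```

    In this linearized model the wave speed depends only on the local depth, sqrt(g*h); capturing the reflected and scattered later phases discussed above is what requires the fuller treatment of the real topography.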

  9. Two-dimensional numerical modeling and solution of convection heat transfer in turbulent He II

    NASA Technical Reports Server (NTRS)

    Zhang, Burt X.; Karr, Gerald R.

    1991-01-01

    Numerical schemes are employed to investigate heat transfer in the turbulent flow of He II. FEM is used to solve a set of equations governing the heat transfer and hydrodynamics of He II in the turbulent regime. Numerical results are compared with available experimental data and interpreted in terms of conventional heat transfer parameters such as the Prandtl number, the Peclet number, and the Nusselt number. Within the prescribed Reynolds number domain, the Gorter-Mellink thermal counterflow mechanism becomes less significant, and He II acts like an ordinary fluid. The convection heat transfer characteristics of He II in the highly turbulent regime can be successfully described by using the conventional turbulence and heat transfer theories.
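    For orientation, the conventional parameters mentioned combine as Pe = Re·Pr, and a classical single-phase correlation such as Dittus-Boelter gives a reference Nusselt number (the values below are arbitrary placeholders, not He II property data from the paper):

```python
# Conventional single-phase heat-transfer parameters used as the frame of
# comparison; Re and Pr are illustrative placeholders, not He II data.
Re, Pr = 1.0e5, 0.7
Pe = Re * Pr                          # Peclet number
Nu = 0.023 * Re**0.8 * Pr**0.4        # Dittus-Boelter reference correlation
```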

  10. A blended continuous–discontinuous finite element method for solving the multi-fluid plasma model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sousa, E.M., E-mail: sousae@uw.edu; Shumlak, U., E-mail: shumlak@uw.edu

    The multi-fluid plasma model represents electrons, multiple ion species, and multiple neutral species as separate fluids that interact through short-range collisions and long-range electromagnetic fields. The model spans a large range of temporal and spatial scales, which renders the model stiff and presents numerical challenges. To address the large range of timescales, a blended continuous and discontinuous Galerkin method is proposed, where the massive ion and neutral species are modeled using an explicit discontinuous Galerkin method while the electrons and electromagnetic fields are modeled using an implicit continuous Galerkin method. This approach is able to capture large-gradient ion and neutral physics like shock formation, while resolving high-frequency electron dynamics in a computationally efficient manner. The details of the Blended Finite Element Method (BFEM) are presented. The numerical method is benchmarked for accuracy and tested using a two-fluid one-dimensional soliton problem and an electromagnetic shock problem. The results are compared to conventional finite volume and finite element methods, and demonstrate that the BFEM is particularly effective in resolving physics in stiff problems involving realistic physical parameters, including realistic electron mass and speed of light. The benefit is illustrated by computing a three-fluid plasma application that demonstrates species separation in multi-component plasmas.

  11. Using Finite Element and Eigenmode Expansion Methods to Investigate the Periodic and Spectral Characteristic of Superstructure Fiber Bragg Gratings

    PubMed Central

    He, Yue-Jing; Hung, Wei-Chih; Lai, Zhe-Ping

    2016-01-01

    In this study, a numerical simulation method was employed to investigate and analyze superstructure fiber Bragg gratings (SFBGs) with five duty cycles (50%, 33.33%, 14.28%, 12.5%, and 10%). This study focuses on demonstrating the relevance between the design period and the spectral characteristics of SFBGs (in the form of graphics) for SFBGs of all duty cycles. Compared with the complicated and hard-to-learn conventional coupled-mode theory, the results of the present study may assist beginner and expert designers in understanding the basic application aspects, optical characteristics, and design techniques of SFBGs, thereby lowering the barrier of physical concepts and mathematical skills required for entering the design field. To effectively improve the accuracy of overall computational performance and numerical calculations and to shorten the gap between simulation results and actual production, this study integrated a perfectly matched layer (PML), perfectly reflecting boundary (PRB), object meshing method (OMM), and boundary meshing method (BMM) into the finite element method (FEM) and eigenmode expansion method (EEM). The integrated method enables designers to easily and flexibly design optical fiber communication systems that conform to specific spectral characteristics by using the simulation data in this paper, which include bandwidth, number of channels, and band gap size. PMID:26861322

  12. 22 CFR 42.24 - Adoption under the Hague Convention on Protection of Children and Co-operation in Respect of...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED Immigrants Not Subject to Numerical... Convention effective date. Although this part 42 generally applies to the issuance of immigrant visas, this...

  13. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  14. Photoacoustic spectroscopy of condensed matter

    NASA Technical Reports Server (NTRS)

    Somoano, R. B.

    1978-01-01

    Photoacoustic spectroscopy is a new analytical tool that provides a simple nondestructive technique for obtaining information about the electronic absorption spectrum of samples such as powders, semisolids, gels, and liquids. It can also be applied to samples which cannot be examined by conventional optical methods. Numerous applications of this technique in the field of inorganic and organic semiconductors, biology, and catalysis have been described. Among the advantages of photoacoustic spectroscopy, the signal is almost insensitive to light scattering by the sample and information can be obtained about nonradiative deactivation processes. Signal saturation, which can modify the intensity of individual absorption bands in special cases, is a drawback of the method.

  15. Spectroscopic optical coherence tomography based on wavelength de-multiplexing and smart pixel array detection

    NASA Astrophysics Data System (ADS)

    Laubscher, Markus; Bourquin, Stéphane; Froehly, Luc; Karamata, Boris; Lasser, Theo

    2004-07-01

    Current spectroscopic optical coherence tomography (OCT) methods rely on a posteriori numerical calculation. We present an experimental alternative for accessing spectroscopic information in OCT without post-processing based on wavelength de-multiplexing and parallel detection using a diffraction grating and a smart pixel detector array. Both a conventional A-scan with high axial resolution and the spectrally resolved measurement are acquired simultaneously. A proof-of-principle demonstration is given on a dynamically changing absorbing sample. The method's potential for fast spectroscopic OCT imaging is discussed. The spectral measurements obtained with this approach are insensitive to scan non-linearities or sample movements.

  16. Development of an Improved Simulator for Chemical and Microbial EOR Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Gary A.; Sepehrnoori, Kamy; Delshad, Mojdeh

    2000-09-11

    The objective of this research was to extend the capability of an existing simulator (UTCHEM) to improved oil recovery methods that use surfactants, polymers, gels, alkaline chemicals, microorganisms and foam, as well as various combinations of these, in both conventional and naturally fractured oil reservoirs. Task 1 is the addition of a dual-porosity model for chemical improved oil recovery processes in naturally fractured oil reservoirs. Task 2 is the addition of a foam model. Task 3 addresses several numerical and coding enhancements that will greatly improve the versatility and performance of UTCHEM. Task 4 is the enhancement of physical property models.

  17. An improved method for testing tension properties of fiber-reinforced polymer rebar

    NASA Astrophysics Data System (ADS)

    Yuan, Guoqing; Ma, Jian; Dong, Guohua

    2010-03-01

    We have conducted a series of tests to measure the tensile strength and modulus of elasticity of fiber reinforced polymer (FRP) rebar. In these tests, the ends of each rebar specimen were embedded in a steel tube filled with expansive cement, and the rebar was loaded by gripping the tubes with the conventional fixture during the tensile tests. However, most of the specimens failed at the ends, where the section changed abruptly. Numerical simulations in ANSYS of the stress field at the bar ends revealed that such unexpected failure modes were caused by the test setup: the abrupt change of section induced a stress concentration, so the test results had to be regarded as invalid. An improved testing method is developed in this paper to avoid this issue. A transition part was added between the free segment of the rebar and the tube, which effectively eliminates the stress concentration and thus yields more accurate values for the properties of FRP rebar. The validity of the proposed method was demonstrated by both experimental tests and numerical analysis.

  19. Assessment of WENO-extended two-fluid modelling in compressible multiphase flows

    NASA Astrophysics Data System (ADS)

    Kitamura, Keiichi; Nonomura, Taku

    2017-03-01

    The two-fluid modelling based on an advection-upwind-splitting-method (AUSM)-family numerical flux function, AUSM+-up, following the work by Chang and Liou [Journal of Computational Physics 2007;225: 840-873], has been successfully extended to fifth order by weighted-essentially-non-oscillatory (WENO) schemes. Its performance is then surveyed in several numerical tests. The results showed the desired performance in one-dimensional benchmark test problems: without relying upon an anti-diffusion device, the higher-order two-fluid method captures the phase interface within fewer grid points than the conventional second-order method, as well as a rarefaction wave and a very weak shock. At a high pressure ratio (e.g. 1,000), the choice of interpolated variables appeared to affect the performance: the conservative-variable-based characteristic-wise WENO interpolation showed less sharp but more robust representations of the shocks and expansions than the primitive-variable-based counterpart did. In the two-dimensional shock/droplet test case, however, only the primitive-variable-based WENO with a huge void fraction realised a stable computation.
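    The fifth-order reconstruction underlying such schemes can be sketched with the classic Jiang-Shu WENO5 weights (a generic scalar reconstruction, not the two-fluid AUSM+-up flux itself):

```python
import numpy as np

def weno5(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Fifth-order WENO reconstruction of the left state at x_{i+1/2}
    from five cell averages (classic Jiang-Shu weights)."""
    # three third-order candidate stencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # smoothness indicators penalize oscillatory stencils
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights from the ideal weights (1/10, 6/10, 3/10)
    a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

    On smooth data the nonlinear weights revert to the ideal ones and the reconstruction is fifth-order accurate; near a discontinuity the smoothness indicators suppress the offending stencils.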

  20. Numerical investigation of tube hydroforming of TWT using Corner Fill Test

    NASA Astrophysics Data System (ADS)

    Zribi, Temim; Khalfallah, Ali

    2018-05-01

    Tube hydroforming is a very good alternative to conventional forming processes for obtaining good-quality mechanical parts used in several industrial fields, such as the automotive and aerospace sectors. Research in the field of tube hydroforming aims at improving the formability, stiffness and weight reduction of parts manufactured using this process. In recent years, a new hydroforming method has appeared; it consists of deforming parts made from welded tubes of different thicknesses. This technique, which contributes to the weight reduction of the hydroformed tubes, is a good alternative to conventional tube hydroforming, making it possible to build rigid and light structures at a reduced cost. However, it is possible to improve the weight reduction further by using dissimilar tailor welded tubes (TWT). This paper is a first attempt to analyze by numerical simulation the behavior of TWT hydroformed in square cross-section dies, commonly called the Corner Fill Test. The tubes considered are composed of two materials assembled by butt welding. The analysis focuses on the effect of loading paths on the formability of the structure by determining the change in thickness in several sections of the part. A comparison between the results obtained by hydroforming butt-welded tubes made of dissimilar materials and those obtained using a single-material tube is carried out. Numerical calculations show that the bi-material welded tube has better thinning resistance and a more even thickness distribution in the circumferential direction when compared to the single-material tube.

  1. On numerical integration and computer implementation of viscoplastic models

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Chang, J. P.; Thompson, R. L.

    1985-01-01

    Due to the stringent design requirements for aerospace and nuclear structural components, considerable research interest has been generated in the development of constitutive models for representing the inelastic behavior of metals at elevated temperatures. In particular, a class of unified theories (or viscoplastic constitutive models) has been proposed to simulate material responses such as cyclic plasticity, rate sensitivity, creep deformations, strain hardening or softening, etc. This approach differs from conventional creep and plasticity theory in that both the creep and plastic deformations are treated as unified time-dependent quantities. Although most viscoplastic models give a better representation of material behavior, the associated constitutive differential equations have stiff regimes which present numerical difficulties in time-dependent analysis. In this connection, appropriate solution algorithms must be developed for viscoplastic analysis via the finite element method.

  2. The numerical dynamic for highly nonlinear partial differential equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.
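    The spurious discrete dynamics at issue can be reproduced with a one-line model problem: explicit Euler applied to u' = u(1 − u) converges to the genuine steady state for small time steps but locks onto a spurious periodic orbit for large ones (a generic illustration, not the paper's reaction-convection equation):

```python
# Explicit Euler for the model ODE u' = u(1 - u). The only genuine steady
# states are u = 0 and u = 1, but for large time steps the discrete map
# u_{n+1} = u_n + dt*u_n*(1 - u_n) behaves like a logistic map and settles
# onto a spurious periodic orbit -- numerically induced dynamics with no
# counterpart in the continuous problem.
def iterate(u, dt, n=2000):
    for _ in range(n):
        u = u + dt * u * (1.0 - u)
    return u

u_small = iterate(0.5, dt=0.5)   # converges to the true steady state 1.0
u_large = iterate(0.5, dt=2.5)   # bounded, but locked on a spurious cycle
```

    A linearized analysis of u = 1 only predicts the instability threshold; the nonlinear map analysis is what reveals the bounded spurious orbits that can masquerade as converged solutions.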

  3. Quasi-Talbot effect of orbital angular momentum beams for generation of optical vortex arrays by multiplexing metasurface design.

    PubMed

    Gao, Hui; Li, Yang; Chen, Lianwei; Jin, Jinjin; Pu, Mingbo; Li, Xiong; Gao, Ping; Wang, Changtao; Luo, Xiangang; Hong, Minghui

    2018-01-03

    The quasi-Talbot effect of orbital angular momentum (OAM) beams, in which the centers are placed in a rotationally symmetric position, is demonstrated both numerically and experimentally for the first time. Since its multiplication factor is much higher than the conventional fractional Talbot effect, the quasi-Talbot effect can be used in the generation of vortex beam arrays. A metasurface based on this theory was designed and fabricated to test the validity of this assumption. The agreement between the numerical and measured results suggests the practicability of this method to realize vortex beam arrays with high integrated levels, which can open a new door to achieve various potential uses related to optical vortex arrays in integrated optical systems for wide-ranging applications.

  4. An improved DPSM technique for modelling ultrasonic fields in cracked solids

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique

    2007-04-01

    In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM, several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can mitigate the shadow-region problem to some extent; its complete removal can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with those obtained by the proposed modified technique, which nullifies their contributions. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
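
    The superposition step described above can be sketched minimally (the CSR modification itself is not reproduced here): the field at a set of target points is the sum of spherical-wave Green's function contributions from each point source. The names and the exp(ikr)/r normalization below are illustrative assumptions.

```python
import numpy as np

def dpsm_field(sources, strengths, targets, k):
    """Superpose spherical-wave contributions a * exp(i k r) / r from point sources.

    sources: (S, 3) array of source positions; strengths: length-S complex amplitudes;
    targets: (T, 3) array of observation points; k: wavenumber.
    """
    field = np.zeros(len(targets), dtype=complex)
    for src, a in zip(sources, strengths):
        r = np.linalg.norm(targets - src, axis=1)  # distance to every target point
        field += a * np.exp(1j * k * r) / r
    return field
```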

  5. Study on Damage Evaluation and Machinability of UD-CFRP for the Orthogonal Cutting Operation Using Scanning Acoustic Microscopy and the Finite Element Method.

    PubMed

    Wang, Dongyao; He, Xiaodong; Xu, Zhonghai; Jiao, Weicheng; Yang, Fan; Jiang, Long; Li, Linlin; Liu, Wenbo; Wang, Rongguo

    2017-02-20

    Owing to high specific strength and designability, unidirectional carbon fiber reinforced polymer (UD-CFRP) has been utilized in numerous fields to replace conventional metal materials. Post-machining processes are always required for UD-CFRP to achieve dimensional tolerances and assembly specifications. Due to its inhomogeneity and anisotropy, UD-CFRP differs greatly from metal materials in machining and failure mechanism. To improve efficiency and avoid machining-induced damage, this paper studies the correlations between cutting parameters, fiber orientation angle, cutting forces, and cutting-induced damage for UD-CFRP laminate. Scanning acoustic microscopy (SAM) was employed, and one-/two-dimensional damage factors were then created to quantitatively characterize the damage of the laminate workpieces. According to the 3D Hashin's criteria, a numerical model was further developed using the finite element method (FEM). Good agreement between simulation and experimental results validates the model for the prediction and structural optimization of UD-CFRP.

  6. A modified symplectic PRK scheme for seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Ma, Jian

    2017-02-01

    A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
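
    The modified symplectic PRK scheme itself is specific to this paper, but the long-term stability property it builds on can be illustrated with the simplest symplectic partitioned Runge-Kutta member, Störmer-Verlet leapfrog (a generic stand-in, not the authors' scheme): energy stays bounded over long integrations rather than drifting.

```python
def leapfrog(q, p, force, dt, steps):
    """Stormer-Verlet: the simplest symplectic partitioned Runge-Kutta scheme (unit mass)."""
    for _ in range(steps):
        p += 0.5 * dt * force(q)   # half kick
        q += dt * p                # drift
        p += 0.5 * dt * force(q)   # half kick
    return q, p
```

For the harmonic oscillator (force = -q), the energy error of this scheme oscillates at O(dt^2) but does not grow secularly, which is the "long-term computational capability" property the abstract refers to.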

  7. Study on Damage Evaluation and Machinability of UD-CFRP for the Orthogonal Cutting Operation Using Scanning Acoustic Microscopy and the Finite Element Method

    PubMed Central

    Wang, Dongyao; He, Xiaodong; Xu, Zhonghai; Jiao, Weicheng; Yang, Fan; Jiang, Long; Li, Linlin; Liu, Wenbo; Wang, Rongguo

    2017-01-01

    Owing to high specific strength and designability, unidirectional carbon fiber reinforced polymer (UD-CFRP) has been utilized in numerous fields to replace conventional metal materials. Post-machining processes are always required for UD-CFRP to achieve dimensional tolerances and assembly specifications. Due to its inhomogeneity and anisotropy, UD-CFRP differs greatly from metal materials in machining and failure mechanism. To improve efficiency and avoid machining-induced damage, this paper studies the correlations between cutting parameters, fiber orientation angle, cutting forces, and cutting-induced damage for UD-CFRP laminate. Scanning acoustic microscopy (SAM) was employed, and one-/two-dimensional damage factors were then created to quantitatively characterize the damage of the laminate workpieces. According to the 3D Hashin’s criteria, a numerical model was further developed using the finite element method (FEM). Good agreement between simulation and experimental results validates the model for the prediction and structural optimization of UD-CFRP. PMID:28772565

  8. Eigenstates and dynamics of Hooke's atom: Exact results and path integral simulations

    NASA Astrophysics Data System (ADS)

    Gholizadehkalkhoran, Hossein; Ruokosenmäki, Ilkka; Rantala, Tapio T.

    2018-05-01

    The system of two interacting electrons in a one-dimensional harmonic potential, or Hooke's atom, is revisited. On one hand, it serves as a model for quantum dots in the strong confinement regime; on the other hand, it provides a hard test bench for new methods because of the "space splitting" arising from the one-dimensional Coulomb potential. Here, we complete the numerous previous studies of the ground state of Hooke's atom by including the excited states and dynamics, not considered earlier. With perturbation theory, we reach essentially exact eigenstate energies and wave functions for the strong confinement regime as novel results. We also consider external-perturbation-induced quantum dynamics in a simple separable case. Finally, we test our novel numerical approach based on real-time path integrals (RTPIs) in reproducing the above. The RTPI turns out to be a straightforward approach with an exact account of electronic correlations for solving the eigenstates and dynamics without the conventional restrictions of electronic structure methods.

  9. A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.

    PubMed

    Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining

    2017-04-21

    Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Because of several characteristics of the pseudolite positioning system, namely that the geometry of the stationary pseudolites is invariant, that the indoor signal is easily interrupted, and that the first-order linear truncation error cannot be ignored, a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well suited for indoor pseudolite positioning. Considering the very low computational efficiency of the conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and a least-squares adjustment is conducted to ensure the reliability of the resolved ambiguities. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of the AFM and has a more elaborate search ability than the conventional grid search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the corrected ambiguities gained from our proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.

  10. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Tumon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detection independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, the library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
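
    The checksum idea can be sketched for a matrix-vector product: since e^T(Ax) = (e^T A)x in exact arithmetic, a column-sum checksum computed before the multiplication can be compared, after normalization, against the sum of the result. This is a minimal illustration of the ABFT principle, not the library's actual middleware; the fault-injection flag is purely for demonstration.

```python
import numpy as np

def abft_matvec(A, x, tol=1e-8, inject_fault=False):
    """Matrix-vector product protected by an ABFT column-sum checksum."""
    col_checksum = A.sum(axis=0)        # e^T A, computed before the operation
    y = A @ x                           # the protected computation
    if inject_fault:
        y = y.copy()
        y[0] += 1.0                     # simulate a single-event upset
    expected = col_checksum @ x         # (e^T A) x
    observed = y.sum()                  # e^T (A x)
    # Normalize so the detection threshold is independent of the inputs' scale
    scale = max(abs(expected), abs(observed), 1.0)
    if abs(expected - observed) / scale > tol:
        raise RuntimeError("ABFT checksum mismatch: fault detected")
    return y
```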

  11. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for the design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
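
    The actual dual-kernel counter-flow integral involves two tandem linear subsystems; as a scalar, single-kernel stand-in (illustrative only, not one of the paper's three algorithms), the forced-response convolution over one sample period can be evaluated by simple quadrature and checked against a closed-form solution.

```python
import numpy as np

def forced_response(a, b, u, T, n=2001):
    """Trapezoid-rule evaluation of x(T) = integral_0^T exp(a*(T - tau)) * b * u(tau) dtau."""
    tau = np.linspace(0.0, T, n)
    integrand = np.exp(a * (T - tau)) * b * u(tau)
    h = tau[1] - tau[0]
    # Composite trapezoid rule over the sample period [0, T]
    return h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
```

For a constant command u = 1 the closed form is b*(exp(a*T) - 1)/a, which the quadrature should reproduce to quadrature accuracy.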

  12. Keystroke dynamics in the pre-touchscreen era

    PubMed Central

    Ahmad, Nasir; Szymkowiak, Andrea; Campbell, Paul A.

    2013-01-01

    Biometric authentication seeks to measure an individual’s unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprints, signatures, or iris patterns. However, whilst such methods offer a superior security protocol compared with password-based approaches, for example, their substantial infrastructure costs and intrusive nature make them undesirable, and indeed impractical, for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals’ typing signatures. Such variables may include measurement of the period between key presses, and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis of the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts. PMID:24391568

  13. Stapled peptides as a new technology to investigate protein-protein interactions in human platelets.

    PubMed

    Iegre, Jessica; Ahmed, Niaz S; Gaynord, Josephine S; Wu, Yuteng; Herlihy, Kara M; Tan, Yaw Sing; Lopes-Pires, Maria E; Jha, Rupam; Lau, Yu Heng; Sore, Hannah F; Verma, Chandra; O' Donovan, Daniel H; Pugh, Nicholas; Spring, David R

    2018-05-28

    Platelets are blood cells with numerous crucial pathophysiological roles in hemostasis, cardiovascular thrombotic events and cancer metastasis. Platelet activation requires the engagement of intracellular signalling pathways that involve protein-protein interactions (PPIs). A better understanding of these pathways is therefore crucial for the development of selective anti-platelet drugs. New strategies for studying PPIs in human platelets are required to overcome limitations associated with conventional platelet research methods. For example, small molecule inhibitors can lack selectivity and are often difficult to design and synthesise. Additionally, development of transgenic animal models is costly and time-consuming, and conventional recombinant techniques are ineffective due to the lack of a nucleus in platelets. Herein, we describe the generation of a library of novel, functionalised stapled peptides and their first application in the investigation of platelet PPIs. Moreover, the use of platelet-permeable stapled Bim BH3 peptides confirms the role of Bim in phosphatidylserine (PS) exposure and reveals a role for the Bim protein in platelet activatory processes. Our work demonstrates that functionalised stapled peptides are a complementary alternative to conventional platelet research methods, and could make a significant contribution to the understanding of platelet signalling pathways and hence to the development of anti-platelet drugs.

  14. Keystroke dynamics in the pre-touchscreen era.

    PubMed

    Ahmad, Nasir; Szymkowiak, Andrea; Campbell, Paul A

    2013-12-19

    Biometric authentication seeks to measure an individual's unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprints, signatures, or iris patterns. However, whilst such methods offer a superior security protocol compared with password-based approaches, for example, their substantial infrastructure costs and intrusive nature make them undesirable, and indeed impractical, for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals' typing signatures. Such variables may include measurement of the period between key presses, and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis of the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts.
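
    The timing variables mentioned in the abstract can be sketched minimally (illustrative feature extraction only, not a full authentication scheme): dwell time is how long each key is held, and flight time is the release-to-next-press interval.

```python
from statistics import mean

def typing_features(events):
    """events: list of (key, press_time, release_time) tuples, in seconds.

    Returns the mean dwell time (key-hold duration) and the mean flight time
    (interval between one key's release and the next key's press).
    """
    dwell = [release - press for _key, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return mean(dwell), mean(flight)
```

A real keystroke-dynamics system would compare such features against a stored user profile with a statistical or machine-learning classifier.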

  15. Holistic Assessment and Ethical Disputation on a New Trend in Solid Biofuels.

    PubMed

    Hašková, Simona

    2017-04-01

    A new trend in the production technology of solid biofuels has appeared. There is a wide consensus that most solid biofuels will be produced according to the new production methods within a few years. Numerous samples were manufactured from agro-residues according to conventional methods as well as new methods. Robust analyses that reviewed the hygienic, environmental, financial and ethical aspects were performed. The hygienic and environmental aspects were assessed by robust chemical and technical analyses. The financial aspect was assessed by an energy cost breakdown. The ethical point of view was built on the above stated findings, the survey questionnaire and critical discussion with the literature. It is concluded that the new production methods are significantly favourable from both the hygienic and environmental points of view. Financial indicators do not allow the expressing of any preference. Regarding the ethical aspect, it is concluded that the new methods are beneficial in terms of environmental responsibility. However, it showed that most of the customers that took part in the survey are price-oriented and therefore tend to prefer the cheaper, conventional alternative. In the long term it can be assumed that expansion of the new technology and competition among manufacturers will reduce the costs.

  16. Multigrid methods for flow transition in three-dimensional boundary layers with surface roughness

    NASA Technical Reports Server (NTRS)

    Liu, Chaoqun; Liu, Zhining; Mccormick, Steve

    1993-01-01

    The efficient multilevel adaptive method has been successfully applied to perform direct numerical simulations (DNS) of flow transition in 3-D channels and 3-D boundary layers with 2-D and 3-D isolated and distributed roughness in a curvilinear coordinate system. A fourth-order finite difference technique on stretched and staggered grids, a fully-implicit time marching scheme, a semi-coarsening multigrid method associated with a line distributive relaxation scheme, and an improved outflow boundary-condition treatment, which needs only a very short buffer domain to damp all order-one wave reflections, are developed. These approaches make the multigrid DNS code very accurate and efficient. This allows us not only to perform spatial DNS for the 3-D channel and flat plate at low computational cost, but also to perform spatial DNS of transition in the 3-D boundary layer with 3-D single and multiple roughness elements, which would incur extremely high computational costs with conventional methods. Numerical results show good agreement with linear stability theory, secondary instability theory, and a number of laboratory experiments. The contribution of isolated and distributed roughness to transition is analyzed.

  17. Computing Normal Shock-Isotropic Turbulence Interaction With Tetrahedral Meshes and the Space-Time CESE Method

    NASA Astrophysics Data System (ADS)

    Venkatachari, Balaji Shankar; Chang, Chau-Lyan

    2016-11-01

    The focus of this study is scale-resolving simulations of the canonical normal shock-isotropic turbulence interaction using unstructured tetrahedral meshes and the space-time conservation element solution element (CESE) method. Despite decades of development in unstructured mesh methods and its potential benefits of ease of mesh generation around complex geometries and mesh adaptation, direct numerical or large-eddy simulations of turbulent flows are predominantly carried out using structured hexahedral meshes. This is due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for unstructured meshes that can resolve multiple physical scales and flow discontinuities simultaneously. The CESE method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to accurately simulate turbulent flows using tetrahedral meshes. As part of the study, various regimes of the shock-turbulence interaction (wrinkled and broken shock regimes) will be investigated along with a study on how adaptive refinement of tetrahedral meshes benefits this problem. The research funding for this paper has been provided by the Revolutionary Computational Aerosciences (RCA) subproject under the NASA Transformative Aeronautics Concepts Program (TACP).

  18. High-precision Non-Contact Measurement of Creep of Ultra-High Temperature Materials for Aerospace

    NASA Technical Reports Server (NTRS)

    Rogers, Jan R.; Hyers, Robert

    2008-01-01

    For high-temperature applications (greater than 2,000 C) such as solid rocket motors, hypersonic aircraft, nuclear electric/thermal propulsion for spacecraft, and more efficient jet engines, creep becomes one of the most important design factors to be considered. Conventional creep-testing methods, where the specimen and test apparatus are in contact with each other, are limited to temperatures of approximately 1,700 C. Development of alloys for higher-temperature applications is limited by the availability of testing methods at temperatures above 2,000 C. Development of alloys for applications requiring a long service life at temperatures as low as 1,500 C, such as the next generation of jet turbine superalloys, is limited by the difficulty of accelerated testing at temperatures above 1,700 C. For these reasons, a new, non-contact creep-measurement technique is needed for higher-temperature applications. A new non-contact method for creep measurements of ultra-high-temperature metals and ceramics has been developed and validated. Using the electrostatic levitation (ESL) facility at NASA Marshall Space Flight Center, a spherical sample is rotated quickly enough to cause creep deformation due to centrifugal acceleration. Very accurate measurement of the deformed shape through digital image analysis allows the stress exponent n to be determined very precisely from a single test, rather than from numerous conventional tests. Validation tests on single-crystal niobium spheres showed excellent agreement with conventional tests at 1,985 C; however, the non-contact method provides much greater precision while using only about 40 milligrams of material. This method is being applied to materials including metals and ceramics for non-eroding throats in solid rockets and next-generation superalloys for turbine engines. Recent advances in the method and the current state of these new measurements will be presented.

  19. Analysis of International Space Station Materials on MISSE-3 and MISSE-4

    NASA Technical Reports Server (NTRS)

    Finckenor, Miria M.; Golden, Johnny L.; O'Rourke, Mary Jane

    2008-01-01


  20. A novel transmitter IQ imbalance and phase noise suppression method utilizing pilots in PDM CO-OFDM system

    NASA Astrophysics Data System (ADS)

    Zhang, Haoyuan; Ma, Xiurong; Li, Pengru

    2018-04-01

    In this paper, we develop a novel pilot structure to suppress transmitter in-phase and quadrature (Tx IQ) imbalance, phase noise and channel distortion for polarization division multiplexed (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. Compared with the conventional approach, our method not only significantly improves the system tolerance of IQ imbalance as well as phase noise, but also provides higher transmission speed. Numerical simulation of a PDM CO-OFDM system is used to validate the theoretical analysis under the following conditions: amplitude mismatch 3 dB, phase mismatch 15°, transmission bit rate 100 Gb/s, and 560 km of standard single-mode fiber. Moreover, the proposed method is 63% less complex than the compared method.

  1. Embedded WENO: A design strategy to improve existing WENO schemes

    NASA Astrophysics Data System (ADS)

    van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.

    2017-02-01

    Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented incur no added computational effort compared with their standard counterparts.
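
    For reference, the nonlinear weights of the standard fifth-order Jiang-Shu scheme that the embedded construction starts from can be sketched as follows (the classic WENO-JS weights, not the embedded variant itself):

```python
import numpy as np

def weno5_weights(f, eps=1e-6):
    """Classic Jiang-Shu nonlinear weights for the 5-point stencil f = [f_{i-2..i+2}]."""
    fm2, fm1, f0, fp1, fp2 = f
    # Smoothness indicators of the three 3-point substencils
    beta0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2
    beta1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fm1 - fp1)**2
    beta2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2
    d = np.array([0.1, 0.6, 0.3])                 # ideal linear weights
    alpha = d / (eps + np.array([beta0, beta1, beta2]))**2
    return alpha / alpha.sum()
```

On smooth data all three smoothness indicators agree and the nonlinear weights revert to the ideal linear weights; near a discontinuity the weight of the offending substencil is driven toward zero, which is exactly the regime the embedded construction refines.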

  2. Receiver IQ mismatch estimation in PDM CO-OFDM system using training symbol

    NASA Astrophysics Data System (ADS)

    Peng, Dandan; Ma, Xiurong; Yao, Xin; Zhang, Haoyuan

    2017-07-01

    Receiver in-phase/quadrature (IQ) mismatch is hard to mitigate at the receiver using conventional methods in polarization division multiplexed (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. In this paper, a novel training symbol structure is proposed to estimate IQ mismatch and channel distortion. Combining this structure with the Gram-Schmidt orthogonalization procedure (GSOP) algorithm yields a lower bit error rate (BER). Meanwhile, based on this structure, an estimation method is derived in the frequency domain that estimates the IQ mismatch and channel distortion independently and markedly improves system performance. Numerical simulation shows that the two proposed methods outperform the compared method at 100 Gb/s after 480 km of fiber transmission. The computational complexity is also analyzed.
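
    The GSOP referenced above is a standard compensation step; a minimal sketch of it applied to the received I/Q rails (variable names are illustrative assumptions) restores orthogonality and unit power:

```python
import numpy as np

def gsop(i_rail, q_rail):
    """Gram-Schmidt orthogonalization procedure for received I/Q samples."""
    # Normalize the in-phase rail to unit average power
    i_out = i_rail / np.sqrt(np.mean(i_rail ** 2))
    # Remove the component of Q that is correlated with I, then normalize
    q_tmp = q_rail - (np.mean(i_rail * q_rail) / np.mean(i_rail ** 2)) * i_rail
    q_out = q_tmp / np.sqrt(np.mean(q_tmp ** 2))
    return i_out, q_out
```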

  3. An implicit fast Fourier transform method for integration of the time dependent Schrodinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, M.E.; Ritchie, A.B.

    1997-12-31

    One finds that the conventional exponentiated split-operator procedure is subject to difficulties when solving the time-dependent Schrodinger equation for Coulombic systems. By rearranging the kinetic and potential energy terms in the temporal propagator of the finite difference equations, one can find a propagation algorithm for three dimensions that looks much like the Crank-Nicolson and alternating-direction implicit methods for one- and two-space-dimensional partial differential equations. The authors report investigations of this novel implicit split-operator procedure. The results look promising for a purely numerical approach to certain electron quantum mechanical problems. A charge exchange calculation is presented as an example of the power of the method.
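
    The conventional exponentiated split-operator propagator that the abstract contrasts against can be sketched in one dimension with FFTs (a generic Strang-split step for a smooth potential; Coulomb singularities are exactly where the abstract says this approach struggles):

```python
import numpy as np

def split_step(psi, V, dx, dt, steps, hbar=1.0, m=1.0):
    """Strang-split propagation: e^{-iV dt/2} e^{-iT dt} e^{-iV dt/2} per step."""
    n = len(psi)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # momentum grid
    expV = np.exp(-0.5j * V * dt / hbar)             # half potential step
    expT = np.exp(-0.5j * hbar * k**2 * dt / m)      # full kinetic step (in k-space)
    for _ in range(steps):
        psi = expV * psi
        psi = np.fft.ifft(expT * np.fft.fft(psi))
        psi = expV * psi
    return psi
```

Because every factor is unitary, the wave-function norm is conserved to round-off, which makes norm conservation a convenient sanity check.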

  4. Effect of Trailing Intensive Cooling on Residual Stress and Welding Distortion of Friction Stir Welded 2060 Al-Li Alloy

    NASA Astrophysics Data System (ADS)

    Ji, Shude; Yang, Zhanpeng; Wen, Quan; Yue, Yumei; Zhang, Liguo

    2018-04-01

    Trailing intensive cooling with liquid nitrogen has been successfully applied to friction stir welding of 2 mm thick 2060 Al-Li alloy. The welding temperature, plastic strain, residual stress and distortion of 2060 Al-Li alloy butt joints are compared and discussed for conventional cooling and trailing intensive cooling using experimental and numerical simulation methods. The results reveal that trailing intensive cooling is beneficial for shrinking the high-temperature area, reducing the peak temperature and decreasing the plastic strain during the friction stir welding process. In addition, the reduction in plastic strain outside the weld is smaller than that inside the weld. The welding distortion presents an anti-saddle shape. Compared with conventional cooling, the reductions in welding distortion and longitudinal residual stress of the welded joint under intensive cooling reach 47.7% and 23.8%, respectively.

  5. Tissue-supported dental implant prosthesis (overdenture): the search for the ideal protocol. A literature review

    PubMed Central

    Laurito, Domenica; Lamazza, Luca; Spink, Michael J.; De Biase, Alberto

    2012-01-01

    Summary Aims The success of maxillary and mandibular tissue-supported implant prostheses varies in the literature, and the ideal protocol may be elusive given the numerous studies. This oral rehabilitation option is an alternative to conventional dentures and should improve function, satisfaction, and retention. The purpose of this review article is to clarify these questions. Methods The literature search covered English-language, non-anecdotal implant overdenture articles from 1991 to 2011. Results The results display an aggregate, comprehensive list of categorical variables from the literature review. Overall success of maxillary and mandibular implant overdentures was 86.6% and 95.8%, respectively. Conclusion The literature indicates that the implant overdenture prosthesis provides predictable results: enhanced stability, function and a high degree of satisfaction compared with conventional removable dentures. PMID:22783448

  6. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    PubMed

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

    Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between the time series of different brain areas is the most popular way to quantify functional connectivity. In practical use, the correlation coefficient assumes the data to be temporally independent; however, brain time-series data can manifest significant temporal auto-correlation. We propose a widely applicable method for correcting for this auto-correlation. We considered two types of time-series models, (1) the auto-regressive moving-average model and (2) a nonlinear dynamical system model with noisy fluctuations, and derived the respective asymptotic distributions of the correlation coefficient. These two model types are the most commonly used in neuroscience studies, and we show that their asymptotic distributions share a unified expression. In numerical experiments, our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations, where existing methods measuring association (linear and nonlinear) fail. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods.
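
    The paper's asymptotic distributions are model-specific; as a generic illustration of why the correction matters, a Bartlett-style effective-sample-size adjustment for the correlation between two auto-correlated series can be sketched (a standard textbook correction, not the authors' estimator):

```python
import numpy as np

def corrected_corr(x, y, max_lag=20):
    """Pearson r plus a z-score based on a Bartlett-corrected effective sample size."""
    n = len(x)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    r = float(np.mean(x * y))

    def acf(v, k):
        return float(np.mean(v[:-k] * v[k:]))

    # Variance inflation of r under the null, driven by both series' autocorrelations
    s = 1.0 + 2.0 * sum(acf(x, k) * acf(y, k) for k in range(1, max_lag))
    n_eff = n / max(s, 1e-12)
    z = r * np.sqrt(n_eff)
    return r, z
```

When both series are white, s stays near 1 and the usual z-score is recovered; when both are strongly auto-correlated, n_eff shrinks and spuriously large r values are no longer declared significant.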

  7. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach.

    PubMed

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges

    2013-10-01

    Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10(6) and 10 × 10(6) detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
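
    The PCG iteration itself is standard; the novelty above lies in the choice of preconditioner. A minimal sketch with a generic diagonal preconditioner (here simply 1/diag(A) for illustration, not the paper's parameter-to-sensitivity ratios) is:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    system A x = b with a diagonal preconditioner M^-1 = diag(M_inv_diag)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

    A well-chosen diagonal rescaling of this kind is what allows the direct reconstruction above to reach a stable value in roughly an order of magnitude fewer iterations than unpreconditioned CG.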

  8. Quasi-static image-based immersed boundary-finite element model of left ventricle under diastolic loading

    PubMed Central

    Gao, Hao; Wang, Huiming; Berry, Colin; Luo, Xiaoyu; Griffith, Boyce E

    2014-01-01

    Finite stress and strain analyses of the heart provide insight into the biomechanics of myocardial function and dysfunction. Herein, we describe progress toward dynamic patient-specific models of the left ventricle using an immersed boundary (IB) method with a finite element (FE) structural mechanics model. We use a structure-based hyperelastic strain-energy function to describe the passive mechanics of the ventricular myocardium, a realistic anatomical geometry reconstructed from clinical magnetic resonance images of a healthy human heart, and a rule-based fiber architecture. Numerical predictions of this IB/FE model are compared with results obtained by a commercial FE solver. We demonstrate that the IB/FE model yields results that are in good agreement with those of the conventional FE model under diastolic loading conditions, and the predictions of the LV model using either numerical method are shown to be consistent with previous computational and experimental data. These results are among the first to analyze the stress and strain predictions of IB models of ventricular mechanics, and they serve both to verify the IB/FE simulation framework and to validate the IB/FE model. Moreover, this work represents an important step toward using such models for fully dynamic fluid–structure interaction simulations of the heart. © 2014 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:24799090

  9. Using the numerical method in 1836, James Jackson bridged French therapeutic epistemology and American medical pragmatism.

    PubMed

    Kahn, Linda G; Morabia, Alfredo

    2015-04-01

    To review James Jackson's analysis of bloodletting among pneumonitis patients at the newly founded Massachusetts General Hospital, in which he implemented the numerical method advocated by Pierre-Charles-Alexandre Louis. The study sample included 34 cases of clinically diagnosed pneumonitis admitted to Massachusetts General Hospital between April 19, 1825, and May 10, 1835, and discharged alive. Patient data were extracted from meticulously kept case books. Jackson calculated mean number of venesections, ounces of blood taken, and days of convalescence within groups stratified by day of the disease when first bloodletting occurred. He also calculated average convalescence within groups stratified by age, sex, prior health, vesication, and day of the disease when the patients were admitted to the hospital. To Jackson's surprise, it "seemed to be of less importance, whether our patients were bled or not, than whether they entered the hospital early or late" after the onset of the pneumonitis. Bloodletting was ineffective. Our multivariate reanalysis of his data confirms his conclusion. Outstandingly for his time, Jackson ruled out unwarranted effects of covariates by tabulating their numerical relations to the duration of pneumonia. Using novel gathering of patient clinical data from hospital records and quantitative analytical methods, Jackson contributed results that challenged conventional wisdom and bridged French therapeutic epistemology and American medical pragmatism. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy as a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
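
    The construction of the tapered operator can be sketched as follows (the inverse Fourier transform of the i*k spectrum sampled at half-integer offsets gives the ideal staggered-grid coefficients; the Gaussian window width below is an illustrative choice, not one of the optimized widths from the paper):

```python
import numpy as np

def staggered_cd_coeffs(half_len, sigma=3.0):
    """Staggered-grid convolutional-differentiator coefficients: the
    analytic band-limited first-derivative operator, tapered by a
    Gaussian window to truncate the infinite stencil."""
    m = np.arange(1, half_len + 1)
    tau = m - 0.5                                    # half-integer offsets
    ideal = (-1.0) ** (m + 1) / (np.pi * tau ** 2)   # inverse FT of i*k
    window = np.exp(-tau ** 2 / (2.0 * sigma ** 2))  # Gaussian taper
    return ideal * window

def staggered_derivative(f, h, coeffs):
    """Apply the antisymmetric stencil: df/dx at points midway between
    the samples of f (crude clamping at the boundaries)."""
    L = len(coeffs)
    n = len(f)
    df = np.zeros(n - 1)
    for i in range(n - 1):                   # derivative at x = (i + 1/2) h
        for m in range(1, L + 1):
            r = f[min(i + m, n - 1)]
            l = f[max(i + 1 - m, 0)]
            df[i] += coeffs[m - 1] * (r - l)
    return df / h
```

    For a smooth field the interior error is governed by how closely the windowed spectrum approximates i*k, which is exactly the quantity the optimal-window search above minimizes.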

  11. Tradeoff studies in multiobjective insensitive design of airplane control systems

    NASA Technical Reports Server (NTRS)

    Schy, A. A.; Giesy, D. P.

    1983-01-01

    A computer aided design method for multiobjective parameter-insensitive design of airplane control systems is described. Methods are presented for trading off nominal values of design objectives against sensitivities of the design objectives to parameter uncertainties, together with guidelines for designer utilization of the methods. The methods are illustrated by application to the design of a lateral stability augmentation system for two supersonic flight conditions of the Shuttle Orbiter. Objective functions are conventional handling quality measures and peak magnitudes of control deflections and rates. The uncertain parameters are assumed Gaussian, and numerical approximations of the stochastic behavior of the objectives are described. Results of applying the tradeoff methods to this example show that stochastic-insensitive designs are distinctly different from deterministic multiobjective designs. The main penalty for achieving significant decrease in sensitivity is decreased speed of response for the nominal system.

  12. The least-squares finite element method for low-mach-number compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods are based on the use of the staggered grid or the preconditioning technique, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that such difficulty does not exist for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.
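
    The symmetric positive-definite structure can be seen already in one dimension. The sketch below applies a least-squares FEM to the toy model problem u' + u = f, u(0) = 0, with linear elements (an illustration of the SPD property only, not the paper's compressible-flow solver):

```python
import numpy as np

def lsfem_1d(n, f, L=1.0):
    """Least-squares FEM for u' + u = f on [0, L] with u(0) = 0.
    Minimizing ||u' + u - f||^2 over piecewise-linear u gives
    A_ij = integral (phi_j' + phi_j)(phi_i' + phi_i) dx, which is
    symmetric positive-definite by construction."""
    h = L / n
    nodes = np.linspace(0.0, L, n + 1)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule
    for e in range(n):
        x0 = nodes[e]
        for xi in gp:
            x = x0 + h * (xi + 1.0) / 2.0
            wq = h / 2.0                        # quadrature weight
            N = np.array([1.0 - (x - x0) / h, (x - x0) / h])
            dN = np.array([-1.0 / h, 1.0 / h])
            R = dN + N                          # residual operator on basis
            idx = [e, e + 1]
            A[np.ix_(idx, idx)] += wq * np.outer(R, R)
            b[idx] += wq * f(x) * R
    u = np.zeros(n + 1)                         # impose u(0) = 0 by elimination
    u[1:] = np.linalg.solve(A[1:, 1:], b[1:])
    return nodes, u, A[1:, 1:]
```

    Because A is SPD for any first-order system written this way, standard solvers (Cholesky, conjugate gradients) apply without staggering, preconditioning, or operator splitting.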

  13. Fast convergent frequency-domain MIMO equalizer for few-mode fiber communication systems

    NASA Astrophysics Data System (ADS)

    He, Xuan; Weng, Yi; Wang, Junyi; Pan, Z.

    2018-02-01

    Space division multiplexing using few-mode fibers has been extensively explored to sustain continuous traffic growth. In few-mode fiber optical systems, both spatial and polarization modes are exploited to transmit parallel channels, thus increasing the overall capacity. However, signals on spatial channels inevitably suffer from intrinsic inter-modal coupling and large accumulated differential mode group delay (DMGD), which makes spatial mode demultiplexing even harder. Many research articles have demonstrated that a frequency-domain adaptive multi-input multi-output (MIMO) equalizer can effectively compensate the DMGD and demultiplex the spatial channels with digital signal processing (DSP). However, the large accumulated DMGD usually requires a large number of training blocks for the initial convergence of adaptive MIMO equalizers, which decreases the overall system efficiency and can even degrade the equalizer performance in fast-changing optical channels. The least mean square (LMS) algorithm is commonly used in MIMO equalization to dynamically demultiplex the spatial signals. We have proposed to use a signal power spectral density (PSD) dependent method and a noise PSD directed method to improve the convergence speed of the adaptive frequency-domain LMS algorithm. We also proposed a frequency-domain recursive least squares (RLS) algorithm to further increase the convergence speed of the MIMO equalizer at the cost of greater hardware complexity. In this paper, we compare the hardware complexity and convergence speed of the signal PSD dependent and noise PSD directed algorithms against the conventional frequency-domain LMS algorithm. In our numerical study of a three-mode 112 Gbit/s PDM-QPSK optical system with 3000 km transmission, the noise PSD directed and signal PSD dependent methods improved the convergence speed by 48.3% and 36.1%, respectively, at the cost of 17.2% and 10.7% higher hardware complexity. We also compare the frequency-domain RLS algorithm against the conventional frequency-domain LMS algorithm. Our numerical study shows that, in a three-mode 224 Gbit/s PDM-16-QAM system with 3000 km transmission, the RLS algorithm could improve the convergence speed by 53.7% over the conventional frequency-domain LMS algorithm.
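
    The per-bin frequency-domain MIMO-LMS update at the heart of such equalizers can be sketched as follows (a plain 2 × 2 LMS toy on a frequency-dependent mode-mixing channel; the PSD-dependent step-size normalization and the RLS variant discussed above are omitted, and all names are ours):

```python
import numpy as np

def fd_mimo_lms(X, D, mu=0.1):
    """Train per-frequency-bin 2x2 equalizer taps by LMS.
    X, D: (n_blocks, n_bins, 2) arrays of received and desired
    (training) spectra. Returns the adapted taps W[k]."""
    n_blocks, n_bins, _ = X.shape
    W = np.tile(np.eye(2, dtype=complex), (n_bins, 1, 1))
    for t in range(n_blocks):
        for k in range(n_bins):
            y = W[k] @ X[t, k]                        # equalizer output
            e = D[t, k] - y                           # error vs. training symbol
            W[k] += mu * np.outer(e, X[t, k].conj())  # LMS tap update
    return W
```

    In the noiseless case the taps converge toward the per-bin channel inverse; the convergence-acceleration schemes above effectively replace the fixed mu with a spectrally normalized step size.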

  14. Experimental Validation of Normalized Uniform Load Surface Curvature Method for Damage Localization

    PubMed Central

    Jung, Ho-Yeon; Sung, Seung-Hoon; Jung, Hyung-Jo

    2015-01-01

    In this study, we experimentally validated the normalized uniform load surface (NULS) curvature method, which has been developed recently to assess damage localization in beam-type structures. The normalization technique allows for the accurate assessment of damage localization with greater sensitivity irrespective of the damage location. In this study, damage to a simply supported beam was numerically and experimentally investigated on the basis of the changes in the NULS curvatures, which were estimated from the modal flexibility matrices obtained from the acceleration responses under an ambient excitation. Two damage scenarios were considered: a single-damage case and a multiple-damage case, created by reducing the bending stiffness (EI) of the affected element(s). Numerical simulations were performed using MATLAB as a preliminary step. During the validation experiments, a series of tests were performed. It was found that the damage locations could be identified successfully without any false-positive or false-negative detections using the proposed method. For comparison, the damage detection performance was compared with that of two other well-known methods based on the modal flexibility matrix, namely, the uniform load surface (ULS) method and the ULS curvature method. It was confirmed that the proposed method is more effective for localizing damage in simply supported beams than the two conventional methods in terms of sensitivity to damage under measurement noise. PMID:26501286

  15. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    NASA Astrophysics Data System (ADS)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman Filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can account for the uncertainty of the data in parameter estimation. With this method, we can estimate parameters from hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the groundwater model MODFLOW, and implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation, providing a conventional filtering method alongside one that also considers data uncertainty. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothesized architecture for the MODFLOW groundwater model, using virtual observation wells to simulate and observe the groundwater model periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimation.

  16. A nonstandard finite difference scheme for a basic model of cellular immune response to viral infection

    NASA Astrophysics Data System (ADS)

    Korpusik, Adam

    2017-02-01

    We present a nonstandard finite difference scheme for a basic model of cellular immune response to viral infection. The main advantage of this approach is that it preserves the essential qualitative features of the original continuous model (non-negativity and boundedness of the solution, equilibria and their stability conditions), while being easy to implement. All of the qualitative features are preserved independently of the chosen step-size. Numerical simulations of our approach and comparison with other conventional simulation methods are presented.
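
    A Mickens-type construction of such a scheme for the classic Nowak-Bangham model of cellular immune response (uninfected cells x, infected cells y, CTLs z) can be sketched as follows. The loss terms are discretized implicitly, so positivity holds for any step size h; the parameter values are illustrative, and this is a generic NSFD discretization rather than necessarily the paper's exact scheme:

```python
import numpy as np

def nsfd_step(x, y, z, h, lam, d, beta, a, p, c, b):
    """One nonstandard finite-difference step for
         x' = lam - d*x - beta*x*y
         y' = beta*x*y - a*y - p*y*z
         z' = c*y*z - b*z
    with loss terms treated implicitly (in the denominators), so
    x, y, z remain positive for any h > 0."""
    x1 = (x + h * lam) / (1.0 + h * (d + beta * y))
    y1 = (y + h * beta * x1 * y) / (1.0 + h * (a + p * z))
    z1 = z * (1.0 + h * c * y1) / (1.0 + h * b)
    return x1, y1, z1

def simulate(n_steps, h=2.0, state=(10.0, 1.0, 1.0),
             lam=1.0, d=0.1, beta=0.05, a=0.2, p=1.0, c=0.1, b=0.05):
    traj = [state]
    for _ in range(n_steps):
        traj.append(nsfd_step(*traj[-1], h, lam, d, beta, a, p, c, b))
    return np.array(traj)
```

    With these parameters x is bounded by lam/d regardless of h, illustrating the step-size-independent preservation of non-negativity and boundedness that such schemes are designed for.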

  17. A Comparative Study Using Numerical Methods for Surface X Ray Doses with Conventional and Digital Radiology Equipment in Pediatric Radiology

    NASA Astrophysics Data System (ADS)

    Dan, Posa Ioan; Florin, Georgescu Remus; Virgil, Ciobanu; Antonescu, Elisabeta

    2011-09-01

    The setting of the study is a pediatrics clinic that performs a great variety of emergency, ambulatory, and hospital examinations. The radiology department follows work procedures and a quality assurance system for X ray examinations. The results show that, for the machine with a digital detector, the tube tension set by the programmator remains constant, whereas for the machine with a screen-film detector, the applied tension increases in proportion to the physical development of the child, as reflected in trunk thickness.

  18. Nanophotonic particle simulation and inverse design using artificial neural networks

    PubMed Central

    Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max

    2018-01-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640
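
    The workflow can be sketched end-to-end with a tiny numpy surrogate (a toy one-dimensional "simulator" and hypothetical layer sizes; the paper's networks map multilayer-particle geometries to scattering spectra):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    """Stand-in for an expensive physics simulation."""
    return np.sin(2.0 * x)

# one-hidden-layer surrogate: y = w2 @ tanh(w1 @ x + b1) + b2
w1 = rng.standard_normal((16, 1)); b1 = np.zeros((16, 1))
w2 = 0.1 * rng.standard_normal((1, 16)); b2 = np.zeros((1, 1))

def forward(x):
    h = np.tanh(w1 @ x + b1)
    return w2 @ h + b2, h

# --- train the surrogate on a small sampling of the data ---
X = rng.uniform(0.0, 2.0, (1, 200))
Y = simulate(X)
mse_init = np.mean((forward(X)[0] - Y) ** 2)
lr = 0.05
for _ in range(3000):
    P, H = forward(X)
    err = P - Y
    g_w2 = err @ H.T / X.shape[1]
    g_b2 = err.mean(axis=1, keepdims=True)
    g_h = (w2.T @ err) * (1.0 - H ** 2)       # back propagation through tanh
    g_w1 = g_h @ X.T / X.shape[1]
    g_b1 = g_h.mean(axis=1, keepdims=True)
    w2 -= lr * g_w2; b2 -= lr * g_b2
    w1 -= lr * g_w1; b1 -= lr * g_b1

# --- inverse design: gradient descent on the *input*, using the
# analytical gradient obtained by back propagation ---
target = 0.7
x = np.array([[0.1]])
p_start = forward(x)[0][0, 0]
for _ in range(2000):
    p, h = forward(x)
    gx = 2.0 * (p - target) * (w2 * (1.0 - h.T ** 2)) @ w1
    x -= 0.005 * gx
```

    The design loop never calls the simulator: each step costs one forward and one backward pass through the network, which is what makes the trained surrogate orders of magnitude faster than re-running the simulation.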

  19. Phase-Shifted Based Numerical Method for Modeling Frequency-Dependent Effects on Seismic Reflections

    NASA Astrophysics Data System (ADS)

    Chen, Xuehua; Qi, Yingkai; He, Xilei; He, Zhenhua; Chen, Hui

    2016-08-01

    Significant velocity dispersion and attenuation have often been observed when seismic waves propagate in fluid-saturated porous rocks. Both the magnitude and the variation of the velocity dispersion and attenuation are frequency-dependent and related closely to the physical properties of the fluid-saturated porous rocks. To explore the effects of frequency-dependent dispersion and attenuation on seismic responses, we present a numerical method for seismic data modeling based on the diffusive-viscous wave equation (DVWE), which is rooted in poroelastic theory and takes both diffusive and viscous attenuation into account. We derive a phase-shift wave extrapolation algorithm in the frequency-wavenumber domain for implementing the DVWE-based simulation method that can handle simultaneous lateral variations in velocity, diffusive coefficient, and viscosity. We then design a distributary-channel model in which a hydrocarbon-saturated sand reservoir is embedded in one of the channels, and calculate synthetic seismic data to comparatively illustrate the frequency-dependent seismic behaviors related to the hydrocarbon-saturated reservoir, employing the DVWE-based method and the conventional acoustic wave equation (AWE) based method, respectively. The synthetic seismic data delineate the intrinsic energy loss, phase delay, lower instantaneous dominant frequency, and narrower bandwidth caused by frequency-dependent dispersion and attenuation when a seismic wave travels through the hydrocarbon-saturated reservoir. The numerical modeling method is expected to improve understanding of the features and mechanism of the frequency-dependent seismic effects produced by hydrocarbon-saturated porous rocks.
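
    The extrapolation kernel at the core of such a scheme is compact; below is a constant-velocity acoustic sketch in the frequency-wavenumber domain (the DVWE version would insert the diffusive and viscous terms into the vertical wavenumber, which are omitted here):

```python
import numpy as np

def phase_shift_step(P, dz, omega, v, dx):
    """Extrapolate one frequency component P(x) of a wavefield downward
    by dz through a constant-velocity layer: multiply each horizontal
    wavenumber by exp(i*kz*dz) with kz = sqrt(omega^2/v^2 - kx^2).
    Evanescent components (kz^2 < 0) decay automatically through the
    principal complex square root."""
    n = len(P)
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kz = np.sqrt(((omega / v) ** 2 - kx ** 2).astype(complex))
    return np.fft.ifft(np.fft.fft(P) * np.exp(1j * kz * dz))
```

    Lateral variations in velocity (and, for the DVWE, in the diffusive and viscous coefficients) are typically handled by splitting each depth step into a constant-background phase shift like this plus a space-domain correction.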

  20. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points communicate only with mesh nodes, not among themselves; therefore, MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where the stress at each material point is calculated with an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal.
    The MD simulation to calculate the stress at each material point is performed on GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results of the multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.

  1. On the fusion of tuning parameters of fuzzy rules and neural network

    NASA Astrophysics Data System (ADS)

    Mamuda, Mamman; Sathasivam, Saratha

    2017-08-01

    Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. The conventional learning algorithm for tuning the parameters of fuzzy rules using training input-output data usually ends in a weak firing state, which weakens the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules together with a radial basis function neural network (RBFNN), trained on input-output data using the gradient descent method. With the new learning algorithm, the problem of weak firing under the conventional method is addressed. We illustrate the efficiency of the new learning algorithm by means of numerical examples, simulated in MATLAB R2014(a). The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.

  2. Numerical investigation of three-dimensional pupil model impact on the relative illumination in panomorph lenses

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhenfeng; Thibault, Simon

    2017-11-01

    One of the key issues in conventional wide-angle lenses is the well-known cosine-fourth power law problem causing illumination falloff in the image space. This paper explores methods of improving illumination in the image space of panomorph lenses. By tracing skew rays within the defined field of view and pupil diameter, we obtained the actual position of the three-dimensional pupil model of the entrance pupil (EP) and exit pupil (XP). Based on the law of irradiance transport conservation, the relation between the area of the EP projection and the illumination in the image space is derived to investigate the factors affecting the illumination at the peripheral field. A panomorph lens has been optimized as an example by providing a self-defined operation in the optimization process. The characteristics of the EP and XP in panomorph lenses are qualitatively analyzed. Compared with the conventional design method, the proposed design strategy can enhance the illumination, with and without polarized light, based on qualitatively evaluating the area of the projected EP. It is demonstrated that this method enables enhancement of the illumination without additional film coating.

  3. A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method

    NASA Astrophysics Data System (ADS)

    Fu, Shubin; Gao, Kai

    2017-11-01

    Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerable for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, thus enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz equation system and markedly reduce the computational time.

  4. Identification of lithology in Gulf of Mexico Miocene rocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilterman, F.J.; Sherwood, J.W.C.; Schellhorn, R.

    1996-12-31

    In the Gulf of Mexico, many gas-saturated sands are not Bright Spots and thus are difficult to detect on conventional 3D seismic data. These small amplitude reflections occur frequently in Pliocene-Miocene exploration plays when the acoustic impedances of the gas-saturated sands and shales are approximately the same. In these areas, geophysicists have had limited success using AVO to reduce the exploration risk. The interpretation of the conventional AVO attributes is often difficult and contains questionable relationships to the physical properties of the media. A 3D AVO study was conducted utilizing numerous well-log suites, core analyses, and production histories to help calibrate the seismic response to the petrophysical properties. This study resulted in an extension of the AVO method to a technique that now displays Bright Spots when very clean sands and gas-saturated sands occur. These litho-stratigraphic reflections on the new AVO technique are related to Poisson's ratio, a petrophysical property that is normally mixed with the acoustic impedance on conventional 3D migrated data.

  5. Particle tracking by using single coefficient of Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Widjaja, J.; Dawprateep, S.; Chuamchaitrakool, P.; Meemon, P.

    2016-11-01

    A new method for extracting information from particle holograms by using a single coefficient of Wigner-Ville distribution (WVD) is proposed to obviate drawbacks of conventional numerical reconstructions. Our previous study found that analysis of the holograms by using the WVD gives output coefficients which are mainly confined along a diagonal direction intercepted at the origin of the WVD plane. The slope of this diagonal direction is inversely proportional to the particle position. One of these coefficients always has minimum amplitude, regardless of the particle position. By detecting position of the coefficient with minimum amplitude in the WVD plane, the particle position can be accurately measured. The proposed method is verified through computer simulations.
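
    The underlying distribution can be sketched with a generic discrete WVD (definition only; the hologram-specific step of locating the minimum-amplitude coefficient along the diagonal is not reproduced here):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x:
    W[t, k] is the DFT over the lag tau of x[t+tau] * conj(x[t-tau])."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)
        r = np.zeros(n, dtype=complex)
        for tau in range(-tau_max, tau_max + 1):
            r[tau % n] = x[t + tau] * np.conj(x[t - tau])
        W[t] = np.fft.fft(r).real   # real for conjugate-symmetric lags
    return W
```

    For a pure tone of frequency f0 the energy concentrates at bin 2*f0*n, reflecting the doubled frequency axis of the WVD; for a hologram fringe pattern the energy instead concentrates along a diagonal whose slope encodes the particle position.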

  6. Optofluidic time-stretch microscopy: recent advances

    NASA Astrophysics Data System (ADS)

    Lei, Cheng; Nitta, Nao; Ozeki, Yasuyuki; Goda, Keisuke

    2018-06-01

    Flow cytometry is an indispensable method for valuable applications in numerous fields such as immunology, pathology, pharmacology, molecular biology, and marine biology. Optofluidic time-stretch microscopy is superior to conventional flow cytometry methods for its capability to acquire high-quality images of single cells at a high-throughput exceeding 10,000 cells per second. This makes it possible to extract copious information from cellular images for accurate cell detection and analysis with the assistance of machine learning. Optofluidic time-stretch microscopy has proven its effectivity in various applications, including microalga-based biofuel production, evaluation of thrombotic disorders, as well as drug screening and discovery. In this review, we discuss the principles and recent advances of optofluidic time-stretch microscopy.

  8. Non-adiabatic excited state molecular dynamics of phenylene ethynylene dendrimer using a multiconfigurational Ehrenfest approach

    DOE PAGES

    Fernandez-Alberti, Sebastian; Makhov, Dmitry V.; Tretiak, Sergei; ...

    2016-03-10

    Photoinduced dynamics of electronic and vibrational unidirectional energy transfer between meta-linked building blocks in a phenylene ethynylene dendrimer is simulated using a multiconfigurational Ehrenfest in time-dependent diabatic basis (MCE-TDDB) method, a new variant of the MCE approach developed by us for dynamics involving multiple electronic states with numerous abrupt crossings. Excited-state energies, gradients and non-adiabatic coupling terms needed for dynamics simulation are calculated on-the-fly using the Collective Electron Oscillator (CEO) approach. In conclusion, a comparative analysis of our results obtained using MCE-TDDB, the conventional Ehrenfest method and the surface-hopping approach with and without decoherence corrections is presented.

  9. A reformulation of the coupled perturbed self-consistent field equations entirely within a local atomic orbital density matrix-based scheme

    NASA Astrophysics Data System (ADS)

    Ochsenfeld, Christian; Head-Gordon, Martin

    1997-05-01

To exploit the exponential decay found in numerical studies for the density matrix and its derivative with respect to nuclear displacements, we reformulate the coupled perturbed self-consistent field (CPSCF) equations and a quadratically convergent SCF (QCSCF) method for Hartree-Fock and density functional theory within a local density matrix-based scheme. Our D-CPSCF (density matrix-based CPSCF) and D-QCSCF schemes open the way for exploiting sparsity and achieving asymptotically linear scaling of computational complexity with molecular size (M), in the case of D-CPSCF for all O(M) derivative densities. Furthermore, even for small molecules these methods are strongly competitive with conventional algorithms.

  10. Influence of Tribology of Cage Material on Ball Bearing Cage Instability

    NASA Astrophysics Data System (ADS)

    Servais, S.; Duquenne, M.; Bozet, J.-L.

    2013-09-01

By creating a solid lubricant thickness on both bearing races, the cage material of a cryogenic ball bearing plays a significant role in the good dynamical behavior of the cage. This role is essential because of the lack of conventional lubricant in this kind of bearing. In this paper, a method is described that can identify whether a particular candidate cage material can correctly fulfill its function, in other words, whether it can lead to a stable movement of the cage. From the identification of the fundamental tribological parameters governing the cage behavior, this method presents an example ranking of such materials. It is based on pin-on-disk tests and on a numerical approach.

  11. Defining disease with laser precision: laser capture microdissection in gastroenterology.

    PubMed

    Blatt, Richard; Srinivasan, Shanthi

    2008-08-01

    Laser capture microdissection (LCM) is an efficient and precise method for obtaining pure cell populations or specific cells of interest from a given tissue sample. LCM has been applied to animal and human gastroenterology research in analyzing the protein, DNA, and RNA from all organs of the gastrointestinal system. There are numerous potential applications for this technology in gastroenterology research, including malignancies of the esophagus, stomach, colon, biliary tract, and liver. This technology can also be used to study gastrointestinal infections, inflammatory bowel disease, pancreatitis, motility, malabsorption, and radiation enteropathy. LCM has multiple advantages when compared with conventional methods of microdissection, and this technology can be exploited to identify precursors to disease, diagnostic biomarkers, and therapeutic interventions.

  12. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  13. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  14. Mapped Chebyshev Pseudo-Spectral Method for Dynamic Aero-Elastic Problem of Limit Cycle Oscillation

    NASA Astrophysics Data System (ADS)

    Im, Dong Kyun; Kim, Hyun Soon; Choi, Seongim

    2018-05-01

A mapped Chebyshev pseudo-spectral method is developed as one of the Fourier-spectral approaches and solves nonlinear PDE systems for unsteady flows and dynamic aero-elastic problems in a given time interval, where the flows or elastic motions can be periodic, nonperiodic, or periodic with an unknown frequency. The method uses the Chebyshev polynomials of the first kind for the basis function and redistributes the standard Chebyshev-Gauss-Lobatto collocation points more evenly by a conformal mapping function for improved numerical stability. The method makes several contributions. It can be an order of magnitude more efficient than the conventional finite difference-based, time-accurate computation, depending on the complexity of solutions and the number of collocation points. The method reformulates the dynamic aero-elastic problem in spectral form for coupled analysis of aerodynamics and structures, which can be effective for design optimization of unsteady and dynamic problems. A limit cycle oscillation (LCO) is chosen for the validation, and a new method to determine the LCO frequency is introduced based on the minimization of a second derivative of the aero-elastic formulation. Two examples of the limit cycle oscillation are tested: a nonlinear one-degree-of-freedom mass-spring-damper system and a two-degree-of-freedom oscillating airfoil under pitch and plunge motions. Results show good agreement with those of conventional time-accurate simulations and wind tunnel experiments.
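The redistribution of collocation points can be sketched numerically. The snippet below uses the common Kosloff-Tal-Ezer arcsine mapping as a stand-in for the conformal mapping function described above; the mapping parameter `alpha` and the grid size are illustrative assumptions, not values from the paper.

```python
import numpy as np

N = 16          # number of intervals; N + 1 collocation points
alpha = 0.99    # mapping parameter (hypothetical choice)

# Standard Chebyshev-Gauss-Lobatto points, strongly clustered near +/-1
x = np.cos(np.pi * np.arange(N + 1) / N)

# Arcsine-type conformal mapping spreads the points more evenly, which
# relaxes the severe time-step restriction of the unmapped scheme
x_mapped = np.arcsin(alpha * x) / np.arcsin(alpha)
```

As `alpha` approaches 1 the mapped points approach a uniform grid; as it approaches 0 the standard Chebyshev clustering is recovered.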

  15. A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Hu, W.; Ning, J.

    2017-12-01

Attenuating anisotropic geological bodies are difficult to image with conventional migration methods. In such scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination, and incorrect migration depth in imaging results. To efficiently obtain high-quality images, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model combined with a unique multi-stage optimization technique to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests (shown in the figure) demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below low-Q regions. The result of our new method is very close to the reference RTM image, whereas QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the frequency range 10-70 Hz, and could be used in a wider frequency range. Furthermore, as this method can also handle frequency-dependent Q, it has potential to be applied in imaging deep structures where low Q exists, such as subduction zones, volcanic zones, or fault zones with passive source observations.

  16. Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Dai, Xiao-Xia; Feng, Yuan

    2015-12-01

When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated by directly calculating the statistics of the RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracy of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), and the Overseas Academic Training Funds of the University of Electronic Science and Technology of China (UESTC).
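As an illustration of the Bayesian-MCMC idea, the sketch below fits the two parameters of a lognormal fluctuation model to synthetic samples with a random-walk Metropolis sampler under flat priors. The data, priors, proposal step size, and chain length are all invented for the example; the paper's models and sampler settings may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "RCS" samples drawn from a lognormal fluctuation model
data = rng.lognormal(mean=0.5, sigma=0.8, size=500)

def log_post(mu, sigma):
    # Log-posterior: lognormal likelihood with flat priors (sigma > 0)
    if sigma <= 0:
        return -np.inf
    z = np.log(data)
    return -len(data) * np.log(sigma) - np.sum((z - mu) ** 2) / (2 * sigma**2)

# Random-walk Metropolis sampler
theta = np.array([0.0, 1.0])                   # initial (mu, sigma)
lp = log_post(*theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.05, size=2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples)[1000:]                # discard burn-in
mu_hat, sigma_hat = post.mean(axis=0)
```

The posterior means recover the generating parameters (0.5, 0.8) to within sampling error, whereas with heavily censored or low-RCS data the direct moment-based estimates can deviate, which is the motivation cited in the abstract.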

  17. Beyond Bifurcation: Examining the Conventions of Organic Agriculture in New Zealand

    ERIC Educational Resources Information Center

    Rosin, Christopher; Campbell, Hugh

    2009-01-01

The last 10 years have witnessed numerous attempts to evaluate the merits of new theoretical approaches--ranging from Actor Network Theory to "post-structural" Political Economy and inhabiting a "post-Political Economy" theoretical space--to the explanation of global agricultural change. This article examines Convention Theory (CT)…

  18. A method for validation of finite element forming simulation on basis of a pointwise comparison of distance and curvature

    NASA Astrophysics Data System (ADS)

    Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank

    2016-10-01

Thermoforming of continuously fiber-reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.

  19. Deblurring in digital tomosynthesis by iterative self-layer subtraction

    NASA Astrophysics Data System (ADS)

    Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung

    2010-04-01

Recent developments in large-area flat-panel detectors have made tomosynthesis technology revisited in multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method is notably limited by a lack of sharpness in the reconstructed images because of blur artifacts, which are the superposition of objects that are out of plane. In this study, we have devised an intuitive, simple method to reduce the blur artifact based on an iterative approach. This method repeats a forward and backward projection procedure to determine the blur artifact affecting the plane-of-interest (POI), and then subtracts it from the POI. The proposed method does not include any Fourier-domain operations, hence excluding Fourier-domain-originated artifacts. We describe the concept of self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. A comparative analysis with conventional methods, such as the SAA and filtered backprojection methods, is addressed.
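A toy version of the self-layer subtraction idea can be sketched in 1D with two planes. The parallax geometry, shift amounts, and delta-function phantom below are invented for illustration; the essential step is the loop that forward-projects the current estimate of the other plane, backprojects it to estimate its blur on the plane-of-interest, and subtracts that blur from the SAA image.

```python
import numpy as np

# Two-plane toy setup: each projection sees plane 0 shifted by +s and
# plane 1 shifted by -s (parallax); phantom is one point per plane
L = 64
plane0, plane1 = np.zeros(L), np.zeros(L)
plane0[20], plane1[40] = 1.0, 1.0
shifts = [-2, -1, 0, 1, 2]

def project(p0, p1):
    return [np.roll(p0, s) + np.roll(p1, -s) for s in shifts]

def backproject(projs, sign):
    # shift-and-add (SAA) reconstruction of one plane
    return sum(np.roll(p, -sign * s) for p, s in zip(projs, shifts)) / len(shifts)

projs = project(plane0, plane1)
saa0, saa1 = backproject(projs, +1), backproject(projs, -1)

est0, est1 = saa0.copy(), saa1.copy()
for _ in range(3):
    # forward-project the current estimate of the *other* plane, then
    # backproject it to estimate its blur on the POI, and subtract it
    blur0 = backproject(project(np.zeros(L), est1), +1)
    blur1 = backproject(project(est0, np.zeros(L)), -1)
    est0, est1 = saa0 - blur0, saa1 - blur1
```

After a few iterations each plane estimate retains its in-plane point while the out-of-plane blur is suppressed relative to the plain SAA image.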

  20. Low sidelobe level and high time resolution for metallic ultrasonic testing with linear-chirp-Golay coded excitation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaying; Gang, Tie; Ye, Chaofeng; Cong, Sen

    2018-04-01

Linear-chirp-Golay (LCG)-coded excitation combined with pulse compression is proposed in this paper to improve the time resolution and suppress sidelobes in ultrasonic testing. The LCG-coded excitation is a binary complementary Golay pair with a linear-chirp signal applied to every sub-pulse. Compared with conventional excitation, a common ultrasonic testing method that uses a brief narrow pulse as the excitation signal, the performance of LCG-coded excitation, in terms of time resolution improvement and sidelobe suppression, is studied via numerical and experimental investigations. The numerical simulations are implemented using the MATLAB k-Wave toolbox. It is seen from the simulation results that the time resolution of LCG excitation is 35.5% higher and the peak sidelobe level (PSL) is 57.6 dB lower than linear-chirp excitation with 2.4 MHz chirp bandwidth and 3 μs time duration. In the B-scan experiment, the time resolution of LCG excitation is higher and the PSL is lower than conventional brief-pulse excitation and chirp excitation. In terms of time resolution, the LCG-coded signal performs better than the chirp signal. Moreover, the impact of chirp bandwidth on the LCG-coded signal is less than that on the chirp signal. In addition, the sidelobe of the LCG-coded signal is lower than that of the chirp signal with pulse compression.
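The sidelobe cancellation that motivates Golay-coded excitation can be sketched directly: the autocorrelations of a complementary Golay pair sum to a delta, so the summed matched-filter outputs are sidelobe-free. The sketch below uses a plain binary pair without the per-sub-pulse linear chirp described in the paper.

```python
import numpy as np

def golay_pair(n):
    # Recursive construction of a complementary Golay pair of length 2**n:
    # (a, b) -> (a|b, a|-b) preserves the complementary property
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(3)                    # length-8 complementary pair

# Pulse compression: matched-filter each code with itself
ra = np.correlate(a, a, mode="full")
rb = np.correlate(b, b, mode="full")
s = ra + rb                             # complementary sum: sidelobes cancel
```

Each individual autocorrelation has nonzero sidelobes, but their sum is exactly `2N` at zero lag and zero elsewhere, which is why two complementary transmissions are fired and their compressed outputs summed.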

  1. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells

    PubMed Central

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang

    2017-01-01

A rapid, efficient, and economic method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our developed patented method with those of commonly used conventional methods (Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economic method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, showing a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized. PMID:28207765

  2. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells.

    PubMed

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang; Baifeng, Li

    2017-01-01

A rapid, efficient, and economic method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our developed patented method with those of commonly used conventional methods (Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economic method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, showing a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized.

  3. SU-F-J-86: Method to Include Tissue Dose Response Effect in Deformable Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, J; Liang, J; Chen, S

Purpose: Organs change shape and size during radiation treatment due to both mechanical stress and radiation dose response. However, dose-response-induced deformation has not been considered in conventional deformable image registration (DIR). A novel DIR approach is proposed to include both tissue elasticity and radiation-dose-induced organ deformation. Methods: Assuming that organ sub-volume shrinkage was proportional to the radiation-dose-induced cell killing/absorption, the dose-induced organ volume change was simulated by applying a virtual temperature on each sub-volume. Hence, both stress and the heterogeneous temperature induced organ deformation. A thermal-stress finite element method with an organ surface boundary condition was used to solve the deformation. The initial boundary correspondence on the organ surface was created from conventional DIR. The boundary condition was updated by an iterative optimization scheme to minimize the elastic deformation energy. The registration was validated on a numerical phantom. Treatment dose was constructed applying both the conventional DIR and the proposed method using daily CBCT images obtained from a HN treatment. Results: The phantom study showed 2.7% maximal discrepancy with respect to the actual displacement. Compared with conventional DIR, the sub-volume displacement difference in a right parotid had a mean±SD (min, max) of 1.1±0.9 (−0.4 to 4.8), −0.1±0.9 (−2.9 to 2.4), and −0.1±0.9 (−3.4 to 1.9) mm in the RL/PA/SI directions, respectively. The mean parotid dose and V30 constructed including the dose-response-induced shrinkage were 6.3% and 12.0% higher than those from the conventional DIR. Conclusion: A heterogeneous dose distribution in a normal organ causes non-uniform sub-volume shrinkage. A sub-volume in a high-dose region shrinks more than one in a low-dose region, causing more sub-volumes to move into the high-dose area during the treatment course. This leads to an unfavorable dose-volume relationship for the normal organ. Without including this effect in DIR, the treatment dose in a normal organ could be underestimated, affecting treatment evaluation and planning modification. Acknowledgement: Partially supported by an Elekta Research Grant.

  4. Numerical simulation and comparison of conventional and sloped solar chimney power plants: the case for Lanzhou.

    PubMed

    Cao, Fei; Li, Huashan; Zhang, Yang; Zhao, Liang

    2013-01-01

The solar chimney power plant (SCPP) generates updraft wind through the greenhouse effect. In this paper, the performances of two SCPP styles, that is, the conventional solar chimney power plant (CSCPP) and the sloped solar chimney power plant (SSCPP), are compared through a numerical simulation. A simplified Computational Fluid Dynamics (CFD) model is built to predict the performance of the SCPP. The model is validated through a comparison with the reported results from the Manzanares prototype. The annual performances of the CSCPP and the SSCPP are compared by taking Lanzhou as a case study. Numerical results indicate that the SSCPP holds a higher efficiency and generates smoother power than the CSCPP, and the effective pressure in the SSCPP is relevant to both the chimney and the collector heights.

  5. Numerical Simulation and Comparison of Conventional and Sloped Solar Chimney Power Plants: The Case for Lanzhou

    PubMed Central

    Zhang, Yang; Zhao, Liang

    2013-01-01

The solar chimney power plant (SCPP) generates updraft wind through the greenhouse effect. In this paper, the performances of two SCPP styles, that is, the conventional solar chimney power plant (CSCPP) and the sloped solar chimney power plant (SSCPP), are compared through a numerical simulation. A simplified Computational Fluid Dynamics (CFD) model is built to predict the performance of the SCPP. The model is validated through a comparison with the reported results from the Manzanares prototype. The annual performances of the CSCPP and the SSCPP are compared by taking Lanzhou as a case study. Numerical results indicate that the SSCPP holds a higher efficiency and generates smoother power than the CSCPP, and the effective pressure in the SSCPP is relevant to both the chimney and the collector heights. PMID:24489515

  6. Report of the Proceedings of the Forty-Sixth Meeting of the Convention of American Instructors of the Deaf; Indiana School for the Deaf, Indianapolis, Indiana. Convention Theme: "Educational Crossroads for Deaf Children". June 24-29, 1973.

    ERIC Educational Resources Information Center

    Davis, Ferne E., Ed.

    Presented are proceedings of the 46th (1973) meeting of the Convention of American Instructors of the Deaf. Included are numerous papers and discussions on auditory training, career development, continuing education, reading and language, counseling, curriculum, deaf-blind children, diagnostic assessment, early education, total communication,…

  7. Unmitigated numerical solution to the diffraction term in the parabolic nonlinear ultrasound wave equation.

    PubMed

    Hasani, Mojtaba H; Gharibzadeh, Shahriar; Farjami, Yaghoub; Tavakkoli, Jahan

    2013-09-01

Various numerical algorithms have been developed to solve the Khokhlov-Kuznetsov-Zabolotskaya (KZK) parabolic nonlinear wave equation. In this work, a generalized time-domain numerical algorithm is proposed to solve the diffraction term of the KZK equation. This algorithm solves the transverse Laplacian operator of the KZK equation in three-dimensional (3D) Cartesian coordinates using a finite-difference method based on the five-point implicit backward finite difference and the five-point Crank-Nicolson finite difference discretization techniques. This leads to a more uniform discretization of the Laplacian operator, which in turn results in fewer computational grid nodes without compromising accuracy in the diffraction term. In addition, a new empirical algorithm based on the LU decomposition technique is proposed to solve the system of linear equations obtained from this discretization. The proposed empirical algorithm improves the calculation speed and memory usage, while the order of computational complexity remains linear in the calculation of the diffraction term in the KZK equation. For evaluating the accuracy of the proposed algorithm, two previously published algorithms are used as comparison references: the conventional 2D Texas code and its generalization for 3D geometries. The results show that the accuracy/efficiency performance of the proposed algorithm is comparable with the established time-domain methods.
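To illustrate the structure of such an implicit solve, the sketch below advances a field with a Crank-Nicolson substep driven by a five-point transverse Laplacian, factoring the implicit matrix once by sparse LU and reusing it at every step. It is a generic diffusion-type stand-in, not the authors' KZK discretization, and the grid size and step parameter `alpha` are invented for the example.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import splu

# Crank-Nicolson substep: (I - a/2 L) u_new = (I + a/2 L) u_old,
# with L the standard 2D five-point Laplacian on an n x n grid
n, h, alpha = 40, 0.05, 1e-3
T = diags([np.ones(n - 1), -2 * np.ones(n), np.ones(n - 1)], [-1, 0, 1]) / h**2
I1 = identity(n)
lap = kron(I1, T) + kron(T, I1)              # five-point stencil, n*n unknowns

I = identity(n * n, format="csc")
lu = splu((I - 0.5 * alpha * lap).tocsc())   # factor once (LU), reuse per step
rhs = (I + 0.5 * alpha * lap).tocsr()

x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x)
u = np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / 0.01).ravel()  # Gaussian beam
for _ in range(20):
    u = lu.solve(rhs @ u)                    # cheap triangular solves only
```

Reusing the LU factors is what keeps the per-step cost low, mirroring the abstract's point that the decomposition-based solver improves speed while the complexity of the diffraction-term calculation remains linear.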

  8. Effective modeling and reverse-time migration for novel pure acoustic wave in arbitrary orthorhombic anisotropic media

    NASA Astrophysics Data System (ADS)

    Xu, Shigang; Liu, Yang

    2018-03-01

The conventional pseudo-acoustic wave equations (PWEs) in arbitrary orthorhombic anisotropic (OA) media usually have coupled P- and SV-wave modes. These coupled equations may introduce strong SV-wave artifacts and numerical instabilities in P-wave simulation results and reverse-time migration (RTM) profiles. However, pure acoustic wave equations (PAWEs) completely decouple the P-wave component from the full elastic wavefield and naturally solve all the aforementioned problems. In this article, we present a novel PAWE in arbitrary OA media and compare it with the conventional coupled PWEs. Through decomposing the solution of the corresponding eigenvalue equation for the original PWE into an ellipsoidal differential operator (EDO) and an ellipsoidal scalar operator (ESO), the new PAWE in the time-space domain is constructed by applying the combination of these two solvable operators and can effectively describe P-wave features in arbitrary OA media. Furthermore, we adopt the optimal finite-difference method (FDM) to solve the newly derived PAWE. In addition, the three-dimensional (3D) hybrid absorbing boundary condition (HABC) with some reasonable modifications is developed for reducing artificial edge reflections in anisotropic media. To improve computational efficiency in the 3D case, we adopt a graphics processing unit (GPU) with Compute Unified Device Architecture (CUDA) instead of the traditional central processing unit (CPU) architecture. Several numerical experiments for arbitrary OA models confirm that the proposed schemes can produce pure, stable, and accurate P-wave modeling results and RTM images with higher computational efficiency. Moreover, the 3D numerical simulations provide a comprehensive and realistic description of wave propagation.

  9. Strain expansion-reduction approach

    NASA Astrophysics Data System (ADS)

    Baqersad, Javad; Bharadwaj, Kedar

    2018-02-01

Validating numerical models is one of the main aspects of engineering design. However, correlating the millions of degrees of freedom of numerical models to the few degrees of freedom of test models is challenging. Reduction/expansion approaches have traditionally been used to match these degrees of freedom. However, the conventional reduction/expansion approaches are limited to displacement, velocity, or acceleration data. While in many cases only strain data are accessible (e.g. when a structure is monitored using strain gages), the conventional approaches are not capable of expanding strain data. To bridge this gap, the current paper outlines a reduction/expansion technique to reduce/expand strain data. In the proposed approach, strain mode shapes of a structure are extracted using the finite element method or the digital image correlation technique. The strain mode shapes are used to generate a transformation matrix that can expand the limited set of measurement data. The proposed approach can be used to correlate experimental and analytical strain data. Furthermore, the proposed technique can be used to expand real-time operating data for structural health monitoring (SHM). In order to verify the accuracy of the approach, the proposed technique was used to expand a limited set of real-time operating data in a numerical model of a cantilever beam subjected to various types of excitation. The proposed technique was also applied to expand real-time operating data measured using a few strain gages mounted on an aluminum beam. It was shown that the proposed approach can effectively expand the strain data at limited locations to accurately predict the strain at locations where no sensors were placed.
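A minimal sketch of the mode-shape-based expansion: measured strains at a few gage locations are least-squares-projected onto a handful of strain mode shapes, and the recovered modal coordinates reconstruct strain at every model location. The mode-shape matrix and gage layout below are random stand-ins for FE or digital image correlation data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical strain mode-shape matrix: 100 FE locations x 4 modes
# (in practice extracted from an FE model or DIC measurements)
n_full, n_meas, n_modes = 100, 8, 4
phi_full = rng.standard_normal((n_full, n_modes))

meas_idx = np.sort(rng.choice(n_full, n_meas, replace=False))
phi_meas = phi_full[meas_idx]            # rows at the strain-gage locations

# A true operating strain field lying in the modal subspace
q_true = np.array([1.0, -0.5, 0.25, 0.1])
strain_full_true = phi_full @ q_true
strain_meas = strain_full_true[meas_idx] # what the gages actually see

# Least-squares modal coordinates from the sparse measurements,
# then expansion back to all FE locations
q_hat, *_ = np.linalg.lstsq(phi_meas, strain_meas, rcond=None)
strain_expanded = phi_full @ q_hat
```

With more gages than retained modes (here 8 versus 4) the least-squares fit is overdetermined, which is what makes the expansion robust to measurement noise in practice.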

  10. Additive Manufactured Superconducting Cavities

    NASA Astrophysics Data System (ADS)

    Holland, Eric; Rosen, Yaniv; Woolleet, Nathan; Materise, Nicholas; Voisin, Thomas; Wang, Morris; Mireles, Jorge; Carosi, Gianpaolo; Dubois, Jonathan

Superconducting radio frequency cavities provide an ultra-low dissipative environment, which has enabled fundamental investigations in quantum mechanics, materials properties, and the search for new particles in and beyond the standard model. However, resonator designs are constrained by limitations in conventional machining techniques. For example, current through a seam is a limiting factor in performance for many waveguide cavities. Development of highly reproducible methods for metallic parts through additive manufacturing, referred to colloquially as "3D printing", opens the possibility for novel cavity designs which cannot be implemented through conventional methods. We present preliminary investigations of superconducting cavities made through a selective laser melting process, which compacts a granular powder via a high-power laser according to a digitally defined geometry. Initial work suggests that assuming a loss model and numerically optimizing a geometry to minimize dissipation results in modest improvements in device performance. Furthermore, a subset of titanium alloys, particularly a titanium-aluminum-vanadium alloy (Ti-6Al-4V), exhibits properties indicative of a high kinetic inductance material. This work is supported by LDRD 16-SI-004.

  11. Programmable Colored Illumination Microscopy (PCIM): A practical and flexible optical staining approach for microscopic contrast enhancement

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Sun, Jiasong; Feng, Shijie; Hu, Yan; Chen, Qian

    2016-03-01

    Programmable colored illumination microscopy (PCIM) has been proposed as a flexible optical staining technique for microscopic contrast enhancement. In this method, we replace the condenser diaphragm of a conventional microscope with a programmable thin-film-transistor liquid crystal display (TFT-LCD). By displaying different patterns on the LCD, numerous established imaging modalities can be realized, such as bright field, dark field, phase contrast, oblique illumination, and Rheinberg illumination, which conventionally rely on intricate alterations of the respective microscope setups. Furthermore, the ease of modulating both the color and the intensity distribution at the aperture of the condenser opens the possibility to combine multiple microscopic techniques, or even to realize completely new methods for optical color contrast staining, such as iridescent dark-field and iridescent phase-contrast imaging. The versatility and effectiveness of PCIM are demonstrated by imaging several transparent colorless specimens, such as unstained lung cancer cells, diatoms, textile fibers, and a cryosection of mouse kidney. Finally, the potential of PCIM for RGB-splitting imaging with stained samples is also explored by imaging stained red blood cells and a histological section.

  12. A Numerical Study of Three Moving-Grid Methods for One-Dimensional Partial Differential Equations Which Are Based on the Method of Lines

    NASA Astrophysics Data System (ADS)

    Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.

    1990-08-01

    In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.

  13. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method.
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real- and Fourier-space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
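
The back-and-forth iteration between real and Fourier space can be illustrated on a plain Cartesian FFT grid. The following is a toy Gerchberg-type sketch, not the authors' pseudopolar EST with PPFFT and regularization: known Fourier samples are enforced in one domain and a physical non-negativity constraint in the other.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
truth = np.zeros((n, n))
truth[10:20, 12:22] = 1.0                  # simple non-negative test object

F_true = np.fft.fft2(truth)
mask = rng.random((n, n)) < 0.6            # locations where Fourier data are "measured"
mask[0, 0] = True                          # always keep the DC sample

img = np.zeros((n, n))
for _ in range(200):
    F = np.fft.fft2(img)
    F[mask] = F_true[mask]                 # enforce measured data in Fourier space
    img = np.real(np.fft.ifft2(F))
    img[img < 0] = 0.0                     # physical (non-negativity) constraint in real space

# relative error; well below 1.0, the error of the all-zero starting image
err = np.linalg.norm(img - truth) / np.linalg.norm(truth)
```

The real-space constraint pulls energy out of the unmeasured Fourier locations, which is the mechanism that lets such schemes tolerate fewer (lower-dose) measurements than direct inversion.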

  14. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    PubMed Central

    Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real- and Fourier-space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329

  15. 4D numerical observer for lesion detection in respiratory-gated PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorsakul, Auranuch; Li, Quanzheng; Ouyang, Jinsong

    2014-10-15

    Purpose: Respiratory-gated positron emission tomography (PET)/computed tomography protocols reduce lesion smearing and improve lesion detection through a synchronized acquisition of emission data. However, objective assessment of the image-quality improvement gained from respiratory-gated PET has mainly been limited to a three-dimensional (3D) approach. This work proposes a 4D numerical observer that incorporates both spatial and temporal information for detection tasks in pulmonary oncology. Methods: The authors propose a 4D numerical observer constructed with a 3D channelized Hotelling observer for the spatial domain followed by a Hotelling observer for the temporal domain. Realistic 18F-fluorodeoxyglucose activity distributions were simulated using a 4D extended cardiac torso anthropomorphic phantom including 12 spherical lesions at different anatomical locations (lower, upper, anterior, and posterior) within the lungs. Simulated data based on Monte Carlo simulation were obtained using the GEANT4 application for tomographic emission (GATE). Fifty noise realizations of six respiratory-gated PET frames were simulated by GATE using a model of the Siemens Biograph mMR scanner geometry. PET sinograms of the thorax background and pulmonary lesions that were simulated separately were merged to generate different conditions of the lesions relative to the background (e.g., lesion contrast and motion). A conventional ordered subset expectation maximization (OSEM) reconstruction (5 iterations and 6 subsets) was used to obtain: (1) gated, (2) nongated, and (3) motion-corrected image volumes (a total of 3200 subimage volumes: 2400 gated, 400 nongated, and 400 motion-corrected). Lesion-detection signal-to-noise ratios (SNRs) were measured at different lesion-to-background contrast levels (3.5, 8.0, 9.0, and 20.0), lesion diameters (10.0, 13.0, and 16.0 mm), and respiratory motion displacements (17.6–31.3 mm).
The proposed 4D numerical observer applied on multiple-gated images was compared to the conventional 3D approach applied on the nongated and motion-corrected images. Results: On average, the proposed 4D numerical observer improved the detection SNR by 48.6% (p < 0.005), whereas the 3D method on motion-corrected images improved it by 31.0% (p < 0.005), as compared to the nongated method. For all conditions of the lesions, the relative SNR measurement (Gain = SNR_observed/SNR_nongated) of the 4D method was significantly higher than that of the motion-corrected 3D method, by 13.8% (p < 0.02), where Gain_4D was 1.49 ± 0.21 and Gain_3D was 1.31 ± 0.15. For the lesion with the highest amplitude of motion, the 4D numerical observer yielded the highest observer-performance improvement (176%). For the lesion undergoing the smallest motion amplitude, the 4D method provided superior lesion detectability compared with the 3D method, which gave a detection SNR close to that of the nongated method. The investigation of the structure of the 4D numerical observer showed that a Laguerre-Gaussian channel matrix with a volumetric 3D function yielded higher lesion-detection performance than one with a 2D-stack-channelized function, whereas channels that mimic the human visual system, i.e., difference-of-Gaussian channels, showed similar performance in detecting uniform and spherical lesions. The investigation of detection performance at increasing noise levels yielded decreasing detection SNR, by 27.6% and 41.5% for the nongated and gated methods, respectively. The investigation of lesion contrast and diameter showed that the proposed 4D observer preserved the linearity property of an optimal linear observer while motion was present.
Furthermore, the investigation of the iteration and subset numbers of the OSEM algorithm demonstrated that these parameters had an impact on the lesion detectability, and that selection of the optimal parameters could provide the maximum lesion-detection performance. The proposed 4D numerical observer outperformed the other observers for the lesion-detection task under various lesion conditions and motions. Conclusions: The 4D numerical observer shows substantial improvement in lesion detectability over the 3D observer method. The proposed 4D approach could potentially provide a more reliable objective assessment of the improvement from respiratory-gated PET for lesion-detection tasks. On the other hand, the 4D approach may be used as an upper bound to investigate the performance of the motion correction method. In future work, the authors will validate the proposed 4D approach on clinical data for detection tasks in pulmonary oncology.
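
The spatial stage of such an observer is a channelized Hotelling observer. A minimal synthetic-data sketch (random stand-in channels rather than Laguerre-Gaussian, no temporal stage, all data illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nchan, ntrain = 16 * 16, 4, 500

U = rng.standard_normal((npix, nchan))          # stand-in channel matrix
signal = np.zeros(npix)
signal[100:110] = 2.0                           # known additive lesion profile

g_absent = rng.standard_normal((ntrain, npix))  # background-only images
g_present = g_absent + signal                   # signal-present images

v_a = g_absent @ U                              # channel outputs (dimension reduction)
v_p = g_present @ U
dv = v_p.mean(axis=0) - v_a.mean(axis=0)        # mean channel-space signal
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))       # pooled channel covariance

snr = np.sqrt(dv @ np.linalg.solve(S, dv))      # channelized Hotelling SNR
print(snr > 0.0)                                # → True
```

Channelization makes the covariance a small (nchan x nchan) matrix that can be estimated and inverted reliably; a 4D observer of the kind described would feed per-gate outputs of this stage into a second, temporal Hotelling discriminant.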

  16. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

    The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to model the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated against the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared with the conventional boundary element formulation, a significant reduction in computational expense has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  17. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 2: Retrieval method and applications (report version)

    NASA Technical Reports Server (NTRS)

    Olson, William S.

    1990-01-01

    A physical retrieval method for estimating precipitating water distributions and other geophysical parameters based upon measurements from the DMSP-F8 SSM/I is developed. Three unique features of the retrieval method are (1) sensor antenna patterns are explicitly included to accommodate varying channel resolution; (2) precipitation-brightness temperature relationships are quantified using the cloud ensemble/radiative parameterization; and (3) spatial constraints are imposed for certain background parameters, such as humidity, which vary more slowly in the horizontal than the cloud and precipitation water contents. The general framework of the method will facilitate the incorporation of measurements from the SSM/T, SSM/T-2, and geostationary infrared measurements, as well as information from conventional sources (e.g., radiosondes) or numerical forecast model fields.

  18. Computerized implementation of higher-order electron-correlation methods and their linear-scaling divide-and-conquer extensions.

    PubMed

    Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi

    2017-11-05

    We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theory (MPPT) methods, as well as their combinations, automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and of the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies at significantly less computational cost than the conventional implementations. © 2017 Wiley Periodicals, Inc.

  19. Reliable use of determinants to solve nonlinear structural eigenvalue problems efficiently

    NASA Technical Reports Server (NTRS)

    Williams, F. W.; Kennedy, D.

    1988-01-01

    The analytical derivation, numerical implementation, and performance of a multiple-determinant parabolic interpolation method (MDPIM) for use in solving transcendental eigenvalue (critical buckling or undamped free vibration) problems in structural mechanics are presented. The overall bounding, eigenvalue-separation, qualified parabolic interpolation, accuracy-confirmation, and convergence-recovery stages of the MDPIM are described in detail, and the numbers of iterations required to solve sample plane-frame problems using the MDPIM are compared with those for a conventional bisection method and for the Newtonian method of Simpson (1984) in extensive tables. The MDPIM is shown to use 31 percent less computation time than bisection when an accuracy of 10^-4 is required, but 62 percent less when an accuracy of 10^-8 is required; the time savings over the Newtonian method are about 10 percent.
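
The conventional determinant-bisection baseline referenced above can be sketched on a linear toy problem, with a small 2x2 matrix pencil standing in for the transcendental stiffness determinant:

```python
import numpy as np

# Toy determinant-bisection baseline (the conventional method the MDPIM
# is compared against): locate an eigenvalue of K x = lam M x by
# bisecting on the sign of det(K - lam M).
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
M = np.eye(2)

def d(lam):
    return np.linalg.det(K - lam * M)

lo, hi = 0.5, 2.0                       # bracket containing lam = 1
assert d(lo) * d(hi) < 0                # sign change confirms the bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if d(lo) * d(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(round(0.5 * (lo + hi), 6))        # → 1.0 (smallest eigenvalue of K)
```

Bisection gains only one bit of accuracy per determinant evaluation, which is why interpolation-based schemes such as the MDPIM pay off most at tight tolerances, consistent with the 31 vs 62 percent savings quoted above.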

  20. Prestack density inversion using the Fatti equation constrained by the P- and S-wave impedance and density

    NASA Astrophysics Data System (ADS)

    Liang, Li-Feng; Zhang, Hong-Bing; Dan, Zhi-Wei; Xu, Zi-Qiang; Liu, Xiu-Juan; Cao, Cheng-Hao

    2017-03-01

    Simultaneous prestack inversion is based on the modified Fatti equation and conventionally uses the P- to S-wave velocity ratio as a constraint. We use the relations between P-wave impedance and density (PID) and between S-wave impedance and density (SID) to replace the constant Vp/Vs constraint, and we propose the improved constrained Fatti equation to overcome the effect of P-wave impedance on density. We compare the sensitivity of both methods using numerical simulations and conclude that the density inversion sensitivity improves when using the proposed method. In addition, the random conjugate-gradient method is used in the inversion because it is fast and produces global solutions. The use of synthetic and field data suggests that the proposed inversion method is effective in conventional and nonconventional lithologies.

  1. Topology-optimized metasurfaces: impact of initial geometric layout.

    PubMed

    Yang, Jianji; Fan, Jonathan A

    2017-08-15

    Topology optimization is a powerful iterative inverse design technique in metasurface engineering and can transform an initial layout into a high-performance device. With this method, devices are optimized within a local design phase space, making the identification of suitable initial geometries essential. In this Letter, we examine the impact of initial geometric layout on the performance of large-angle (75 deg) topology-optimized metagrating deflectors. We find that when conventional metasurface designs based on dielectric nanoposts are used as initial layouts for topology optimization, the final devices have efficiencies around 65%. In contrast, when random initial layouts are used, the final devices have ultra-high efficiencies that can reach 94%. Our numerical experiments suggest that device topologies based on conventional metasurface designs may not be suitable to produce ultra-high-efficiency, large-angle metasurfaces. Rather, initial geometric layouts with non-trivial topologies and shapes are required.

  2. Three-dimensional modeling of light rays on the surface of a slanted lenticular array for autostereoscopic displays.

    PubMed

    Jung, Sung-Min; Kang, In-Byeong

    2013-08-10

    In this paper, we developed an optical model describing the behavior of light at the surface of a slanted lenticular array for autostereoscopic displays in three dimensions, and simulated the optical characteristics of autostereoscopic displays using the Monte Carlo method under actual design conditions. The behavior of light was analyzed by tracing rays at selected inclination and azimuthal angles; numerical aberrations and conditions of total internal reflection for the lenticular array were found. The intensity and three-dimensional crosstalk distributions calculated from our model coincide very well with those from conventional design software, and our model achieves a calculation speed 67 times faster than that of the conventional software. From these results, we consider the optical model very useful for predicting the optical characteristics of autostereoscopic displays with enhanced calculation speed.
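
The per-surface building block of such a ray model is vector Snell refraction with a total-internal-reflection test. A hedged sketch (illustrative function and values, not the paper's model):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit ray d at unit surface normal n (medium n1 -> n2).

    Returns the refracted unit direction, or None on total internal
    reflection (vector form of Snell's law).
    """
    cos_i = -np.dot(d, n)                          # incidence cosine
    sin2_t = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)   # transmitted sin^2
    if sin2_t > 1.0:
        return None                                # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return (n1 / n2) * d + (n1 / n2 * cos_i - cos_t) * n

d = np.array([0.0, 0.0, 1.0])                      # ray at normal incidence
n = np.array([0.0, 0.0, -1.0])                     # surface normal (toward ray)
t = refract(d, n, 1.5, 1.0)                        # lens material into air
print(t)                                           # → [0. 0. 1.] (undeviated)
```

A Monte Carlo model like the one described would evaluate this at each sampled point of the slanted lenticular surface, with the local normal taken from the lens geometry; rays failing the TIR test are reflected instead and contribute to crosstalk.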

  3. Fresnel-propagated imaging for the study of human tooth dentin by partially coherent x-ray tomography

    NASA Astrophysics Data System (ADS)

    Zabler, S.; Riesemeier, H.; Fratzl, P.; Zaslansky, P.

    2006-09-01

    Recent methods of phase imaging in x-ray tomography allow the visualization of features that are not resolved in conventional absorption microtomography. Of these, the relatively simple setup needed to produce Fresnel-propagated tomograms appears to be well suited to probing tooth dentin, where composition as well as microstructure vary in a graded manner. By adapting analytical propagation approximations we provide predictions of the form of the interference patterns in the 3D images, which we compare to numerical simulations as well as to data obtained from measurements of water-immersed samples. Our observations reveal details of the tubular structure of dentin, and may be evaluated similarly to conventional absorption tomograms. We believe this exemplifies the power of Fresnel-propagated imaging as a form of 3D microscopy, well suited to quantifying gradual microstructural variations in teeth and similar tissues.
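
The effect being exploited, a weak phase structure becoming visible as intensity fringes after free-space propagation, can be sketched with an angular-spectrum Fresnel propagator (illustrative x-ray parameters, not the authors' beamline settings):

```python
import numpy as np

def fresnel_propagate(u0, wavelength, dx, z):
    """Paraxial angular-spectrum propagation of field u0 by distance z."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

n = 64
u0 = np.ones((n, n), dtype=complex)
u0[:, n // 2:] *= np.exp(1j * 0.5)                # weak phase step (invisible in intensity)

uz = fresnel_propagate(u0, 1e-10, 1e-6, 0.1)      # 0.1 m downstream, 0.1 nm light
contrast = np.abs(uz) ** 2                        # intensity after propagation
print(contrast.std() > 1e-6)                      # → True: edge fringes appear
```

At the object plane the intensity is perfectly uniform (pure phase object); after propagation, interference fringes decorate the phase edge, which is the contrast mechanism behind Fresnel-propagated tomograms of graded tissues such as dentin.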

  4. Domestic and Industrial Water Disinfection Using Boron-Doped Diamond Electrodes

    NASA Astrophysics Data System (ADS)

    Rychen, Philippe; Provent, Christophe; Pupunat, Laurent; Hermant, Nicolas

    This chapter first describes the main properties and the manufacturing process (production using HF-CVD, quality-control measurements, etc.) of diamond electrodes, and more specifically boron-doped diamond (BDD) electrodes. Their exceptional properties make such electrodes particularly suited for many disinfection applications: thanks to their wide working potential window and high anodic potential, they generate a mixture of powerful oxidizing species mainly based on active oxygen and peroxides. Such a mixture of disinfecting agents is far more efficient than conventional chemical or physical techniques. Their efficiency was tested against numerous microorganisms and proved to be greater than that of conventional methods. All bacteria and viruses tested to date were inactivated 3-5 times faster with a treatment based on BDD electrodes and the DiaCellⓇ technology than with other techniques. Several applications, either industrial or private (wellness and home use), are discussed with a focus on the dedicated products and the main advantages of the technology.

  5. An indirect approach to the extensive calculation of relationship coefficients

    PubMed Central

    Colleau, Jean-Jacques

    2002-01-01

    A method was described for calculating population statistics on relationship coefficients without using corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between individuals under investigation and ancestors. Computation times were observed on simulated populations and were compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. Relative performances of the indirect method were good except when many full-sibs during many generations existed in the population. PMID:12270102

  6. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons), with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.

  7. Quantification of human responses

    NASA Technical Reports Server (NTRS)

    Steinlage, R. C.; Gantner, T. E.; Lim, P. Y. W.

    1992-01-01

    Human perception is a complex phenomenon which is difficult to quantify with instruments. For this reason, large panels of people are often used to elicit and aggregate subjective judgments. Print quality, taste, smell, the sound quality of a stereo system, softness, and the grading of Olympic divers and skaters are some examples of situations where subjective measurements or judgments are paramount. We usually express what is in our minds through language as a medium, but languages are limited in their available vocabularies, and as a result our verbalizations are only approximate expressions of what we really have in mind. For lack of better methods to quantify subjective judgments, it is customary to set up a numerical scale such as 1, 2, 3, 4, 5 or 1, 2, 3, ..., 9, 10 for characterizing human responses and subjective judgments, with no valid justification except that these scales are easy to understand and convenient to use. But such numerical scales are arbitrary simplifications of the complex human mind; the human mind is not restricted to such simple numerical variations. In fact, human responses and subjective judgments are psychophysical phenomena that are fuzzy entities and therefore difficult to handle with conventional mathematics and probability theory. The fuzzy mathematical approach provides a more realistic insight into understanding and quantifying human responses. This paper presents a method for quantifying human responses and subjective judgments without assuming a pattern of linear or numerical variation for human responses. In particular, the quantification and evaluation of linguistic judgments were investigated.
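
As a concrete illustration of the fuzzy alternative to a fixed 1-10 scale (not the paper's exact formulation), a verbal judgment can be modeled as a membership function over the response axis rather than a single number:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0.0, 10.0, 101)          # the underlying response axis
soft = triangular(x, 2.0, 4.0, 6.0)      # fuzzy set for the word "soft"
very_soft = soft ** 2                    # linguistic hedge "very" (concentration)

# Defuzzify by centroid to recover a crisp representative value if needed
centroid_soft = np.sum(x * soft) / np.sum(soft)
centroid_very = np.sum(x * very_soft) / np.sum(very_soft)
print(round(centroid_soft, 2), round(centroid_very, 2))   # → 4.0 4.0
```

The membership function carries the vagueness of the word itself; hedges like "very" reshape the set (here both sets are symmetric, so their centroids coincide even though "very soft" is a tighter judgment).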

  8. On a framework for generating PoD curves assisted by numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subair, S. Mohamed, E-mail: prajagopal@iitm.ac.in; Agrawal, Shweta, E-mail: prajagopal@iitm.ac.in; Balasubramaniam, Krishnan, E-mail: prajagopal@iitm.ac.in

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry, where inspection qualification is very important. The conventional experimental means of generating PoD curves, though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  9. On a framework for generating PoD curves assisted by numerical simulations

    NASA Astrophysics Data System (ADS)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar

    2015-03-01

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
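
The signal-response route to a PoD curve described above can be sketched with synthetic data. This is an illustrative least-squares version, not the paper's Bayesian treatment; the coefficients, scatter, and detection threshold below are all made up.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Synthetic signal-response data: log response grows linearly with log
# defect size plus normal scatter -- the distributional assumption the
# paper examines.
a = np.linspace(0.5, 5.0, 40)                  # defect size (mm, hypothetical)
b0, b1, sigma = 0.2, 1.1, 0.35
ahat = np.exp(b0 + b1 * np.log(a) + rng.normal(0.0, sigma, a.size))

# Fit the log-log linear model by ordinary least squares.
X = np.column_stack([np.ones_like(a), np.log(a)])
coef, *_ = np.linalg.lstsq(X, np.log(ahat), rcond=None)
s = (np.log(ahat) - X @ coef).std(ddof=2)

def pod(size, threshold):
    """PoD(a) = P(ahat > threshold) under the fitted lognormal model."""
    mu = coef[0] + coef[1] * np.log(size)
    return 0.5 * (1.0 + erf((mu - np.log(threshold)) / (s * sqrt(2.0))))
```

The resulting `pod` function rises monotonically with defect size, as a PoD curve should; a Bayesian version would instead place priors on the regression coefficients and propagate their posterior into the curve.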

  10. Numerical Modeling of Pulse Detonation Rocket Engine Gasdynamics and Performance

    NASA Technical Reports Server (NTRS)

    Morris, C. I.

    2003-01-01

    Pulse detonation engines (PDEs) have generated considerable research interest in recent years as a chemical propulsion system potentially offering improved performance and reduced complexity compared to conventional gas turbines and rocket engines. The detonative mode of combustion employed by these devices offers a theoretical thermodynamic advantage over the constant-pressure deflagrative combustion mode used in conventional engines. However, the unsteady blowdown process intrinsic to all pulse detonation devices has made realistic estimates of the actual propulsive performance of PDEs problematic. The recent review article by Kailasanath highlights some of the progress that has been made in comparing the available experimental measurements with analytical and numerical models.

  11. Conventional and modified Schwarzschild objective for EUV lithography: design relations

    NASA Astrophysics Data System (ADS)

    Bollanti, S.; di Lazzaro, P.; Flora, F.; Mezi, L.; Murra, D.; Torre, A.

    2006-12-01

    The design criteria of a Schwarzschild-type optical system are reviewed in relation to its use as an imaging system in an extreme ultraviolet lithography setup. Both the conventional and the modified reductor imaging configurations are considered, and the respective performances, as far as the geometrical resolution in the image plane is concerned, are compared. In this connection, a formal relation defining the modified configuration is elaborated, refining a rather naïve definition presented in an earlier work. The dependence of the geometrical resolution on the image-space numerical aperture for a given magnification is investigated in detail for both configurations. So, the advantages of the modified configuration with respect to the conventional one are clearly evidenced. The results of a semi-analytical procedure are compared with those obtained from a numerical simulation performed by an optical design program. The Schwarzschild objective based system under implementation at the ENEA Frascati Center within the context of the Italian FIRB project for EUV lithography has been used as a model. Best-fit functions accounting for the behaviour of the system parameters vs. the numerical aperture are reported; they can be a useful guide for the design of Schwarzschild objective type optical systems.

  12. Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation

    NASA Astrophysics Data System (ADS)

    Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia

    2013-08-01

    Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
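
The iterative-solver comparison above can be illustrated with a toy complex banded system. The matrix below is a stand-in for one Padé term of the downward-continuation operator, not the actual migration matrix; sizes and coefficients are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy complex tridiagonal system standing in for one term of the complex
# Pade downward-continuation operator (illustrative only).
n = 500
diag = 4.0 + 0.5j + 0.1 * np.arange(n) / n
off = -np.ones(n - 1)
A = sp.diags([off, diag, off], [-1, 0, 1], format="csr")
b = np.ones(n, dtype=complex)

# BICGSTAB iterative solution; info == 0 signals convergence.
x, info = spla.bicgstab(A, b)
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

The complex shift on the diagonal plays the same stabilizing role the abstract attributes to the complex Padé approximation: it keeps the system well conditioned, so the iteration count stays low.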

  13. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  14. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  15. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  16. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  17. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  18. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part II

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

    This paper presents a new unified analysis of estimate errors in model-matching extended-back-EMF estimation methods for sensorless drives of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are highly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed that directly realizes a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.

  19. Experimental determination of the viscous flow permeability of porous materials by measuring reflected low frequency acoustic waves

    NASA Astrophysics Data System (ADS)

    Berbiche, A.; Sadouki, M.; Fellah, Z. E. A.; Ogam, E.; Fellah, M.; Mitri, F. G.; Depollier, C.

    2016-01-01

    An acoustic reflectivity method is proposed for measuring the permeability or flow resistivity of air-saturated porous materials. In this method, a simplified expression for the reflection coefficient is derived in the Darcy regime (low-frequency range), which does not depend on frequency or porosity. Numerical simulations show that the reflection coefficient of a porous material can be approximated by this simplified expression, obtained from its first-order Taylor expansion. The approximation is especially good for resistive materials (of low permeability) and at the lower frequencies. The permeability is reconstructed by solving the inverse problem using waves reflected by plastic foam samples at different frequency bandwidths in the Darcy regime. The proposed method has the advantage of being simple compared with conventional methods that use experimental reflection data, and is complementary to the transmissivity method, which is better suited to weakly resistive materials (high permeability).
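
The inverse-problem step, recovering permeability from reflected data, can be sketched generically. The forward model below is a made-up stand-in (the paper's simplified Darcy-regime reflection expression would be substituted), and the grid search is the simplest possible inversion for a single unknown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in forward model: reflection magnitude decreasing with viscous
# permeability k0 (hypothetical form, for illustration only).
def reflection_model(k0, c=2.0e4):
    return 1.0 - c * np.sqrt(k0)

# Synthetic "measured" reflection data with additive noise.
k_true = 1.0e-9                                    # m^2, illustrative
r_meas = reflection_model(k_true) + rng.normal(0.0, 1e-3, 20)

# Invert by scanning the scalar least-squares misfit over a permeability grid.
k_grid = np.linspace(1e-10, 5e-9, 4000)
misfit = [np.mean((r_meas - reflection_model(k)) ** 2) for k in k_grid]
k_est = float(k_grid[int(np.argmin(misfit))])
```

Because the misfit has a single unknown, a dense scan is robust and avoids any dependence on a good starting guess; gradient-based fitting would do the same job faster.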

  20. A sparse equivalent source method for near-field acoustic holography.

    PubMed

    Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
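
The sparsity-promoting reconstruction at the heart of C-ESM can be sketched with a generic compressive-sensing solver. Everything here is a stand-in: a random Gaussian matrix replaces the acoustic transfer matrix, and plain ISTA (iterative soft thresholding) replaces whatever solver the authors use.

```python
import numpy as np

rng = np.random.default_rng(2)

# Underdetermined "transfer matrix" from candidate equivalent sources to
# measurement points (random stand-in for the acoustic propagator).
m, n = 40, 200
G = rng.standard_normal((m, n)) / np.sqrt(m)

# Sparse source strengths: only a few active equivalent sources.
q = np.zeros(n)
q[[10, 77, 150]] = [1.5, -2.0, 1.0]
p = G @ q                                  # measured pressure (noise-free)

# ISTA: gradient step on the data misfit, then soft thresholding, which
# promotes spatially sparse solutions as in the CS framework.
lam = 0.01
L = np.linalg.norm(G, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    g = G.T @ (G @ x - p)
    x = x - g / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
```

With far fewer measurements (40) than candidate sources (200), the l1 penalty recovers the three active sources, which is the mechanism that lets C-ESM work beyond conventional sampling limits for localized sources.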

  1. Reconceptualising Childhood: Children's Rights and Youth Participation in Schools

    ERIC Educational Resources Information Center

    Johnny, Leanne

    2006-01-01

    Article 12 of the United Nations Convention on the Rights of the Child holds that young people have a right to participate in matters affecting them. While all members of the United Nations have ratified the Convention (with the exception of the United States and Somalia), there are numerous challenges associated with implementing the…

  2. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    PubMed

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp-Davis-Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
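
The back-and-forth between Fourier-space data enforcement and real-space constraints can be illustrated with a toy 1-D projection-onto-convex-sets iteration. Everything here is simplified relative to EST: a plain FFT replaces the pseudopolar FFT, the "measured" data are a subset of Fourier coefficients, and the real-space constraints are nonnegativity plus an assumed known support.

```python
import numpy as np

# Toy 1-D object: nonnegative and compactly supported.
n = 128
truth = np.zeros(n)
truth[40:60] = np.hanning(20)

# "Measured" Fourier data: every other coefficient (a stand-in for the
# pseudopolar samples that EST enforces).
F = np.fft.fft(truth)
measured_idx = np.arange(0, n, 2)
support = np.zeros(n, dtype=bool)
support[30:70] = True                     # assumed known support constraint

# Iterate: enforce measured data in Fourier space, constraints in real space.
x = np.zeros(n)
for _ in range(200):
    X = np.fft.fft(x)
    X[measured_idx] = F[measured_idx]                 # data consistency
    x = np.fft.ifft(X).real
    x = np.where(support, np.maximum(x, 0.0), 0.0)    # physical constraints
```

Half the Fourier coefficients are missing, yet the real-space constraints resolve the resulting aliasing ambiguity and the iteration converges to the true object, which is the same mechanism EST exploits to tolerate reduced-dose data.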

  3. Speckle-field propagation in 'frozen' turbulence: brightness function approach

    NASA Astrophysics Data System (ADS)

    Dudorov, Vadim V.; Vorontsov, Mikhail A.; Kolosov, Valeriy V.

    2006-08-01

    Speckle-field long- and short-exposure spatial correlation characteristics for target-in-the-loop (TIL) laser beam propagation and scattering in atmospheric turbulence are analyzed through the use of two different approaches: the conventional Monte Carlo (MC) technique and the recently developed brightness function (BF) method. Both the MC and the BF methods are applied to analysis of speckle-field characteristics averaged over target surface roughness realizations under conditions of 'frozen' turbulence. This corresponds to TIL applications where speckle-field fluctuations associated with target surface roughness realization updates occur within a time scale that can be significantly shorter than the characteristic atmospheric turbulence time. Computational efficiency and accuracy of both methods are compared on the basis of a known analytical solution for the long-exposure mutual correlation function. It is shown that in the TIL propagation scenarios considered the BF method provides improved accuracy and requires significantly less computational time than the conventional MC technique. For TIL geometry with a Gaussian outgoing beam and Lambertian target surface, both analytical and numerical estimations for the speckle-field long-exposure correlation length are obtained. Short-exposure speckle-field correlation characteristics corresponding to propagation in 'frozen' turbulence are estimated using the BF method. It is shown that atmospheric turbulence-induced static refractive index inhomogeneities do not significantly affect the characteristic correlation length of the speckle field, whereas long-exposure spatial correlation characteristics are strongly dependent on turbulence strength.
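
A bare-bones Monte Carlo estimate of a long-exposure speckle correlation can be sketched in 1-D. This is far simpler than the paper's setting: a plain Fraunhofer/FFT propagation in free space (no turbulence), a delta-correlated rough-surface phase, and ensemble averaging over roughness realizations only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gaussian beam times a random rough-surface phase gives one speckle
# realization in the far field (1-D Fraunhofer sketch).
n, w = 1024, 100.0
xs = np.arange(n) - n / 2
beam = np.exp(-((xs / w) ** 2))

def speckle_field():
    phase = rng.uniform(0.0, 2.0 * np.pi, n)   # fresh roughness realization
    return np.fft.fft(beam * np.exp(1j * phase))

# Long-exposure mutual correlation: average E(k) E*(k+L) over realizations.
lags = np.arange(16)
acc = np.zeros(len(lags), dtype=complex)
trials = 300
for _ in range(trials):
    E = speckle_field()
    for j, L in enumerate(lags):
        acc[j] += np.mean(E[: n - L] * np.conj(E[L:]))

corr = np.abs(acc) / np.abs(acc[0])
corr_len = int(np.argmax(corr < 0.5))          # first lag below half maximum
```

Consistent with the van Cittert-Zernike picture, the estimated correlation length is set by the illuminated spot size, and the Monte Carlo estimate converges at the slow statistical rate that motivates the paper's faster brightness-function alternative.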

  4. Improving label-free detection of circulating melanoma cells by photoacoustic flow cytometry

    NASA Astrophysics Data System (ADS)

    Zhou, Huan; Wang, Qiyan; Pang, Kai; Zhou, Quanyu; Yang, Ping; He, Hao; Wei, Xunbin

    2018-02-01

    Melanoma is a malignant tumor of melanocytes with high mortality and a high metastasis rate. Circulating melanoma cells, with their high melanin content, can be detected by light absorption, enabling diagnosis and treatment of the cancer at an early stage. Compared with conventional detection methods such as fluorescence-based in vivo flow cytometry (IVFC), in vivo photoacoustic flow cytometry (PAFC) uses melanin itself as the biomarker and collects photoacoustic (PA) signals non-invasively, without labeling by toxic fluorescent dyes. The information on target tumor cells is helpful for data analysis and cell counting. However, the raw signals in a PAFC system contain numerous noise components, such as environmental noise, device noise and in vivo motion noise. Conventional denoising algorithms, such as the wavelet denoising (WD) method and the mean filter (MF) method, rely on local information to extract the data of clinical interest, which removes subtle features while leaving much of the noise. To address these problems, the nonlocal means (NLM) method, which exploits nonlocal data, has been proposed to suppress the noise in PA signals. Extensive experiments were conducted on in vivo PA signals from mice injected with B16F10 cells in the caudal vein. All the results indicate that the NLM method offers superior noise reduction while preserving subtle information.

  5. Speckle-field propagation in 'frozen' turbulence: brightness function approach.

    PubMed

    Dudorov, Vadim V; Vorontsov, Mikhail A; Kolosov, Valeriy V

    2006-08-01

    Speckle-field long- and short-exposure spatial correlation characteristics for target-in-the-loop (TIL) laser beam propagation and scattering in atmospheric turbulence are analyzed through the use of two different approaches: the conventional Monte Carlo (MC) technique and the recently developed brightness function (BF) method. Both the MC and the BF methods are applied to analysis of speckle-field characteristics averaged over target surface roughness realizations under conditions of 'frozen' turbulence. This corresponds to TIL applications where speckle-field fluctuations associated with target surface roughness realization updates occur within a time scale that can be significantly shorter than the characteristic atmospheric turbulence time. Computational efficiency and accuracy of both methods are compared on the basis of a known analytical solution for the long-exposure mutual correlation function. It is shown that in the TIL propagation scenarios considered the BF method provides improved accuracy and requires significantly less computational time than the conventional MC technique. For TIL geometry with a Gaussian outgoing beam and Lambertian target surface, both analytical and numerical estimations for the speckle-field long-exposure correlation length are obtained. Short-exposure speckle-field correlation characteristics corresponding to propagation in 'frozen' turbulence are estimated using the BF method. It is shown that atmospheric turbulence-induced static refractive index inhomogeneities do not significantly affect the characteristic correlation length of the speckle field, whereas long-exposure spatial correlation characteristics are strongly dependent on turbulence strength.

  6. Shock waves simulated using the dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Duan Z.; Dhakal, Tilak Raj

    In this work we combine the dual domain material point method with molecular dynamics in an attempt to create a multiscale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically nonequilibrium state, and conventional constitutive relations or equations of state are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a molecular dynamics simulation of a group of atoms surrounding the material point. Rather than restricting the multiscale simulation to a small spatial region, such as phase interfaces or crack tips, this multiscale method can be used to consider nonequilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points communicate only with mesh nodes, not among themselves; therefore molecular dynamics simulations for material points can be performed independently in parallel. The dual domain material point method is chosen for this multiscale method because it can be used in history-dependent problems with large deformation without generating numerical noise as material points move across cells, and also because of its convergence and conservation properties. To demonstrate the feasibility and accuracy of this method, we compare the results of shock wave propagation in a cerium crystal calculated using direct molecular dynamics simulation with the results from the combined multiscale calculation.

  7. Shock waves simulated using the dual domain material point method combined with molecular dynamics

    DOE PAGES

    Zhang, Duan Z.; Dhakal, Tilak Raj

    2017-01-17

    In this work we combine the dual domain material point method with molecular dynamics in an attempt to create a multiscale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically nonequilibrium state, and conventional constitutive relations or equations of state are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a molecular dynamics simulation of a group of atoms surrounding the material point. Rather than restricting the multiscale simulation to a small spatial region, such as phase interfaces or crack tips, this multiscale method can be used to consider nonequilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points communicate only with mesh nodes, not among themselves; therefore molecular dynamics simulations for material points can be performed independently in parallel. The dual domain material point method is chosen for this multiscale method because it can be used in history-dependent problems with large deformation without generating numerical noise as material points move across cells, and also because of its convergence and conservation properties. To demonstrate the feasibility and accuracy of this method, we compare the results of shock wave propagation in a cerium crystal calculated using direct molecular dynamics simulation with the results from the combined multiscale calculation.

  8. A novel approach to evaluation of pest insect abundance in the presence of noise.

    PubMed

    Embleton, Nina; Petrovskaya, Natalia

    2014-03-01

    Evaluation of pest abundance is an important task of integrated pest management. It has recently been shown that evaluation of pest population size from discrete sampling data can be done by using the ideas of numerical integration. Numerical integration of the pest population density function is a computational technique that readily gives us an estimate of the pest population size, where the accuracy of the estimate depends on the number of traps installed in the agricultural field to collect the data. However, in a standard mathematical problem of numerical integration, it is assumed that the data are precise, so that the random error is zero when the data are collected. This assumption does not hold in ecological applications. An inherent random error is often present in field measurements, and therefore it may strongly affect the accuracy of evaluation. In our paper, we offer a novel approach to evaluate the pest insect population size under the assumption that the data about the pest population include a random error. The evaluation is not based on statistical methods but is done using a spatially discrete method of numerical integration where the data obtained by trapping as in pest insect monitoring are converted to values of the population density. It will be discussed in the paper how the accuracy of evaluation differs from the case where the same evaluation method is employed to handle precise data. We also consider how the accuracy of the pest insect abundance evaluation can be affected by noise when the data available from trapping are sparse. In particular, we show that, contrary to intuitive expectations, noise does not have any considerable impact on the accuracy of evaluation when the number of traps is small as is conventional in ecological applications.
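
The core idea, estimating a population size by numerically integrating sparse, noisy trap counts, can be sketched with a hypothetical density profile. The density function, trap counts, and 10% noise level below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical pest density along a 1-D transect of a unit-length field.
def density(x):
    return 100.0 * np.exp(-(((x - 0.4) / 0.15) ** 2))

def trap(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

# Fine-grid reference value of the true population size.
xf = np.linspace(0.0, 1.0, 10001)
exact = trap(density(xf), xf)

# Sparse traps: exact readings vs. 10% multiplicative measurement noise.
errors = {}
for n_traps in (5, 17):
    xt = np.linspace(0.0, 1.0, n_traps)
    clean = density(xt)
    noisy = clean * (1.0 + rng.normal(0.0, 0.1, n_traps))
    errors[n_traps] = (abs(trap(clean, xt) - exact) / exact,
                       abs(trap(noisy, xt) - exact) / exact)
```

With exact data the error falls rapidly as traps are added; with noisy data the error floor is set by the measurement noise, which is the regime the paper analyses.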

  9. A Stabilized Finite Element Method for Modified Poisson-Nernst-Planck Equations to Determine Ion Flow Through a Nanopore

    PubMed Central

    Chaudhry, Jehanzeb Hameed; Comer, Jeffrey; Aksimentiev, Aleksei; Olson, Luke N.

    2013-01-01

    The conventional Poisson-Nernst-Planck equations do not account for the finite size of ions explicitly. This leads to solutions featuring unrealistically high ionic concentrations in the regions subject to external potentials, in particular, near highly charged surfaces. A modified form of the Poisson-Nernst-Planck equations accounts for steric effects and results in solutions with finite ion concentrations. Here, we evaluate numerical methods for solving the modified Poisson-Nernst-Planck equations by modeling electric field-driven transport of ions through a nanopore. We describe a novel, robust finite element solver that combines the applications of the Newton's method to the nonlinear Galerkin form of the equations, augmented with stabilization terms to appropriately handle the drift-diffusion processes. To make direct comparison with particle-based simulations possible, our method is specifically designed to produce solutions under periodic boundary conditions and to conserve the number of ions in the solution domain. We test our finite element solver on a set of challenging numerical experiments that include calculations of the ion distribution in a volume confined between two charged plates, calculations of the ionic current though a nanopore subject to an external electric field, and modeling the effect of a DNA molecule on the ion concentration and nanopore current. PMID:24363784

  10. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients making them efficient to numerically encode. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
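
The Fourier-collocation gradient that underpins the scheme can be demonstrated on a smooth periodic function, alongside a second-order centered difference on the same coarse grid. This is a generic illustration of the spectral-accuracy argument, not the k-space model itself.

```python
import numpy as np

# Fourier-collocation gradient vs. 2nd-order centered finite difference:
# the spectral gradient is exact to rounding on a coarse grid, which is
# what relaxes the points-per-wavelength requirement.
n = 32
x = 2.0 * np.pi * np.arange(n) / n
f = np.sin(3.0 * x)
exact = 3.0 * np.cos(3.0 * x)

ik = 1j * np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers times i
spectral = np.fft.ifft(ik * np.fft.fft(f)).real

h = 2.0 * np.pi / n
fd = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)

spec_err = float(np.max(np.abs(spectral - exact)))
fd_err = float(np.max(np.abs(fd - exact)))
```

On this 32-point grid the spectral derivative is accurate to machine precision while the finite difference carries a few-percent error; the k-space method adds a temporal correction on top of this spatial scheme.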

  11. Elastic-wave-mode separation in TTI media with inverse-distance weighted interpolation involving position shading

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu

    2017-10-01

    Elastic-wave reverse-time migration of inhomogeneous anisotropic media has become a focus of current research. To ensure the accuracy of the migration, the wavefield must be separated into P-wave and S-wave modes before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters at the mesh nodes, and the polarization vectors of the P-wave and S-wave at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational cost, wave-mode separation can be carried out in a mixed domain on the basis of reference models in the wave-number domain, but conventional interpolation methods and reference-model selection schemes reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance interpolation method involving position shading and selects reference models with a random-points scheme. The method adds a spatial weight coefficient K, which reflects the orientation of the reference point, to the conventional IDW algorithm, so the interpolation accounts for the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method separates the wave modes more accurately using fewer reference models and has good practical value.
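
The flavor of inverse-distance weighting with a directional factor can be sketched as follows. The specific form of K below, down-weighting reference points crowded into the same azimuth sector, is an assumed illustration, not the paper's coefficient.

```python
import numpy as np

# IDW with a directional factor K: reference points that share an azimuth
# sector around the target are down-weighted, so both distance and bearing
# influence the weights (hypothetical form of K, for illustration only).
def idw_shaded(pt, refs, vals, power=2.0, sector=np.pi / 6):
    d = np.linalg.norm(refs - pt, axis=1)
    az = np.arctan2(refs[:, 1] - pt[1], refs[:, 0] - pt[0])
    K = np.empty(len(refs))
    for i in range(len(refs)):
        gap = np.abs(np.angle(np.exp(1j * (az - az[i]))))
        K[i] = 1.0 / np.sum(gap < sector)   # 1 / (references in my sector)
    w = K / np.maximum(d, 1e-12) ** power
    return float(np.sum(w * vals) / np.sum(w))

# Four references at equal distance but different azimuths (toy data).
refs = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
vals = np.array([10.0, 20.0, 30.0, 40.0])
```

When references are azimuthally spread out, K is 1 for all of them and the scheme reduces to plain IDW; a cluster of references on one side is collectively counted once rather than dominating the estimate.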

  12. Biclustering Learning of Trading Rules.

    PubMed

    Huang, Qinghua; Wang, Ting; Tao, Dacheng; Li, Xuelong

    2015-10-01

    Technical analysis with numerous indicators and patterns has been regarded as important evidence for making trading decisions in financial markets. However, it is extremely difficult for investors to find useful trading rules based on numerous technical indicators. This paper proposes the use of biclustering mining to discover effective technical trading patterns that contain a combination of indicators from historical financial data series; to our knowledge, this is the first attempt to apply a biclustering algorithm to trading data. The mined patterns are regarded as trading rules and can be classified into three trading actions (i.e., buy, sell, and no-action signals) with respect to the maximum support. A modified K-nearest-neighbor (K-NN) method is applied to classify trading days in the testing period. The proposed method, termed the biclustering and K-nearest-neighbor algorithm (BIC-K-NN), was implemented on four historical datasets, and the average performance was compared with the conventional buy-and-hold strategy and three previously reported intelligent trading systems. Experimental results demonstrate that the proposed trading system outperforms its counterparts and will be useful for investment in various financial markets.
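    The K-NN classification step can be illustrated with a minimal sketch; the feature vectors, labels, and plain majority vote below are toy assumptions rather than the paper's modified K-NN or its mined biclusters.

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k=3):
    """Plain K-NN majority vote over technical-indicator feature vectors;
    the paper's modification of K-NN is not reproduced here."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy indicator vectors labeled buy(+1) / sell(-1) / hold(0)
X = np.array([[1.0, 1.0], [0.9, 1.1], [-1.0, -1.0],
              [-1.1, -0.9], [0.0, 0.1], [0.1, 0.0]])
y = np.array([1, 1, -1, -1, 0, 0])
action = knn_classify(np.array([0.95, 0.95]), X, y)
print(action)  # nearest neighbours are "buy" days
```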

  13. A stochastic method for Brownian-like optical transport calculations in anisotropic biosuspensions and blood

    NASA Astrophysics Data System (ADS)

    Miller, Steven

    1998-03-01

    A generic stochastic method is presented that rapidly evaluates numerical bulk flux solutions to the one-dimensional integrodifferential radiative transport equation, for coherent irradiance of optically anisotropic suspensions of nonspheroidal bioparticles, such as blood. As Fermat rays or geodesics enter the suspension, they evolve into a bundle of random paths or trajectories due to scattering by the suspended bioparticles. Overall, this can be interpreted as a bundle of Markov trajectories traced out by a "gas" of Brownian-like point photons being scattered and absorbed by the homogeneous distribution of uncorrelated cells in suspension. By considering the cumulative vectorial intersections of a statistical bundle of random trajectories through sets of interior data planes in the space containing the medium, the effective equivalent information content and behavior of the (generally unknown) analytical flux solutions of the radiative transfer equation rapidly emerges. The fluxes match the analytical diffuse flux solutions in the diffusion limit, which verifies the accuracy of the algorithm. The method is not constrained by the diffusion limit and gives correct solutions for conditions where diffuse solutions are not viable. Unlike conventional Monte Carlo and numerical techniques adapted from neutron transport or nuclear reactor problems that compute scalar quantities, this vectorial technique is fast, easily implemented, adaptable, and viable for a wide class of biophotonic scenarios. By comparison, other analytical or numerical techniques generally become unwieldy, lack viability, or are more difficult to utilize and adapt. Illustrative calculations are presented for blood media at monochromatic wavelengths in the visible spectrum.
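    The flavor of such a bundle-of-trajectories calculation can be sketched with a deliberately simplified 1D photon random walk (exponential free paths, survival weighting for absorption); the paper's vectorial trajectory bookkeeping and anisotropic phase function are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_slab_transmission(mu_s, mu_a, thickness, n_photons=5000):
    """1D Brownian-like random walk of point photons through a slab:
    exponential free paths, random direction flips for scattering,
    and survival weighting for absorption. Illustrative only."""
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    transmitted = 0.0
    for _ in range(n_photons):
        z, direction, weight = 0.0, 1.0, 1.0
        while True:
            z += direction * rng.exponential(1.0 / mu_t)
            if z >= thickness:
                transmitted += weight      # photon exits the far face
                break
            if z <= 0.0:
                break                      # photon escapes backwards
            weight *= albedo               # absorb a fraction of the weight
            if weight < 1e-4:
                break
            direction = rng.choice([-1.0, 1.0])   # 1D "isotropic" scattering
    return transmitted / n_photons

T = mc_slab_transmission(mu_s=1.0, mu_a=0.1, thickness=2.0)
print(T)  # bulk transmitted flux fraction
```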

  14. Computational ecology as an emerging science

    PubMed Central

    Petrovskii, Sergei; Petrovskaya, Natalia

    2012-01-01

    It has long been recognized that numerical modelling and computer simulations can be used as a powerful research tool to understand, and sometimes to predict, the tendencies and peculiarities in the dynamics of populations and ecosystems. It has been, however, much less appreciated that the context of modelling and simulations in ecology is essentially different from that normally found in other natural sciences. In our paper, we review the computational challenges arising in modern ecology in the spirit of computational mathematics, i.e. with our main focus on the choice and use of adequate numerical methods. Somewhat paradoxically, the complexity of ecological problems does not always require the use of complex computational methods. This paradox, however, can be easily resolved if we recall that application of sophisticated computational methods usually requires a clear and unambiguous mathematical problem statement as well as clearly defined benchmark information for model validation. At the same time, many ecological problems still do not have a mathematically accurate and unambiguous description, and available field data are often very noisy, so it can be hard to understand how the results of computations should be interpreted from the ecological viewpoint. In this scientific context, computational ecology has to deal with a new paradigm: conventional issues of numerical modelling such as convergence and stability become less important than the qualitative analysis that can be provided with the help of computational techniques. We discuss this paradigm by considering computational challenges arising in several specific ecological applications. PMID:23565336

  15. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce the RSFD theory, from which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that they can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy they can adopt shorter difference operators to save computing time.
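    The least-squares idea can be sketched for a plain (non-rotated) staggered-grid first-derivative operator: choose coefficients so the operator's wavenumber response matches the exact response over a band. The band limit kmax and the sampling below are assumptions for illustration.

```python
import numpy as np

def ls_staggered_coeffs(M=4, kmax=2.5, n=200):
    """Least-squares staggered-grid FD coefficients: fit the operator's
    wavenumber response 2*sum_m c_m*sin((m-1/2)*k) to the exact response k
    over 0 < k <= kmax (k in units of 1/dx)."""
    k = np.linspace(1e-3, kmax, n)
    A = 2.0 * np.sin(np.outer(k, np.arange(M) + 0.5))
    c, *_ = np.linalg.lstsq(A, k, rcond=None)
    return c

c_ls = ls_staggered_coeffs()
# The response error stays small across the whole fitting band,
# unlike Taylor coefficients, which are accurate only near k = 0.
kk = np.linspace(1e-3, 2.5, 200)
resp = 2.0 * np.sin(np.outer(kk, np.arange(4) + 0.5)) @ c_ls
err = np.max(np.abs(resp - kk))
print(c_ls, err)
```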

  16. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special: they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial, and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
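    One dissipative building block of such flux-blending schemes, the Rusanov (local Lax-Friedrichs) flux, can be sketched for the inviscid Burgers equation; this first-order illustration omits the paper's MUSCL reconstruction, blending logic, and Runge-Kutta stages.

```python
import numpy as np

def rusanov_step(u, dt, dx):
    """One first-order finite-volume step for inviscid Burgers
    u_t + (u^2/2)_x = 0 with the Rusanov (local Lax-Friedrichs) flux,
    on a periodic grid."""
    f = 0.5 * u**2
    ul, ur = u, np.roll(u, -1)                 # left/right states at interfaces
    a = np.maximum(np.abs(ul), np.abs(ur))     # local wave-speed bound
    flux = 0.5 * (f + np.roll(f, -1)) - 0.5 * a * (ur - ul)
    return u - dt / dx * (flux - np.roll(flux, 1))

n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.sin(2.0 * np.pi * x) + 1.5              # smooth data that steepens into a shock
for _ in range(100):
    u = rusanov_step(u, dt=0.4 * dx / np.max(np.abs(u)), dx=dx)
print(u.min(), u.max())  # solution stays bounded; the mean is conserved
```

Being conservative, the scheme preserves the cell average exactly, which is the property that lets it capture the shock at the right speed.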

  17. A microkernel design for component-based parallel numerical software systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balay, S.

    1999-01-13

    What is the minimal software infrastructure, and what types of conventions are needed, to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships) and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.

  18. Microsphere-assisted super-resolution imaging with enlarged numerical aperture by semi-immersion

    NASA Astrophysics Data System (ADS)

    Wang, Fengge; Yang, Songlin; Ma, Huifeng; Shen, Ping; Wei, Nan; Wang, Meng; Xia, Yang; Deng, Yun; Ye, Yong-Hong

    2018-01-01

    Microsphere-assisted imaging is an extraordinarily simple technology that can obtain optical super-resolution under white-light illumination. Here, we introduce a method to improve the resolution of a microsphere lens by increasing its numerical aperture. In our proposed structure, BaTiO3 glass (BTG) microsphere lenses are semi-immersed in a S1805 layer with a refractive index of 1.65, and then the semi-immersed microspheres are fully embedded in an elastomer with an index of 1.4. We experimentally demonstrate that this structure, in combination with a conventional optical microscope, can clearly resolve a two-dimensional 200-nm-diameter hexagonally close-packed (hcp) silica microsphere array. By contrast, the widely used structure in which BTG microsphere lenses are fully immersed in a liquid or elastomer cannot even resolve a 250-nm-diameter hcp silica microsphere array. The improvement in resolution through the proposed structure is due to an increase in the effective numerical aperture achieved by semi-immersing the BTG microsphere lenses in a high-refractive-index S1805 layer. Our results will inform the design of microsphere-based high-resolution imaging systems.

  19. Terminal illness and the increased mortality risk of conventional antipsychotics in observational studies: a systematic review.

    PubMed

    Luijendijk, Hendrika J; de Bruin, Niels C; Hulshof, Tessa A; Koolman, Xander

    2016-02-01

    Numerous large observational studies have shown an increased risk of mortality in elderly users of conventional antipsychotics, and health authorities have warned against use of these drugs. However, terminal illness is a potentially strong confounder of the observational findings. The objective of this study was therefore to systematically assess whether terminal illness may have biased the observational association between conventional antipsychotics and risk of mortality in elderly patients. Studies were searched in PubMed, CINAHL, Embase, the references of selected studies and articles referring to selected studies (Web of Science). Inclusion criteria were (i) observational studies that estimated (ii) the risk of all-cause mortality in (iii) new elderly users of (iv) conventional antipsychotics compared with atypical antipsychotics or no use. Two investigators assessed the characteristics of the exposure and reference groups, main results, measured confounders and methods used to adjust for unmeasured confounders. We identified 21 studies. All studies were based on administrative medical and pharmaceutical databases. Sicker and older patients received conventional antipsychotics more often than new antipsychotics. The risk of dying was especially high in the first month of use, and when haloperidol was administered per injection or in high doses. Terminal illness was not measured in any study. Instrumental variables that were used were also confounded by terminal illness. We conclude that terminal illness has not been adjusted for in observational studies that reported an increased mortality risk in elderly users of conventional antipsychotics. As the validity of the evidence is questionable, so is the warning based on it. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Erratum: Sources of Image Degradation in Fundamental and Harmonic Ultrasound Imaging: A Nonlinear, Full-Wave, Simulation Study

    PubMed Central

    Pinton, Gianmarco F.; Trahey, Gregg E.; Dahl, Jeremy J.

    2015-01-01

    A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain. This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSFs) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is due to reverberation from near-field structures. Compared with fundamental imaging, reverberation clutter in harmonic imaging is 27.1 dB lower. Simulated tissue with uniform velocity but unchanged impedance characteristics indicates that for harmonic imaging, the primary source of degradation is phase aberration. PMID:21693410

  1. A numerical analysis for non-linear radiation in MHD flow around a cylindrical surface with chemically reactive species

    NASA Astrophysics Data System (ADS)

    Khan, Junaid Ahmad; Mustafa, M.

    2018-03-01

    Boundary-layer flow around a stretchable rough cylinder is modeled by taking into account boundary slip and transverse magnetic field effects. The main concern is to resolve the heat/mass transfer problem considering non-linear radiative heat transfer and temperature/concentration jump aspects. Using the conventional similarity approach, the equations of motion and heat transfer are converted into a boundary value problem whose solution is computed by the shooting method for a broad range of slip coefficients. The proposed numerical scheme appears to improve as the strengths of the magnetic field and slip coefficients are enhanced. Axial velocity and temperature are considerably influenced by a parameter M, which is inversely proportional to the radius of the cylinder. A significant change in the temperature profile is depicted for a growing wall-to-ambient temperature ratio. Relevant physical quantities such as wall shear stress, local Nusselt number, and local Sherwood number are elucidated in detail.

  2. Fracture network created by 3D printer and its validation using CT images

    NASA Astrophysics Data System (ADS)

    Suzuki, A.; Watanabe, N.; Li, K.; Horne, R. N.

    2017-12-01

    Understanding flow mechanisms in fractured media is essential for geoscientific research and geological development industries. This study used 3D-printed fracture networks in order to control the properties of the fracture distributions inside the sample. The accuracy and appropriateness of creating samples with the 3D printer were investigated using an X-ray CT scanner. The CT scan images suggest that the 3D printer is able to reproduce complex three-dimensional spatial distributions of fracture networks. Use of hexane after printing was found to be an effective way to remove wax in the post-treatment. Local permeability was obtained by the cubic law and used to calculate the global mean. The experimental value of the permeability was between the arithmetic and geometric means of the numerical results, which is consistent with conventional studies. This methodology based on 3D-printed fracture networks can help validate existing flow modeling and numerical methods.
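    The cubic-law upscaling step can be sketched directly: local permeability k = b^2/12 from fracture aperture b, then arithmetic and geometric means as the bounding estimates mentioned in the abstract. The aperture values below are illustrative, not the study's data.

```python
import numpy as np

# Cubic-law local permeability of a parallel-plate fracture of aperture b:
# k = b^2 / 12. The arithmetic and geometric means of the local values
# bound the effective (upscaled) permeability, as the abstract observes.
apertures = np.array([0.8e-3, 1.0e-3, 1.2e-3, 0.5e-3])   # m (assumed values)
k_local = apertures**2 / 12.0
k_arith = np.mean(k_local)
k_geom = np.exp(np.mean(np.log(k_local)))                # geometric mean
print(k_geom, k_arith)  # geometric mean never exceeds the arithmetic mean
```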

  3. Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling

    PubMed Central

    Ma, Junqing; Song, Aiguo

    2013-01-01

    Strain distributions are crucial criteria of cross-beam six-axis force/torque sensors. The conventional method for calculating these criteria is to use Finite Element Analysis (FEA) to obtain numerical solutions. This paper aims to obtain analytical solutions for the strains under the effect of external force/torque in each dimension. Generic mechanical models for cross-beam six-axis force/torque sensors are proposed, in which the deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of the model assumptions, model idealizations, application scope, and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and the results are found to be compatible with each other for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency. PMID:23686144
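    The key modeling choice, treating the beams as quasi-static Timoshenko beams, amounts to adding a shear-compliance term to the Euler-Bernoulli deflection. A minimal sketch for a cantilever under an end load, with assumed (not sensor-specific) dimensions:

```python
# Tip deflection of a cantilever under end load P: the Timoshenko model adds
# a shear term P*L/(kappa*A*G) to the Euler-Bernoulli bending term
# P*L^3/(3*E*I). All numbers below are illustrative assumptions.
P = 10.0           # N, end load
L = 0.02           # m, beam length
E = 200e9          # Pa, Young's modulus (steel)
G = 79e9           # Pa, shear modulus
b = h = 2e-3       # m, square cross-section
A = b * h
I = b * h**3 / 12.0
kappa = 5.0 / 6.0  # shear correction factor for a rectangular section

delta_eb = P * L**3 / (3.0 * E * I)            # Euler-Bernoulli bending only
delta_ts = delta_eb + P * L / (kappa * A * G)  # Timoshenko adds shear compliance
print(delta_eb, delta_ts)  # shear makes the Timoshenko beam more compliant
```

For short, stubby beams like sensor flexures the shear term is no longer negligible, which is why the Timoshenko idealization matters here.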

  4. Multirate Particle-in-Cell Time Integration Techniques of Vlasov-Maxwell Equations for Collisionless Kinetic Plasma Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Knoll, Dana Alan

    2015-07-31

    A multi-rate PIC formulation was developed that employs large timesteps for slow field evolution, and small (adaptive) timesteps for particle orbit integrations. Implementation is based on a JFNK solver with nonlinear elimination and moment preconditioning. The approach is free of numerical instabilities (ω_pe Δt >> 1 and Δx >> λ_D), and requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant gains (vs. conventional explicit PIC) may be possible for large-scale simulations. The paper is organized as follows: Vlasov-Maxwell particle-in-cell (PIC) methods for plasmas; explicit, semi-implicit, and implicit time integrations; the implicit PIC formulation (Jacobian-free Newton-Krylov (JFNK) with nonlinear elimination allows different treatments of disparate scales, with discrete conservation properties for energy, charge, canonical momentum, etc.); some numerical examples; and a summary.

  5. Performance evaluation of cryogenic counter-flow heat exchangers with longitudinal conduction, heat in-leak and property variations

    NASA Astrophysics Data System (ADS)

    Jiang, Q. F.; Zhuang, M.; Zhu, Z. G.; Zhang, Q. Y.; Sheng, L. H.

    2017-12-01

    Counter-flow plate-fin heat exchangers are commonly utilized in cryogenic applications due to their high effectiveness and compact size. For cryogenic heat exchangers in helium liquefaction/refrigeration systems, conventional design theory is no longer applicable and they are usually sensitive to longitudinal heat conduction, heat in-leak from surroundings and variable fluid properties. Governing equations based on distributed parameter method are developed to evaluate performance deterioration caused by these effects. The numerical model could also be applied in many other recuperators with different structures and, hence, available experimental data are used to validate it. For a specific case of the multi-stream heat exchanger in the EAST helium refrigerator, quantitative effects of these heat losses are further discussed, in comparison with design results obtained by the common commercial software. The numerical model could be useful to evaluate and rate the heat exchanger performance under the actual cryogenic environment.

  6. Initiation Capacity of a Specially Shaped Booster Pellet and Numerical Simulation of Its Initiation Process

    NASA Astrophysics Data System (ADS)

    Hu, Li-Shuang; Hu, Shuang-Qi; Cao, Xiong; Zhang, Jian-Ren

    2014-01-01

    Insensitive main-charge explosives create new requirements for the booster pellets of detonation trains. The traditional cylindrical booster pellet has insufficient energy output to reliably initiate an insensitive main-charge explosive. In this research, a concave spherical booster pellet was designed. Its initiation capacity was studied using varied-composition and axial-steel-dent methods, and its initiation process was simulated with ANSYS/LS-DYNA. The results showed that using a concave spherical booster allows a 42% reduction in the amount of explosive needed to match the initiation capacity of a conventional cylindrical booster of the same dimensions. With the other parameters kept constant, the initiation capacity of the concave spherical booster pellet increases with decreasing cone angle and concave radius. The numerical simulation results are in good agreement with the experimental data.

  7. Numerical simulation of magnetic field for compact electromagnet consisting of REBCO coils and iron yoke

    NASA Astrophysics Data System (ADS)

    You, Shuangrong; Chi, Changxin; Guo, Yanqun; Bai, Chuanyi; Liu, Zhiyong; Lu, Yuming; Cai, Chuanbing

    2018-07-01

    This paper presents the numerical simulation of a high-temperature superconductor electromagnet consisting of REBCO (RE-Ba2Cu3O7-x, RE: rare earth) superconducting tapes and a ferromagnetic iron yoke. The REBCO coils, with a multi-width design, operate at 77 K, with the iron yoke at room temperature, providing a magnetic space with a 32 mm gap between the two poles. The finite element method is applied to compute a 3D model of the studied magnet. Simulated results show that the magnet generates a 1.5 T magnetic field at an operating current of 38.7 A, and the spatial inhomogeneity of the field is 0.8% in a 20-mm-diameter spherical volume. Compared with a conventional iron electromagnet, the present compact design is more suitable for practical application.

  8. Derivation and application of an analytical rock displacement solution on rectangular cavern wall using the inverse mapping method.

    PubMed

    Gao, Mingzhong; Yu, Bin; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang

    2017-01-01

    Rectangular caverns are increasingly used in underground engineering projects; as a result of the cross-sectional shape and variations in wall stress distributions, the failure mechanism of rectangular cavern wall rock is significantly different. However, the conventional computational method results in a long-winded computational process and multiple displacement solutions for the internal rectangular wall rock. This paper uses a Laurent-series complex method to obtain a mapping-function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping-function coefficients and to determine the rectangular cavern wall rock deformation. With regard to the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and the corresponding unique plane coordinate point inside the cavern wall rock is discussed, and the disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is addressed. The theoretical formula is used to calculate wall rock boundary deformation and displacement-field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical results suggests that the theoretical and numerical solutions exhibit identical trends, demonstrating the method's validity. The method greatly improves computing accuracy and reduces the difficulty of solving for cavern boundary and internal wall rock displacements, providing a theoretical guide for controlling cavern wall rock deformation failure.

  9. Large exchange-dominated domain wall velocities in antiferromagnetically coupled nanowires

    NASA Astrophysics Data System (ADS)

    Kuteifan, Majd; Lubarda, M. V.; Fu, S.; Chang, R.; Escobar, M. A.; Mangin, S.; Fullerton, E. E.; Lomakin, V.

    2016-04-01

    Magnetic nanowires supporting field- and current-driven domain wall motion are envisioned for methods of information storage and processing. A major obstacle for their practical use is the domain-wall velocity, which is traditionally limited for low fields and currents due to the Walker breakdown occurring when the driving component reaches a critical threshold value. We show through numerical and analytical modeling that the Walker breakdown limit can be extended or completely eliminated in antiferromagnetically coupled magnetic nanowires. These coupled nanowires allow for large domain-wall velocities driven by field and/or current as compared to conventional nanowires.

  10. Analysis and design of fiber-coupled high-power laser diode array

    NASA Astrophysics Data System (ADS)

    Zhou, Chongxi; Liu, Yinhui; Xie, Weimin; Du, Chunlei

    2003-11-01

    Based on the beam parameter product (BPP) of the laser beam, it is concluded that a single conventional optical system cannot couple a high-power laser diode array (LDA) into a fiber. According to the parameters of the coupled fiber, a method to couple LDA beams into a single multi-mode fiber, including beam collimating, shaping, focusing, and coupling, is presented. The divergence angles after collimating are calculated and analyzed, and the shape equation of the collimating micro-lens array is derived. The focusing lens is designed. A fiber-coupled LDA result with a core diameter of 800 μm and a numerical aperture of 0.37 is obtained.
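    The BPP argument can be made concrete: coupling is possible only if the beam parameter product (waist radius times half divergence) does not exceed the fiber's acceptance, r_core x NA. The fiber values below follow the abstract; the LDA beam BPP is an assumed example, not the paper's measured figure.

```python
# Beam-parameter-product feasibility check for fiber coupling.
# Fiber from the abstract: 800 um core diameter, numerical aperture 0.37.
r_core = 0.4e-3                 # m, core radius (800 um diameter)
na = 0.37
bpp_fiber = r_core * na         # m*rad, fiber acceptance (small-angle form)

# Assumed BPP of the collimated/shaped diode-array beam (illustrative):
bpp_beam = 60e-3 * 1e-3         # m*rad, i.e. 60 mm*mrad

print(bpp_beam < bpp_fiber)     # coupling feasible only if beam BPP fits
```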

  11. Mechanism of Na+ binding to thrombin resolved by ultra-rapid kinetics

    PubMed Central

    Gianni, Stefano; Ivarsson, Ylva; Bah, Alaji; Bush-Pelc, Leslie A.; Di Cera, Enrico

    2007-01-01

    The interaction of Na+ and K+ with proteins is at the basis of numerous processes of biological importance. However, measurement of the kinetic components of the interaction has eluded experimentalists for decades because the rate constants are too fast to resolve with conventional stopped-flow methods. Using a continuous-flow apparatus with a dead time of 50 μs we have been able to resolve the kinetic rate constants and entire mechanism of Na+ binding to thrombin, an interaction that is at the basis of the procoagulant and prothrombotic roles of the enzyme in the blood. PMID:17935858

  12. Versatile microbial surface-display for environmental remediation and biofuels production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Cindy H.; Mulchandani, Ashok; Chen, wilfred

    2008-02-14

    Surface display is a powerful technique that utilizes natural microbial functional components to express proteins or peptides on the cell exterior. Since the reporting of the first surface-display system in the mid-1980s, a variety of new systems have been reported for yeast, Gram-positive and Gram-negative bacteria. Non-conventional display methods are emerging, eliminating the generation of genetically modified microorganisms. Cells with surface display are used as biocatalysts, biosorbents and biostimulants. Microbial cell-surface display has proven to be extremely important for numerous applications ranging from combinatorial library screening and protein engineering to bioremediation and biofuels production.

  13. Expanding use of pulsed electromagnetic field therapies.

    PubMed

    Markov, Marko S

    2007-01-01

    Various types of magnetic and electromagnetic fields are now in successful use in modern medicine. Electromagnetic therapy carries the promise of healing numerous health problems, even where conventional medicine has failed. Today, magnetotherapy provides a noninvasive, safe, and easy method to directly treat the site of injury, the source of pain and inflammation, and a variety of diseases and pathologies. Millions of people worldwide have received help in treatment of the musculoskeletal system, as well as for pain relief. Pulsed electromagnetic fields are one important modality in magnetotherapy. Recent technological innovations, implementing advancements in computer technologies, offer excellent state-of-the-art therapy.

  14. Assisted Sonication vs Conventional Transesterification: Numerical Simulation and Sensitivity Study

    NASA Astrophysics Data System (ADS)

    Janajreh, Isam; Noorul Hussain, Mohammed; El Samad, Tala

    2015-10-01

    Transesterification is known to be a slow reaction that can take several hours to complete as the two immiscible liquid reactants combine to form biodiesel and the less favorable glycerol. The quest to find the perfect catalyst, optimal operating conditions, and a reactor configuration that accelerates the reaction to a few minutes while ensuring high-quality biodiesel in an economically viable way is advancing with sonication. This drastic reduction in reaction time is a key enabler for the development of continuous processing, which is otherwise fairly costly and low-throughput using the conventional method. The reaction kinetics of sonication-assisted transesterification, as inferred by several authors, are several times faster, and this work implements these rates in a high-fidelity numerical simulation model. The flow model is based on the Navier-Stokes equations coupled with the energy equation for non-isothermal flow and the transport equations of the multiple reactive species. The model is first validated against experimental data from previous work of the authors using an annular reactor configuration. Following the validation, the reaction rates are compared to gain more insight into the distribution of the reaction. The two models (conventional and sonication) are then compared on the basis of their sensitivity to the methanol-to-oil molar ratio as the most pronounced process parameter. Both the reactor exit yield and the distribution of the species are evaluated, with a favorable yield under the sonication process. These results pave the way to building a more robust process-intensified reactor with an integrated selective heterogeneous catalyst to steer the reaction, which can avoid downstream cleaning processes, cut reaction time, and render the process economically beneficial.
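    The payoff of faster kinetics can be sketched with a pseudo-first-order conversion model; the rate constants below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def conversion(k, t):
    """Pseudo-first-order triglyceride conversion X(t) = 1 - exp(-k*t);
    a simplification of the full multi-step transesterification kinetics."""
    return 1.0 - np.exp(-k * t)

k_conv, k_sono = 0.02, 0.10        # 1/min: sonication assumed ~5x faster
t = np.linspace(0.0, 60.0, 61)     # one hour of reaction
x_conv = conversion(k_conv, t)
x_sono = conversion(k_sono, t)
print(x_conv[-1], x_sono[-1])      # sonication reaches near-full conversion
```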

  15. Phase retrieval using a modified Shack-Hartmann wavefront sensor with defocus.

    PubMed

    Li, Changwei; Li, Bangming; Zhang, Sijiong

    2014-02-01

    This paper proposes a modified Shack-Hartmann wavefront sensor for phase retrieval. The sensor is revamped by placing a detector at a defocused plane before the focal plane of the lenslet array of the Shack-Hartmann sensor. The algorithm for phase retrieval is an optimization whose initial Zernike coefficients are calculated by the conventional phase reconstruction of the Shack-Hartmann sensor. Numerical simulations show that the proposed sensor permits sensitive, accurate phase retrieval. Furthermore, experiments tested the feasibility of phase retrieval using the proposed sensor. The surface irregularity of a flat mirror was measured by the proposed method and by a Veeco interferometer, and the irregularity measured by the proposed method is in very good agreement with that measured using the Veeco interferometer.
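    The conventional slope-based reconstruction used to seed the optimization can be sketched as a least-squares fit of Zernike-mode slopes to the measured spot displacements; the three modes and the lenslet grid below are minimal assumptions, not the paper's configuration.

```python
import numpy as np

def zernike_slopes(x, y):
    """x- and y-slopes of three Zernike-like modes (tip, tilt, defocus)
    evaluated at lenslet centres: Z1 = x, Z2 = y, Z3 = 2(x^2 + y^2) - 1."""
    sx = np.stack([np.ones_like(x), np.zeros_like(x), 4.0 * x], axis=1)
    sy = np.stack([np.zeros_like(y), np.ones_like(y), 4.0 * y], axis=1)
    return np.vstack([sx, sy])

# Lenslet grid and synthetic noiseless slopes from known coefficients
g = np.linspace(-1.0, 1.0, 5)
X, Y = np.meshgrid(g, g)
x, y = X.ravel(), Y.ravel()
A = zernike_slopes(x, y)
c_true = np.array([0.3, -0.2, 0.05])
s = A @ c_true                                   # "measured" slope vector
c_est, *_ = np.linalg.lstsq(A, s, rcond=None)    # conventional reconstruction
print(c_est)  # recovers the tip/tilt/defocus coefficients
```

In the paper these reconstructed coefficients serve only as the starting point of the optimization, which then refines the phase using the defocused intensity data.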

  16. Diode Lasers used in Plastic Welding and Selective Laser Soldering - Applications and Products

    NASA Astrophysics Data System (ADS)

    Reinl, S.

    Aside from conventional welding methods, laser welding of plastics has established itself as a proven bonding method. The component-conserving and clean process offers numerous advantages and enables welding of sensitive assemblies in the automotive, electronics, medical, human care, food packaging and consumer electronics markets. Diode lasers have been established for years in plastic welding applications. Soft soldering using laser radiation is also becoming more and more significant in the field of direct diode laser applications. Fast power controllability combined with contactless temperature measurement to minimize thermal damage makes the diode laser an ideal tool for this application. These advantages come into full effect when soldering increasingly small parts in temperature-sensitive environments.

  17. Application of MALDI-TOF mass spectrometry in clinical diagnostic microbiology.

    PubMed

    De Carolis, Elena; Vella, Antonietta; Vaccaro, Luisa; Torelli, Riccardo; Spanu, Teresa; Fiori, Barbara; Posteraro, Brunella; Sanguinetti, Maurizio

    2014-09-12

    Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently emerged as a powerful technique for identification of microorganisms, changing the workflow of well-established laboratories so that its impact on microbiological diagnostics has been unparalleled. In comparison with conventional identification methods that rely on biochemical tests and require long incubation procedures, MALDI-TOF MS has the advantage of identifying bacteria and fungi directly from colonies grown on culture plates in a few minutes and with simple procedures. Numerous studies on different systems available demonstrate the reliability and accuracy of the method, and new frontiers have been explored besides microbial species level identification, such as direct identification of pathogens from positive blood cultures, subtyping, and drug susceptibility detection.

  18. Defining disease with laser precision: laser capture microdissection in gastroenterology

    PubMed Central

    Blatt, Richard; Srinivasan, Shanthi

    2013-01-01

    Laser capture microdissection (LCM) is an efficient and precise method for obtaining pure cell populations or specific cells of interest from a given tissue sample. LCM has been applied to animal and human gastroenterology research in analyzing the protein, DNA and RNA from all organs of the gastrointestinal system. There are numerous potential applications for this technology in gastroenterology research including malignancies of the esophagus, stomach, colon, biliary tract and liver. This technology can also be used to study gastrointestinal infections, inflammatory bowel disease, pancreatitis, motility, malabsorption and radiation enteropathy. LCM has multiple advantages when compared to conventional methods of microdissection, and this technology can be exploited to identify precursors to disease, diagnostic biomarkers, and therapeutic interventions. PMID:18619446

  19. A special case of the Poisson PDE formulated for Earth's surface and its capability to approximate the terrain mass density employing land-based gravity data, a case study in the south of Iran

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Yahya; Safari, Abdolreza; Vaníček, Petr

    2016-12-01

    This paper resurrects a version of Poisson's Partial Differential Equation (PDE) associated with the gravitational field at the Earth's surface and illustrates how the PDE possesses a capability to extract the mass density of Earth's topography from land-based gravity data. Herein, first we propound a theorem which mathematically introduces this version of Poisson's PDE adapted for the Earth's surface, and then we use this PDE to develop a method of approximating the terrain mass density. We also carry out a real case study showing how the proposed approach can be applied to a set of land-based gravity data. In the case study, the method is summarized by an algorithm and applied to a set of gravity stations located along a part of the north coast of the Persian Gulf in the south of Iran. The results were numerically validated via rock samplings as well as a geological map. The method was also compared with two conventional methods of mass density reduction. The numerical experiments indicate that the Poisson PDE at the Earth's surface has the capability to extract the mass density from land-based gravity data and provides an alternative and somewhat more precise method of estimating the terrain mass density.
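    For contrast, the simplest of the conventional mass-density reductions that such a method is compared against reduces to the infinite Bouguer slab relation delta_g = 2*pi*G*rho*h; the anomaly and thickness below are assumed illustration values, not data from the case study:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_density(delta_g, h):
    """Density (kg/m^3) of an infinite Bouguer slab of thickness h (m)
    that produces the gravity anomaly delta_g (m/s^2)."""
    return delta_g / (2 * math.pi * G * h)

# Assumed numbers: an 11.2 mGal anomaly over a 100 m slab
rho = slab_density(1.12e-4, 100.0)
print(f"{rho:.0f} kg/m^3")  # a typical crustal-rock-scale density
```

    Real reductions (Nettleton profiling, Parasnis' method) refine this idea statistically, which is the baseline against which a PDE-based estimate would be judged.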

  20. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
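    The Bland-Altman agreement analysis used in the study can be sketched as follows; the paired measurements here are hypothetical numbers, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement
    between two paired measurement series."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired linear measurements (mm), not the study data:
plaster = [35.10, 42.50, 28.30, 51.20, 33.80]
put_rp = [35.25, 42.40, 28.55, 51.45, 33.95]
bias, lo, hi = bland_altman(put_rp, plaster)
print(f"bias = {bias:.2f} mm, LoA = [{lo:.2f}, {hi:.2f}] mm")
```

    Agreement is judged by whether the bias is clinically negligible and the limits of agreement are acceptably narrow, which is how the 0.07-0.33 mm differences above were interpreted.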

  1. Subspace-based interference removal methods for a multichannel biomagnetic sensor array.

    PubMed

    Sekihara, Kensuke; Nagarajan, Srikantan S

    2017-10-01

    In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
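    The core projection shared by the methods reviewed above can be illustrated with a toy one-dimensional time-domain interference subspace (a single known interference time course). Note that, as with any signal space projection, the component of the true signal lying along the interference time course is removed as well:

```python
def remove_interference(data, interference):
    """Time-domain signal space projection: project each sensor's time
    series onto the orthogonal complement of one known interference
    time course (a toy 1-D interference subspace)."""
    bb = sum(v * v for v in interference)
    cleaned = []
    for row in data:
        coef = sum(x * v for x, v in zip(row, interference)) / bb
        cleaned.append([x - coef * v for x, v in zip(row, interference)])
    return cleaned

# Toy example: two sensors, interference time course b mixed into both
b = [1.0, -1.0, 2.0, 0.0]
signal = [[0.5, 0.5, 0.5, 0.5], [0.0, 1.0, 0.0, -1.0]]
data = [[s + 0.8 * v for s, v in zip(row, b)] for row in signal]
clean = remove_interference(data, b)
```

    After projection each cleaned row is exactly orthogonal to b, so no interference component remains; the methods surveyed in the paper differ only in how the span of such interference time courses is derived.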

  2. Subspace-based interference removal methods for a multichannel biomagnetic sensor array

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Nagarajan, Srikantan S.

    2017-10-01

    Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.

  3. Photonic band structures solved by a plane-wave-based transfer-matrix method.

    PubMed

    Li, Zhi-Yuan; Lin, Lan-Lan

    2003-04-01

    Transfer-matrix methods adopting a plane-wave basis have been routinely used to calculate the scattering of electromagnetic waves by general multilayer gratings and photonic crystal slabs. In this paper we show that this technique, when combined with Bloch's theorem, can be extended to solve the photonic band structure for 2D and 3D photonic crystal structures. Three different eigensolution schemes to solve the traditional band diagrams along high-symmetry lines in the first Brillouin zone of the crystal are discussed. Optimal rules for the Fourier expansion over the dielectric function and electromagnetic fields with discontinuities occurring at the boundary of different material domains have been employed to accelerate the convergence of numerical computation. Application of this method to an important class of 3D layer-by-layer photonic crystals reveals the superior convergence of this approach over the conventional plane-wave expansion method.
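    The transfer-matrix-plus-Bloch idea is easiest to see in its 1D analogue, where the Bloch condition cos(K*Lambda) = Tr(M)/2 on the unit-cell characteristic matrix M separates propagating bands from gaps. The quarter-wave stack below is an illustrative 1D example, not the paper's 3D method:

```python
import cmath
import math

def unit_cell_matrix(k0, layers):
    """Characteristic matrix of one unit cell at normal incidence.
    layers: list of (refractive_index, thickness) pairs."""
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        delta = n * k0 * d
        L = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0] * L[0][0] + M[0][1] * L[1][0],
              M[0][0] * L[0][1] + M[0][1] * L[1][1]],
             [M[1][0] * L[0][0] + M[1][1] * L[1][0],
              M[1][0] * L[0][1] + M[1][1] * L[1][1]]]
    return M

def in_band(k0, layers):
    """A propagating Bloch mode exists iff |Tr(M)/2| <= 1."""
    M = unit_cell_matrix(k0, layers)
    return abs((M[0][0] + M[1][1]).real / 2.0) <= 1.0

# Quarter-wave stack (n*d = 1/4 in each layer): gap at the design frequency
layers = [(1.5, 1.0 / (4 * 1.5)), (2.5, 1.0 / (4 * 2.5))]
k0_gap = 2 * math.pi         # wavelength 1 -> center of the band gap
k0_pass = 0.3 * 2 * math.pi  # long wavelength -> propagating band
```

    The 2D/3D schemes in the paper generalize this scalar trace condition to an eigenvalue problem on the plane-wave transfer matrix of a crystal slice.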

  4. A Meta-heuristic Approach for Variants of VRP in Terms of Generalized Saving Method

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki

    Global logistic design is becoming of keen interest as it provides an essential infrastructure for modern societal provision. Examples include green and/or robust logistics in transportation systems, smart grids in electricity utilization systems, and qualified service in delivery systems. As a key technology for such deployments, we engaged in a practical vehicle routing problem on the basis of the conventional saving method. This paper extends that idea and gives a general framework available for various real-world applications. It can cover not only delivery problems but also two kinds of pick-up problems, i.e., straight and drop-by routings. Moreover, the multi-depot problem is considered through a hybrid approach with a graph algorithm, and its solution method is realized in a hierarchical manner. Numerical experiments were carried out to validate the effectiveness of the proposed method.
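    The conventional saving method that the paper generalizes is the Clarke-Wright construction: merge the routes of customers i and j when the saving s(i,j) = d(0,i) + d(0,j) - d(i,j) is large and capacity permits. A minimal sketch (end-point merges only, illustrative coordinates):

```python
import math

def savings_routes(depot, customers, capacity, demand):
    """Minimal Clarke-Wright savings construction (parallel version,
    end-point merges only)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    n = len(customers)
    routes = {i: [i] for i in range(n)}   # route id -> customer sequence
    where = {i: i for i in range(n)}      # customer -> route id
    load = {i: demand[i] for i in range(n)}
    savings = sorted(
        ((dist(depot, customers[i]) + dist(depot, customers[j])
          - dist(customers[i], customers[j]), i, j)
         for i in range(n) for j in range(i + 1, n)),
        reverse=True)
    for s, i, j in savings:
        ri, rj = where[i], where[j]
        if ri == rj or s <= 0 or load[ri] + load[rj] > capacity:
            continue
        a, b = routes[ri], routes[rj]
        # merge only when i and j are endpoints of their routes
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif b[-1] == j and a[0] == i:
            merged = b + a
        else:
            continue
        routes[ri] = merged
        load[ri] += load[rj]
        for c in b:
            where[c] = ri
        del routes[rj], load[rj]
    return list(routes.values())

# Two clusters of two customers each; capacity forces two vehicles
routes = savings_routes((0.0, 0.0), [(0, 10), (1, 10), (10, 0), (10, 1)],
                        capacity=2, demand=[1, 1, 1, 1])
```

    The paper's generalized saving framework keeps this merge logic but redefines the saving term for straight and drop-by pick-up variants and layers a graph algorithm on top for the multi-depot case.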

  5. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    NASA Astrophysics Data System (ADS)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.

  6. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546
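    Quantile regression at level tau minimizes the check (pinball) loss rho_tau(u) = u*(tau - 1{u<0}); the fused-penalty methods above add a shrinkage term on adjacent-quantile slope differences. A small sketch of the loss itself, also showing that the best constant fit is a sample quantile:

```python
def pinball_loss(residuals, tau):
    """Quantile (check/pinball) loss minimized in quantile regression:
    rho_tau(u) = u * (tau - 1{u < 0}), summed over residuals."""
    return sum(u * (tau - (1.0 if u < 0 else 0.0)) for u in residuals)

# The constant c minimizing the tau = 0.5 loss is a sample median
data = [1.0, 2.0, 3.0, 10.0]
best = min(data, key=lambda c: pinball_loss([y - c for y in data], 0.5))
```

    The interquantile-shrinkage estimators in the paper jointly minimize this loss over several tau values plus a penalty pulling the slope coefficients toward a common value.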

  7. Comparison of anatomical, functional and regression methods for estimating the rotation axes of the forearm.

    PubMed

    Fraysse, François; Thewlis, Dominic

    2014-11-07

    Numerous methods exist to estimate the pose of the axes of rotation of the forearm. These include anatomical definitions, such as the conventions proposed by the ISB, and functional methods based on instantaneous helical axes, which are commonly accepted as the modelling gold standard for non-invasive, in-vivo studies. We investigated the validity of a third method, based on regression equations, to estimate the rotation axes of the forearm. We also assessed the accuracy of both ISB methods. Axes obtained from a functional method were considered as the reference. Results indicate a large inter-subject variability in the axes positions, in accordance with previous studies. Both ISB methods gave the same level of accuracy in axes position estimations. Regression equations seem to improve estimation of the flexion-extension axis but not the pronation-supination axis. Overall, given the large inter-subject variability, the use of regression equations cannot be recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A Perturbation Analysis of Harmonics Generation from Saturated Elements in Power Systems

    NASA Astrophysics Data System (ADS)

    Kumano, Teruhisa

    Nonlinear phenomena such as saturation of magnetic flux have considerable effects in power system analysis. It is reported that a failure in a real 500 kV system triggered islanding operation, where the resultant even harmonics caused malfunctions in protective relays. It is also reported that the major origin of this wave distortion is unidirectional magnetization of the transformer iron core. Time simulation is widely used today to analyze this type of phenomenon, but it has basically two shortcomings. One is that the time simulation takes too much computing time in the vicinity of inflection points on the saturation characteristic curve, because an iterative procedure such as Newton-Raphson (N-R) must be used and such methods tend to be caught in ill-conditioned numerical hunting. The other is that such simulation methods sometimes do not aid intuitive understanding of the studied phenomenon, because the whole nonlinear system of equations is treated in matrix form and not properly divided into understandable parts as is done in linear systems. This paper proposes a new computation scheme based on the so-called perturbation method. Magnetic saturation in the iron cores of a generator and a transformer is taken into account. The proposed method addresses the first shortcoming of the N-R-based time simulation stated above: no iterative process is used to reduce the equation residual; instead, a perturbation series is used, which makes the scheme free from the ill-conditioning problem. Users have only to calculate the perturbation terms one by one until the necessary accuracy is reached. In the numerical example treated in the present paper, the first-order perturbation achieves reasonably high accuracy, which means very fast computation. In the numerical study three nonlinear elements are considered. The calculated results are almost identical to those of the conventional Newton-Raphson based time simulation, which shows the validity of the method. The proposed method would be effectively used in screening studies where many cases must be analyzed.
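    The contrast between the iterative N-R solution and the iteration-free perturbation series can be shown on a toy scalar saturation-like equation x + eps*x**3 = 1; the equation is illustrative, not the paper's power-system model:

```python
def newton_root(eps, x=1.0, tol=1e-12):
    """Reference solution of x + eps*x**3 = 1 by Newton-Raphson."""
    for _ in range(50):
        f = x + eps * x**3 - 1.0
        x -= f / (1.0 + 3 * eps * x**2)
        if abs(f) < tol:
            break
    return x

def perturbation_root(eps):
    """Second-order perturbation series for the same root,
    x ~ x0 + eps*x1 + eps**2*x2, obtained term by term with no
    iteration: order 0 gives x0 = 1, order 1 gives x1 = -x0**3 = -1,
    order 2 gives x2 = -3*x0**2*x1 = 3."""
    return 1.0 - eps + 3.0 * eps**2
```

    For a weak nonlinearity (small eps) the low-order series already tracks the Newton solution closely, which mirrors the paper's observation that first-order perturbation suffices in its example.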

  9. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphic cards. Both the explicit contact formulation and the parallel feature facilitate LISA's superb computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
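    The penalty contact and Coulomb stick-slip rules described above reduce, in scalar form, to the following sketch (stiffness and friction values used in any check are arbitrary, not the paper's guideline values):

```python
def contact_force(gap, k_penalty):
    """Penalty-method normal contact: a restoring force proportional to
    the penetration depth, zero while the crack faces are open."""
    penetration = -gap          # gap < 0 means the faces overlap
    return k_penalty * penetration if penetration > 0 else 0.0

def tangential_force(stick_trial, mu, normal_force):
    """Coulomb friction: stick (return the trial force) while
    |trial| <= mu*N, otherwise slip at the limit mu*N."""
    limit = mu * abs(normal_force)
    if abs(stick_trial) <= limit:
        return stick_trial
    return limit if stick_trial > 0 else -limit
```

    The clapping (on/off normal force) and stick-slip switching are the sources of the higher-harmonic generation the simulation captures; the paper's guideline concerns choosing k_penalty large enough for accuracy but small enough for a stable explicit time step.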

  10. Directly imaging steeply-dipping fault zones in geothermal fields with multicomponent seismic data

    DOE PAGES

    Chen, Ting; Huang, Lianjie

    2015-07-30

    For characterizing geothermal systems, it is important to have clear images of steeply-dipping fault zones because they may confine the boundaries of geothermal reservoirs and influence hydrothermal flow. Elastic reverse-time migration (ERTM) is the most promising tool for subsurface imaging with multicomponent seismic data. However, conventional ERTM usually generates significant artifacts caused by the cross correlation of undesired wavefields and the polarity reversal of shear waves. In addition, it is difficult for conventional ERTM to directly image steeply-dipping fault zones. We develop a new ERTM imaging method in this paper to reduce these artifacts and directly image steeply-dipping fault zones. In our new ERTM method, forward-propagated source wavefields and backward-propagated receiver wavefields are decomposed into compressional (P) and shear (S) components. Furthermore, each component of these wavefields is separated into left- and right-going, or downgoing and upgoing waves. The cross correlation imaging condition is applied to the separated wavefields along opposite propagation directions. For converted waves (P-to-S or S-to-P), the polarity correction is applied to the separated wavefields based on the analysis of Poynting vectors. Numerical imaging examples of synthetic seismic data demonstrate that our new ERTM method produces high-resolution images of steeply-dipping fault zones.

  11. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
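    The workhorse of iterative CS reconstruction is the l1 proximal (soft-thresholding) step; in a weighted CS formulation like the one above, each coefficient would receive its own threshold w*lambda. A minimal sketch of the operator itself:

```python
def soft_threshold(x, lam):
    """Proximal operator of the l1 norm, the shrinkage step at the
    heart of iterative CS solvers such as ISTA/FISTA:
    returns sign(x) * max(|x| - lam, 0)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def weighted_shrink(coeffs, weights, lam):
    """Per-coefficient shrinkage for a weighted CS problem: each
    coefficient gets its own threshold w*lam (weights are assumed to
    come from a noise model, as in the abstract)."""
    return [soft_threshold(c, w * lam) for c, w in zip(coeffs, weights)]
```

    A full reconstruction alternates this shrinkage with a gradient step through the (here, pseudopolar Fourier based) forward operator; the speed of that operator is what the proposed algorithm accelerates.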

  12. Accelerated Compressed Sensing Based CT Image Reconstruction

    PubMed Central

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  13. Derivation and application of an analytical rock displacement solution on rectangular cavern wall using the inverse mapping method

    PubMed Central

    Gao, Mingzhong; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang

    2017-01-01

    Rectangular caverns are increasingly used in underground engineering projects; as a result of the cross-sectional shape and variations in wall stress distributions, the failure mechanism of rectangular cavern wall rock is significantly different. However, the conventional computational method results in a long-winded computational process and multiple displacement solutions for the internal rectangular wall rock. This paper uses a Laurent series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficient and to determine the rectangular cavern wall rock deformation. With regard to the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and a corresponding unique plane coordinate point inside the cavern wall rock is discussed. The disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is addressed. This theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical solution and numerical solution exhibit identical trends, thereby demonstrating the method’s validity. This method greatly improves the computing accuracy and reduces the difficulty in solving for cavern boundary and internal wall rock displacements. The proposed method provides a theoretical guide for controlling cavern wall rock deformation failure. PMID:29155892

  14. Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications.

    PubMed

    Wei, Yiqiao; Chen, Jingjun; Hwang, Seung-Hoon

    2018-03-02

    For vehicle-to-vehicle (V2V) communication, such issues as continuity and reliability still have to be solved. Specifically, it is necessary to consider a more scalable physical layer due to the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling; however, it has been neglected with regard to physical topology changes in the vehicle network. In this paper, we propose a physical topology-triggered adaptive transmission scheme which adjusts the data rate between vehicles according to the number of connectable vehicles nearby. We also investigate the performance of the proposed method using computer simulations and compare it with conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications.
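    The neighbor-count-triggered rate adaptation can be sketched as a simple threshold table; the neighbor thresholds and data rates below are illustrative assumptions, not the paper's parameters:

```python
def select_data_rate(neighbor_count,
                     table=((2, 27.0), (5, 12.0), (10, 6.0))):
    """Pick a PHY data rate (Mb/s) from the number of connectable
    vehicles nearby: few neighbors allow a high rate, a crowded
    neighborhood falls back to a more robust (lower) rate.
    Thresholds and rates are illustrative only."""
    for max_neighbors, rate in table:
        if neighbor_count <= max_neighbors:
            return rate
    return 3.0  # most robust fallback rate for dense topologies
```

    Each vehicle would re-evaluate this mapping whenever its neighbor set changes, which is the topology trigger the scheme's name refers to.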

  15. Computer synthesis of high resolution electron micrographs

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1976-01-01

    Specimen damage, spherical aberration, low contrast and noisy sensors combine to prevent direct atomic viewing in a conventional electron microscope. The paper describes two methods for obtaining ultra-high resolution in biological specimens under the electron microscope. The first method assumes the physical limits of the electron objective lens and uses a series of dark field images of biological crystals to obtain direct information on the phases of the Fourier diffraction maxima; this information is used in an appropriate computer to synthesize a large aperture lens for a 1-A resolution. The second method assumes there is sufficient amplitude scatter from images recorded in focus which can be utilized with a sensitive densitometer and computer contrast stretching to yield fine structure image details. Cancer virus characterization is discussed as an illustrative example. Numerous photographs supplement the text.

  16. Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications

    PubMed Central

    Wei, Yiqiao; Chen, Jingjun

    2018-01-01

    For vehicle-to-vehicle (V2V) communication, such issues as continuity and reliability still have to be solved. Specifically, it is necessary to consider a more scalable physical layer due to the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling; however, it has been neglected with regard to physical topology changes in the vehicle network. In this paper, we propose a physical topology-triggered adaptive transmission scheme which adjusts the data rate between vehicles according to the number of connectable vehicles nearby. We also investigate the performance of the proposed method using computer simulations and compare it with conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications. PMID:29498646

  17. Nitsche’s Method For Helmholtz Problems with Embedded Interfaces

    PubMed Central

    Zou, Zilong; Aquino, Wilkins; Harari, Isaac

    2016-01-01

    In this work, we use Nitsche’s formulation to weakly enforce kinematic constraints at an embedded interface in Helmholtz problems. Allowing embedded interfaces in a mesh provides significant ease for discretization, especially when material interfaces have complex geometries. We provide analytical results that establish the well-posedness of Helmholtz variational problems and convergence of the corresponding finite element discretizations when Nitsche’s method is used to enforce kinematic constraints. As in the analysis of conventional Helmholtz problems, we show that the inf-sup constant remains positive provided that the Nitsche stabilization parameter is judiciously chosen. We then apply our formulation to several 2D plane-wave examples that confirm our analytical findings. Doing so, we demonstrate the asymptotic convergence of the proposed method and show that numerical results are in accordance with the theoretical analysis. PMID:28713177

  18. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  19. Almost analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2017-11-01

    We present an almost analytical new approach to solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of solving this matrix eigenvalue problem purely numerically, which may suffer from computational inaccuracy for large data sets, we first consider a pair of integral and differential equations related to the so-called prolate spheroidal wave functions (PSWF). For the PSWF differential equation, the pairs of eigenvectors (PSWF) and eigenvalues can be obtained from a relatively small number of analytical Legendre functions. Then, the eigenvalues in the PSWF integral equation are expressed in terms of functional values of the PSWF and the eigenvalues of the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues in the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data: ordinary irregular waves and rogue waves. We found that the present almost analytical method is better than both the conventional data-independent Fourier representation and the conventional direct numerical K-L representation in terms of accuracy and computational cost. This work was supported by the National Research Foundation of Korea (NRF). (NRF-2017R1D1A1B03028299).
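    For contrast with the paper's analytical PSWF route, the conventional direct numerical K-L representation it improves on can be sketched as an eigendecomposition of a sample covariance matrix. The wave ensemble below is synthetic (random-phase cosines with an ad-hoc power-law spectrum), a stand-in for measured records, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nrec, nf = 256, 400, 40
t = np.linspace(0.0, 25.6, nt)

# synthetic "irregular wave" ensemble: random-phase sums of cosines with an
# assumed power-law amplitude spectrum
freqs = np.linspace(0.2, 2.0, nf)
amps = freqs**-1.0
phases = rng.uniform(0.0, 2.0*np.pi, size=(nrec, nf))
arg = 2.0*np.pi*freqs[None, None, :]*t[None, :, None] + phases[:, None, :]
X = (np.cos(arg) * amps).sum(axis=2)          # (nrec, nt) wave records

# direct numerical K-L: eigendecomposition of the sample covariance matrix
C = (X.T @ X) / nrec                          # second-moment matrix over time
lam, V = np.linalg.eigh(C)                    # eigenvalues in ascending order

# each record lies in the span of 2*nf sines/cosines, so 2*nf modes suffice
m = 2 * nf
frac = lam[-m:].sum() / lam.sum()             # energy captured by leading modes
Xr = (X @ V[:, -m:]) @ V[:, -m:].T            # reconstruction with m K-L modes
```

    It is exactly this eigendecomposition step that becomes costly and potentially inaccurate for large records, which motivates the PSWF-based analytical route of the paper.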

  20. Multi-relaxation-time lattice Boltzmann modeling of the acoustic field generated by focused transducer

    NASA Astrophysics Data System (ADS)

    Shan, Feng; Guo, Xiasheng; Tu, Juan; Cheng, Jianchun; Zhang, Dong

    High-intensity focused ultrasound (HIFU) has become an attractive therapeutic tool for noninvasive tumor treatment. The ultrasonic transducer is the key component in HIFU treatment for generating the HIFU energy. The dimension of the focal region generated by the transducer is closely relevant to the safety of HIFU treatment. Therefore, it is essential to numerically investigate the focal region of the transducer. Although the conventional acoustic wave equations have been used successfully to describe the acoustic field, there still exist some inherent drawbacks. In this work, we present an axisymmetric isothermal multi-relaxation-time lattice Boltzmann method (MRT-LBM) model with the Bouzidi-Firdaouss-Lallemand (BFL) boundary condition in a cylindrical coordinate system. With this model, some preliminary simulations were first conducted to determine a reasonable value of the relaxation parameter. Then, the validity of the model was examined by comparing the LBM results with those obtained from the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and the spheroidal beam equation (SBE) for focused transducers with different aperture angles. In addition, the influences of the aperture angle on the focal region were investigated. The proposed model will provide significant references for the parameter optimization of focused transducers for applications in HIFU treatment or other fields, and provide new insights into conventional acoustic numerical simulations.

  1. Sources of image degradation in fundamental and harmonic ultrasound imaging using nonlinear, full-wave simulations.

    PubMed

    Pinton, Gianmarco F; Trahey, Gregg E; Dahl, Jeremy J

    2011-04-01

    A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD). This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSF) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is reverberation from near-field structures. Reverberation clutter in the harmonic PSF is 26 dB higher than the fundamental PSF. An artificial medium with uniform velocity but unchanged impedance characteristics indicates that for the fundamental PSF, the primary source of degradation is phase aberration. An ultrasound image is created in silico using the same physical and algorithmic process used in an ultrasound scanner: a series of pulses are transmitted through heterogeneous scattering tissue and the received echoes are used in a delay-and-sum beamforming algorithm to generate images. These beamformed images are compared with images obtained from convolution of the PSF with a scatterer field to demonstrate that a very large portion of the PSF must be used to accurately represent the clutter observed in conventional imaging. © 2011 IEEE
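    The delay-and-sum step can be illustrated with a toy monostatic simulation; the array geometry, pulse shape, and sound speed below are assumptions for the sketch, not the paper's full-wave setup. Echoes add coherently only when the beamformer focuses at the true scatterer location:

```python
import numpy as np

c = 1540.0                      # assumed speed of sound in tissue, m/s
fs, f0 = 40e6, 3e6              # sampling rate and pulse center frequency, Hz
nel, pitch = 64, 0.3e-3         # linear array: 64 elements, 0.3 mm pitch
elem_x = (np.arange(nel) - (nel - 1) / 2) * pitch
target = (0.0, 30e-3)           # (x, z) of a point scatterer, m

# synthetic per-channel echoes: a Gaussian-windowed tone delayed by the
# round-trip time from each element to the scatterer and back (monostatic)
nt = 4096
t = np.arange(nt) / fs
rf = np.zeros((nel, nt))
for i in range(nel):
    tau = 2.0 * np.hypot(elem_x[i] - target[0], target[1]) / c
    rf[i] = np.cos(2*np.pi*f0*(t - tau)) * np.exp(-((t - tau) / 0.2e-6)**2)

def das(point):
    """Delay-and-sum: sample each channel at its round-trip delay to `point`
    and average; off-focus points sum incoherently and nearly cancel."""
    out = 0.0
    for i in range(nel):
        tau = 2.0 * np.hypot(elem_x[i] - point[0], point[1]) / c
        out += np.interp(tau, t, rf[i])
    return out / nel
```

    Scanning `das` over a grid of points yields the PSF discussed in the abstract; in the paper, the per-channel data come from the nonlinear full-wave simulation instead of this idealized echo model.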

  2. Radiative Transfer and Satellite Remote Sensing of Cirrus Clouds Using FIRE-2-IFO Data

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Under the support of the NASA grant, we have developed a new geometric-optics model (GOM2) for the calculation of the single-scattering and polarization properties of arbitrarily oriented hexagonal ice crystals. From comparisons with the results computed by the finite difference time domain (FDTD) method, we show that the novel geometric-optics model can be applied to the computation of the extinction cross section and single-scattering albedo for ice crystals with size parameters along the minimum dimension as small as approximately 6. We demonstrate that the present model converges to the conventional ray tracing method for large size parameters and produces single-scattering results close to those computed by the FDTD method for size parameters along the minimum dimension smaller than approximately 20. We demonstrate that neither the conventional geometric optics method nor the Lorenz-Mie theory can be used to approximate the scattering, absorption, and polarization features for hexagonal ice crystals with size parameters from approximately 5 to 20. On the satellite remote sensing algorithm development and validation, we have developed a numerical scheme to identify multilayer cirrus cloud systems using AVHRR data. We have applied this scheme to the satellite data collected over the FIRE-2-IFO area during nine overpasses within seven observation dates. Determination of the threshold values used in the detection scheme is based on statistical analyses of these satellite data.

  3. On the numerical modeling of sliding beams: A comparison of different approaches

    NASA Astrophysics Data System (ADS)

    Steinbrecher, Ivo; Humer, Alexander; Vu-Quoc, Loc

    2017-11-01

    The transient analysis of sliding beams represents a challenging problem of structural mechanics. Typically, the sliding motion superimposed by large flexible deformation requires numerical methods such as finite elements to obtain approximate solutions. By means of the classical sliding spaghetti problem, the present paper provides a guideline to the numerical modeling with conventional finite element codes. For this purpose, two approaches, one using solid elements and one using beam elements, are employed in the analysis, and the characteristics of each approach are addressed. The contact formulation realizing the interaction of the beam with its support demands particular attention in the context of sliding structures. Additionally, the paper employs the sliding-beam formulation as a third approach, which avoids the numerical difficulties caused by the large sliding motion through a suitable coordinate transformation. The present paper briefly outlines the theoretical fundamentals of the respective approaches for the modeling of sliding structures and gives a detailed comparison by means of the sliding spaghetti serving as a representative example. The specific advantages and limitations of the different approaches with regard to accuracy and computational efficiency are discussed in detail. Through the comparison, the sliding-beam formulation, which proves to be an effective approach for the modeling, can be validated for the general problem of a sliding structure subjected to large deformation.

  4. A numerical analysis on forming limits during spiral and concentric single point incremental forming

    NASA Astrophysics Data System (ADS)

    Gipiela, M. L.; Amauri, V.; Nikhare, C.; Marcondes, P. V. P.

    2017-01-01

    Sheet metal forming is a major manufacturing industry, producing numerous parts for the aerospace, automotive, and medical sectors. Driven by high demand in the vehicle industry on one hand and environmental regulations on fuel consumption on the other, researchers are developing energy-efficient sheet metal forming processes, instead of the conventionally used punch and die, to achieve lightweight parts. One of the most recognized manufacturing processes in this category is Single Point Incremental Forming (SPIF). SPIF is a die-less sheet metal forming process in which a single-point tool incrementally brings individual points of the sheet metal into the plastic deformation zone. In the present work, the finite element method (FEM) is applied to analyze the forming limits of a high strength low alloy steel formed by SPIF with spiral and concentric tool paths. SPIF numerical simulations were modeled with 24 and 29 mm cup depths, and the results were compared with Nakajima results obtained by experiments and FEM. It was found that the cup formed with the Nakajima tool failed at 24 mm, while the cups formed by SPIF surpassed the limit for both depths with both profiles. It was also noticed that the strains achieved with the concentric profile are lower than those with the spiral profile.
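    Evaluating simulated strain states against a forming limit curve, as done when comparing SPIF with Nakajima results, can be sketched as a simple interpolation test. The FLC points below are hypothetical placeholder values, not measured data for this alloy:

```python
import numpy as np

# hypothetical forming limit curve: minor strain -> limiting major strain
flc_minor = np.array([-0.2, -0.1, 0.0, 0.1, 0.2, 0.3])
flc_major = np.array([0.45, 0.38, 0.30, 0.33, 0.38, 0.44])

def fails(minor, major):
    """True where a (minor, major) strain state lies above the forming
    limit curve, i.e., where necking/failure is predicted."""
    limit = np.interp(minor, flc_minor, flc_major)
    return major > limit
```

    In practice, the major/minor strain pairs extracted from the FEM simulation of each tool path would be passed through such a check element by element.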

  5. The infinite medium Green's function for neutron transport in plane geometry 40 years later

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.

    1993-01-01

    In 1953, the first of what was supposed to be two volumes on neutron transport theory was published. The monograph, entitled "Introduction to the Theory of Neutron Diffusion" by Case et al., appeared as a Los Alamos National Laboratory report and was to be followed by a second volume, which never appeared as intended because of the death of Placzek. Instead, Case and Zweifel collaborated on the now classic work entitled Linear Transport Theory, in which the underlying mathematical theory of linear transport was presented. The initial monograph, however, represented the coming of age of neutron transport theory, which had its roots in radiative transfer and kinetic theory. In addition, it provided the first benchmark results along with the mathematical development for several fundamental neutron transport problems. In particular, one-dimensional infinite medium Green's functions for the monoenergetic transport equation in plane and spherical geometries were considered, complete with numerical results to be used as standards to guide code development for applications. Unfortunately, because of the limited computational resources of the day, some numerical results were incorrect. Also, only conventional mathematics and numerical methods were used because the transport theorists of the day were just becoming acquainted with more modern mathematical approaches. In this paper, the Green's function solution is revisited in light of modern numerical benchmarking methods with an emphasis on evaluation rather than theoretical results. The primary motivation for considering the Green's function at this time is its emerging use in solving finite and heterogeneous media transport problems.

  6. 7Be and hydrological model for more efficient implementation of erosion control measure

    NASA Astrophysics Data System (ADS)

    Al-Barri, Bashar; Bode, Samuel; Blake, William; Ryken, Nick; Cornelis, Wim; Boeckx, Pascal

    2014-05-01

    Increased concern about the on-site and off-site impacts of soil erosion in agricultural and forested areas has endorsed interest in innovative methods to assess, in an unbiased way, spatial and temporal soil erosion rates and redistribution patterns. Hence, there is interest in precisely estimating the magnitude of the problem and thereby applying erosion control measures (ECM) more efficiently. The latest generation of physically-based hydrological models, which fully couple overland flow and subsurface flow in three dimensions, permits implementing ECM at small and large scales more effectively if coupled with a sediment transport algorithm. While many studies have focused on integrating empirical or numerical models based on traditional erosion budget measurements into 3D hydrological models, few studies have evaluated the efficiency of ECM at the watershed scale, and very little attention has been given to the potential of environmental fallout radionuclides (FRNs) in such applications. The use of the FRN tracer 7Be in soil erosion/deposition research has proved to overcome many (if not all) of the problems associated with the conventional approaches, providing reliable data for efficient land use management. This poster will underline the pros and cons of using conventional methods and 7Be tracers to evaluate the efficiency of coconut dams installed as ECM in an experimental field in Belgium. It will also outline the potential of 7Be to provide valuable inputs for evolving the numerical sediment transport algorithm needed for the hydrological model at the field scale, leading to an assessment of the possibility of using this short-lived tracer as a validation tool for the upgraded hydrological model at the watershed scale in further steps. Keywords: FRN, erosion control measures, hydrological models

  7. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
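    The benefit of partitioning under slowly drifting noise can be reproduced with a toy AR(1) model. This sketch compares an unpartitioned average against background subtraction (alternating the sign of the signal and subtracting); the parameter values are illustrative assumptions, and the weak-value protocol itself is not implemented:

```python
import numpy as np

rng = np.random.default_rng(1)
d_true = 1e-3            # small parameter to estimate (arbitrary units)
n, trials = 4000, 200
rho = 0.99               # AR(1) correlation: slowly drifting noise

def correlated_noise(n):
    """Unit-variance AR(1) noise, strongly correlated shot to shot."""
    eta = np.zeros(n)
    w = rng.normal(size=n)
    s = np.sqrt(1.0 - rho**2)
    for k in range(1, n):
        eta[k] = rho * eta[k-1] + s * w[k]
    return eta

signs = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)   # partitioning outcome
est_plain, est_sub = [], []
for _ in range(trials):
    eta = correlated_noise(n)
    est_plain.append(np.mean(d_true + eta))                  # no partitioning
    est_sub.append(np.mean(signs * (signs * d_true + eta)))  # bkg. subtraction

var_plain, var_sub = np.var(est_plain), np.var(est_sub)
```

    Because the sign toggles much faster than the noise drifts, the subtraction cancels the low-frequency component and the estimator variance drops by orders of magnitude, in the spirit of the partitioning argument above.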

  8. Rapid-prenatal diagnosis through fluorescence in situ hybridization for preventing aneuploidy related birth defects

    PubMed Central

    Fauzdar, Ashish; Chowdhry, Mohit; Makroo, R. N.; Mishra, Manoj; Srivastava, Priyanka; Tyagi, Richa; Bhadauria, Preeti; Kaul, Anita

    2013-01-01

    BACKGROUND AND OBJECTIVE: Women with high-risk pregnancies are offered prenatal diagnosis through amniocentesis for cytogenetic analysis of fetal cells. The aim of this study was to evaluate the effectiveness of the rapid fluorescence in situ hybridization (FISH) technique for detecting numerical aberrations of chromosomes 13, 21, 18, X and Y in high-risk pregnancies in an Indian scenario. MATERIALS AND METHODS: A total of 163 samples were received for FISH and/or a full karyotype for prenatal diagnosis from high-risk pregnancies. In 116 samples, conventional culture and G-banding karyotyping were applied in conjunction with the FISH test using the AneuVysion kit (Abbott Molecular, Inc.), following the standard recommended protocol, to compare the two techniques in our setup. RESULTS: Of the 116 patients, 96 were normal for the five major chromosome abnormalities and seven were found to be abnormal (04 trisomy 21, 02 monosomy X, and 01 trisomy 13), and all the FISH results correlated with conventional cytogenetics. To summarize the results of all 163 patients for the major chromosomal abnormalities analyzed by cytogenetics and/or FISH, 140 (86%) were normal, 9 (6%) were abnormal, another 4 (2.5%) were suspicious mosaics, and 10 (6%) were cases of culture failure. The diagnostic detection rate with FISH in the 116 patients was 97.5%. There were no false-positive or false-negative autosomal or sex chromosomal results within our established criteria for reporting FISH signals. CONCLUSION: Rapid FISH is a reliable and prompt method for detecting numerical chromosomal aberrations and has now been implemented as a routine diagnostic procedure for detection of fetal aneuploidy in India. PMID:23901191

  9. Design of catheter radio frequency coils using coaxial transmission line resonators for interventional neurovascular MR imaging.

    PubMed

    Zhang, Xiaoliang; Martin, Alastair; Jordan, Caroline; Lillaney, Prasheel; Losey, Aaron; Pang, Yong; Hu, Jeffrey; Wilson, Mark; Cooke, Daniel; Hetts, Steven W

    2017-04-01

    It is technically challenging to design compact yet sensitive miniature catheter radio frequency (RF) coils for endovascular interventional MR imaging. In this work, a new design method for catheter RF coils is proposed based on the coaxial transmission line resonator (TLR) technique. Due to its distributed circuit, the TLR catheter coil does not need any lumped capacitors to support its resonance, which simplifies the practical design and construction and provides a straightforward technique for designing miniature catheter-mounted imaging coils that are appropriate for interventional neurovascular procedures. The outer conductor of the TLR serves as an RF shield, which prevents electromagnetic energy loss and improves coil Q factors. It also minimizes interaction with surrounding tissues and signal losses along the catheter coil. To investigate the technique, a prototype catheter coil was built using the proposed coaxial TLR technique and evaluated with standard RF testing and measurement methods and MR imaging experiments. Numerical simulation was carried out to assess the RF electromagnetic field behavior of the proposed TLR catheter coil and the conventional lumped-element catheter coil. The proposed TLR catheter coil was successfully tuned to 64 MHz for proton imaging at 1.5 T. B1 fields were numerically calculated, showing improved magnetic field intensity of the TLR catheter coil over the conventional lumped-element catheter coil. MR images were acquired from a dedicated vascular phantom using the TLR catheter coil and also the system body coil. The TLR catheter coil is able to provide a significant signal-to-noise ratio (SNR) increase (a factor of 200 to 300) over its imaging volume relative to the body coil. Catheter imaging RF coil design using the proposed coaxial TLR technique is feasible and advantageous in endovascular interventional MR imaging applications.
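    As a rough illustration of why a transmission line resonator suits a catheter: a shorted quarter-wave coaxial line resonates when its length equals a quarter of the guided wavelength, which at 64 MHz is of the same order as a neurovascular catheter. The dielectric constant below is an assumed PTFE value for the sketch, not a design figure from the paper:

```python
import numpy as np

c0 = 2.998e8          # free-space speed of light, m/s
f0 = 64e6             # proton Larmor frequency at 1.5 T, Hz
eps_r = 2.1           # assumed PTFE dielectric of the coaxial line

v_phase = c0 / np.sqrt(eps_r)     # phase velocity on the line
lam = v_phase / f0                # guided wavelength at 64 MHz
l_quarter = lam / 4.0             # shorted quarter-wave resonator length, m
```

    The resulting length of roughly 0.8 m is a back-of-envelope figure only; the actual TLR design depends on the shield and inner-conductor geometry described in the paper.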

  10. The impact of satellite temperature soundings on the forecasts of a small national meteorological service

    NASA Technical Reports Server (NTRS)

    Wolfson, N.; Thomasell, A.; Alperson, Z.; Brodrick, H.; Chang, J. T.; Gruber, A.; Ohring, G.

    1984-01-01

    The impact of introducing satellite temperature sounding data on a numerical weather prediction model of a national weather service is evaluated. A dry five-level primitive equation model, which covers most of the Northern Hemisphere, is used for these experiments. Series of parallel forecast runs out to 48 hours are made with three different sets of initial conditions: (1) NOSAT runs, in which only conventional surface and upper air observations are used; (2) SAT runs, in which satellite soundings are added to the conventional data over oceanic regions and North Africa; and (3) ALLSAT runs, in which the conventional upper air observations are replaced by satellite soundings over the entire model domain. The impact on the forecasts is evaluated by three verification methods: the RMS errors in sea level pressure forecasts, systematic errors in sea level pressure forecasts, and errors in subjective forecasts of significant weather elements for a selected portion of the model domain. For the relatively short range of the present forecasts, the major beneficial impacts on the sea level pressure forecasts are found precisely in those areas where the satellite soundings are inserted and where conventional upper air observations are sparse. The RMS and systematic errors are reduced in these regions. The subjective forecasts of significant weather elements are improved with the use of the satellite data. It is found that the ALLSAT forecasts are of a quality comparable to the SAT forecasts.

  11. Injection molding lens metrology using software configurable optical test system

    NASA Astrophysics Data System (ADS)

    Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian

    2016-10-01

    Optical plastic lenses produced by injection molding machines possess numerous advantages: light weight, impact resistance, low cost, etc. The measurement methods used in the optical shop are mainly interferometry and profilometry. However, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, which can be used to measure large-diameter mirrors, aspheric surfaces, and freeform surfaces rapidly, robustly, and accurately. In addition to the conventional phase shifting method, we propose another data collection method called dot matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting distortion mapping method is not only simple to operate but also highly precise. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dot matrix projection method has high accuracy and that SCOTS has important significance for on-line testing in the optical shop.

  12. One-step leapfrog ADI-FDTD method for simulating electromagnetic wave propagation in general dispersive media.

    PubMed

    Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David

    2013-09-09

    The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods.
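    The auxiliary-differential-equation treatment of a polarization current can be sketched in a plain explicit 1D FDTD scheme (the paper's one-step leapfrog ADI update is more involved and unconditionally stable, which this sketch is not). Units are normalized (c = eps0 = mu0 = 1, dx = 1) and the collisionless Drude slab parameters are assumptions chosen only to show sub-plasma-frequency rejection:

```python
import numpy as np

# 1D FDTD with a Drude plasma slab handled by an auxiliary differential
# equation (ADE) for the polarization current J:  dJ/dt = eps0 * wp^2 * E
nx, steps, dt = 600, 700, 0.5
Ez, Hy, Jz = np.zeros(nx), np.zeros(nx), np.zeros(nx)
wp2 = np.zeros(nx)
wp2[300:400] = 1.0       # plasma slab; the pulse spectrum lies below wp

for n in range(steps):
    Hy[:-1] += dt * (Ez[1:] - Ez[:-1])                 # update H from curl E
    Ez[1:] += dt * (Hy[1:] - Hy[:-1]) - dt * Jz[1:]    # update E minus current
    Ez[100] += np.exp(-((n*dt - 30.0) / 7.5)**2)       # soft Gaussian source
    Jz += dt * wp2 * Ez                                # ADE current update

trans = np.max(np.abs(Ez[420:]))     # field transmitted past the slab
peak = np.max(np.abs(Ez[:300]))      # incident/reflected field
```

    Because the pulse's spectral content sits below the plasma frequency, the wave is evanescent inside the slab and essentially nothing is transmitted, while the update equations retain the conventional FDTD form with the extra current term, mirroring the structure described in the abstract.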

  13. The investigation of a variable camber blade lift control for helicopter rotor systems

    NASA Technical Reports Server (NTRS)

    Awani, A. O.

    1982-01-01

    A new rotor configuration called the variable camber rotor was investigated numerically for its potential to reduce helicopter control loads and improve hover performance. This rotor differs from a conventional rotor in that it incorporates a deflectable 50% chord trailing edge flap to control rotor lift, and a non-feathering (fixed) forward portion. Lift control is achieved by linking the blade flap to a conventional swashplate mechanism; pilot commands applied through the flap deflection therefore control rotor lift and tip path plane tilt. This report presents the aerodynamic characteristics of the flapped and unflapped airfoils, evaluations of aerodynamic techniques to minimize flap hinge moment, comparative hover rotor performance, and the physical concepts of the blade motion and rotor control. All the results presented herein are based on numerical analyses. The assessment of the payoff for the total configuration, in comparison with a conventional blade having the same physical characteristics as an H-34 helicopter rotor blade, was made for hover only.

  14. Comparative Study of Essential Oils Extracted from Algerian Myrtus communis L. Leaves Using Microwaves and Hydrodistillation

    PubMed Central

    Berka-Zougali, Baya; Ferhat, Mohamed-Amine; Hassani, Aicha; Chemat, Farid; Allaf, Karim S.

    2012-01-01

    Two different extraction methods were used for a comparative study of Algerian myrtle leaf essential oils: solvent-free microwave extraction (SFME) and conventional hydrodistillation (HD). The essential oils analyzed by GC and GC-MS presented 51 components constituting 97.71 and 97.39% of the total oils, respectively. The SFME essential oils (SFME-EO) were richer in oxygenated compounds. Their major compounds were 1,8-cineole followed by α-pinene, as against α-pinene followed by 1,8-cineole for HD. Their antimicrobial activity was investigated on 12 microorganisms. The antioxidant activities were studied with the 2,2-diphenyl-1-picrylhydrazyl (DPPH•) radical scavenging method. Generally, both essential oils showed high antimicrobial and weak antioxidant activities. Microstructure analyses were also undertaken on the solid residue of myrtle leaves by scanning electron microscopy (SEM); they showed that the SFME cellular structure undergoes significant modifications compared to the conventional HD residual solid. Comparison between hydrodistillation and SFME revealed numerous distinctions. Several advantages of SFME were observed: faster kinetics and higher efficiency with similar yields: 0.32% dry basis in 30 min, as against 180 min for HD. PMID:22606003

  15. Nonequilibrium scheme for computing the flux of the convection-diffusion equation in the framework of the lattice Boltzmann method.

    PubMed

    Chai, Zhenhua; Zhao, T S

    2014-07-01

    In this paper, we propose a local nonequilibrium scheme for computing the flux of the convection-diffusion equation with a source term in the framework of the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). Both the Chapman-Enskog analysis and the numerical results show that, at the diffusive scaling, the present nonequilibrium scheme has a second-order convergence rate in space. A comparison between the nonequilibrium scheme and the conventional second-order central-difference scheme indicates that, although both schemes have a second-order convergence rate in space, the present nonequilibrium scheme is more accurate than the central-difference scheme. In addition, the flux computation rendered by the present scheme also preserves the parallel computation feature of the LBM, making the scheme more efficient than conventional finite-difference schemes in the study of large-scale problems. Finally, a comparison between the single-relaxation-time model and the MRT model is also conducted, and the results show that the MRT model is more accurate than the single-relaxation-time model, both in solving the convection-diffusion equation and in computing the flux.
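    The local nonequilibrium flux evaluation can be sketched for pure 1D diffusion with a BGK (single-relaxation-time) collision operator, simpler than the paper's MRT model: since the equilibrium carries no first moment here, the flux j = (1 - 1/(2τ)) Σ_i c_i f_i is computed locally at each node and compared against the conventional central-difference flux -D dC/dx. The lattice and parameter choices are assumptions for the sketch:

```python
import numpy as np

# D1Q3 BGK lattice Boltzmann for the diffusion equation dC/dt = D d2C/dx2
nx, tau, steps = 200, 0.8, 400
D = (tau - 0.5) / 3.0                  # D = c_s^2 (tau - 1/2), dx = dt = 1
w = np.array([2/3, 1/6, 1/6])          # lattice weights, c_s^2 = 1/3
c = np.array([0, 1, -1])               # lattice velocities

x = np.arange(nx)
C0 = np.exp(-((x - nx/2.0)**2) / (2.0 * 15.0**2))   # initial Gaussian
f = w[:, None] * C0[None, :]           # initialize at equilibrium

for _ in range(steps):
    C = f.sum(axis=0)
    feq = w[:, None] * C[None, :]
    f -= (f - feq) / tau               # BGK collision
    f[1] = np.roll(f[1], 1)            # streaming (periodic)
    f[2] = np.roll(f[2], -1)

C = f.sum(axis=0)
# local nonequilibrium flux (the equilibrium first moment vanishes here)
j_neq = (1.0 - 1.0/(2.0*tau)) * (c[:, None] * f).sum(axis=0)
# conventional second-order central-difference flux -D dC/dx
j_cd = -D * (np.roll(C, -1) - np.roll(C, 1)) / 2.0
```

    The two flux evaluations agree closely on this smooth problem; the nonequilibrium version needs only node-local populations, which is the parallelism advantage the abstract highlights.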

  16. The Capacity Gain of Orbital Angular Momentum Based Multiple-Input-Multiple-Output System

    PubMed Central

    Zhang, Zhuofan; Zheng, Shilie; Chen, Yiling; Jin, Xiaofeng; Chi, Hao; Zhang, Xianmin

    2016-01-01

    Wireless communication using electromagnetic waves carrying orbital angular momentum (OAM) has attracted increasing interest in recent years, and its potential to increase channel capacity has been explored widely. In this paper, we compare the technique of using a uniform linear array consisting of circular traveling-wave OAM antennas for multiplexing with the conventional multiple-input-multiple-output (MIMO) communication method, and numerical results show that the OAM based MIMO system can increase channel capacity when the communication distance is long enough. An equivalent model is proposed to illustrate that the OAM multiplexing system is equivalent to a conventional MIMO system with a larger element spacing, which means OAM waves can decrease the spatial correlation of the MIMO channel. In addition, the effects of some system parameters, such as the OAM state interval and element spacing, on the capacity advantage of OAM based MIMO are also investigated. Our results reveal that OAM waves are complementary with the MIMO method. OAM wave multiplexing is suitable for long-distance line-of-sight (LoS) communications or communications in open areas where the multi-path effect is weak, and can be used in massive MIMO systems as well. PMID:27146453
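    The capacity comparison underlying the abstract can be sketched with the standard equal-power MIMO capacity formula. Here a rank-one matrix stands in for a fully correlated conventional LoS channel and an i.i.d. Rayleigh matrix for a decorrelated one; this is an illustration of the spatial-correlation effect, not the paper's OAM channel model:

```python
import numpy as np

def mimo_capacity(H, snr):
    """Equal-power MIMO capacity in bits/s/Hz: log2 det(I + snr/Nt * H H^H)."""
    nr, nt = H.shape
    G = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(G))))

rng = np.random.default_rng(0)
snr = 100.0                     # 20 dB
nr = nt = 4

# fully correlated line-of-sight channel: rank one, a single spatial stream
H_los = np.ones((nr, nt), dtype=complex)
# rich-scattering i.i.d. Rayleigh channel: full rank with high probability
H_iid = (rng.normal(size=(nr, nt)) + 1j*rng.normal(size=(nr, nt))) / np.sqrt(2)

cap_los = mimo_capacity(H_los, snr)
cap_iid = mimo_capacity(H_iid, snr)
```

    The rank-one channel supports only one spatial stream regardless of array size, while the decorrelated channel supports up to four; lowering spatial correlation, which the paper attributes to OAM waves, moves the channel toward the second regime.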

  17. An efficient algorithm for accurate computation of the Dirichlet-multinomial log-likelihood function.

    PubMed

    Yu, Peng; Shaw, Chad A

    2014-06-01

    The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics, transcriptomics, and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for the large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood that solves the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in high-count situations that are common in deep sequencing data. Using real metagenomic data, our method achieves a manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
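    A conventional log-gamma evaluation of the DMN log-likelihood, the kind of baseline whose instability and runtime the paper addresses, can be sketched as follows. The concentration (α) parametrization is standard; the relation between α and the overdispersion ψ is not spelled out here:

```python
import numpy as np
from scipy.special import gammaln

def dmn_logpmf(x, alpha):
    """Dirichlet-multinomial log-pmf evaluated with log-gamma functions.

    x     : nonnegative integer counts, shape (K,)
    alpha : positive concentration parameters, shape (K,)
    """
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a = x.sum(), alpha.sum()
    return (gammaln(n + 1.0) - gammaln(x + 1.0).sum()     # multinomial coeff.
            + gammaln(a) - gammaln(n + a)                 # normalizing ratio
            + (gammaln(x + alpha) - gammaln(alpha)).sum())
```

    A quick sanity check: for alpha = (1, 1) the distribution is uniform over the n + 1 compositions of n into two parts. This naive evaluation loses accuracy as the overdispersion shrinks (all α growing large), which is the regime the paper's method targets.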

  18. Inverse Regional Modeling with Adjoint-Free Technique

    NASA Astrophysics Data System (ADS)

    Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.

    2016-02-01

    The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques that do not require the development of tangent linear numerical models and their adjoints for optimization. These "adjoint-free" methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in applications estimating the initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM) and the surface wave model (WAM). With the NCOM, the performance of the adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense, while being less susceptible to excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of observations considered are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars, and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects of further development of the a4dVar methods in regional applications are discussed.
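
    The subspace idea can be caricatured in a few lines: minimize the misfit only within the span of forward-model responses to chosen perturbations, so no tangent-linear or adjoint code is needed. This is a deliberately simplified sketch, not the a4dVar algorithm itself; `model` and `obs_op` are hypothetical callables. For a linear model with identity observations it recovers the truth in one step.

```python
import numpy as np

def a4dvar_step(model, x0, perturbations, y_obs, obs_op):
    """One adjoint-free minimization step.  The correction is sought as
    x = x0 + P a with P the matrix of perturbation directions; the
    coefficients a are found by linear least squares on the
    perturbed-minus-base observation responses (forward runs only)."""
    base = obs_op(model(x0))
    # response of the observed quantities to each perturbation direction
    J = np.column_stack([obs_op(model(x0 + p)) - base for p in perturbations])
    a, *_ = np.linalg.lstsq(J, y_obs - base, rcond=None)
    return x0 + np.stack(perturbations, axis=1) @ a
```

    Real a4dVar iterates this over carefully chosen, evolving perturbation ensembles; the point here is only that every quantity used is available from forward model runs.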

  19. On the solution of the complex eikonal equation in acoustic VTI media: A perturbation plus optimization scheme

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Jianguo; Greenhalgh, Stewart

    2018-04-01

    We present methods for obtaining numerical and analytic solutions of the complex eikonal equation in inhomogeneous acoustic VTI media (transversely isotropic media with a vertical symmetry axis). The key and novel point of the method for obtaining numerical solutions is to transform the problem of solving the highly nonlinear acoustic VTI eikonal equation into one of solving the relatively simple eikonal equation for the background (isotropic) medium and a system of linear partial differential equations. Specifically, to obtain the real and imaginary parts of the complex traveltime in inhomogeneous acoustic VTI media, we generalize a perturbation theory, which was developed earlier for solving the conventional real eikonal equation in inhomogeneous anisotropic media, to the complex eikonal equation in such media. After the perturbation analysis, we obtain two types of equations. One is the complex eikonal equation for the background medium and the other is a system of linearized partial differential equations for the coefficients of the corresponding complex traveltime formulas. To solve the complex eikonal equation for the background medium, we employ an optimization scheme that we developed for solving the complex eikonal equation in isotropic media. Then, to solve the system of linearized partial differential equations for the coefficients of the complex traveltime formulas, we use the finite difference method based on the fast marching strategy. Furthermore, by applying the complex source point method and the paraxial approximation, we develop analytic solutions of the complex eikonal equation in acoustic VTI media, for both isotropic and elliptically anisotropic background media. Our numerical results demonstrate the effectiveness of our derivations and illustrate the influence of the beam widths and the anisotropic parameters on the complex traveltimes.

  20. Single-Image Super Resolution for Multispectral Remote Sensing Data Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Liebel, L.; Körner, M.

    2016-06-01

    In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations, e.g., segmentation or feature extraction, can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNN), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.

  1. Discrete square root filtering - A survey of current techniques.

    NASA Technical Reports Server (NTRS)

    Kaminskii, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.

    1971-01-01

    Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
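
    One classic square-root mechanization discussed in this literature is Potter's measurement update, which carries the covariance as a factor S with P = S Sᵀ, so symmetry and positive definiteness survive roundoff. The sketch below (scalar measurement only) is a textbook version offered for illustration, not a reproduction of the survey's implementations.

```python
import numpy as np

def potter_update(x, S, H, R, z):
    """Potter square-root Kalman measurement update for a scalar
    observation z = H @ x + v with var(v) = R.  S is a square root of
    the prior covariance (P = S @ S.T); the function returns the updated
    state and an updated square root, never forming P explicitly."""
    phi = S.T @ H                      # n-vector
    a = 1.0 / (phi @ phi + R)          # phi @ phi = H P H^T
    gamma = a / (1.0 + np.sqrt(a * R))
    K = a * (S @ phi)                  # Kalman gain, K = P H^T / (H P H^T + R)
    x_new = x + K * (z - H @ x)
    S_new = S - gamma * np.outer(S @ phi, phi)
    return x_new, S_new
```

    One can check algebraically that S_new @ S_new.T equals the conventional posterior (I - K H) P, which is the sense in which the square-root filter is a re-arrangement, not a different estimator.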

  2. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impressions, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated along the x-, y-, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
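
    The Bland-Altman agreement assessment mentioned above reduces to the mean of the paired differences (bias) and the 95% limits of agreement, bias ± 1.96 SD. A generic sketch of that computation (not the study's exact analysis pipeline) is:

```python
import statistics

def bland_altman_limits(a, b):
    """Bland-Altman agreement statistics for paired measurements a, b:
    returns (bias, lower limit, upper limit), where the limits of
    agreement are bias +/- 1.96 standard deviations of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)       # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    A plot of the differences against the pair means with these three horizontal lines is the Bland-Altman graph; "good agreement" means the differences scatter tightly around a bias near zero.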

  3. Faster modified protocol for first order reversal curve measurements

    NASA Astrophysics Data System (ADS)

    De Biasi, Emilio

    2017-10-01

    In this work we present a faster modified protocol for first order reversal curve (FORC) measurements. The main idea of this procedure is to use the information of both the ascending and descending branches constructed through successive sweeps of the magnetic field. The new method reduces the number of field sweeps to almost one half compared to the traditional method, and the length of each branch is reduced faster than in the usual FORC protocol. The new method implies not only a new measurement protocol but also a new recipe for pre-processing the data. After this pre-processing, the FORC diagram can be obtained by the conventional methods. In the present work we show that the new FORC procedure leads to results identical to those of the conventional method if the system under study follows the Stoner-Wohlfarth model with interactions that do not depend on the magnetic state (up or down) of the entities, as in the Preisach model; more specifically, if the coercive and interaction fields are not correlated and the hysteresis loops have a square shape. Some numerical examples show the comparison between the usual FORC procedure and the proposed one. We also discuss the possibility of finding some differences in the case of real systems, due to the magnetic interactions. There is no reason to prefer one FORC method over the other from the point of view of the information to be obtained; on the contrary, the use of both methods could open doors for a more accurate and deeper analysis.
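
    Whichever protocol supplies the data, the FORC diagram itself is conventionally the mixed second derivative ρ(Ha, Hb) = -½ ∂²M/∂Ha∂Hb of the magnetization over the (reversal field, applied field) grid. A bare finite-difference sketch of that definition follows; practical FORC processing replaces the raw differences with local polynomial (smoothing) fits, which this sketch omits.

```python
import numpy as np

def forc_distribution(M, Ha, Hb):
    """FORC distribution rho = -0.5 * d^2 M / (dHa dHb) by repeated
    finite differences on a regular grid.
    M[i, j] = magnetization at reversal field Ha[i], applied field Hb[j]."""
    dM_dHb = np.gradient(M, Hb, axis=1)      # first derivative along Hb
    d2M = np.gradient(dM_dHb, Ha, axis=0)    # then along Ha
    return -0.5 * d2M
```

    For the bilinear test surface M = Ha·Hb the mixed derivative is exactly 1, so the sketch returns a constant -0.5 everywhere, a convenient sanity check before feeding in measured branches.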

  4. Reinforcing the role of the conventional C-arm - a novel method for simplified distal interlocking

    PubMed Central

    2012-01-01

    Background The common practice for insertion of distal locking screws of intramedullary nails is a freehand technique under fluoroscopic control. The process is technically demanding, time-consuming and associated with considerable radiation exposure of the patient and the surgical personnel. A new concept is introduced utilizing information from within conventional radiographic images to help accurately guide the surgeon to place the interlocking bolt into the interlocking hole. The newly developed technique was compared to conventional freehand in an operating room (OR)-like setting on human cadaveric lower legs in terms of operating time and radiation exposure. Methods The proposed concept (guided freehand), generally based on the freehand gold standard, additionally guides the surgeon by means of visible landmarks projected into the C-arm image. A computer program plans the correct drilling trajectory by processing the lens-shaped hole projections of the interlocking holes from a single image. Holes can be drilled by visually aligning the drill to the planned trajectory. Besides a conventional C-arm, no additional tracking or navigation equipment is required. Ten fresh frozen human below-knee specimens were instrumented with an Expert Tibial Nail (Synthes GmbH, Switzerland). The implants were distally locked by performing the newly proposed technique as well as the conventional freehand technique on each specimen. An orthopedic resident surgeon inserted four distal screws per procedure. Operating time, number of images and radiation time were recorded and statistically compared between interlocking techniques using non-parametric tests. Results A 58% reduction in the number of images taken per screw was found for the guided freehand technique (7.4 ± 3.4) (mean ± SD) compared to the freehand technique (17.6 ± 10.3) (p < 0.001). Total radiation time (all 4 screws) was 55% lower for the guided freehand technique compared to conventional freehand (p = 0.001). 
Operating time per screw (from first shot to screw tightened) was on average 22% reduced by guided freehand (p = 0.018). Conclusions In an experimental setting, the newly developed guided freehand technique for distal interlocking has proven to markedly reduce radiation exposure when compared to the conventional freehand technique. The method utilizes established clinical workflows and does not require cost intensive add-on devices or extensive training. The underlying principle carries potential to assist implant positioning in numerous other applications within orthopedics and trauma from screw insertions to placement of plates, nails or prostheses. PMID:22276698

  5. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-07-28

    The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculations of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using the mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimations of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.

  6. Comparison of different numerical treatments for x-ray phase tomography of soft tissue from differential phase projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelliccia, Daniele; Vaz, Raquel; Svalbe, Imants

    X-ray imaging of soft tissue is made difficult by its low absorbance. The use of x-ray phase imaging and tomography can significantly enhance the detection of these tissues, and several approaches have been proposed to this end. Methods such as analyzer-based imaging or grating interferometry produce differential phase projections that can be used to reconstruct the 3D distribution of the sample refractive index. We report on the quantitative comparison of three different methods to obtain x-ray phase tomography with filtered back-projection from differential phase projections in the presence of noise. The three procedures represent different numerical approaches to solve the same mathematical problem, namely phase retrieval and filtered back-projection. It is found that obtaining individual phase projections and subsequently applying a conventional filtered back-projection algorithm produces the best results for noisy experimental data, when compared with other procedures based on the Hilbert transform. The algorithms are tested on simulated phantom data with added noise and the predictions are confirmed by experimental data acquired using a grating interferometer. The experiment is performed on unstained adult zebrafish, an important model organism for biomedical studies. The method optimization described here allows resolution of weak soft tissue features, such as muscle fibers.

  7. Aeroelastic Analysis of Helicopter Rotor Blades Incorporating Anisotropic Piezoelectric Twist Actuation

    NASA Technical Reports Server (NTRS)

    Wilkie, W. Keats; Belvin, W. Keith; Park, K. C.

    1996-01-01

    A simple aeroelastic analysis of a helicopter rotor blade incorporating embedded piezoelectric fiber composite, interdigitated electrode blade twist actuators is described. The analysis consists of a linear torsion and flapwise bending model coupled with a nonlinear ONERA based unsteady aerodynamics model. A modified Galerkin procedure is performed upon the rotor blade partial differential equations of motion to develop a system of ordinary differential equations suitable for dynamics simulation using numerical integration. The twist actuation responses for three conceptual full-scale blade designs with realistic constraints on blade mass are numerically evaluated using the analysis. Numerical results indicate that useful amplitudes of nonresonant elastic twist, on the order of one to two degrees, are achievable under one-g hovering flight conditions for interdigitated electrode poling configurations. Twist actuation for the interdigitated electrode blades is also compared with the twist actuation of a conventionally poled piezoelectric fiber composite blade. Elastic twist produced using the interdigitated electrode actuators was found to be four to five times larger than that obtained with the conventionally poled actuators.

  8. An aeroelastic analysis of helicopter rotor blades incorporating piezoelectric fiber composite twist actuation

    NASA Technical Reports Server (NTRS)

    Wilkie, W. Keats; Park, K. C.

    1996-01-01

    A simple aeroelastic analysis of a helicopter rotor blade incorporating embedded piezoelectric fiber composite, interdigitated electrode blade twist actuators is described. The analysis consists of a linear torsion and flapwise bending model coupled with a nonlinear ONERA based unsteady aerodynamics model. A modified Galerkin procedure is performed upon the rotor blade partial differential equations of motion to develop a system of ordinary differential equations suitable for numerical integration. The twist actuation responses for three conceptual full-scale blade designs with realistic constraints on blade mass are numerically evaluated using the analysis. Numerical results indicate that useful amplitudes of nonresonant elastic twist, on the order of one to two degrees, are achievable under one-g hovering flight conditions for interdigitated electrode poling configurations. Twist actuation for the interdigitated electrode blades is also compared with the twist actuation of a conventionally poled piezoelectric fiber composite blade. Elastic twist produced using the interdigitated electrode actuators was found to be four to five times larger than that obtained with the conventionally poled actuators.
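
    The Galerkin step in both abstracts, projecting a structural PDE onto a finite set of trial functions to obtain ODEs, can be illustrated on a toy problem: uniform fixed-free torsion with exact mode shapes as trial functions. This is only an assumed, simplified stand-in (no bending coupling, no aerodynamics, unit properties), meant to show how the projected mass and stiffness matrices yield the modal ODE frequencies.

```python
import numpy as np

# Toy Galerkin reduction of theta_tt = c^2 theta_xx on [0, L] with
# theta(0, t) = 0 and theta_x(L, t) = 0 (fixed-free torsion), using the
# exact mode shapes sin(mu_k x), mu_k = (2k - 1) pi / (2 L), as trial
# functions.  The PDE collapses to decoupled oscillator ODEs; their
# frequencies are recovered from the projected matrices.

def trap(F, x):
    """Trapezoidal quadrature along the last axis."""
    return np.sum((F[..., :-1] + F[..., 1:]) * np.diff(x), axis=-1) / 2.0

L, c, n_modes = 1.0, 1.0, 4
x = np.linspace(0.0, L, 2001)
k = np.arange(1, n_modes + 1)
mu = (2 * k - 1) * np.pi / (2 * L)           # fixed-free wavenumbers
phi = np.sin(np.outer(mu, x))                # trial functions, (n_modes, nx)
dphi = mu[:, None] * np.cos(np.outer(mu, x))

M = trap(phi[:, None, :] * phi[None, :, :], x)            # projected mass
K = c**2 * trap(dphi[:, None, :] * dphi[None, :, :], x)   # projected stiffness
omega = np.sqrt(np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real))
# omega should approximate the analytic frequencies mu (with c = 1)
```

    In the papers' analyses the same projection, applied to the coupled torsion-bending equations with aerodynamic forcing, produces the ODE system that is then integrated in time.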

  9. Kramers problem: Numerical Wiener-Hopf-like model characteristics

    NASA Astrophysics Data System (ADS)

    Ezin, A. N.; Samgin, A. L.

    2010-11-01

    Since the Kramers problem cannot, in general, be solved in terms of elementary functions, various numerical techniques or approximate methods must be employed. We present a study of characteristics for a particle in a damped well, which can be considered as a discretized version of the Melnikov [Phys. Rev. E 48, 3271 (1993), doi:10.1103/PhysRevE.48.3271] turnover theory. The main goal is to justify the direct computational scheme for the basic Wiener-Hopf model. In contrast to the Melnikov approach, which implements factorization through a Cauchy-theorem-based formulation, we employ the Wiener-Levy theorem to reduce the Kramers problem to a Wiener-Hopf sum equation written in terms of Toeplitz matrices. The latter can provide a stringent test for the reliability of analytic approximations for the energy distribution functions occurring in Kramers problems at arbitrary damping. For certain conditions, the simulated characteristics compare well with those determined using the conventional Fourier-integral formulas, but may sometimes differ slightly depending on the value of a dissipation parameter. Another important feature is that, with our method, we can avoid some complications inherent to the Melnikov method. The calculational technique reported in the present paper may gain particular importance in situations where the energy losses of the particle to the bath are a complex-shaped function of the particle energy and analytic solutions of desired accuracy are not at hand. In order to appreciate more readily the significance and scope of the present numerical approach, we also discuss concrete aspects relating to the field of superionic conductors.
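
    The structural idea, reducing a Wiener-Hopf-type sum equation Σⱼ c_{i-j} u_j = f_i to a Toeplitz linear system, can be shown generically. The kernel below is an arbitrary illustrative choice, not the Kramers energy-loss kernel, and the dense solve is used for transparency (a production code would use a Levinson-type Toeplitz solver).

```python
import numpy as np

def solve_toeplitz_system(c, f):
    """Solve sum_j c[i-j] u[j] = f[i], i = 0..n-1, a truncated
    Wiener-Hopf-type sum equation, by assembling the dense Toeplitz
    matrix T[i, j] = c[i - j].  The kernel array c must supply lags
    -(n-1)..(n-1); lag i - j lives at index i - j + n - 1."""
    n = len(f)
    T = np.array([[c[i - j + n - 1] for j in range(n)] for i in range(n)])
    return np.linalg.solve(T, f)
```

    The O(n³) dense solve is wasteful for Toeplitz structure, but it makes the correspondence between the sum equation and ordinary linear algebra explicit, which is the point of the reduction.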

  10. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed

    2011-02-20

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  11. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    NASA Astrophysics Data System (ADS)

    Wollaber, Allan B.; Larsen, Edward W.

    2011-02-01

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
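
    The α-threshold has a familiar analogue in the θ-weighted scheme for the linear relaxation model problem u' = -λu. This is not the TRT system analyzed in the paper, only a model problem that exhibits the same two phenomena: unconditional stability exactly for α ≥ 1/2, and a negative amplification factor (damped oscillations) when the time step is large.

```python
def growth_factor(alpha, lam_dt):
    """Per-step amplification factor of the alpha-weighted (theta) scheme
    applied to u' = -lam * u:
        (u_new - u_old)/dt = -lam * (alpha*u_new + (1 - alpha)*u_old),
    with lam_dt = lam * dt > 0.  |g| <= 1 for every step size iff
    alpha >= 1/2; g < 0 at large lam_dt gives damped oscillations."""
    return (1.0 - (1.0 - alpha) * lam_dt) / (1.0 + alpha * lam_dt)
```

    For example, the Crank-Nicolson-like choice alpha = 0.5 stays stable at any step size, but with lam_dt = 10 its factor is negative, so the numerical solution overshoots equilibrium on every step while decaying, the discrete analogue of the "unphysical damped oscillations" above.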

  12. An immersed boundary formulation for simulating high-speed compressible viscous flows with moving solids

    NASA Astrophysics Data System (ADS)

    Qu, Yegao; Shi, Ruchao; Batra, Romesh C.

    2018-02-01

    We present a robust sharp-interface immersed boundary method for numerically studying high speed flows of compressible and viscous fluids interacting with arbitrarily shaped either stationary or moving rigid solids. The Navier-Stokes equations are discretized on a rectangular Cartesian grid based on a low-diffusion flux splitting method for inviscid fluxes and conservative high-order central-difference schemes for the viscous components. Discontinuities such as those introduced by shock waves and contact surfaces are captured by using a high-resolution weighted essentially non-oscillatory (WENO) scheme. Ghost cells in the vicinity of the fluid-solid interface are introduced to satisfy boundary conditions on the interface. Values of variables in the ghost cells are found by using a constrained moving least squares method (CMLS) that eliminates numerical instabilities encountered in the conventional MLS formulation. The solution of the fluid flow and the solid motion equations is advanced in time by using the third-order Runge-Kutta and the implicit Newmark integration schemes, respectively. The performance of the proposed method has been assessed by computing results for the following four problems: shock-boundary layer interaction, supersonic viscous flows past a rigid cylinder, moving piston in a shock tube and lifting off from a flat surface of circular, rectangular and elliptic cylinders triggered by shock waves, and comparing computed results with those available in the literature.

  13. Dry calibration of electromagnetic flowmeters based on numerical models combining multiple physical phenomena (multiphysics)

    NASA Astrophysics Data System (ADS)

    Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.

    2010-10-01

    This paper presents a method for dry calibration of an electromagnetic flowmeter (EMF). This method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm (which adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as boundary conditions on the pipe surface to reconstruct the magnetic field involved) for computing the sensitivity of the EMF. Along with an in-depth discussion of factors that could significantly affect the final precision of a dry-calibrated EMF, the effects of flow disturbance on measuring errors have been experimentally studied by installing a baffle at the inflow port of the EMF. Results of the dry calibration on an actual EMF were compared against flow-rig calibration; excellent agreements (within 0.3%) between dry calibration and flow-rig tests verify the multiphysical computation of the fields and the robustness of the method. Since it requires no actual flow, dry calibration is particularly useful for large-diameter EMFs, where conventional flow-rig methods are often costly and difficult to implement.

  14. A New Numerical Simulation technology of Multistage Fracturing in Horizontal Well

    NASA Astrophysics Data System (ADS)

    Cheng, Ning; Kang, Kaifeng; Li, Jianming; Liu, Tao; Ding, Kun

    2017-11-01

    Horizontal multi-stage fracturing is recognized as an effective development technology for unconventional oil resources. Geomechanics occupies a very important position in the numerical simulation of hydraulic fracturing; because it accounts for geomechanical effects, the new numerical simulation of hydraulic fracturing can optimize fracturing designs and evaluate post-fracturing production more effectively than conventional numerical simulation technology. This study builds on a three-dimensional stress and rock-physics parameter model, using recent fluid-solid coupling numerical simulation technology to trace the fracture extension process, describe the change of the stress field during fracturing, and finally predict production.

  15. New method to design stellarator coils without the winding surface

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-01-01

    Finding an easy-to-build coil set has been a critical issue in stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized, which includes both physical requirements and engineering constraints, is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Applications to a simple stellarator configuration and to the W7-X and LHD vacuum fields are presented.
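
    The curve representation, though not FOCUS's full target function or its analytic derivatives, can be sketched as a truncated Fourier series per Cartesian coordinate. The coefficient layout below (3 × (M+1) cosine and sine arrays) is an assumption chosen for illustration, not FOCUS's exact data layout.

```python
import numpy as np

def coil_points(fourier_cos, fourier_sin, n_points=128):
    """Evaluate a closed 3-D space curve parametrized by truncated
    Fourier series in each coordinate:
        r(t) = sum_m [ C[:, m] cos(m t) + S[:, m] sin(m t) ],  t in [0, 2pi).
    C and S are 3 x (M+1) coefficient arrays (S[:, 0] is unused).
    Returns an array of shape (3, n_points)."""
    t = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    m = np.arange(fourier_cos.shape[1])
    basis_c = np.cos(np.outer(m, t))   # (M+1, n_points)
    basis_s = np.sin(np.outer(m, t))
    return fourier_cos @ basis_c + fourier_sin @ basis_s

# simplest possible coil: a planar unit circle in the xy-plane
C = np.zeros((3, 2)); S = np.zeros((3, 2))
C[0, 1] = 1.0   # x = cos t
S[1, 1] = 1.0   # y = sin t
pts = coil_points(C, S)
```

    Because the Fourier coefficients, rather than a winding surface, are the optimization variables, the coil is free to move anywhere in space; gradients of a cost with respect to C and S can then be taken analytically, as the abstract describes.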

  16. False Discovery Control in Large-Scale Spatial Multiple Testing

    PubMed Central

    Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin

    2014-01-01

    This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods by analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
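
    The conventional FDR baseline that spatial procedures of this kind are compared against is the Benjamini-Hochberg step-up procedure. A compact sketch of it (the paper's oracle and data-driven procedures are more elaborate and exploit spatial structure):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure at FDR level q.
    Finds the largest rank k with p_(k) <= q * k / m, rejects the k
    smallest p-values, and returns a boolean rejection list in the
    original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject
```

    BH treats every test exchangeably; the article's point is that, for spatial signals, pooling information across neighboring locations yields tighter error control and more power than this location-agnostic baseline.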

  17. Design of transmission-type phase holograms for a compact radar-cross-section measurement range at 650 GHz.

    PubMed

    Noponen, Eero; Tamminen, Aleksi; Vaaja, Matti

    2007-07-10

    A design formalism is presented for transmission-type phase holograms for use in a submillimeter-wave compact radar-cross-section (RCS) measurement range. The design method is based on rigorous electromagnetic grating theory combined with conventional hologram synthesis. Hologram structures consisting of a curved groove pattern on a 320 mm × 280 mm Teflon plate are designed to transform an incoming spherical wave at 650 GHz into an output wave generating a 100 mm diameter planar field region (quiet zone) at a distance of 1 m. The reconstructed quiet-zone field is evaluated by a numerical simulation method. The uniformity of the quiet-zone field is further improved by reoptimizing the goal field. Measurement results are given for a test hologram fabricated on Teflon.

  18. Microbial Ecology: Where are we now?

    PubMed

    Boughner, Lisa A; Singh, Pallavi

    2016-11-01

    Conventional microbiological methods have been largely supplanted by newer molecular techniques because of the ease of use, reproducibility, sensitivity and speed of working with nucleic acids. These tools allow high-throughput analysis of complex and diverse microbial communities, such as those in soil, freshwater, or saltwater, or the microbiota living in collaboration with a host organism (plant, mouse, human, etc.). For instance, these methods have been used robustly to characterize the plant (rhizosphere), animal and human microbiomes, specifically the complex intestinal microbiota. The human body has been referred to as a superorganism, since microbial genes outnumber human genes and are essential to the health of the host. In this review we provide an overview of the next-generation tools currently available to study microbial ecology, along with their limitations and advantages.

  19. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
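    The abstract leaves the regularization details to the paper. As a minimal sketch of how regularization converts an ill-posed estimation problem into a well-posed one, here is a zeroth-order Tikhonov solve on hypothetical, deliberately ill-conditioned toy data (not the paper's reservoir model or its spline parameterization):

    ```python
    import numpy as np

    def tikhonov_solve(G, d, mu):
        """Minimize ||G m - d||^2 + mu ||m||^2 (zeroth-order Tikhonov)."""
        n = G.shape[1]
        # The mu*I term makes the normal-equation matrix well-conditioned.
        return np.linalg.solve(G.T @ G + mu * np.eye(n), G.T @ d)

    # Hypothetical ill-conditioned "history matching": near-collinear sensitivities.
    rng = np.random.default_rng(0)
    G = np.column_stack([np.ones(20), np.linspace(0.0, 1e-6, 20)])
    d = G @ np.array([2.0, 3.0]) + 1e-8 * rng.standard_normal(20)
    m_reg = tikhonov_solve(G, d, mu=1e-3)  # first parameter recovered near 2.0
    ```

    Without the `mu` term, the normal equations here are numerically singular; the regularized solve returns a stable estimate at the cost of a small bias, which is the trade-off the paper's quasi-optimal parameter choice addresses.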

  20. Rapid phenotypic antimicrobial susceptibility testing using nanoliter arrays.

    PubMed

    Avesar, Jonathan; Rosenfeld, Dekel; Truman-Rosentsvit, Marianna; Ben-Arye, Tom; Geffen, Yuval; Bercovici, Moran; Levenberg, Shulamit

    2017-07-18

    Antibiotic resistance is a major global health concern that requires action across all sectors of society. In particular, to allow conservative and effective use of antibiotics, clinical settings require better diagnostic tools that provide rapid determination of antimicrobial susceptibility. We present a method for rapid and scalable antimicrobial susceptibility testing using stationary nanoliter droplet arrays that is capable of delivering results in approximately half the time of conventional methods, allowing its results to be used within the same working day. In addition, we present an algorithm for automated data analysis and a multiplexing system promoting practicality and translatability for clinical settings. We test the efficacy of our approach on numerous clinical isolates and demonstrate a 2-day reduction in diagnostic time when testing bacteria isolated directly from urine samples.

  1. Estimation of Coal Reserves for UCG in the Upper Silesian Coal Basin, Poland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bialecka, Barbara

    One of the prospective methods of coal utilization, especially for coal resources that are not mineable by conventional methods, is underground coal gasification (UCG). This technology allows recovery of coal energy 'in situ' and thus avoids the health and safety risks to people that are inseparable from traditional coal extraction techniques. In Poland, most mining areas are characterized by numerous coal beds where extraction was ceased on account of technical and economic reasons or safety issues. This article presents estimates of Polish hard coal resources, broken down into individual mines, that can constitute the basis of raw materials for the gasification process. Five mines, representing more than 4 thousand tons, appear to be UCG candidates.

  2. Metal-chelate dye-controlled organization of Cd32S14(SPh)40(4-) nanoclusters into three-dimensional molecular and covalent open architecture.

    PubMed

    Zheng, Nanfeng; Lu, Haiwei; Bu, Xianhui; Feng, Pingyun

    2006-04-12

    Chalcogenide II-VI nanoclusters are usually prepared as isolated clusters and have defied numerous efforts to join them into covalent open-framework architectures with conventional templating agents, such as the protonated amines or inorganic cations commonly used to direct the formation of porous frameworks. Herein, we report the first templated synthesis of II-VI covalent superlattices from large II-VI tetrahedral clusters (i.e., [Cd32S14(SPh)38]2-). Our method takes advantage of the low charge density of metal-chelate dyes, which uniquely matches three-dimensional II-VI semiconductor frameworks in charge density, surface hydrophilicity-hydrophobicity, and spatial organization. In addition, the metal-chelate dyes also serve to tune the optical properties of the resulting dye-semiconductor composite materials.

  3. Numerical Model of Multiple Scattering and Emission from Layering Snowpack for Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Liang, Z.

    2002-12-01

    The vector radiative transfer (VRT) equation is an integro-differential equation that describes multiple scattering, absorption and transmission of the four Stokes parameters in random scattering media. From the formal integral solution of the VRT equation, low-order solutions, such as first-order scattering for a layered medium or second-order scattering for a half space, can be obtained. The low-order solutions are usually adequate at low frequency, when high-order scattering is negligible. It is not feasible, however, to continue the iteration to obtain high-order scattering solutions, because too many nested integrations would be involved. In space-borne microwave remote sensing, for example, the DMSP (Defense Meteorological Satellite Program) SSM/I (Special Sensor Microwave/Imager) employs seven channels at 19, 22, 37 and 85 GHz. Multiple scattering from terrain surfaces such as snowpack cannot be neglected at these channels. The discrete-ordinate and eigen-analysis method has been studied to account for multiple scattering and has been applied to remote sensing of atmospheric precipitation, snowpack, etc. Snowpack was modeled as a layer of dense spherical particles, and the VRT for a layer of uniformly dense spherical particles has been studied numerically by the discrete-ordinate method. However, due to surface melting and refrozen crusts, the snowpack stratifies to form inhomogeneous profiles of ice grain size, fractional volume, physical temperature, etc. It thus becomes necessary to study multiple scattering and emission from stratified snowpack of dense ice grains. But the discrete-ordinate and eigen-analysis method cannot simply be applied to a multi-layer model, because numerically solving the resulting set of coupled VRT equations is difficult. Stratifying the inhomogeneous medium into multiple slabs and employing the first-order Mueller matrix of each thin slab, this paper develops an iterative method to derive high-order scattering solutions for the whole scattering medium. High-order scattering and emission from inhomogeneous stratified media of dense spherical particles are obtained numerically. The brightness temperature at low frequency, such as 5.3 GHz without high-order scattering, and at the SSM/I channels with high-order scattering, is obtained. The approach is also compared with the conventional discrete-ordinate method for a uniform-layer model, and numerical simulations for inhomogeneous snowpack are compared with microwave remote sensing measurements.

  4. Stochastic stability of sigma-point Unscented Predictive Filter.

    PubMed

    Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong

    2015-07-01

    In this paper, the Unscented Predictive Filter (UPF) is derived based on the unscented transformation for nonlinear estimation, breaking the confines of conventional sigma-point filters, which have been investigated almost exclusively within the Kalman filter framework. To facilitate the new method, the algorithm flow of the UPF is given first. Theoretical analyses then demonstrate that the estimation accuracy of the model error and the system state is higher for the UPF than for the conventional PF. Moreover, the authors analyze the stochastic boundedness and the error behavior of the UPF for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance stays stable if the system's initial estimation error, the disturbing noise terms, and the model error are small enough, which is the core of the UPF theory. All of the results are demonstrated by numerical simulations of a nonlinear example system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
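    The UPF's predictive-filter machinery is in the paper itself, but the unscented transformation it builds on is standard and can be sketched briefly. Below is a generic sigma-point propagation in NumPy (parameter defaults are a common textbook choice, not necessarily the paper's):

    ```python
    import numpy as np

    def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
        """Generate the 2n+1 sigma points and weights of the unscented transform.

        alpha=1, kappa=0 is one common choice; scaled variants often use
        alpha ~ 1e-3 to keep the points close to the mean."""
        n = mean.size
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)      # matrix square root
        pts = np.vstack([mean, mean + S.T, mean - S.T])
        w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        w_c = w_m.copy()
        w_m[0] = lam / (n + lam)
        w_c[0] = lam / (n + lam) + (1 - alpha**2 + beta)
        return pts, w_m, w_c

    def unscented_transform(f, mean, cov):
        """Propagate (mean, cov) through a nonlinearity f via sigma points."""
        pts, w_m, w_c = sigma_points(mean, cov)
        y = np.array([f(p) for p in pts])
        y_mean = w_m @ y
        diff = y - y_mean
        y_cov = (w_c[:, None] * diff).T @ diff
        return y_mean, y_cov

    m, P = np.array([1.0, 0.0]), np.eye(2) * 0.1
    ym, yP = unscented_transform(lambda x: np.array([x[0] ** 2, x[1]]), m, P)
    # For x0 ~ (mean 1, var 0.1), E[x0^2] = 1.1; the transform recovers this.
    ```

    The UPF replaces the Kalman measurement update with a predictive model-error estimate, but the sigma-point statistics above are the common ingredient.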

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Zhou; H. Huang; M. Deo

    Log and seismic data indicate that most shale formations are strongly heterogeneous. Conventional analytical and semi-analytical fracture models are insufficient to simulate complex fracture propagation in these highly heterogeneous formations. Without considering the intrinsic heterogeneity, the predicted morphology of a hydraulic fracture may be biased and misleading when optimizing the completion strategy. In this paper, a hydraulic fracture simulator that fully couples fluid flow and geomechanics, based on a dual-lattice Discrete Element Method (DEM), is used to predict hydraulic fracture propagation in heterogeneous reservoirs. The heterogeneity of the rock is simulated by assigning different material force constants and critical strains to different particles, and is adjusted by conditioning to the measured data and observed geological features. Based on the proposed model, the effects of heterogeneity at different scales on micromechanical behavior and induced macroscopic fractures are examined. The numerical results show that microcracks are more inclined to form at the weaker grain interfaces. A conventional simulator with a homogeneity assumption is not applicable to highly heterogeneous shale formations.

  6. Design of Current Leads for the MICE Coupling Magnet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Li, L.K.; Wu, Hong

    2008-04-02

    A pair of superconducting coupling magnets will be part of the Muon Ionization Cooling Experiment (MICE). They were designed and will be constructed by the Institute of Cryogenics and Superconductivity Technology, Harbin Institute of Technology, in collaboration with Lawrence Berkeley National Laboratory. The coupling magnet is to be cooled by cryocoolers at 4.2 K. In order to reduce the heat leak from 300 K to the 4.2 K cold mass, a pair of current leads composed of conventional copper leads and high-temperature superconductor (HTS) leads will be used to supply current to the magnet. This paper presents the optimization of the conventional conduction-cooled metal leads for the coupling magnet. Analyses of heat transfer down the leads were carried out using both a theoretical method and numerical simulation. The stray magnetic field around the HTS leads has been calculated, and the effects of the magnetic field on the performance of the HTS leads have also been analyzed.

  7. Computer numerical control grinding of spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Scott, H. Wayne

    1991-01-01

    The development of Computer Numerical Control (CNC) spiral bevel gear grinding has paved the way for major improvement in the production of precision spiral bevel gears. The object of the program was to decrease the setup, maintenance of setup, and pattern development time by 50 percent of the time required on conventional spiral bevel gear grinders. Details of the process are explained.

  8. Implementation of Regional and International HIV and AIDS Prevention, Treatment, Care and Support Conventions and Declarations in Lesotho, Malawi and Mozambique

    ERIC Educational Resources Information Center

    Kalanda, Boniface; Mamimine, Patrick; Taela, Katia; Chingandu, Louis; Musuka, Godfrey

    2010-01-01

    The governments across the world have endorsed numerous international Conventions and Declarations (C&Ds) that enhance interventions to reduce the impact of HIV and AIDS. The objective of this study was to assess the extent to which the governments of Lesotho, Malawi and Mozambique have implemented HIV and AIDS international and regional…

  9. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part I

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji; Sano, Kousuke

    This paper presents a new unified analysis of the estimate errors of model-matching phase-estimation methods, such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers, for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use one of the model-matching phase-estimation methods.

  10. Computational wave dynamics for innovative design of coastal structures

    PubMed Central

    GOTOH, Hitoshi; OKAYASU, Akio

    2017-01-01

    For innovative designs of coastal structures, Numerical Wave Flumes (NWFs), which are solvers of the Navier-Stokes equations for free-surface flows, are key tools. In this article, various methods and techniques for NWFs are reviewed. In the first half, key techniques of NWFs, namely interface capturing (MAC, VOF, C-CUP), and the significance of NWFs in comparison with conventional wave models are described. In the second half, recent improvements of the particle method are presented as one of the cores of NWFs. Methods for attenuating unphysical pressure fluctuation and improving accuracy, such as the CMPS method for momentum conservation, a higher-order source term in the Poisson Pressure Equation (PPE), a higher-order Laplacian, an error-compensating source in the PPE, and gradient correction for ensuring Taylor-series consistency, are reviewed briefly. Finally, the latest frontier of accurate particle methods is described, including Dynamic Stabilization, which provides the minimum required artificial repulsive force to improve computational stability, and Space Potential Particles for describing the exact free-surface boundary condition. PMID:29021506

  11. Development of a New Generation of Stable, Tunable, and Catalytically Active Nanoparticles Produced by the Helium Nanodroplet Deposition Method

    DOE PAGES

    Wu, Qiyuan; Ridge, Claron J.; Zhao, Shen; ...

    2016-07-13

    Nanoparticles (NPs) are revolutionizing many areas of science and technology, often delivering unprecedented improvements to the properties of conventional materials. However, despite important advances in NP synthesis and applications, numerous challenges remain. The development of an alternative synthetic method capable of producing very uniform, extremely clean and very stable NPs is urgently needed. If successful, such a method could potentially transform several areas of nanoscience, including environmental and energy-related catalysis. Here we present the first experimental demonstration of the synthesis of catalytically active NPs by the helium nanodroplet isolation method. This alternative method of NP fabrication and deposition produces narrowly distributed, clean, and remarkably stable NPs. The fabrication is achieved inside ultra-low-temperature, superfluid helium nanodroplets, which can subsequently be deposited onto any substrate. Lastly, this technique is universal enough to be applied to nearly any element, while achieving high deposition rates for single-element as well as composite core-shell NPs.

  12. Round-robin differential-phase-shift quantum key distribution with a passive decoy state method

    PubMed Central

    Liu, Li; Guo, Fen-Zhuo; Qin, Su-Juan; Wen, Qiao-Yan

    2017-01-01

    Recently, a new type of protocol named round-robin differential-phase-shift quantum key distribution (RRDPS QKD) was proposed, in which security can be guaranteed without monitoring conventional signal disturbances. The active decoy state method can be used in this protocol to overcome the imperfections of the source, but it may lead to side channel attacks and break the security of QKD systems. In this paper, we apply the passive decoy state method to the RRDPS QKD protocol. Not only can more environmental disturbance be tolerated, but side channel attacks on the sources can also be overcome. Importantly, we derive a new key generation rate formula for our RRDPS protocol using passive decoy states and enhance the key generation rate. We also compare the performance of our RRDPS QKD with that using the active decoy state method and with the original RRDPS QKD without any decoy states. Numerical simulations show the performance improvement achieved by our new method. PMID:28198808

  13. Electronic mail.

    PubMed Central

    Pallen, M.

    1995-01-01

    Electronic mail (email) has many advantages over other forms of communication: it is easy to use, free of charge, fast, and delivers information in a digital format. As a text only medium, email is usually less formal in style than conventional correspondence and may contain acronyms and other features, such as smileys, that are peculiar to the Internet. Email client programs that run on your own microcomputer render email powerful and easy to use. With suitable encoding methods, email can be used to send any kind of computer file, including pictures, sounds, programs, and movies. Numerous biomedical electronic mailing lists and other Internet services are accessible by email. PMID:8520343

  14. Control of photon storage time using phase locking.

    PubMed

    Ham, Byoung S

    2010-01-18

    A photon-echo storage-time extension protocol is presented using a phase-locking method in a three-level backward-propagation scheme, where phase locking serves as a conditional stopper of the rephasing process in conventional two-pulse photon echoes. The backward-propagation scheme solves the critical problems of extremely low retrieval efficiency and of the spontaneous emission noise caused by the π rephasing pulse in photon-echo-based quantum memories. The physics of the storage-time extension lies in the imminent population transfer from the excited state to an auxiliary spin state by a phase-locking control pulse. We numerically demonstrate that the storage time is lengthened by the spin dephasing time.

  15. Quadratic Finite Element Method for 1D Deterministic Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolar, Jr., D R; Ferguson, J M

    2004-01-06

    In the discrete ordinates, or S_N, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted to improving the spatial discretization of the angular flux, we focus on improving its angular discretization. Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical-geometry S_N equations. We develop an algorithm that shows faster convergence with angular resolution than conventional S_N algorithms.
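    The Petrov-Galerkin angular treatment is specific to the paper, but the underlying S_N idea — replacing the continuous angular variable μ ∈ [-1, 1] with a quadrature set so angular integrals become weighted sums — can be shown in a few lines. A minimal sketch using Gauss-Legendre ordinates (a common S_N choice, not necessarily the paper's):

    ```python
    import numpy as np

    # S_N discrete ordinates: sample the angular variable mu in [-1, 1] at
    # N Gauss-Legendre nodes; integrals over angle become weighted sums.
    N = 8
    mu, w = np.polynomial.legendre.leggauss(N)

    # Sanity checks of the quadrature underlying the S_N equations:
    print(w.sum())            # integral of 1 over [-1, 1], i.e. 2
    print((w * mu**2).sum())  # integral of mu^2 over [-1, 1], i.e. 2/3
    ```

    An N-point Gauss-Legendre set integrates polynomials in μ up to degree 2N−1 exactly, which is why convergence with angular resolution is the natural figure of merit for comparing angular discretizations.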

  16. A novel adaptive finite time controller for bilateral teleoperation system

    NASA Astrophysics Data System (ADS)

    Wang, Ziwei; Chen, Zhang; Liang, Bin; Zhang, Bo

    2018-03-01

    Most bilateral teleoperation research focuses on system stability under time delays. However, practical teleoperation tasks require high performance besides system stability, such as convergence rate and accuracy. This paper investigates bilateral teleoperation controller design with transient performance guarantees. To ensure transient performance and system stability simultaneously, an adaptive non-singular fast terminal sliding mode controller is proposed to achieve practical finite-time stability in the presence of system uncertainties and time delays. In addition, a novel switching scheme is introduced that avoids the singularity problem of the conventional terminal sliding manifold. Finally, numerical simulations demonstrate the effectiveness and validity of the proposed method.

  17. Application of the implicit MacCormack scheme to the PNS equations

    NASA Technical Reports Server (NTRS)

    Lawrence, S. L.; Tannehill, J. C.; Chaussee, D. S.

    1983-01-01

    The two-dimensional parabolized Navier-Stokes equations are solved using MacCormack's (1981) implicit finite-difference scheme. It is shown that this method for solving the parabolized Navier-Stokes equations does not require the inversion of block tridiagonal systems of algebraic equations and allows the original explicit scheme to be employed in those regions where implicit treatment is not needed. The finite-difference algorithm is discussed and the computational results for two laminar test cases are presented. Results obtained using this method for the case of a flat plate boundary layer are compared with those obtained using the conventional Beam-Warming scheme, as well as those obtained from a boundary layer code. The computed results for a more severe test of the method, the hypersonic flow past a 15 deg compression corner, are found to compare favorably with experiment and a numerical solution of the complete Navier-Stokes equations.
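    The implicit PNS variant is developed in the paper; what can be sketched compactly is the classic explicit MacCormack predictor-corrector it builds on, here applied to the linear advection equation u_t + c u_x = 0 with periodic boundaries (an illustrative toy problem, not the PNS system):

    ```python
    import numpy as np

    def maccormack_advect(u, c, dx, dt, n_steps):
        """Explicit MacCormack predictor-corrector for u_t + c u_x = 0,
        periodic boundaries. Stable for Courant number |c*dt/dx| <= 1."""
        nu = c * dt / dx
        for _ in range(n_steps):
            # Predictor: forward difference in x.
            u_pred = u - nu * (np.roll(u, -1) - u)
            # Corrector: backward difference on the predictor, then average.
            u = 0.5 * (u + u_pred - nu * (u_pred - np.roll(u_pred, 1)))
        return u

    # Advect a sine wave once around a unit periodic domain (c*T = 1).
    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    u0 = np.sin(2 * np.pi * x)
    u1 = maccormack_advect(u0.copy(), c=1.0, dx=0.01, dt=0.005, n_steps=200)
    # Second-order accuracy: u1 returns close to the initial profile u0.
    ```

    For linear problems this alternating forward/backward differencing is equivalent to Lax-Wendroff; the implicit extension used for the PNS equations adds an implicit operator on top of exactly this explicit sweep, which is why the original explicit scheme can be retained where implicit treatment is not needed.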

  18. Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.

    PubMed

    Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong

    2017-05-18

    In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model that accounts for both intraclass and interclass metric information. In this model, an adaptive parameter is designed to balance the intraclass and interclass metrics using the data structure. Second, we convert the model into a minimization problem whose variable is a symmetric positive-definite matrix. Third, in the implementation, we derive an intrinsic steepest descent method that exploits the manifold structure of the symmetric positive-definite matrices and ensures that the metric matrix remains strictly symmetric positive-definite at each iteration. Finally, we test the proposed algorithm on conventional data sets and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves classification at the same computational efficiency.
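    The abstract does not give the paper's exact update rule. A common way to keep an iterate strictly symmetric positive-definite during steepest descent is the exponential-map update M ← M^{1/2} exp(−t M^{-1/2} G M^{-1/2}) M^{1/2}, sketched below as an illustration of the idea (not the authors' implementation):

    ```python
    import numpy as np

    def sym_expm(A):
        """Matrix exponential of a symmetric matrix via eigendecomposition."""
        vals, vecs = np.linalg.eigh(A)
        return (vecs * np.exp(vals)) @ vecs.T

    def spd_step(M, grad, step):
        """One intrinsic steepest-descent step on the SPD manifold.

        The update M^{1/2} exp(-t M^{-1/2} G M^{-1/2}) M^{1/2} stays strictly
        symmetric positive-definite for any step size t, unlike the additive
        Euclidean update M - t G."""
        vals, vecs = np.linalg.eigh(M)
        R = (vecs * np.sqrt(vals)) @ vecs.T       # M^{1/2}
        Rinv = (vecs / np.sqrt(vals)) @ vecs.T    # M^{-1/2}
        G = 0.5 * (grad + grad.T)                 # symmetrize the gradient
        return R @ sym_expm(-step * Rinv @ G @ Rinv) @ R

    M = np.eye(2)
    M_next = spd_step(M, np.array([[4.0, 0.0], [0.0, -1.0]]), step=0.1)
    print(np.linalg.eigvalsh(M_next))  # all eigenvalues remain positive
    ```

    With a large enough step, the additive update M − tG would produce a negative eigenvalue here; the exponential map only rescales eigenvalues multiplicatively, so positive-definiteness is preserved by construction.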

  19. The effect of tooling design parameters on web-warping in the flexible roll forming of UHSS

    NASA Astrophysics Data System (ADS)

    Jiao, Jingsi; Rolfe, Bernard; Mendiguren, Joseba; Galdos, Lander; Weiss, Matthias

    2013-12-01

    To reduce weight and improve passenger safety, there is an increasing need in the automotive industry to use Ultra High Strength Steels (UHSS) for structural and crash components. However, the application of UHSS is restricted by their limited formability and the difficulty of forming them in conventional processes. An alternative method of manufacturing structural auto-body parts from UHSS is the flexible roll forming process, which can accommodate materials with high strength and limited ductility in the production of complex, weight-optimised components. One major concern in flexible roll forming, however, is web-warping, the height deviation of the profile web area. This paper uses a numerical model to investigate the effect of various forming methods on web-warping. The results demonstrate that different forming methods lead to different amounts of web-warping when forming a product with identical geometry.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrapak, Sergey A.; Joint Institute for High Temperatures, 125412 Moscow; Chaudhuri, Manis

    We put forward an approximate method to locate the fluid-solid (freezing) phase transition in systems of classical particles interacting via a wide range of Lennard-Jones-type potentials. The method is based on the constancy of the properly normalized second derivative of the interaction potential (the freezing indicator) along the freezing curve. As demonstrated recently, it yields remarkably good agreement with previous numerical simulation studies of the conventional 12-6 Lennard-Jones (LJ) fluid [S. A. Khrapak, M. Chaudhuri, G. E. Morfill, Phys. Rev. B 134, 052101 (2010)]. In this paper, we test this approach on a wide range of LJ-type potentials, including LJ n-6 and exp-6 models, and find that it remains sufficiently accurate and reliable in reproducing the corresponding freezing curves, down to the triple-point temperatures. One possible application of the method, estimation of the freezing conditions in complex (dusty) plasmas with 'tunable' interactions, is briefly discussed.
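    The paper's exact normalization of the freezing indicator is not given in the abstract, but its ingredient — the second derivative of an LJ n-6 potential — is straightforward to evaluate. A small sketch (generic Mie/LJ n-6 form, finite-difference curvature; illustrative only):

    ```python
    def lj_potential(r, n=12, eps=1.0, sigma=1.0):
        """Generalized LJ n-6 (Mie) potential; n=12 gives the conventional 12-6.

        The prefactor is chosen so the well depth is -eps for any n > 6."""
        c = (n / (n - 6)) * (n / 6) ** (6 / (n - 6))
        return c * eps * ((sigma / r) ** n - (sigma / r) ** 6)

    def d2_potential(r, n=12, eps=1.0, sigma=1.0, h=1e-4):
        """Second derivative of the potential by central differences."""
        f = lambda x: lj_potential(x, n, eps, sigma)
        return (f(r - h) - 2 * f(r) + f(r + h)) / h**2

    r_min = 2 ** (1 / 6)          # minimum of the 12-6 potential
    print(lj_potential(r_min))    # well depth, approximately -1.0
    print(d2_potential(r_min))    # curvature at the minimum, positive
    ```

    Evaluating this curvature at a state-dependent distance (e.g. the mean interparticle separation) and normalizing it by the relevant thermal scale is the kind of quantity the freezing indicator tracks along the freezing curve.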

  1. Scrubbers with a level head

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedersen, G.C.; Bhattachararjee, P.K.

    1997-11-01

    The available methods for removing pollutants from a gas stream are numerous, to say the least. Scrubbers, a popular choice, separate gases and solids by bringing the gas into contact with a liquid stream. In the end, the pollutants are washed away in the effluent, and the gas exits the system to be used in later processes or released into the atmosphere. For many years, counter-flow scrubbers have done the lion's share of the work in industries such as phosphate fertilizer and semiconductor chemicals manufacturing. Now these industries are exploring the use of the cross-flow scrubber design, which offers consistently high efficiency and low operating costs. In addition, the unit's horizontal orientation makes maintenance easier than for typical tower scrubbers. For certain classes of unit operations, cross-flow is now being recognized as a strong alternative to conventional counter-flow technology.

  2. High-speed X-ray microscopy by use of high-resolution zone plates and synchrotron radiation.

    PubMed

    Hou, Qiyue; Wang, Zhili; Gao, Kun; Pan, Zhiyun; Wang, Dajiang; Ge, Xin; Zhang, Kai; Hong, Youli; Zhu, Peiping; Wu, Ziyu

    2012-09-01

    X-ray microscopy based on synchrotron radiation has become a fundamental tool in biology and the life sciences to visualize the morphology of a specimen. These studies have particular requirements in terms of radiation damage and image exposure time, which directly determines the total acquisition speed. To monitor and improve these key parameters, we present a novel X-ray microscopy method using a high-resolution zone plate as the objective and a matching condenser. Numerical simulations based on scalar wave-field theory validate the feasibility of the method and indicate that the performance of X-ray microscopy benefits most from sub-10-nm-resolution zone plates. The proposed method is compatible with conventional X-ray microscopy techniques, such as computed tomography, and will find wide application in time-resolved and/or dose-sensitive studies such as living-cell imaging.

  3. Trend and future of diesel engine: Development of high efficiency and low emission low temperature combustion diesel engine

    NASA Astrophysics Data System (ADS)

    Ho, R. J.; Yusoff, M. Z.; Palanisamy, K.

    2013-06-01

    Stringent emission policies have focused automotive research and development on high-efficiency, low-pollutant power trains. The conventional direct-injection diesel engine with a diffused flame has reached its limits, which has driven R&D to explore other combustion regimes. Low-temperature combustion (LTC) and homogeneous charge compression ignition have proven to be effective methods for decreasing pollutant emissions from combustion: the formation of nitrogen oxides (NOx) and particulate matter (PM) can be greatly suppressed. Each method is reviewed to identify the conditions and processes that result in these reductions. The critical parameters that allow such combustion to take place are highlighted and serve to indicate the direction for developing future diesel engine systems. This paper explores the potential of present numerical and experimental methods for optimizing diesel engine design through adoption of the new combustion technologies.

  4. Current conduction in junction gate field effect transistors. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, C.

    1970-01-01

    The internal physical mechanism that governs the current conduction in junction-gate field effect transistors is studied. A numerical method of analyzing the devices with different length-to-width ratios and doping profiles is developed. This method takes into account the two dimensional character of the electric field and the field dependent mobility. Application of the method to various device models shows that the channel width and the carrier concentration in the conductive channel decrease with increasing drain-to-source voltage for conventional devices. It also shows larger differential drain conductances for shorter devices when the drift velocity is not saturated. The interaction of the source and the drain gives the carrier accumulation in the channel which leads to the space-charge-limited current flow. The important parameters for the space-charge-limited current flow are found to be the L/L sub DE ratio and the crossover voltage.

  5. Research progress on the brewing techniques of new-type rice wine.

    PubMed

    Jiao, Aiquan; Xu, Xueming; Jin, Zhengyu

    2017-01-15

    As a traditional alcoholic beverage, Chinese rice wine (CRW), with its high nutritional value and unique flavor, has been popular in China for thousands of years. Although traditional production methods had been used without change for centuries, numerous technological innovations in recent decades have greatly impacted the CRW industry. However, reviews of the technological progress in this field are relatively few. This article provides a brief summary of recent developments in new brewing technologies for making CRW. Based on a comparison between the conventional methods and the innovative technologies of CRW brewing, three principal aspects are summarized: innovations in raw material pretreatment, optimization of fermentation, and reform of sterilization technology. Furthermore, by comparing the advantages and disadvantages of these methods, various issues related to the prospects of the CRW industry are addressed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Simple Method to Generate Terawatt-Attosecond X-Ray Free-Electron-Laser Pulses.

    PubMed

    Prat, Eduard; Reiche, Sven

    2015-06-19

    X-ray free-electron lasers (XFELs) are cutting-edge research tools that produce almost fully coherent radiation with high power and short pulse length, with applications in multiple fields of science. There is a strong demand for even shorter pulses and higher radiation powers than those obtained at state-of-the-art XFEL facilities. In this context we propose a novel method to generate terawatt-attosecond XFEL pulses, in which an XFEL pulse is pushed through several short good-beam regions of the electron bunch. In addition to the elements of conventional XFEL facilities, the method requires only a multiple-slotted foil and small electron delays between undulator sections. Our scheme is thus simple, compact, and easy to implement in already operating as well as future XFEL projects. We present numerical simulations that confirm the feasibility and validity of our proposal.

  7. Multi-Mounted X-Ray Computed Tomography.

    PubMed

    Fu, Jian; Liu, Zhenzhong; Wang, Jingzheng

    2016-01-01

    Most existing X-ray computed tomography (CT) techniques work in single-mounted mode and must scan the inspected objects one by one. This is time-consuming and unacceptable for large-scale inspection. In this paper, we report a multi-mounted CT method and its first engineering implementation. It consists of a multi-mounted scanning geometry and a corresponding algebraic iterative reconstruction algorithm. This approach permits rotational CT scanning of multiple objects simultaneously without increasing the penetration thickness or introducing signal crosstalk. Compared with conventional single-mounted methods, it has the potential to improve imaging efficiency and suppress artifacts from beam hardening and scatter. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a multi-mounted X-ray CT prototype system. We believe that this technique is of particular interest for advancing the engineering applications of X-ray CT.
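    The algebraic iterative reconstruction the abstract refers to is not specified in detail; a minimal sketch of a classical Kaczmarz (ART) iteration, the standard building block of such algorithms, might look as follows. The projection matrix here is a toy stand-in, not the multi-mounted geometry:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=2000, relax=1.0):
    """Basic Kaczmarz (ART) iteration: sweep over the projection rows,
    correcting x so that each measurement equation is satisfied in turn."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny consistent system standing in for a CT projection problem
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true                      # simulated projection data
x_rec = kaczmarz(A, b)
```

For a consistent system the iteration converges to the exact solution; in real CT the matrix rows are ray-path weights and the sweeps are stopped early for regularization.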

  8. Von Neumann stability analysis of globally divergence-free RKDG schemes for the induction equation using multidimensional Riemann solvers

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Käppeli, Roger

    2017-05-01

    In this paper we focus on the numerical solution of the induction equation using Runge-Kutta discontinuous Galerkin (RKDG)-like schemes that are globally divergence-free. The induction equation plays a role in numerical MHD and other similar systems. It ensures that the magnetic field evolves in a divergence-free fashion, and that same property is shared by the numerical schemes presented here. The algorithms presented here are based on a novel DG-like method as it applies to the magnetic field components in the faces of a mesh. (That is, this is not a conventional DG algorithm for conservation laws.) The other two novel building blocks of the method are divergence-free reconstruction of the magnetic field and multidimensional Riemann solvers, both of which have been developed in recent years by the first author. Since the method is linear, a von Neumann stability analysis is carried out in two dimensions to understand its stability properties. The von Neumann stability analysis that we develop in this paper relies on transcribing from a modal to a nodal DG formulation in order to develop discrete evolutionary equations for the nodal values. These are then coupled to a suitable Runge-Kutta timestepping strategy so that one can analyze the stability of the entire scheme, which is suitably high order in space and time. We show that our scheme permits CFL numbers comparable to those of traditional RKDG schemes. We also analyze the wave propagation characteristics of the method and show that with increasing order of accuracy the wave propagation becomes more isotropic and free of dissipation for a larger range of long-wavelength modes. This makes a strong case for investing in higher-order methods. We also use the von Neumann stability analysis to show that the divergence-free reconstruction and multidimensional Riemann solvers are essential algorithmic ingredients of a globally divergence-free RKDG-like scheme.
Numerical accuracy analyses of the RKDG-like schemes are presented and compared with the accuracy of PNPM schemes. It is found that the PNPM schemes retrieve much of the accuracy of the RKDG-like schemes while permitting a larger CFL number.
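    As an illustration of the von Neumann machinery itself (not of the paper's RKDG scheme), one can compute the amplification factor of a simple first-order upwind advection update and check its CFL bound numerically. The scheme and its factor G are standard textbook material:

```python
import numpy as np

def upwind_amplification(c, theta):
    """Von Neumann amplification factor G(theta) of first-order upwind
    advection, u_j^{n+1} = u_j^n - c*(u_j^n - u_{j-1}^n), obtained by
    substituting the Fourier mode exp(i*j*theta); c is the CFL number."""
    return 1.0 - c * (1.0 - np.exp(-1j * theta))

theta = np.linspace(0.0, 2.0 * np.pi, 721)
g_stable = np.max(np.abs(upwind_amplification(0.9, theta)))    # CFL < 1
g_unstable = np.max(np.abs(upwind_amplification(1.1, theta)))  # CFL > 1
```

Stability requires max |G| <= 1 over all wavenumbers; here that holds for c = 0.9 and fails for c = 1.1, reproducing the familiar CFL <= 1 condition.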

  9. Finite element analysis of different loading conditions for implant-supported overdentures supported by conventional or mini implants.

    PubMed

    Solberg, K; Heinemann, F; Pellikaan, P; Keilig, L; Stark, H; Bourauel, C; Hasan, I

    2017-05-01

    The effect of the number of implants on overdenture stability and on the stress distribution in the edentulous mandible, the implants and the overdenture was numerically investigated for implant-supported overdentures. Three models were constructed. The overdentures were connected to the implants by means of ball-head abutments and rubber rings. In model 1, the overdenture was retained by two conventional implants; in model 2, by four conventional implants; and in model 3, by five mini implants. The overdenture was subjected to a symmetrical load at an angle of 20 degrees to the overdenture at the canine regions and vertically at the first molars. Four loading conditions with two total forces (120 N, 300 N) were considered for the numerical analysis. The overdenture displacement was about 2.2 times higher when five mini implants were used rather than four conventional implants. The lowest stress in the bone bed was observed with four conventional implants. Stresses in bone were reduced by 61% in model 2 and by 6% in model 3 in comparison to model 1. The highest stress was observed with five mini implants. Stresses in implants were reduced by 76% in model 2 and increased by 89% in model 3 compared to model 1. The highest implant displacement was observed with five mini implants. Implant displacements were reduced by 29% in model 2 and increased by 273% in model 3 compared to model 1. Conventional implants provided better overdenture stability than mini implants. Regardless of the type and number of implants, the stresses within the bone and implants remained below the critical limits.

  10. Dissipation-preserving spectral element method for damped seismic wave equations

    NASA Astrophysics Data System (ADS)

    Cai, Wenjun; Zhang, Huai; Wang, Yushun

    2017-12-01

    This article describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variant of conventional symplectic methods for non-conservative time-evolution problems, with superior long-time stability and dissipation preservation. To reveal the intrinsic dissipative properties of the model equations, we first reformulate the original systems in their equivalent conformal multi-symplectic structures and derive the corresponding conformal symplectic conservation laws. We then separate each system into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on this splitting methodology, we solve the two subsystems separately. The dissipative one is solved cheaply by its analytic solution. For the conservative system, we combine a fourth-order symplectic Nyström method in time with the spectral element method in space to cover realistic geological structures involving complex free-surface topography. The Strang composition method is then adopted to concatenate the two partial solutions and form the complete conformal symplectic method. A relatively larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments, in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of the proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability to model Rayleigh waves in elastic wave propagation.
More comprehensive numerical experiments are presented to investigate the long-time simulation, low-dispersion and energy-conservation properties of the conformal symplectic method in both attenuating homogeneous and heterogeneous media.
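    The split-then-compose idea can be illustrated on a damped harmonic oscillator: the dissipative part is solved exactly, the conservative part by a symplectic leapfrog step, and the two are composed in Strang fashion. This is a generic sketch of the splitting strategy, not the paper's spectral-element discretization:

```python
import numpy as np

def strang_step(q, p, dt, omega=1.0, gamma=0.1):
    """One Strang-split step for the damped oscillator
    q' = p, p' = -omega^2 q - 2 gamma p:
    half an exact dissipative solve, one full symplectic (leapfrog)
    step of the conservative part, then another half dissipative solve."""
    p *= np.exp(-gamma * dt)            # exact dissipative half-step
    p -= 0.5 * dt * omega**2 * q        # leapfrog kick
    q += dt * p                         # leapfrog drift
    p -= 0.5 * dt * omega**2 * q        # leapfrog kick
    p *= np.exp(-gamma * dt)            # exact dissipative half-step
    return q, p

q, p, dt = 1.0, 0.0, 0.01
for _ in range(10000):                  # integrate to t = 100
    q, p = strang_step(q, p, dt)
energy = 0.5 * p**2 + 0.5 * q**2        # decays roughly like exp(-2*gamma*t)
```

Because the dissipative factor is applied exactly, the scheme preserves the analytic decay rate over long times, which is the behavior the conformal symplectic method is designed to retain.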

  11. Numerical Analysis of Ginzburg-Landau Models for Superconductivity.

    NASA Astrophysics Data System (ADS)

    Coskun, Erhan

    Thin-film conventional as well as high-T_c superconductors of various geometric shapes, placed under both uniform and variable-strength magnetic fields, are studied using the universally accepted macroscopic Ginzburg-Landau model. A series of new theoretical results concerning the properties of the solution is presented using the semi-discrete time-dependent Ginzburg-Landau equations, a staggered-grid setup and natural boundary conditions. Efficient serial algorithms, including a novel adaptive algorithm, are developed and successfully implemented for solving the governing highly nonlinear parabolic system of equations. The refinement technique used in the adaptive algorithm is based on a modified forward Euler method, which we also developed to ease the stability restriction on the time step size. Stability and convergence properties of the forward and modified forward Euler schemes are studied. Numerical simulations of various recent physical experiments of technological importance, such as vortex motion and pinning, are performed. The numerical code for solving the time-dependent Ginzburg-Landau equations is parallelized using BlockComm-Chameleon and PCN. The parallel code was run on the distributed-memory multiprocessors Intel iPSC/860 and IBM SP1 and on a cluster of Sun SPARC workstations, all located at the Mathematics and Computer Science Division, Argonne National Laboratory.

  12. Numerical algorithms for cold-relativistic plasma models in the presence of discontinuities

    NASA Astrophysics Data System (ADS)

    Hakim, Ammar; Cary, John; Bruhwiler, David; Geddes, Cameron; Leemans, Wim; Esarey, Eric

    2006-10-01

    A numerical algorithm is presented to solve the cold-relativistic electron fluid equations in the presence of sharp gradients and discontinuities. The intended application is laser wake-field accelerator simulations, in which the laser induces accelerating fields thousands of times stronger than those achievable in conventional RF accelerators. The relativistic cold-fluid equations are formulated as a non-classical system of hyperbolic balance laws. It is shown that the flux Jacobian for this system cannot be diagonalized, which causes numerical difficulties when developing shock-capturing algorithms. Further, the system is shown to admit generalized delta-shock solutions, first discovered in the context of sticky-particle dynamics (Bouchut, Ser. Adv. Math. Appl. Sci., 22 (1994) pp. 171-190). A new approach, based on the relaxation schemes proposed by Jin and Xin (Comm. Pure Appl. Math. 48 (1995) pp. 235-276) and LeVeque and Pelanti (J. Comput. Phys. 172 (2001) pp. 572-591), is developed to solve this system of equations. The method consists of finding an exact solution to a Riemann problem at each cell interface and coupling these to advance the solution in time. Applications to an intense laser propagating in an under-dense plasma are presented.

  13. Modeling of a single-film bubble and numerical study of the Plateau structure in a foam system

    NASA Astrophysics Data System (ADS)

    Sun, Zhong-guo; Ni, Ni; Sun, Yi-jie; Xi, Guang

    2018-02-01

    The single-film bubble has a special geometry, with a certain amount of gas enclosed by a thin liquid film that is subject to surface tension on both its inner and outer surfaces. Based on the meshless moving particle semi-implicit (MPS) method, a single-film double-gas-liquid-interface surface tension (SDST) model is established for the single-film bubble, which has two gas-liquid interfaces, one on each side of the film. Within this framework, the conventional surface-free-energy surface tension model is improved by using a higher-order potential energy equation between particles, and the modification yields higher accuracy and better symmetry properties. The complex interface movement in the oscillation process of the single-film bubble is numerically captured, as are typical flow phenomena and deformation characteristics of the liquid film. In addition, the basic behaviors of the coalescence and connection process between two and even three single-film bubbles are studied, including cases with bubbles of different sizes. Furthermore, the classic Plateau structure in the foam system is reproduced and shown numerically to be in a steady state for multi-bubble connections.

  14. Target recognition and phase acquisition by using incoherent digital holographic imaging

    NASA Astrophysics Data System (ADS)

    Lee, Munseob; Lee, Byung-Tak

    2017-05-01

    In this study, we propose incoherent digital holographic imaging (IDHI) for the recognition and phase acquisition of a dedicated target. Although a number of target recognition techniques such as LIDAR have been developed recently, they have had limited success in target discrimination, in part due to low resolution, low scanning speed, and limited computation power. The proposed system consists of an incoherent light source such as an LED, a Michelson interferometer, and a digital CCD for the acquisition of four phase-shifted images. First, to compare relative coherence, we used a laser and an LED as sources, respectively. Through numerical reconstruction using the four-step phase-shifting method and the Fresnel diffraction method, we recovered the intensity and phase images of a USAF resolution target at a distance of about 1.0 m. In this experiment, we show a 1.2-fold improvement in resolution compared to conventional imaging. Finally, to confirm recognition of targets camouflaged in the same color as the background, we tested holographic imaging under incoherent light. These results show the possibility of target detection and recognition using three-dimensional shape and size signatures, with numerical distance obtained from the phase information of the acquired hologram.
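    The four-step phase-shifting reconstruction referred to above follows a standard formula: with intensities recorded at phase shifts of 0, pi/2, pi and 3*pi/2, the unknown phase is recovered with an arctangent. A minimal synthetic sketch (not the authors' optical setup):

```python
import numpy as np

# Four-step phase shifting: I_k = A + B*cos(phase + shift_k); the
# differences I4 - I2 and I1 - I3 isolate sin and cos of the phase.
rng = np.random.default_rng(0)
phase_true = rng.uniform(-np.pi, np.pi, size=(8, 8))  # per-pixel phase
A, B = 1.0, 0.5                                       # bias and modulation
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [A + B * np.cos(phase_true + s) for s in shifts]

# I[3] - I[1] = 2B*sin(phase), I[0] - I[2] = 2B*cos(phase)
phase_rec = np.arctan2(I[3] - I[1], I[0] - I[2])
```

The bias A and modulation B cancel out of the ratio, which is why the method is robust to uneven illumination.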

  15. Intensity correction for multichannel hyperpolarized 13C imaging of the heart.

    PubMed

    Dominguez-Viqueira, William; Geraghty, Benjamin J; Lau, Justin Y C; Robb, Fraser J; Chen, Albert P; Cunningham, Charles H

    2016-02-01

    To develop and test an analytic method for correcting the signal-intensity variation caused by the inhomogeneous reception profile of an eight-channel phased array for hyperpolarized 13C imaging. Fiducial markers visible in anatomical images were attached to the individual coils to provide three-dimensional localization of the receive hardware with respect to the image frame of reference. The coil locations and dimensions were used to numerically model the reception profile using the Biot-Savart law. The accuracy of the coil sensitivity estimation was validated with images derived from a homogeneous 13C phantom. Numerical coil sensitivity estimates were used to perform intensity correction of in vivo hyperpolarized 13C cardiac images in pigs. In comparison to the conventional sum-of-squares reconstruction, improved signal uniformity was observed in the corrected images. The analytical intensity correction scheme was shown to improve the uniformity of multichannel image reconstruction in hyperpolarized [1-13C]pyruvate and 13C-bicarbonate cardiac MRI. The method is independent of the pulse sequence used for 13C data acquisition, is simple to implement and does not require additional scan time, making it an attractive technique for multichannel hyperpolarized 13C MRI. © 2015 Wiley Periodicals, Inc.
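    The Biot-Savart modeling step can be sketched generically for a single circular coil; the geometry below is illustrative, not the paper's eight-channel array, and the numeric integration is checked against the analytic on-axis formula:

```python
import numpy as np

def loop_field_z(radius, z, n_seg=2000, current=1.0):
    """On-axis B_z of a circular current loop via numerical Biot-Savart:
    sum mu0*I/(4*pi) * dl x r / |r|^3 over discretized wire segments."""
    mu0 = 4e-7 * np.pi
    phi = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
    dphi = 2.0 * np.pi / n_seg
    # wire points and tangent elements dl
    pts = np.stack([radius * np.cos(phi), radius * np.sin(phi),
                    np.zeros_like(phi)], axis=1)
    dl = np.stack([-radius * np.sin(phi), radius * np.cos(phi),
                   np.zeros_like(phi)], axis=1) * dphi
    r = np.array([0.0, 0.0, z]) - pts
    r_norm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = mu0 * current / (4.0 * np.pi) * np.cross(dl, r) / r_norm**3
    return dB.sum(axis=0)[2]

# Analytic on-axis field: mu0*I*R^2 / (2*(R^2 + z^2)^(3/2))
R, z = 0.05, 0.03
analytic = 4e-7 * np.pi * R**2 / (2.0 * (R**2 + z**2) ** 1.5)
numeric = loop_field_z(R, z)
```

In the paper's scheme such per-coil field maps, evaluated over the image volume, form the sensitivity estimates used to divide out the reception profile.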

  16. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    PubMed

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    Estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as the speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the performance of ultrasound attenuation estimation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals: phase compensation of each RF segment using the normalized cross-correlation, to minimize estimation errors due to phase variations, and a weighted averaging technique, to maximize the signal-to-noise ratio (SNR). Simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients to within 1.57% of the actual values, while the conventional methods estimate them to within 2.96%. The proposed method is especially effective for signals reflected from deeper regions, where the SNR is lower, or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimates (within 3.04% of the actual value) with smaller estimation variances than the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
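    The two proposed ingredients, cross-correlation alignment and SNR-weighted averaging, can be sketched on synthetic data. This is a simplified sketch: the clean pulse serves as the alignment reference, and the weights are plain segment energies, both assumptions for illustration rather than the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_seg = 256, 32
t = np.arange(n)
# Reference RF pulse: a cosine-modulated Gaussian
pulse = np.exp(-0.5 * ((t - 64) / 8.0) ** 2) * np.cos(0.6 * t)

# Segments: randomly delayed, noisy copies of the pulse
segments = []
for _ in range(n_seg):
    shift = int(rng.integers(-10, 11))
    segments.append(np.roll(pulse, shift) + 0.2 * rng.normal(size=n))

def phase_compensated_average(segments, ref):
    """Align each segment to ref via its cross-correlation peak, then
    average with energy weights (a simple SNR proxy) before the FFT."""
    aligned, weights = [], []
    for seg in segments:
        xc = np.correlate(seg, ref, mode='full')
        lag = int(np.argmax(xc)) - (len(ref) - 1)   # delay of seg vs ref
        a = np.roll(seg, -lag)                      # compensate the delay
        aligned.append(a)
        weights.append(np.sum(a ** 2))
    w = np.array(weights) / np.sum(weights)
    return np.tensordot(w, np.array(aligned), axes=1)

avg = phase_compensated_average(segments, pulse)
power = np.abs(np.fft.rfft(avg)) ** 2               # block power spectrum
err_avg = np.mean((avg - pulse) ** 2)
err_single = np.mean((segments[0] - pulse) ** 2)
```

Aligning before averaging makes the pulse contributions add coherently while the noise averages down, which is the mechanism behind the reported SNR gain.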

  17. Optimization of GPS water vapor tomography technique with radiosonde and COSMIC historical data

    NASA Astrophysics Data System (ADS)

    Ye, Shirong; Xia, Pengfei; Cai, Changsheng

    2016-09-01

    Near-real-time, high-spatial-resolution knowledge of the atmospheric water vapor distribution is vital in numerical weather prediction. The GPS tomography technique has proven effective for three-dimensional water vapor reconstruction. In this study, the tomography processing is optimized in several respects with the aid of radiosonde and COSMIC historical data. Firstly, regional tropospheric zenith hydrostatic delay (ZHD) models are improved, so that the zenith wet delay (ZWD) can be obtained with higher accuracy. Secondly, the regional conversion factor for converting the ZWD to precipitable water vapor (PWV) is refined. Next, we develop a new method for dividing the tomography grid, with uneven voxel heights and a varied water vapor layer top. Finally, we propose a Gaussian exponential vertical interpolation method that better reflects the vertical variation characteristics of water vapor. GPS datasets collected in Hong Kong in February 2014 are employed to evaluate the optimized tomographic method against the conventional method. Radiosonde-derived and COSMIC-derived water vapor densities are used as references to evaluate the tomographic results. With radiosonde products as references, the results indicate that our optimized method improves the water vapor density accuracy by 15 and 12 % compared to the conventional method below and above the height of 3.75 km, respectively. With COSMIC products as references, the accuracy is improved by 15 and 19 % below and above 3.75 km, respectively.

  18. Linear and nonlinear dynamic analysis of redundant load path bearingless rotor systems

    NASA Technical Reports Server (NTRS)

    Murthy, V. R.; Shultz, Louis A.

    1994-01-01

    The goal of this research is to develop the transfer matrix method to treat nonlinear autonomous boundary value problems with multiple branches. The application is the complete nonlinear aeroelastic analysis of multiple-branched rotor blades. Once the development is complete, it can be incorporated into existing transfer matrix analyses. There are several difficulties to be overcome in reaching this objective. The conventional transfer matrix method is limited in that it is applicable only to linear, chain-like structures without branching, whereas multiple-branch modeling is important for bearingless rotors. Also, hingeless and bearingless rotor blade dynamic characteristics (particularly their aeroelasticity problems) are inherently nonlinear. The nonlinear equations of motion and the multiple-branched boundary value problem are treated together using a direct transfer matrix method. First, the formulation is applied to a nonlinear single-branch blade to validate the nonlinear portion of the formulation. The nonlinear system of equations is solved iteratively using a form of Newton-Raphson iteration developed for differential equations of continuous systems. The formulation is then applied to determine the nonlinear steady-state trim and aeroelastic stability of a rotor blade in hover with two branches at the root. A comprehensive computer program is developed and used to obtain numerical results for the (1) free vibration, (2) nonlinearly deformed steady state, (3) free vibration about the nonlinearly deformed steady state, and (4) aeroelastic stability tasks. The numerical results obtained by the present method agree with results from other methods.
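    The Newton-Raphson iteration used to solve the nonlinear system can be sketched generically; the toy two-equation system below stands in for the blade equations, which are of course far larger:

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration for a nonlinear system f(x) = 0:
    repeatedly solve J(x) dx = -f(x) and update x until dx is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                          [v[1], v[0]]])
root = newton_raphson(f, jac, [2.0, 0.3])
```

Each iteration requires the Jacobian of the residual, which in a transfer matrix setting is assembled from the linearized branch equations about the current state.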

  19. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    NASA Astrophysics Data System (ADS)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories, absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements is established routine at magnetic observatories. The traditional scheme uses a fixed set of eight orientations (Jankowski et al., 1996).

    We present a numerical method allowing for the evaluation of an arbitrary number of telescope orientations (a minimum of five, as there are five independent parameters). Our method provides D, I and Z base values together with calculated error bars.

    A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method.

    Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements.

    Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular.

    The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).
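    Evaluating an arbitrary, overdetermined set of orientations rests on least-squares machinery of the following generic kind. This is a sketch with a synthetic linear model of five unknowns, not the actual DI-flux observation equations:

```python
import numpy as np

def fit_with_errorbars(A, y):
    """Least-squares fit of an overdetermined linear model y = A x,
    returning parameter estimates and 1-sigma error bars derived from
    the residual variance (n observations, p < n unknowns)."""
    x, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    dof = A.shape[0] - A.shape[1]
    sigma2 = np.sum((y - A @ x) ** 2) / dof        # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)          # parameter covariance
    return x, np.sqrt(np.diag(cov))

rng = np.random.default_rng(2)
n_obs, n_par = 12, 5           # e.g. 12 telescope orientations, 5 unknowns
A = rng.normal(size=(n_obs, n_par))
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = A @ x_true + 0.01 * rng.normal(size=n_obs)     # noisy readings
x_fit, err = fit_with_errorbars(A, y)
```

With more observations than unknowns, individual outlier readings can be flagged by their residuals and dropped without invalidating the fit, which is the practical advantage the abstract highlights.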

  20. Reinforcing the role of the conventional C-arm--a novel method for simplified distal interlocking.

    PubMed

    Windolf, Markus; Schroeder, Josh; Fliri, Ladina; Dicht, Benno; Liebergall, Meir; Richards, R Geoff

    2012-01-25

    The common practice for insertion of the distal locking screws of intramedullary nails is a freehand technique under fluoroscopic control. The process is technically demanding, time-consuming and associated with considerable radiation exposure of the patient and the surgical personnel. A new concept is introduced that utilizes information from within conventional radiographic images to accurately guide the surgeon in placing the interlocking bolt into the interlocking hole. The newly developed technique was compared to the conventional freehand technique in an operating room (OR)-like setting on human cadaveric lower legs, in terms of operating time and radiation exposure. The proposed concept (guided freehand), generally based on the freehand gold standard, additionally guides the surgeon by means of visible landmarks projected into the C-arm image. A computer program plans the correct drilling trajectory by processing the lens-shaped projections of the interlocking holes from a single image. Holes can be drilled by visually aligning the drill to the planned trajectory. Besides a conventional C-arm, no additional tracking or navigation equipment is required. Ten fresh-frozen human below-knee specimens were instrumented with an Expert Tibial Nail (Synthes GmbH, Switzerland). The implants were distally locked by performing both the newly proposed technique and the conventional freehand technique on each specimen. An orthopedic resident surgeon inserted four distal screws per procedure. Operating time, number of images and radiation time were recorded and statistically compared between interlocking techniques using non-parametric tests. A 58% reduction in the number of images taken per screw was found for the guided freehand technique (7.4 ± 3.4, mean ± SD) compared to the freehand technique (17.6 ± 10.3) (p < 0.001). Total radiation time (all four screws) was 55% lower for the guided freehand technique than for conventional freehand (p = 0.001).
Operating time per screw (from first shot to screw tightened) was reduced on average by 22% with guided freehand (p = 0.018). In an experimental setting, the newly developed guided freehand technique for distal interlocking markedly reduced radiation exposure compared to the conventional freehand technique. The method utilizes established clinical workflows and does not require cost-intensive add-on devices or extensive training. The underlying principle carries the potential to assist implant positioning in numerous other applications within orthopedics and trauma, from screw insertion to the placement of plates, nails or prostheses.
