Accurate Projection Methods for the Incompressible Navier–Stokes Equations
Brown, David L.; Cortez, Ricardo; Minion, Michael L.
2001-04-10
This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.
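The record above concerns the pressure-update step of second-order projection methods. As a purely illustrative sketch (not the authors' algorithm), the Python fragment below shows the skeleton of one incremental pressure-correction step on a periodic, collocated grid; all names, the explicit time discretization, and the FFT Poisson solve are assumptions, and the comment in step 3 marks the pressure-update line whose form (standard versus rotational) is what such analyses focus on.

```python
import numpy as np

def ddx(f, h):  # centered difference, periodic in axis 0
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * h)

def ddy(f, h):  # centered difference, periodic in axis 1
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * h)

def laplacian(f, h):  # standard 5-point Laplacian, periodic
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h**2

def solve_poisson(rhs, h):
    """Invert the 5-point Laplacian on a periodic grid via FFT (mean of phi set to zero)."""
    n, m = rhs.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
    ky = 2.0 * np.pi * np.fft.fftfreq(m, d=h)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    symbol = (2.0 * np.cos(KX * h) + 2.0 * np.cos(KY * h) - 4.0) / h**2
    symbol[0, 0] = 1.0                      # zero mode: fix the free constant
    phi_hat = np.fft.fft2(rhs) / symbol
    phi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(phi_hat))

def projection_step(u, v, p, dt, nu, h):
    """One incremental pressure-correction step on a periodic, collocated grid.
    This is an 'approximate projection'; exact discrete projections use staggered
    (MAC) grids and proper wall boundary conditions, as in the cited works."""
    # 1) tentative velocity: explicit advection + diffusion + old pressure gradient
    u_star = u + dt * (-(u * ddx(u, h) + v * ddy(u, h)) + nu * laplacian(u, h) - ddx(p, h))
    v_star = v + dt * (-(u * ddx(v, h) + v * ddy(v, h)) + nu * laplacian(v, h) - ddy(p, h))
    # 2) pressure increment from the divergence of the tentative velocity
    div_star = ddx(u_star, h) + ddy(v_star, h)
    phi = solve_poisson(div_star / dt, h)
    # 3) project and update the pressure; the rotational incremental variant would
    #    use p + phi - nu*div_star here, which is what restores full second order.
    u_new = u_star - dt * ddx(phi, h)
    v_new = v_star - dt * ddy(phi, h)
    p_new = p + phi
    return u_new, v_new, p_new
```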
ACCESS 3. Approximation concepts code for efficient structural synthesis: User's guide
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
A user's guide is presented for ACCESS-3, a research oriented program which combines dual methods and a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and dual algorithms of mathematical programming are applied in the design optimization procedure. This program retains all of the ACCESS-2 capabilities and the data preparation formats are fully compatible. Four distinct optimizer options were added: interior point penalty function method (NEWSUMT); second order primal projection method (PRIMAL2); second order Newton-type dual method (DUAL2); and first order gradient projection-type dual method (DUAL1). A pure discrete and mixed continuous-discrete design variable capability, and zero order approximation of the stress constraints are also included.
NASA Astrophysics Data System (ADS)
Piatkowski, Marian; Müthing, Steffen; Bastian, Peter
2018-03-01
In this paper we consider discontinuous Galerkin (DG) methods for the incompressible Navier-Stokes equations in the framework of projection methods. In particular we employ symmetric interior penalty DG methods within the second-order rotational incremental pressure correction scheme. The major focus of the paper is threefold: i) We propose a modified upwind scheme based on the Vijayasundaram numerical flux that has favourable properties in the context of DG. ii) We present a novel postprocessing technique in the Helmholtz projection step based on H (div) reconstruction of the pressure correction that is computed locally, is a projection in the discrete setting and ensures that the projected velocity satisfies the discrete continuity equation exactly. As a consequence it also provides local mass conservation of the projected velocity. iii) Numerical results demonstrate the properties of the scheme for different polynomial degrees applied to two-dimensional problems with known solution as well as large-scale three-dimensional problems. In particular we address second-order convergence in time of the splitting scheme as well as its long-time stability.
Fully decoupled monolithic projection method for natural convection problems
NASA Astrophysics Data System (ADS)
Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il
2017-04-01
To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term with incurring second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally employed to a global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time step limitation, reduces the computational cost because only one Poisson equation is required to be solved, and preserves the second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts a three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.
NASA Technical Reports Server (NTRS)
Chen, Zhangxin; Ewing, Richard E.
1996-01-01
Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.
Second derivatives for approximate spin projection methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
Higher order explicit symmetric integrators for inseparable forms of coordinates and momenta
NASA Astrophysics Data System (ADS)
Liu, Lei; Wu, Xin; Huang, Guoqing; Liu, Fuyao
2016-06-01
Pihajoki proposed the extended phase-space second-order explicit symmetric leapfrog methods for inseparable Hamiltonian systems. On the basis of this work, we survey a critical problem on how to mix the variables in the extended phase space. Numerical tests show that sequent permutations of coordinates and momenta can make the leapfrog-like methods yield the most accurate results and the optimal long-term stabilized error behaviour. We also present a novel method to construct many fourth-order extended phase-space explicit symmetric integration schemes. Each scheme represents the symmetric product of six usual second-order leapfrogs without any permutations. This construction consists of four segments: the permuted coordinates, triple product of the usual second-order leapfrog without permutations, the permuted momenta and the triple product of the usual second-order leapfrog without permutations. Similarly, extended phase-space sixth, eighth and other higher order explicit symmetric algorithms are available. We used several inseparable Hamiltonian examples, such as the post-Newtonian approach of non-spinning compact binaries, to show that one of the proposed fourth-order methods is more efficient than the existing methods; examples include the fourth-order explicit symplectic integrators of Chin and the fourth-order explicit and implicit mixed symplectic integrators of Zhong et al. Given a moderate choice for the related mixing and projection maps, the extended phase-space explicit symplectic-like methods are well suited for various inseparable Hamiltonian problems. Samples of these problems involve the algorithmic regularization of gravitational systems with velocity-dependent perturbations in the Solar system and post-Newtonian Hamiltonian formulations of spinning compact objects.
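As a hedged sketch of the extended phase-space idea referred to in this record (assuming user-supplied partial derivatives of the Hamiltonian, and using a simple averaging mix purely for illustration, whereas the paper favours permutations of coordinates and momenta):

```python
import numpy as np

def extended_leapfrog_step(q, p, Q, P, h, dHdq, dHdp, mix=True):
    """One second-order step in Pihajoki-style extended phase space.

    The original Hamiltonian H(q, p) may be inseparable; in the extended space
    the sum H_A(q, P) + H_B(Q, p) is split so each sub-flow is exactly solvable.
    dHdq, dHdp are callables returning the partial derivatives of H.
    The 'mix' step below uses plain averaging purely for illustration; the record
    above reports that permutations of coordinates/momenta behave better.
    """
    def phi_A(q, p, Q, P, t):      # under H_A(q, P): q, P frozen; p, Q drift
        return q, p - t * dHdq(q, P), Q + t * dHdp(q, P), P

    def phi_B(q, p, Q, P, t):      # under H_B(Q, p): Q, p frozen; q, P drift
        return q + t * dHdp(Q, p), p, Q, P - t * dHdq(Q, p)

    q, p, Q, P = phi_A(q, p, Q, P, 0.5 * h)
    q, p, Q, P = phi_B(q, p, Q, P, h)
    q, p, Q, P = phi_A(q, p, Q, P, 0.5 * h)

    if mix:                         # keep the two phase-space copies from drifting apart
        q = Q = 0.5 * (q + Q)
        p = P = 0.5 * (p + P)
    return q, p, Q, P

# usage with an inseparable toy Hamiltonian H = 0.5 * p**2 * (1 + q**2)
dHdq = lambda q, p: p**2 * q
dHdp = lambda q, p: p * (1.0 + q**2)
q = p = 1.0
Q, P = q, p                         # duplicate the initial state
for _ in range(1000):
    q, p, Q, P = extended_leapfrog_step(q, p, Q, P, 1e-3, dHdq, dHdp)
```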
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Tsuha, Walter S.
1993-01-01
A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.
Matveev, Alexei V; Rösch, Notker
2008-06-28
We suggest an approximate relativistic model for economical all-electron calculations on molecular systems that exploits an atomic ansatz for the relativistic projection transformation. With such a choice, the projection transformation matrix is by definition both transferable and independent of the geometry. The formulation is flexible with regard to the level at which the projection transformation is approximated; we employ the free-particle Foldy-Wouthuysen and the second-order Douglas-Kroll-Hess variants. The (atomic) infinite-order decoupling scheme shows little effect on structural parameters in scalar-relativistic calculations; also, the use of a screened nuclear potential in the definition of the projection transformation shows hardly any effect in the context of the present work. Applications to structural and energetic parameters of various systems (diatomics AuH, AuCl, and Au(2), two structural isomers of Ir(4), and uranyl dication UO(2) (2+) solvated by 3-6 water ligands) show that the atomic approximation to the conventional second-order Douglas-Kroll-Hess projection (ADKH) transformation yields highly accurate results at substantial computational savings, in particular, when calculating energy derivatives of larger systems. The size-dependence of the intrinsic error of the ADKH method in extended systems of heavy elements is analyzed for the atomization energies of Pd(n) clusters (n=116).
NASA Astrophysics Data System (ADS)
Champagne, Benoît; Botek, Edith; Nakano, Masayoshi; Nitta, Tomoshige; Yamaguchi, Kizashi
2005-03-01
The basis set and electron correlation effects on the static polarizability (α) and second hyperpolarizability (γ) are investigated ab initio for two model open-shell π-conjugated systems, the C5H7 radical and the C6H8 radical cation in their doublet state. Basis set investigations evidence that the linear and nonlinear responses of the radical cation necessitate the use of a less extended basis set than its neutral analog. Indeed, double-zeta-type basis sets supplemented by a set of d polarization functions but no diffuse functions already provide accurate (hyper)polarizabilities for C6H8 whereas diffuse functions are compulsory for C5H7, in particular, p diffuse functions. In addition to the 6-31G*+pd basis set, basis sets resulting from removing not necessary diffuse functions from the augmented correlation consistent polarized valence double zeta basis set have been shown to provide (hyper)polarizability values of similar quality as more extended basis sets such as augmented correlation consistent polarized valence triple zeta and doubly augmented correlation consistent polarized valence double zeta. Using the selected atomic basis sets, the (hyper)polarizabilities of these two model compounds are calculated at different levels of approximation in order to assess the impact of including electron correlation. As a function of the method of calculation antiparallel and parallel variations have been demonstrated for α and γ of the two model compounds, respectively. For the polarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset methods bracket the reference value obtained at the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples level whereas the projected unrestricted second-order Møller-Plesset results are in much closer agreement with the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples values than the projected unrestricted Hartree-Fock results. Moreover, the differences between the restricted open-shell Hartree-Fock and restricted open-shell second-order Møller-Plesset methods are small. In what concerns the second hyperpolarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset values remain of similar quality while using spin-projected schemes fails for the charged system but performs nicely for the neutral one. The restricted open-shell schemes, and especially the restricted open-shell second-order Møller-Plesset method, provide for both compounds γ values close to the results obtained at the unrestricted coupled cluster level including singles and doubles with a perturbative inclusion of the triples. Thus, to obtain well-converged α and γ values at low-order electron correlation levels, the removal of spin contamination is a necessary but not a sufficient condition. Density-functional theory calculations of α and γ have also been carried out using several exchange-correlation functionals. Those employing hybrid exchange-correlation functionals have been shown to reproduce fairly well the reference coupled cluster polarizability and second hyperpolarizability values. In addition, inclusion of Hartree-Fock exchange is of major importance for determining accurate polarizability whereas for the second hyperpolarizability the gradient corrections are large.
Exponential Methods for the Time Integration of Schroedinger Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time in order to integrate the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. Therefore, we look at symmetry, symplecticity and approximation of invariants of the proposed methods, which allows integration up to long times with reasonable accuracy. Computational efficiency is also our aim. We therefore perform numerical computations to compare the methods considered and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool to integrate this equation.
Equidistant map projections of a triaxial ellipsoid with the use of reduced coordinates
NASA Astrophysics Data System (ADS)
Pędzich, Paweł
2017-12-01
The paper presents a new method of constructing equidistant map projections of a triaxial ellipsoid as a function of reduced coordinates. Equations for the x and y coordinates are expressed with the use of the normal elliptic integral of the second kind and Jacobian elliptic functions. This solution allows the use of well-known methods, widely described in the literature, for solving such integrals and functions. The main advantage of this method is that the calculation of the x and y coordinates is based practically on a single algorithm, the one required to solve the elliptic integral of the second kind. Equations are provided for three types of map projections: cylindrical, azimuthal and pseudocylindrical. These types of projections are often used in planetary cartography for presentation of entire and polar regions of extraterrestrial objects. The paper also contains equations for the calculation of the length of a meridian and a parallel of a triaxial ellipsoid in reduced coordinates. Moreover, graticules of three coordinate systems (planetographic, planetocentric and reduced) in the developed map projections are presented. The basic properties of the developed map projections are also described. The obtained map projections may be applied in planetary cartography in order to create maps of extraterrestrial objects.
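A small worked example of the kind of quantity the abstract mentions: with the reduced (parametric) angle, the arc length along a meridian ellipse is a single incomplete elliptic integral of the second kind. The semi-axes and names below are illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.special import ellipe, ellipeinc

def meridian_arc(a, c, u):
    """Arc length along a meridian ellipse with equatorial semi-axis a and polar
    semi-axis c (a >= c), measured from the pole to the reduced angle u of the
    parametrization x = a*sin(u), z = c*cos(u): s(u) = a * E(u | m)."""
    m = 1.0 - (c / a) ** 2          # parameter of the elliptic integral
    return a * ellipeinc(u, m)

# illustrative semi-axes (km), not values from the paper
a, c = 6378.0, 6357.0
print(meridian_arc(a, c, np.pi / 2))       # quarter meridian
print(a * ellipe(1.0 - (c / a) ** 2))      # same value via the complete integral
```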
3D image acquisition by fiber-based fringe projection
NASA Astrophysics Data System (ADS)
Pfeifer, Tilo; Driessen, Sascha
2005-02-01
In macroscopic production processes several measuring methods are used to assure the quality of 3D parts. Definitely, one of the most widespread techniques is fringe projection. It is a fast and accurate method for capturing the topography of a part as a computer file which can be processed in further steps, e.g. to compare the measured part to a given CAD file. In this article it will be shown how the fringe projection method is applied to a fiber-optic system. The fringes generated by a miniaturized fringe projector (MiniRot) are first projected onto the front-end of an image guide using special optics. The image guide serves as a transmitter for the fringes in order to get them onto the surface of a micro part. A second image guide is used to observe the micro part. It is mounted at an angle relative to the illuminating image guide so that the triangulation condition is fulfilled. With a CCD camera connected to the second image guide the projected fringes are recorded, and these data are analyzed by an image processing system.
NASA Technical Reports Server (NTRS)
Hou, Gene
1998-01-01
Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large amount of memory. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives, respectively. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
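ADIFOR itself transforms Fortran source; as a language-neutral illustration of the forward-mode idea behind such tools (not ADIFOR's mechanism), a minimal dual-number class propagating first derivatives might look as follows. Second derivatives would require hyper-dual numbers or nested application.

```python
import math

class Dual:
    """Minimal forward-mode AD value: val carries f(x), dot carries f'(x)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def response(x):                # a stand-in "system response" of a design parameter x
    return x * x * x + 3.0 * x.sin()

x = Dual(2.0, 1.0)              # seed dx/dx = 1
y = response(x)
print(y.val, y.dot)             # f(2) and f'(2) = 3*x**2 + 3*cos(x) at x = 2
```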
Using Riemannian geometry to obtain new results on Dikin and Karmarkar methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira, P.; Joao, X.; Piaui, T.
1994-12-31
We are motivated by a 1990 Karmarkar paper on Riemannian geometry and Interior Point Methods. In this talk we show 3 results. (1) The Karmarkar direction can be derived from the Dikin one. This is obtained by constructing a certain Z(x) representation of the null space of the unitary simplex (e, x) = 1; then the projective direction is the image under Z(x) of the affine-scaling one, when it is restricted to that simplex. (2) Second order information on Dikin and Karmarkar methods. We establish computable Hessians for each of the metrics corresponding to both directions, thus permitting the generation of "second order" methods. (3) Dikin and Karmarkar geodesic descent methods. For those directions, we make computable the theoretical Luenberger geodesic descent method, since we are able to give explicit, very accurate expressions of the corresponding geodesics. Convergence results are given.
N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method
NASA Astrophysics Data System (ADS)
Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.
2018-05-01
Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.
From the Moons of Jupiter to the Milky Way
NASA Technical Reports Server (NTRS)
Cohen, Martin
1993-01-01
In this report we will describe the successes and problems encountered in carrying out the above project. Due to funding delays, we were unable to begin the project until February, 1993. The telescopes were ordered in September, 1992. We arranged with the principals of the participating schools, Fruitvale Elementary and Allendale year-Round, to conduct the building and lecture phases of the project during the second and third weeks of February. The principals chose to employ totally different methods of selecting children to participate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, M; Kida, S; Masutani, Y
2014-06-01
Purpose: In the previous study, we developed a time-ordered four-dimensional (4D) cone-beam CT (CBCT) technique to visualize non-periodic organ motion, such as peristaltic motion of gastrointestinal organs and adjacent areas, using a half-scan reconstruction method. One important obstacle was that projections were truncated because of the asymmetric location of the flat-panel detector (FPD), needed to cover the whole abdomen or pelvis in one rotation. In this study, we propose image mosaicing to extend the projection data and make it possible to reconstruct a full field-of-view (FOV) image using half-scan reconstruction. Methods: The projections of prostate cancer patients were acquired using the X-ray Volume Imaging system (XVI, version 4.5) on a Synergy linear accelerator system (Elekta, UK). The XVI system has three FOV options, S, M and L; the M FOV was chosen for pelvic CBCT acquisition, with the FPD panel offset by 11.5 cm. The method to produce extended projections consists of three main steps: First, a normal three-dimensional (3D) reconstruction containing the whole pelvis was performed using the real projections. Second, virtual projections were produced by reprojection of the reconstructed 3D image. Third, the real and virtual projections at each angle were combined into one extended mosaic projection. Then, 4D CBCT images were reconstructed using our in-house reconstruction software based on the Feldkamp, Davis and Kress algorithm. The angular range of each reconstruction phase in the 4D reconstruction was 180 degrees, and the range moved as time progressed. Results: Projection data were successfully extended without a discontinuous boundary between real and virtual projections. Using mosaic projections, 4D CBCT image sets were reconstructed without artifacts caused by the truncation, and thus the whole pelvis was clearly visible. Conclusion: The present method provides extended projections which contain the whole pelvis. The presented reconstruction method also enables time-ordered 4D CBCT reconstruction of organs with non-periodic motion with full FOV and without projection-truncation artifacts. This work was partly supported by the JSPS Core-to-Core Program (No. 23003). This work was partly supported by JSPS KAKENHI 24234567.
Measuring Second Language Acquisition. Studies in Language Education, No. 6.
ERIC Educational Resources Information Center
Cooper, Thomas C.
This research project was designed to analyze by quantitative methods a corpus of writing produced by four groups of American college students enrolled in German courses and by one group of professional German writers. Analysis was undertaken in order to determine whether or not significant quantitative differences in the use of selected syntactic…
Advanced technology applications for second and third generation coal gasification systems
NASA Technical Reports Server (NTRS)
Bradford, R.; Hyde, J. D.; Mead, C. W.
1980-01-01
The historical background of coal conversion is reviewed and the programmatic status (operational, construction, design, proposed) of coal gasification processes is tabulated for both commercial and demonstration projects as well as for large and small pilot plants. Both second and third generation processes typically operate at higher temperatures and pressures than first generation methods. Much of the equipment that has been tested has failed. The most difficult problems are in process control. The mechanics of three-phase flow are not fully understood. Companies participating in coal conversion projects are ordering duplicates of failure prone units. No real solutions to any of the significant problems in technology development have been developed in recent years.
A PDF projection method: A pressure algorithm for stand-alone transported PDFs
NASA Astrophysics Data System (ADS)
Ghorbani, Asghar; Steinhilber, Gerd; Markus, Detlev; Maas, Ulrich
2015-03-01
In this paper, a new formulation of the projection approach is introduced for stand-alone probability density function (PDF) methods. The method is suitable for applications in low-Mach number transient turbulent reacting flows. The method is based on a fractional step method in which first the advection-diffusion-reaction equations are modelled and solved within a particle-based PDF method to predict an intermediate velocity field. Then the mean velocity field is projected onto a space where the continuity for the mean velocity is satisfied. In this approach, a Poisson equation is solved on the Eulerian grid to obtain the mean pressure field. Then the mean pressure is interpolated at the location of each stochastic Lagrangian particle. The formulation of the Poisson equation avoids the time derivatives of the density (due to convection) as well as second-order spatial derivatives. This in turn eliminates the major sources of instability in the presence of stochastic noise that are inherent in particle-based PDF methods. The convergence of the algorithm (in the non-turbulent case) is investigated first by the method of manufactured solutions. Then the algorithm is applied to a one-dimensional turbulent premixed flame in order to assess the accuracy and convergence of the method in the case of turbulent combustion. As a part of this work, we also apply the algorithm to a more realistic flow, namely a transient turbulent reacting jet, in order to assess the performance of the method.
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
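The projection primitive described above can be sketched generically for the Schatten-1 (nuclear norm) case: take the SVD, project the singular values onto an l1 ball, and rebuild the matrix. This is a textbook illustration, not the authors' implementation; function names and the example matrix are placeholders.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of a nonnegative vector onto the l1 ball {x: sum(x) <= radius}."""
    if v.sum() <= radius:
        return v
    u = np.sort(v)[::-1]                      # sort decreasing
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    tau = (css[k] - radius) / (k + 1.0)
    return np.maximum(v - tau, 0.0)

def project_schatten1_ball(X, radius):
    """Project a matrix onto the Schatten-1 (nuclear-norm) ball of given radius
    by projecting its singular values onto the corresponding l1 ball."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_proj = project_l1_ball(s, radius)
    return (U * s_proj) @ Vt

# tiny usage example with an illustrative 2x2 Hessian-like matrix
X = np.array([[3.0, 1.0], [1.0, 2.0]])
Y = project_schatten1_ball(X, 2.0)
print(np.linalg.svd(Y, compute_uv=False).sum())   # nuclear norm of the result, ~2.0
```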
Sprays and Cartan projective connections
NASA Astrophysics Data System (ADS)
Saunders, D. J.
2004-10-01
Around 80 years ago, several authors (for instance H. Weyl, T.Y. Thomas, J. Douglas and J.H.C. Whitehead) studied the projective geometry of paths, using the methods of tensor calculus. The principal object of study was a spray, namely a homogeneous second-order differential equation, or more generally a projective equivalence class of sprays. At around the same time, E. Cartan studied the same topic from a different point of view, by imagining a projective space attached to a manifold, or, more generally, attached to a `manifold of elements'; the infinitesimal `glue' may be interpreted in modern language as a Cartan projective connection on a principal bundle. This paper describes the geometrical relationship between these two points of view.
Compact multi-bounce projection system for extreme ultraviolet projection lithography
Hudyma, Russell M.
2002-01-01
An optical system compatible with short wavelength (extreme ultraviolet) radiation comprising four optical elements providing five reflective surfaces for projecting a mask image onto a substrate. The five optical surfaces are characterized in order from object to image as concave, convex, concave, convex and concave mirrors. The second and fourth reflective surfaces are part of the same optical element. The optical system is particularly suited for ring field step and scan lithography methods. The invention uses aspheric mirrors to minimize static distortion and balance the static distortion across the ring field width, which effectively minimizes dynamic distortion.
DAX - The Next Generation: Towards One Million Processes on Commodity Hardware.
Damon, Stephen M; Boyd, Brian D; Plassard, Andrew J; Taylor, Warren; Landman, Bennett A
2017-01-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
DAX - the next generation: towards one million processes on commodity hardware
NASA Astrophysics Data System (ADS)
Damon, Stephen M.; Boyd, Brian D.; Plassard, Andrew J.; Taylor, Warren; Landman, Bennett A.
2017-03-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
DAX - The Next Generation: Towards One Million Processes on Commodity Hardware
Boyd, Brian D.; Plassard, Andrew J.; Taylor, Warren; Landman, Bennett A.
2017-01-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner. PMID:28919661
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kupferman, R.
The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.
Biogas slurry pricing method based on nutrient content
NASA Astrophysics Data System (ADS)
Zhang, Chang-ai; Guo, Honghai; Yang, Zhengtao; Xin, Shurong
2017-11-01
In order to promote biogas-slurry commercialization, a method is put forward to value biogas slurry based on its nutrient content. Firstly, the element contents of the biogas slurry were measured; secondly, each element was valued based on its market price; then transport cost, usage cost and market effects were taken into account, and the pricing method for biogas slurry was finally obtained. This method can be useful in practical production. Taking cattle-manure-based biogas slurry and corn-stalk-based biogas slurry as examples, their prices were 38.50 yuan RMB per ton and 28.80 yuan RMB per ton, respectively. This paper will be useful for recognizing the value of biogas projects, ensuring the operation of biogas projects, and guiding the cyclic utilization of biomass resources in China.
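A minimal numeric sketch of such a nutrient-based pricing rule is given below; the nutrient contents, unit prices, discount factor and cost terms are placeholders and do not reproduce the paper's coefficients or the quoted 38.50 and 28.80 yuan per ton results.

```python
def biogas_slurry_price(nutrient_kg_per_t, fertilizer_price_per_kg,
                        transport_cost_per_t, market_discount=0.8):
    """Value one tonne of biogas slurry from its nutrient content (illustrative only).

    nutrient_kg_per_t:       nutrient content, kg per tonne of slurry
    fertilizer_price_per_kg: market price of each nutrient as mineral fertilizer, yuan/kg
    transport_cost_per_t:    delivery cost deducted from the nutrient value, yuan/t
    market_discount:         factor standing in for usage cost and market acceptance
    """
    nutrient_value = sum(nutrient_kg_per_t[k] * fertilizer_price_per_kg[k]
                         for k in nutrient_kg_per_t)
    return max(0.0, market_discount * nutrient_value - transport_cost_per_t)

# illustrative numbers only
contents = {"N": 1.2, "P2O5": 0.4, "K2O": 1.0}
prices = {"N": 4.3, "P2O5": 5.0, "K2O": 4.0}
print(round(biogas_slurry_price(contents, prices, transport_cost_per_t=3.0), 2))
```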
Laser projection positioning of spatial contour curves via a galvanometric scanner
NASA Astrophysics Data System (ADS)
Tu, Junchao; Zhang, Liyan
2018-04-01
The technology of laser projection positioning is widely applied in advanced manufacturing (e.g. composite ply layup, part location and installation). In order to use it better, a laser projection positioning (LPP) system is designed and implemented. Firstly, the LPP system is built from a laser galvanometric scanning (LGS) system and a binocular vision system. Applying a single-hidden-layer feed-forward neural network (SLFN), the system model is then constructed. Secondly, the LGS system and the binocular system, which are mutually independent, are integrated through a data-driven calibration method based on the extreme learning machine (ELM) algorithm. Finally, a projection positioning method is proposed within the framework of the calibrated SLFN system model. A well-designed experiment is conducted to verify the viability and effectiveness of the proposed system. In addition, the accuracy of projection positioning is evaluated, showing that the LPP system achieves good localization performance.
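The ELM-based calibration can be illustrated with a generic single-hidden-layer regressor trained by one least-squares solve; the sigmoid activation, layer size and the toy data standing in for point-to-galvanometer calibration pairs are assumptions, not details from the paper.

```python
import numpy as np

class ELMRegressor:
    """Single-hidden-layer feed-forward network trained by extreme learning machine:
    hidden weights are random and fixed, output weights come from one least-squares solve."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))     # sigmoid features

    def fit(self, X, Y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)        # output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# toy usage: learn a smooth 3D -> 2D map (stand-in for point-to-galvo calibration data)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))
Y = np.column_stack([np.sin(X[:, 0]) + X[:, 1], np.cos(X[:, 2]) * X[:, 0]])
model = ELMRegressor(n_hidden=100).fit(X, Y)
print(np.abs(model.predict(X) - Y).max())      # training-set residual of the fit
```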
ERIC Educational Resources Information Center
Koutrouba, Konstantina; Karageorgou, Elissavet
2013-01-01
The present questionnaire-based study was conducted in 2010 in order to examine 677 Greek Second Chance School (SCS) students' perceptions about the cognitive and socio-affective outcomes of project-based learning. Data elaboration, statistical and factor analysis showed that the participants found that project-based learning offered a second…
NASA Astrophysics Data System (ADS)
Gaset, Jordi; Román-Roy, Narciso
2016-12-01
The projectability of Poincaré-Cartan forms in a third-order jet bundle J3π onto a lower-order jet bundle is a consequence of the degenerate character of the corresponding Lagrangian. This fact is analyzed using the constraint algorithm for the associated Euler-Lagrange equations in J3π. The results are applied to study the Hilbert Lagrangian for the Einstein equations (in vacuum) from a multisymplectic point of view. Thus we show how these equations are a consequence of the application of the constraint algorithm to the geometric field equations, meanwhile the other constraints are related with the fact that this second-order theory is equivalent to a first-order theory. Furthermore, the case of higher-order mechanics is also studied as a particular situation.
Kumar, K Vasanth
2007-04-02
Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second-order models of Ho, Sobkowski and Czerwinski, Blanchard et al. and Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowski and Czerwinski and the Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that both Blanchard et al. and Ho have similar ideas on the pseudo-second-order model but with different assumptions. The best fit of the experimental data by Ho's pseudo-second-order expression using both linear and non-linear regression showed that the Ho pseudo-second-order model was a better kinetic expression than the other pseudo-second-order kinetic expressions.
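To make the linear-versus-non-linear comparison concrete, the sketch below fits the Ho pseudo-second-order model q(t) = k*qe^2*t / (1 + k*qe*t) to synthetic data by the common type-1 linearization t/q = 1/(k*qe^2) + t/qe and by non-linear least squares; the data are synthetic, not the safranin measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    """Ho pseudo-second-order model: q(t) = k*qe**2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1.0 + k * qe * t)

# synthetic "experimental" data (true qe = 50 mg/g, k = 0.002 g/(mg min)) plus noise
rng = np.random.default_rng(0)
t = np.array([5, 10, 20, 30, 45, 60, 90, 120, 180], dtype=float)
q = pso(t, 50.0, 0.002) * (1.0 + 0.02 * rng.standard_normal(t.size))

# linear (type-1) fit: slope = 1/qe, intercept = 1/(k*qe**2)
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin, k_lin = 1.0 / slope, slope**2 / intercept

# non-linear fit of the same model
(qe_nl, k_nl), _ = curve_fit(pso, t, q, p0=(q.max(), 1e-3))

print(f"linear:     qe={qe_lin:.2f}  k={k_lin:.5f}")
print(f"non-linear: qe={qe_nl:.2f}  k={k_nl:.5f}")
```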
Empowering the ESL Worker within the New Work Order.
ERIC Educational Resources Information Center
Moore, Rita A.
1999-01-01
Investigates issues of empowerment and language learning for English-as-a-Second-Language workers (in workplace literacy projects) making the transition from one type of workplace culture to another. Describes the project and participants, how workplace structures and linguistic hierarchies disempowered second-language learners, benefits of…
On probability-possibility transformations
NASA Technical Reports Server (NTRS)
Klir, George J.; Parviz, Behzad
1992-01-01
Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
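The record does not name the transformations compared, so the sketch below shows two standard textbook choices, the normalized-ratio transformation and a Dubois-Prade-style cumulative transformation, both of which preserve the ordering of elementary events (tied probabilities would need extra care).

```python
import numpy as np

def ratio_transform(p):
    """Possibility via normalized ratio: pi_i = p_i / max(p)."""
    p = np.asarray(p, dtype=float)
    return p / p.max()

def dubois_prade_transform(p):
    """pi of the i-th largest probability = sum of that probability and all smaller ones
    (cumulative tail sum after sorting in decreasing order), mapped back to input order."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)                         # indices, largest probability first
    tail_sums = np.cumsum(p[order][::-1])[::-1]    # sum over ranks >= current rank
    pi = np.empty_like(p)
    pi[order] = tail_sums
    return pi

p = np.array([0.5, 0.3, 0.15, 0.05])
print(ratio_transform(p))          # [1.   0.6  0.3  0.1 ]
print(dubois_prade_transform(p))   # [1.   0.5  0.2  0.05]
```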
Chen, Shyi-Ming; Chen, Shen-Wen
2015-03-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. Firstly, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Then, it groups the obtained two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, it calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group, respectively. Finally, it performs the forecasting based on the probabilities of the down-trend, the equal-trend, and the up-trend of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
Yang, Yi; Tang, Xiangyang
2012-12-01
The x-ray differential phase contrast imaging implemented with the Talbot interferometry has recently been reported to be capable of providing tomographic images corresponding to attenuation-contrast, phase-contrast, and dark-field contrast, simultaneously, from a single set of projection data. The authors believe that, along with small-angle x-ray scattering, the second-order phase derivative Φ″(s)(x) plays a role in the generation of dark-field contrast. In this paper, the authors derive the analytic formulae to characterize the contribution made by the second-order phase derivative to the dark-field contrast (namely, second-order differential phase contrast) and validate them via computer simulation study. By proposing a practical retrieval method, the authors investigate the potential of second-order differential phase contrast imaging for extensive applications. The theoretical derivation starts by assuming that the refractive index decrement of an object can be decomposed into δ = δ(s) + δ(f), where δ(f) corresponds to the object's fine structures and manifests itself in the dark-field contrast via small-angle scattering. Based on the paraxial Fresnel-Kirchhoff theory, the analytic formulae to characterize the contribution made by δ(s), which corresponds to the object's smooth structures, to the dark-field contrast are derived. Through computer simulation with specially designed numerical phantoms, an x-ray differential phase contrast imaging system implemented with the Talbot interferometry is utilized to evaluate and validate the derived formulae. The same imaging system is also utilized to evaluate and verify the capability of the proposed method to retrieve the second-order differential phase contrast for imaging, as well as its robustness over the dimension of detector cell and the number of steps in grating shifting. Both analytic formulae and computer simulations show that, in addition to small-angle scattering, the contrast generated by the second-order derivative is magnified substantially by the ratio of detector cell dimension over grating period, which plays a significant role in dark-field imaging implemented with the Talbot interferometry. The analytic formulae derived in this work to characterize the second-order differential phase contrast in the dark-field imaging implemented with the Talbot interferometry are of significance, which may initiate more activities in the research and development of x-ray differential phase contrast imaging for extensive preclinical and eventually clinical applications.
Improving Reading Comprehension through Higher-Order Thinking Skills
ERIC Educational Resources Information Center
McKown, Brigitte A.; Barnett, Cynthia L.
2007-01-01
This action research project report documents the action research project that was conducted to improve reading comprehension with second grade and third grade students. The teacher researchers intended to improve reading comprehension by using higher-order thinking skills such as predicting, making connections, visualizing, inferring,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jibben, Zechariah Joel; Herrmann, Marcus
Here, we present a Runge-Kutta discontinuous Galerkin method for solving conservative reinitialization in the context of the conservative level set method. This represents an extension of the method recently proposed by Owkes and Desjardins [21], by solving the level set equations on the refined level set grid and projecting all spatially-dependent variables into the full basis used by the discontinuous Galerkin discretization. By doing so, we achieve the full k+1 order convergence rate in the L1 norm of the level set field predicted for RKDG methods given kth degree basis functions when the level set profile thickness is held constant with grid refinement. Shape and volume errors for the 0.5-contour of the level set, on the other hand, are found to converge between first and second order. We show a variety of test results, including the method of manufactured solutions, reinitialization of a circle and sphere, Zalesak's disk, and deforming columns and spheres, all showing substantial improvements over the high-order finite difference traditional level set method studied for example by Herrmann. We also demonstrate the need for kth order accurate normal vectors, as lower order normals are found to degrade the convergence rate of the method.
Kumar, K Vasanth; Sivanesan, S
2006-08-25
Pseudo-second-order kinetic expressions of Ho, Sobkowski and Czerwinski, Blanchard et al. and Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowski and Czerwinski and the Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that both Blanchard et al. and Ho have similar ideas on the pseudo-second-order model but with different assumptions. The best fit of the experimental data by Ho's pseudo-second-order expression using both linear and non-linear regression showed that the Ho pseudo-second-order model was a better kinetic expression than the other pseudo-second-order kinetic expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from the Ho pseudo-second-order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best-fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms. Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
Jibben, Zechariah Joel; Herrmann, Marcus
2017-08-24
Here, we present a Runge-Kutta discontinuous Galerkin method for solving conservative reinitialization in the context of the conservative level set method. This represents an extension of the method recently proposed by Owkes and Desjardins [21], by solving the level set equations on the refined level set grid and projecting all spatially-dependent variables into the full basis used by the discontinuous Galerkin discretization. By doing so, we achieve the full k+1 order convergence rate in the L1 norm of the level set field predicted for RKDG methods given kth degree basis functions when the level set profile thickness is held constant with grid refinement. Shape and volume errors for the 0.5-contour of the level set, on the other hand, are found to converge between first and second order. We show a variety of test results, including the method of manufactured solutions, reinitialization of a circle and sphere, Zalesak's disk, and deforming columns and spheres, all showing substantial improvements over the high-order finite difference traditional level set method studied for example by Herrmann. We also demonstrate the need for kth order accurate normal vectors, as lower order normals are found to degrade the convergence rate of the method.
Permission to Be Confused: Toward a Second Wave of Critical Whiteness Pedagogy
ERIC Educational Resources Information Center
Tanner, Samuel Jaye
2017-01-01
This article considers second wave critical Whiteness pedagogy by examining the author's teacher-researcher implementation of teaching project about Whiteness in a large, suburban high school near a major city in the Midwest. The author relies on narrative scholarship in order to both tell and interpret stories about a yearlong project that used…
Neuron-to-neuron transmission of α-synuclein fibrils through axonal transport
Freundt, Eric C.; Maynard, Nate; Clancy, Eileen K.; Roy, Shyamali; Bousset, Luc; Sourigues, Yannick; Covert, Markus; Melki, Ronald; Kirkegaard, Karla; Brahic, Michel
2012-01-01
Objective The lesions of Parkinson's disease spread through the brain in a characteristic pattern that corresponds to axonal projections. Previous observations suggest that misfolded α-synuclein could behave as a prion, moving from neuron to neuron and causing endogenous α-synuclein to misfold. Here, we characterized and quantified the axonal transport of α-synuclein fibrils and showed that fibrils could be transferred from axons to second-order neurons following anterograde transport. Methods We grew primary cortical mouse neurons in microfluidic devices to separate soma from axonal projections in fluidically isolated microenvironments. We used live-cell imaging and immunofluorescence to characterize the transport of fluorescent α-synuclein fibrils and their transfer to second-order neurons. Results Fibrillar α-synuclein was internalized by primary neurons and transported in axons with kinetics consistent with slow component-b of axonal transport (fast axonal transport with saltatory movement). Fibrillar α-synuclein was readily observed in the cell bodies of second-order neurons following anterograde axonal transport. Axon-to-soma transfer appeared not to require synaptic contacts. Interpretation These results support the hypothesis that the progression of Parkinson's disease can be caused by neuron-to-neuron spread of α-synuclein aggregates and that the anatomical pattern of progression of lesions between axonally connected areas results from the axonal transport of such aggregates. That the transfer did not appear to be transsynaptic gives hope that α-synuclein fibrils could be intercepted by drugs during the extra-cellular phase of their journey. PMID:23109146
Imprints of local lightcone projection effects on the galaxy bispectrum. Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolicoeur, Sheean; Umeh, Obinna; Maartens, Roy
General relativistic imprints on the galaxy bispectrum arise from observational (or projection) effects. The lightcone projection effects include local contributions from Doppler and gravitational potential terms, as well as lensing and other integrated contributions. We recently presented, for the first time, the correction to the galaxy bispectrum from all local lightcone projection effects up to second order in perturbations. Here we provide the details underlying this correction, together with further results and illustrations. For moderately squeezed shapes, the correction to the Newtonian prediction is ∼ 30% on equality scales at z ∼ 1. We generalise our recent results to include the contribution, up to second order, of magnification bias (which affects some of the local terms) and evolution bias.
A novel heterogeneous training sample selection method on space-time adaptive processing
NASA Astrophysics Data System (ADS)
Wang, Qiang; Zhang, Yongshun; Guo, Yiduo
2018-04-01
The performance of ground target detection with space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because the training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and the training samples are calculated. Fourthly, the distances are sorted by value and the training samples with the larger values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
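The sample-similarity step can be illustrated with a generic mean (modified) Hausdorff distance and a simple selection rule; the paper's exact distance definition and selection criterion may differ, so treat this as a sketch with placeholder names and toy data.

```python
import numpy as np

def mean_hausdorff(A, B):
    """Modified (mean) Hausdorff distance between two point sets A (m x d) and B (n x d):
    the larger of the two averaged nearest-neighbour distances."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise distances
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

def select_training_samples(samples, n_keep):
    """Rank candidate training samples by their average distance to the others and keep
    the n_keep most mutually consistent ones (illustrative selection rule only)."""
    n = len(samples)
    score = np.zeros(n)
    for i in range(n):
        score[i] = np.mean([mean_hausdorff(samples[i], samples[j])
                            for j in range(n) if j != i])
    return np.argsort(score)[:n_keep]          # small score = similar to the bulk

# toy usage: 8 "clean" samples plus 2 outliers, each a set of 16 two-dimensional snapshots
rng = np.random.default_rng(0)
samples = [rng.normal(0, 1, size=(16, 2)) for _ in range(8)]
samples += [rng.normal(5, 1, size=(16, 2)) for _ in range(2)]
print(select_training_samples(samples, n_keep=8))
```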
NASA Astrophysics Data System (ADS)
Hy, B.; Barré-Boscher, N.; Özgümüs, A.; Roussière, B.; Tusseau-Nenez, S.; Lau, C.; Cheikh Mhamed, M.; Raynaud, M.; Said, A.; Kolos, K.; Cottereau, E.; Essabaa, S.; Tougait, O.; Pasturel, M.
2012-10-01
In the context of radioactive ion beams, fission targets, often based on uranium compounds, have been used for more than 50 years at isotope separator on line facilities. The development of several projects of second generation facilities aiming at intensities two or three orders of magnitude higher than today puts an emphasis on the properties of the uranium fission targets. A study, driven by Institut de Physique Nucléaire d'Orsay (IPNO), has been started within the SPIRAL2 project to try and fully understand the behavior of these targets. In this paper, we have focused on five uranium carbide based targets. We present an off-line method to characterize their fission product release and the results are examined in conjunction with physical characteristics of each material such as the microstructure, the porosity and the chemical composition.
Ho, Yuh-Shan
2006-01-01
A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
Numerical study of radiometric forces via the direct solution of the Boltzmann kinetic equation
NASA Astrophysics Data System (ADS)
Anikin, Yu. A.
2011-07-01
The two-dimensional rarefied gas motion in a Crookes radiometer and the resulting radiometric forces are studied by numerically solving the Boltzmann kinetic equation. The collision integral is directly evaluated using a projection method, and second-order accurate TVD schemes are used to solve the advection equation. The radiometric forces are found as functions of the Knudsen number and the temperatures, and their spatial distribution is analyzed.
NASA Astrophysics Data System (ADS)
Maljaars, Jakob M.; Labeur, Robert Jan; Möller, Matthias
2018-04-01
A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method. This makes it possible to formulate the projections between the Lagrangian particle space and the Eulerian finite element space efficiently in terms of local (i.e. cellwise) ℓ2-projections. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields which satisfy the incompressibility constraint very accurately in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows for a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated by presenting results for the flow over a backward facing step and for the flow around a cylinder.
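As a rough illustration of the cellwise projections from particle data to a mesh-based polynomial space described above, the sketch below performs a local least-squares fit of a linear polynomial to the particles in a single cell; the basis, particle distribution and cell geometry are illustrative assumptions and do not represent the HDG spaces used in the paper.

    import numpy as np

    def local_projection(xp, yp, up):
        """Project scattered particle values onto the linear basis {1, x, y}
        within one cell by a discrete least-squares fit."""
        A = np.column_stack([np.ones_like(xp), xp, yp])   # Vandermonde-type matrix
        coeffs, *_ = np.linalg.lstsq(A, up, rcond=None)
        return coeffs                                     # c0 + c1*x + c2*y

    # Particles inside the unit cell carrying samples of u(x, y) = 1 + 2x - y.
    rng = np.random.default_rng(2)
    xp, yp = rng.random(20), rng.random(20)
    coeffs = local_projection(xp, yp, 1 + 2 * xp - yp)
    print(coeffs)   # approximately [1, 2, -1]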
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
Nielson, Gregory M.
1997-01-01
This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.
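A tangent curve of a steady vector field v is the solution of dx/ds = v(x); the minimal sketch below traces such a curve with classical fourth-order Runge-Kutta on an analytically given 2D field. The sample field and step size are illustrative assumptions and do not reproduce the explicit formulas developed in the project.

    import numpy as np

    def v(x):
        """Example 2D steady vector field (a simple swirling flow)."""
        return np.array([-x[1], x[0]])

    def tangent_curve(x0, ds=0.01, n_steps=1000):
        """Trace the tangent curve dx/ds = v(x) with classical RK4."""
        xs = [np.asarray(x0, dtype=float)]
        for _ in range(n_steps):
            x = xs[-1]
            k1 = v(x)
            k2 = v(x + 0.5 * ds * k1)
            k3 = v(x + 0.5 * ds * k2)
            k4 = v(x + ds * k3)
            xs.append(x + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
        return np.array(xs)

    curve = tangent_curve([1.0, 0.0])   # should stay close to the unit circle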
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
2003-02-01
We formulate a general method for perturbative evaluations of statistics of smoothed cosmic fields and provide useful formulae for application of the perturbation theory to various statistics. This formalism is an extensive generalization of the method used by Matsubara, who derived a weakly nonlinear formula of the genus statistic in a three-dimensional density field. After describing the general method, we apply the formalism to a series of statistics, including genus statistics, level-crossing statistics, Minkowski functionals, and a density extrema statistic, regardless of the dimensions in which each statistic is defined. The relation between the Minkowski functionals and other geometrical statistics is clarified. These statistics can be applied to several cosmic fields, including three-dimensional density field, three-dimensional velocity field, two-dimensional projected density field, and so forth. The results are detailed for second-order theory of the formalism. The effect of the bias is discussed. The statistics of smoothed cosmic fields as functions of rescaled threshold by volume fraction are discussed in the framework of second-order perturbation theory. In CDM-like models, their functional deviations from linear predictions plotted against the rescaled threshold are generally much smaller than that plotted against the direct threshold. There is still a slight meatball shift against rescaled threshold, which is characterized by asymmetry in depths of troughs in the genus curve. A theory-motivated asymmetry factor in the genus curve is proposed.
Observed galaxy number counts on the lightcone up to second order: I. Main result
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Maartens, Roy; Clarkson, Chris, E-mail: daniele.bertacca@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: chris.clarkson@gmail.com
2014-09-01
We present the galaxy number overdensity up to second order in redshift space on cosmological scales for a concordance model. The result contains all general relativistic effects up to second order that arise from observing on the past light cone, including all redshift effects, lensing distortions from convergence and shear, and contributions from velocities, Sachs-Wolfe, integrated SW and time-delay terms. This result will be important for accurate calculation of the bias on estimates of non-Gaussianity and on precision parameter estimates, introduced by nonlinear projection effects.
ERIC Educational Resources Information Center
Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.
2011-01-01
We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
NASA Astrophysics Data System (ADS)
Yu, Jun; Hao, Du; Li, Decai
2018-01-01
The phenomenon whereby an object whose density is greater than that of a magnetic fluid can be suspended stably in the fluid under a magnetic field is one of the peculiar properties of magnetic fluids. Examples of applications based on the peculiar properties of magnetic fluid are sensors and actuators, dampers, positioning systems and so on. Therefore, the calculation and measurement of the magnetic levitation force of magnetic fluid is of vital importance. This paper concerns the peculiar second-order buoyancy experienced by a magnet immersed in magnetic fluid. The expression for calculating the second-order buoyancy was derived, and a novel method for calculating and measuring the second-order buoyancy was proposed based on the expression. The second-order buoyancy was calculated by ANSYS and measured experimentally using the novel method. To verify the novel method, the second-order buoyancy was measured experimentally with a nonmagnetic rod stuck on the top surface of the magnet. The results of calculations and experiments show that the novel method for calculating the second-order buoyancy is correct with high accuracy. In addition, the main causes of error were studied in this paper, including magnetic shielding of magnetic fluid and the movement of magnetic fluid in a nonuniform magnetic field.
A hybrid incremental projection method for thermal-hydraulics applications
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong
2016-07-01
A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.
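For orientation, a generic second-order incremental pressure-correction (projection) step of the type referred to above can be written as follows; this is the textbook form of the splitting, with N denoting the advective term, and not the specific hybrid finite-element/finite-volume discretization of the paper:

\[
\frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t} + \mathbf{N}\!\left(\mathbf{u}^{n+1/2}\right)
= -\nabla p^{\,n-1/2} + \frac{\nu}{2}\,\nabla^{2}\!\left(\mathbf{u}^{*}+\mathbf{u}^{n}\right),
\qquad
\nabla^{2}\phi = \frac{\nabla\cdot\mathbf{u}^{*}}{\Delta t},
\]
\[
\mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\,\nabla\phi,
\qquad
p^{\,n+1/2} = p^{\,n-1/2} + \phi .
\]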
Teaching Reform of Civil Engineering Materials Course Based on Project-Driven Pedagogy
NASA Astrophysics Data System (ADS)
Yidong, Xu; Wei, Chen; WeiguoJian, You; Jiansheng, Shen
2018-05-01
In view of the scattered experimental projects in practical courses on civil engineering materials, the poor practical ability of students, and the disconnection between practical teaching and theoretical teaching, this paper proposes a practical teaching procedure. Firstly, single experiments are offered, with the emphasis on improving the students' basic experimental operating ability. Secondly, a comprehensive experiment is offered, in which the overall quality of the students can be examined in the form of a project team. In order to investigate the effect of the teaching reform, a comparative analysis of the students of three grades (2014, 2015 and 2016) majoring in civil engineering was conducted. The result shows that the students' ability in experimental operation is obviously improved by the project-driven teaching reform. Besides, the students' ability to analyse and solve problems has also been improved.
NASA Astrophysics Data System (ADS)
Yan, Bing-Nan; Liu, Chong-Xin; Ni, Jun-Kang; Zhao, Liang
2016-10-01
In order to grasp the downhole situation immediately, logging while drilling (LWD) technology is adopted. One of the LWD technologies, called acoustic telemetry, can be successfully applied to modern drilling. It is critical for acoustic telemetry technology that the signal is successfully transmitted to the ground. In this paper, binary phase shift keying (BPSK) is used to modulate carrier waves for the transmission and a new BPSK demodulation scheme based on Duffing chaos is investigated. Firstly, a high-order system is given in order to enhance the signal detection capability and it is realized through building a virtual circuit using an electronic workbench (EWB). Secondly, a new BPSK demodulation scheme is proposed based on the intermittent chaos phenomena of the new Duffing system. Finally, a system variable crossing zero-point equidistance method is proposed to obtain the phase difference between the system and the BPSK signal. Then it is determined that the digital signal transmitted from the bottom of the well is ‘0’ or ‘1’. The simulation results show that the demodulation method is feasible. Project supported by the National Natural Science Foundation of China (Grant No. 51177117) and the National Key Science & Technology Special Projects, China (Grant No. 2011ZX05021-005).
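As context for the demodulation scheme described above, the sketch below generates a BPSK-modulated carrier of the kind that would be fed to a Duffing-based detector; the bit pattern, carrier frequency and sampling rate are illustrative assumptions, and the chaotic demodulator itself is not reproduced here.

    import numpy as np

    def bpsk_modulate(bits, fc=1000.0, fs=20000.0, bit_duration=0.01):
        """BPSK: phase 0 for bit 1, phase pi for bit 0, on a carrier fc [Hz]."""
        samples_per_bit = int(fs * bit_duration)
        t = np.arange(len(bits) * samples_per_bit) / fs
        # Map bits {0,1} -> symbols {-1,+1} and hold each symbol for one bit period.
        symbols = np.repeat(2 * np.asarray(bits) - 1, samples_per_bit)
        return t, symbols * np.cos(2 * np.pi * fc * t)

    t, s = bpsk_modulate([1, 0, 1, 1, 0])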
A fourth-order box method for solving the boundary layer equations
NASA Technical Reports Server (NTRS)
Wornom, S. F.
1977-01-01
A fourth order box method for calculating high accuracy numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations is presented. The method is the natural extension of the second order Keller Box scheme to fourth order and is demonstrated with application to the incompressible, laminar and turbulent boundary layer equations. Numerical results for high accuracy test cases show the method to be significantly faster than other higher order and second order methods.
Consensus for second-order multi-agent systems with position sampled data
NASA Astrophysics Data System (ADS)
Wang, Rusheng; Gao, Lixin; Chen, Wenhai; Dai, Dameng
2016-10-01
In this paper, the consensus problem with position sampled data for second-order multi-agent systems is investigated. The interaction topology among the agents is depicted by a directed graph. Full-order and reduced-order observers with position sampled data are proposed, by which two kinds of sampled-data-based consensus protocols are constructed. With the provided sampled protocols, the consensus convergence analysis of a continuous-time multi-agent system is equivalently transformed into that of a discrete-time system. Then, by using matrix theory and a sampled control analysis method, some sufficient and necessary consensus conditions based on the coupling parameters, the spectrum of the Laplacian matrix and the sampling period are obtained. As the sampling period tends to zero, the established necessary and sufficient conditions degenerate to the continuous-time protocol case, which is consistent with the existing result for the continuous-time case. Finally, the effectiveness of the established results is illustrated by a simple simulation example. Project supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY13F030005) and the National Natural Science Foundation of China (Grant No. 61501331).
Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan
2013-06-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method gets better forecasting performance than the existing methods.
Stability and stabilization of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Brownlee, R. A.; Gorban, A. N.; Levesley, J.
2007-03-01
We revisit the classical stability versus accuracy dilemma for the lattice Boltzmann methods (LBM). Our goal is a stable method of second-order accuracy for fluid dynamics based on the lattice Bhatnagar-Gross-Krook (LBGK) method. The LBGK scheme can be recognized as a discrete dynamical system generated by free flight and entropic involution. In this framework the stability and accuracy analysis are more natural. We find the necessary and sufficient conditions for second-order accurate fluid dynamics modeling. In particular, it is proven that in order to guarantee second-order accuracy the distribution should belong to a distinguished surface, the invariant film (up to second order in the time step). This surface is the trajectory of the (quasi)equilibrium distribution surface under free flight. The main instability mechanisms are identified. The simplest recipes for stabilization add no artificial dissipation (up to second order) and provide second-order accuracy of the method. Two other prescriptions add some artificial dissipation locally and prevent the system from loss of positivity and local blowup. Demonstrations of the proposed stable LBGK schemes are provided by the numerical simulation of a one-dimensional (1D) shock tube and the unsteady 2D flow around a square cylinder up to Reynolds number Re ≈ 20000.
Second-order variational equations for N-body simulations
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2016-07-01
First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
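In generic form, for an N-body acceleration a_i(x), the first- and second-order variational equations integrated alongside the trajectories can be written as below; this is a standard schematic statement of the idea and omits the velocity-dependent terms and index bookkeeping spelled out in the paper:

\[
\delta\ddot{x}_i = \sum_j \frac{\partial a_i}{\partial x_j}\,\delta x_j,
\qquad
\delta^{(2)}\ddot{x}_i = \sum_j \frac{\partial a_i}{\partial x_j}\,\delta^{(2)} x_j
+ \sum_{j,k} \frac{\partial^{2} a_i}{\partial x_j\,\partial x_k}\,\delta x_j\,\delta x_k .
\]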
Investigation of second-order hyperpolarizability of some organic compounds
NASA Astrophysics Data System (ADS)
Tajalli, H.; Zirak, P.; Ahmadi, S.
2003-04-01
In this work, we have measured the second-order hyperpolarizability of some organic materials with the electric-field-induced second-harmonic generation (EFISH) method, calculated the second-order hyperpolarizability of 13 organic compounds with the MOPAC6 software, and investigated the different factors that affect the magnitude of the second-order hyperpolarizability and ways to increase it.
How to Select a Project Delivery Method for School Facilities
ERIC Educational Resources Information Center
Kalina, David
2007-01-01
In this article, the author discusses and explains three project delivery methods that are commonly used today in the United States. The first project delivery method mentioned is the design-bid-build, which is still the predominant method of project delivery for public works and school construction in the United States. The second is the…
Kumar, K Vasanth
2006-10-11
Batch kinetic experiments were carried out for the sorption of methylene blue onto activated carbon. The experimental kinetics were fitted to the pseudo-first-order and pseudo-second-order kinetics by a linear and a non-linear method. The five different types of Ho pseudo-second-order expression are discussed. A comparison of the linear least-squares method and a trial-and-error non-linear method of estimating the pseudo-second-order rate kinetic parameters was made. The sorption process was found to follow both a pseudo-first-order and a pseudo-second-order kinetic model. The present investigation showed that it is inappropriate to use the type 1 pseudo-second-order expression proposed by Ho and the expression proposed by Blanchard et al. for predicting the kinetic rate constants and the initial sorption rate for the studied system. Three possible alternative linear expressions (type 2 to type 4) to better predict the initial sorption rate and kinetic rate constants for the studied system (methylene blue/activated carbon) were proposed. The linear method was found to check only the hypothesis instead of verifying the kinetic model. The non-linear regression method was found to be the more appropriate method to determine the rate kinetic parameters.
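For reference, the four commonly quoted linearizations of the pseudo-second-order model q_t = k q_e^2 t / (1 + k q_e t) discussed in this line of work are usually written as follows; these are the standard textbook forms and may differ in labelling from the types used in the paper:

\[
\text{Type 1: } \frac{t}{q_t} = \frac{1}{k q_e^{2}} + \frac{t}{q_e},\qquad
\text{Type 2: } \frac{1}{q_t} = \frac{1}{k q_e^{2}}\,\frac{1}{t} + \frac{1}{q_e},
\]
\[
\text{Type 3: } q_t = q_e - \frac{1}{k q_e}\,\frac{q_t}{t},\qquad
\text{Type 4: } \frac{q_t}{t} = k q_e^{2} - k q_e\, q_t .
\]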
Abel's Theorem Simplifies Reduction of Order
ERIC Educational Resources Information Center
Green, William R.
2011-01-01
We give an alternative to the standard method of reduction of order, in which one uses one solution of a homogeneous, linear, second-order differential equation to find a second, linearly independent solution. Our method, based on Abel's Theorem, is shorter, less complex and extends to higher-order equations.
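The idea can be summarized as follows: for y'' + p(x) y' + q(x) y = 0 with one known solution y_1, Abel's Theorem gives the Wronskian directly, and the second solution follows by a single quadrature (a standard statement consistent with the article's description):

\[
W(x) = y_1 y_2' - y_1' y_2 = C\,e^{-\int p(x)\,dx},
\qquad
y_2(x) = y_1(x)\int \frac{C\,e^{-\int p(x)\,dx}}{y_1(x)^{2}}\,dx .
\]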
Stirling engine design manual, 2nd edition
NASA Technical Reports Server (NTRS)
Martini, W. R.
1983-01-01
This manual is intended to serve as an introduction to Stirling cycle heat engines, as a key to the available literature on Stirling engines and to identify nonproprietary Stirling engine design methodologies. Two different fully described Stirling engines are discussed. Engine design methods are categorized as first order, second order, and third order with increased order number indicating increased complexity. FORTRAN programs are listed for both an isothermal second order design program and an adiabatic second order design program. Third order methods are explained and enumerated. In this second edition of the manual the references are updated. A revised personal and corporate author index is given and an expanded directory lists over 80 individuals and companies active in Stirling engines.
Control method of Three-phase Four-leg converter based on repetitive control
NASA Astrophysics Data System (ADS)
Hui, Wang
2018-03-01
The research takes the magnetic levitation wind power generation system as its object. In order to improve the power quality problems caused by unbalanced loads in the power supply system, we combine the characteristics of the magnetic levitation wind power generation system with the repetitive control principle, and an independent control strategy for the three-phase four-leg converter is proposed. In this paper, based on the symmetric component method, a second-order generalized integrator is used to generate the positive- and negative-sequence signals, and decoupling control is carried out in the synchronous rotating reference frame, in which the positive- and negative-sequence voltages are regulated by double closed PI loops, and a PI regulator combined with repetitive control is introduced to eliminate the steady-state error associated with the fundamental-frequency fluctuation of the zero-sequence component. The simulation results based on Matlab/Simulink show that the proposed control scheme can effectively suppress the disturbance caused by unbalanced loads and maintain the load voltage balance. The scheme is easy to implement and remarkably improves the quality of the independent power supply system.
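For context, the second-order generalized integrator mentioned above is commonly characterized by the following in-phase and quadrature transfer functions, from which positive- and negative-sequence components are reconstructed; these are the standard SOGI quadrature-signal-generator expressions and may differ in gain convention from the implementation in the paper:

\[
D(s) = \frac{v'(s)}{v(s)} = \frac{k\,\omega_0\, s}{s^{2} + k\,\omega_0\, s + \omega_0^{2}},
\qquad
Q(s) = \frac{q v'(s)}{v(s)} = \frac{k\,\omega_0^{2}}{s^{2} + k\,\omega_0\, s + \omega_0^{2}} .
\]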
Recent activities within the Aeroservoelasticity Branch at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Noll, Thomas E.; Perry, Boyd, III; Gilbert, Michael G.
1989-01-01
The objective of research in aeroservoelasticity at the NASA Langley Research Center is to enhance the modeling, analysis, and multidisciplinary design methodologies for obtaining multifunction digital control systems for application to flexible flight vehicles. Recent accomplishments are discussed, and a status report on current activities within the Aeroservoelasticity Branch is presented. In the area of modeling, improvements to the Minimum-State Method of approximating unsteady aerodynamics are shown to provide precise, low-order aeroservoelastic models for design and simulation activities. Analytical methods based on Matched Filter Theory and Random Process Theory to provide efficient and direct predictions of the critical gust profile and the time-correlated gust loads for linear structural design considerations are also discussed. Two research projects leading towards improved design methodology are summarized. The first program is developing an integrated structure/control design capability based on hierarchical problem decomposition, multilevel optimization and analytical sensitivities. The second program provides procedures for obtaining low-order, robust digital control laws for aeroelastic applications. In terms of methodology validation and application the current activities associated with the Active Flexible Wing project are reviewed.
NASA Astrophysics Data System (ADS)
Moschini, Elena
Academics are beginning to explore the educational potential of Second Life™ (SL) by setting up inworld educational activities and projects. Given the relative novelty of the use of virtual world environments in higher education, many such projects are still at pilot stage. However the initial pilot and experimentation stage will have to be followed by a rigorous evaluation process, as for more traditional teaching projects. The chapter addresses issues about SL research tools and research methods. It introduces a "researcher toolkit" that includes: the various stages in the evaluation of SL educational projects and the theoretical framework that can inform such projects; an outline of the inworld tools that can be utilised or customised for academic research purposes; a review of methods for collecting feedback from participants and of the main ethical issues involved in researching virtual world environments; a discussion on the technical skills required to operate a research project in SL. The chapter also offers an indication of the inworld opportunities for the dissemination of SL research findings.
NASA Astrophysics Data System (ADS)
Ghale, Purnima; Johnson, Harley T.
2018-06-01
We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
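A dense, minimal sketch of the SP2 purification iteration that the paper combines with the Chebyshev-Jackson estimate is shown below; it uses explicit matrix-matrix products for clarity, whereas the paper's point is precisely to replicate this behaviour with sparse matrix-vector products only. Matrix sizes, spectral bounds and the occupation number are illustrative assumptions.

    import numpy as np

    def sp2_density_matrix(H, n_occ, n_iter=60):
        """Second-order spectral projection (SP2) purification (dense sketch).

        H:     symmetric Hamiltonian matrix (n x n).
        n_occ: number of occupied states, i.e. the target trace of P.
        """
        n = H.shape[0]
        eps_min, eps_max = np.linalg.eigvalsh(H)[[0, -1]]   # spectral bounds
        # Initial guess with eigenvalues mapped (and reversed) into [0, 1].
        X = (eps_max * np.eye(n) - H) / (eps_max - eps_min)
        for _ in range(n_iter):
            X2 = X @ X
            # Choose the branch that drives trace(X) toward n_occ.
            if abs(np.trace(X2) - n_occ) <= abs(np.trace(2 * X - X2) - n_occ):
                X = X2              # trace-decreasing branch
            else:
                X = 2 * X - X2      # trace-increasing branch
        return X                    # nearly idempotent, trace close to n_occ

    # Illustrative use on a random symmetric "Hamiltonian".
    rng = np.random.default_rng(3)
    A = rng.standard_normal((50, 50))
    P = sp2_density_matrix((A + A.T) / 2, n_occ=20)
    print(np.trace(P), np.linalg.norm(P @ P - P))   # trace ~ 20, small idempotency error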
NASA Technical Reports Server (NTRS)
Chamberlain, Robert G.; Duquette, William H.; Provenzano, Joseph P.; Brunzie, Theodore J.; Jordan, Benjamin
2011-01-01
The Athena simulation software supports an analyst from DoD or other federal agency in making stability and reconstruction projections for operational analyses in areas like Iraq or Afghanistan. It encompasses the use of all elements of national power: diplomatic, information, military, and economic (DIME), and anticipates their effects on political, military, economic, social, information, and infrastructure (PMESII) variables in real-world battle space environments. Athena is a stand-alone model that provides analysts with insights into the effectiveness of complex operations by anticipating second-, third-, and higher-order effects. For example, the first-order effect of executing a curfew may be to reduce insurgent activity, but it may also reduce consumer spending and keep workers home as second-order effects. Reduced spending and reduced labor may reduce the gross domestic product (GDP) as a third-order effect. Damage to the economy will have further consequences. The Athena approach has also been considered for application in studies related to climate change and the smart grid. It can be applied to any project where the impacts on the population and their perceptions are important, and where population perception is important to the success of the project.
NASA Astrophysics Data System (ADS)
Wallhead, Ian; Ocaña, Roberto
2014-05-01
Laser projection devices should be designed to maximize their luminous efficacy and color gamut. This is for two main reasons. Firstly, being either stand alone devices or embedded in other products, they could be powered by battery, and lifetime is an important factor. Secondly, the increasing use of lasers to project images calls for a consideration of eye safety issues. The brightness of the projected image may be limited by the Class II accessible emission limit. There is reason to believe that current laser beam scanning projection technology is already close to the power ceiling based on eye safety limits. Consequently, it would be desirable to improve luminous efficacy to increase the output luminous flux whilst maintaining or improving color gamut for the same eye-safe optical power limit. Here we present a novel study about the combination of four laser wavelengths in order to maximize both color gamut and efficacy to produce the color white. Firstly, an analytic method to calculate efficacy as function of both four laser wavelengths and four laser powers is derived. Secondly we provide a new way to present the results by providing the diagram efficacy vs color gamut area that summarizes the performance of any wavelength combination for projection purposes. The results indicate that the maximal efficacy for the D65 white is only achievable by using a suitable combination of both laser power ratios and wavelengths.
NASA Astrophysics Data System (ADS)
Gallinato, Olivier; Poignard, Clair
2017-06-01
In this paper, we present a superconvergent second order Cartesian method to solve a free boundary problem with two harmonic phases coupled through the moving interface. The model recently proposed by the authors and colleagues describes the formation of cell protrusions. The moving interface is described by a level set function and is advected at the velocity given by the gradient of the inner phase. The finite differences method proposed in this paper consists of a new stabilized ghost fluid method and second order discretizations for the Laplace operator with the boundary conditions (Dirichlet, Neumann or Robin conditions). Interestingly, the method to solve the harmonic subproblems is superconvergent on two levels, in the sense that the first and second order derivatives of the numerical solutions are obtained with the second order of accuracy, similarly to the solution itself. We exhibit numerical criteria on the data accuracy to get such properties and numerical simulations corroborate these criteria. In addition to these properties, we propose an appropriate extension of the velocity of the level-set to avoid any loss of consistency, and to obtain the second order of accuracy of the complete free boundary problem. Interestingly, we highlight the transmission of the superconvergent properties for the static subproblems and their preservation by the dynamical scheme. Our method is also well suited for quasistatic Hele-Shaw-like or Muskat-like problems.
NASA Astrophysics Data System (ADS)
Zhao, J. M.; Tan, J. Y.; Liu, L. H.
2013-01-01
A new second order form of radiative transfer equation (named MSORTE) is proposed, which overcomes the singularity problem of a previously proposed second order radiative transfer equation [J.E. Morel, B.T. Adams, T. Noh, J.M. McGhee, T.M. Evans, T.J. Urbatsch, Spatial discretizations for self-adjoint forms of the radiative transfer equations, J. Comput. Phys. 214 (1) (2006) 12-40 (where it was termed SAAI), J.M. Zhao, L.H. Liu, Second order radiative transfer equation and its properties of numerical solution using finite element method, Numer. Heat Transfer B 51 (2007) 391-409] in dealing with inhomogeneous media where some locations have very small/zero extinction coefficient. The MSORTE contains a naturally introduced diffusion (or second order) term which provides better numerical property than the classic first order radiative transfer equation (RTE). The stability and convergence characteristics of the MSORTE discretized by central difference scheme is analyzed theoretically, and the better numerical stability of the second order form radiative transfer equations than the RTE when discretized by the central difference type method is proved. A collocation meshless method is developed based on the MSORTE to solve radiative transfer in inhomogeneous media. Several critical test cases are taken to verify the performance of the presented method. The collocation meshless method based on the MSORTE is demonstrated to be capable of stably and accurately solve radiative transfer in strongly inhomogeneous media, media with void region and even with discontinuous extinction coefficient.
Seward Park High School. Project Superemos, 1981-1982. O.E.E. Evaluation Report.
ERIC Educational Resources Information Center
Torres, Judith A.; And Others
Project Superemos, conducted at Seward Park High School in New York City, was implemented in order to supplement the school's instructional services in English as a Second Language, native language arts, and bilingual instruction. The project provided supportive services necessary for mainstreaming into the regular school curriculum approximately…
ERIC Educational Resources Information Center
Güven Yildirim, Ezgi; Köklükaya, Ayse Nesibe
2018-01-01
The purposes of this study were first to investigate the effects of the project-based learning (PBL) method and project exhibition event on the success of physics teacher candidates, and second, to reveal the experiment group students' views toward this learning method and project exhibition. The research model called explanatory mixed method, in…
NASA Astrophysics Data System (ADS)
Jie, Cao; Zhi-Hai, Wu; Li, Peng
2016-05-01
This paper investigates the consensus tracking problems of second-order multi-agent systems with a virtual leader via event-triggered control. A novel distributed event-triggered transmission scheme is proposed, which is intermittently examined at constant sampling instants. Only partial neighbor information and local measurements are required for event detection. Then the corresponding event-triggered consensus tracking protocol is presented to guarantee second-order multi-agent systems to achieve consensus tracking. Numerical simulations are given to illustrate the effectiveness of the proposed strategy. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203147, 61374047, and 61403168).
Chen, Zhenhua; Hoffmann, Mark R
2012-07-07
A unitary wave operator, exp (G), G† = -G, is considered to transform a multiconfigurational reference wave function Φ to the potentially exact, within basis set limit, wave function Ψ = exp (G)Φ. To obtain a useful approximation, the Hausdorff expansion of the similarity transformed effective Hamiltonian, exp (-G)Hexp (G), is truncated at second order and the excitation manifold is limited; an additional separate perturbation approximation can also be made. In the perturbation approximation, which we refer to as multireference unitary second-order perturbation theory (MRUPT2), the Hamiltonian operator in the highest order commutator is approximated by a Møller-Plesset-type one-body zero-order Hamiltonian. If a complete active space self-consistent field wave function is used as reference, then the energy is invariant under orbital rotations within the inactive, active, and virtual orbital subspaces for both the second-order unitary coupled cluster method and its perturbative approximation. Furthermore, the redundancies of the excitation operators are addressed in a novel way, which is potentially more efficient compared to the usual full diagonalization of the metric of the excited configurations. Despite the loss of rigorous size-extensivity possibly due to the use of a variational approach rather than a projective one in the solution of the amplitudes, test calculations show that the size-extensivity errors are very small. Compared to other internally contracted multireference perturbation theories, MRUPT2 only needs reduced density matrices up to three-body even with a non-complete active space reference wave function when two-body excitations within the active orbital subspace are involved in the wave operator, exp (G). Both the coupled cluster and perturbation theory variants are amenable to large, incomplete model spaces. Applications to some widely studied model systems that can be problematic because of geometry dependent quasidegeneracy, H4, P4, and BeH(2), are performed in order to test the new methods on problems where full configuration interaction results are available.
NASA Astrophysics Data System (ADS)
McClure, C.; Jaffe, D. A.; Edgerton, E.; Jansen, J. J.
2013-12-01
During the summer of 2013, we initiated a project to examine the performance of Tekran measurements of Gaseous Oxidized Mercury (GOM) with a pyrolysis method at the North Birmingham SEARCH site. Measurements started in June 2013 and will run until September 2013. This project responds to recent studies that indicate problems with the KCl denuder method for collection of GOM (e.g. Lyman et al., 2010; Gustin et al., 2013; Ambrose et al., 2013). For this project, we compared two GOM measurement systems, one using the KCl denuder method and a second method using high temperature pyrolysis of Hg compounds and detection of the resulting Hg0 vapors. Both instruments were also calibrated using an HgBr2 source to understand the recovery of one possible atmospheric GOM constituent. Both instruments sampled from a common, heated manifold. Past work has shown that in order to fully transmit HgBr2 sample lines must be made from PFA lines and heated to 100 °C. The transmission rate of HgBr2 during this project is approximately 90% over 25 feet of sample tubing at this temperature. Very preliminary results from this study have found that the transmitted HgBr2 is captured with 95% efficiency in carbon-scrubbed ambient air for both the KCl denuder and the pyrolysis method. However, the denuder method appears to be significantly less efficient in the capture of GOM when sampling unaltered ambient air versus the pyrolysis validation of total Hg0. Therefore, calibration of GOM measurements is essential in order to accurately correct for fluctuations in the GOM capture efficiency. We have also found that calibrations for GOM can be done routinely in the field and that these are essential to fully understand the GOM measurements. At present our calibration system is performed manually, but in principle this method could be readily automated.
NASA Astrophysics Data System (ADS)
Man, Yiu-Kwong
2010-10-01
In this communication, we present a method for computing the Liouvillian solution of second-order linear differential equations via algebraic invariant curves. The main idea is to integrate Kovacic's results on second-order linear differential equations with the Prelle-Singer method for computing first integrals of differential equations. Some examples on using this approach are provided.
75 FR 51988 - Bison Pipeline LLC; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-24
... (Certificate) in order to construct the Project in two phases; first to meet the service requirements of the... rates for transportation service approved in the Order. During the first phase, Bison would construct... and related appurtenances as authorized in the Order (Phase 1). During the second phase, Bison plans...
A stable second order method for training back propagation networks
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1993-01-01
A simple method for improving the learning rate of the back-propagation algorithm is described. The basis of the method is that approximate second order corrections can be incorporated in the output units. The extended method leads to significant improvements in the convergence rate.
ERIC Educational Resources Information Center
Colomar, M. Pilar Alberola; Guzman, Eva Gil
2009-01-01
We are presenting a methodological approach that aims to increase students' motivation by asking them to develop tasks based on professional settings. In order to meet this objective a collaborative methodology was designed and applied to two multidisciplinary projects: MARKETOUR and ICT-SUSTOUR. Both projects made students face real workplace…
Alpha Project. Townsight Canada. Project Canada West.
ERIC Educational Resources Information Center
Western Curriculum Project on Canada Studies, Edmonton (Alberta).
In order to acquaint students with other environments and to develop an awareness of their own community, the study of a small community in Canada was undertaken by this project development team. The Alpha students studied Chilliwack the first year (ED 066 352) and this second report covers their study of Powell River. The aim of the developers is…
ERIC Educational Resources Information Center
Miller, Scott A.
2013-01-01
This research examined children's performance on second-order false belief tasks as a function of the content area for the belief and the method of assessing understanding. A total of 70 kindergarten and first-grade children responded to four second-order stories. On two stories, the task was to judge a belief about a belief, and on two, the…
Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
Precision calculations of the cosmic shear power spectrum projection
NASA Astrophysics Data System (ADS)
Kilbinger, Martin; Heymans, Catherine; Asgari, Marika; Joudaki, Shahab; Schneider, Peter; Simon, Patrick; Van Waerbeke, Ludovic; Harnois-Déraps, Joachim; Hildebrandt, Hendrik; Köhlinger, Fabian; Kuijken, Konrad; Viola, Massimo
2017-12-01
We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak-lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Telescope Lensing Survey. We find that the reported tension with Planck cosmic microwave background temperature anisotropy results cannot be alleviated. For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to be in agreement with the full projection at the sub-percent level for ℓ > 3, with the corresponding errors an order of magnitude below cosmic variance for all ℓ. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection are publicly available within the package NICAEA at http://www.cosmostat.org/software/nicaea.
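For orientation, the widely used first-order (extended) Limber approximation for the convergence power spectrum referred to above takes the form below, with q the lensing efficiency; the exact prefactors and the flat-sky and second-order correction terms compared in the paper are not reproduced here:

\[
C_\ell^{\kappa\kappa} \;\approx\; \int_0^{\chi_{\rm lim}}
\frac{q^{2}(\chi)}{\chi^{2}}\,
P_\delta\!\left(k=\frac{\ell+1/2}{\chi},\,\chi\right)\! d\chi ,
\qquad
q(\chi) = \frac{3}{2}\,\Omega_{\rm m}\,\frac{H_0^{2}}{c^{2}}\,
\frac{\chi}{a(\chi)}\int_\chi^{\chi_{\rm lim}} n(\chi')\,\frac{\chi'-\chi}{\chi'}\,d\chi' .
\]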
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
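As background, the classical construction whose possible singularity motivates this work projects the negative gradient onto the tangent space of the constraint manifold h(x) = 0; when the constraint Jacobian J loses row rank, the inverse below fails, which is the degeneracy the proposed projection matrix is designed to avoid:

\[
P(x) = I - J(x)^{\mathsf T}\bigl(J(x)J(x)^{\mathsf T}\bigr)^{-1} J(x),
\qquad
\dot{x} = -P(x)\,\nabla f(x),
\qquad
J(x) = \frac{\partial h}{\partial x}(x) .
\]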
Numerical study of the radiometric phenomenon exhibited by a rotating Crookes radiometer
NASA Astrophysics Data System (ADS)
Anikin, Yu. A.
2011-11-01
The two-dimensional rarefied gas flow around a rotating Crookes radiometer and the arising radiometric forces are studied by numerically solving the Boltzmann kinetic equation. The computations are performed in a noninertial frame of reference rotating together with the radiometer. The collision integral is directly evaluated using a projection method, while second- and third-order accurate TVD schemes are used to solve the advection equation and the equation for inertia-induced transport in the velocity space, respectively. The radiometric forces are found as functions of the rotation frequency.
A parametric model order reduction technique for poroelastic finite element models.
Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico
2017-10-01
This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
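The flavour of the frequency-domain reduction can be sketched as follows: full-order frequency responses are collected as snapshots, a global basis is extracted with the proper orthogonal decomposition (here via an SVD), and the system is then Galerkin-projected onto that basis. The second-order system matrices and training frequencies below are illustrative assumptions and do not represent the Biot-Allard finite element blocks of the paper.

    import numpy as np

    def pod_reduce(M, C, K, F, snapshot_freqs, tol=1e-8):
        """Build a reduced model of (K + i*w*C - w^2*M) u = F via POD."""
        # 1) Snapshots: full-order solutions at a few training frequencies.
        snaps = np.column_stack([
            np.linalg.solve(K + 1j * w * C - w**2 * M, F) for w in snapshot_freqs
        ])
        # 2) Global reduced-order basis from the left singular vectors.
        U, s, _ = np.linalg.svd(snaps, full_matrices=False)
        V = U[:, s / s[0] > tol]
        # 3) Galerkin projection of the frequency-dependent operator blocks.
        Mr, Cr, Kr = (V.conj().T @ A @ V for A in (M, C, K))
        Fr = V.conj().T @ F
        return V, Mr, Cr, Kr, Fr

    def reduced_response(V, Mr, Cr, Kr, Fr, w):
        """Evaluate the reduced model at a new frequency and lift to full size."""
        return V @ np.linalg.solve(Kr + 1j * w * Cr - w**2 * Mr, Fr)

    # Illustrative use with small synthetic matrices.
    rng = np.random.default_rng(4)
    n = 200
    K = np.diag(rng.uniform(1.0, 2.0, n)); M = np.eye(n); C = 0.01 * K
    F = rng.standard_normal(n)
    V, Mr, Cr, Kr, Fr = pod_reduce(M, C, K, F, snapshot_freqs=np.linspace(0.5, 1.5, 8))
    u_rom = reduced_response(V, Mr, Cr, Kr, Fr, w=1.0)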
NASA Astrophysics Data System (ADS)
Busto, S.; Ferrín, J. L.; Toro, E. F.; Vázquez-Cendón, M. E.
2018-01-01
In this paper the projection hybrid FV/FE method presented in [1] is extended to account for species transport equations. Furthermore, turbulent regimes are also considered thanks to the k-ε model. Regarding the transport diffusion stage new schemes of high order of accuracy are developed. The CVC Kolgan-type scheme and ADER methodology are extended to 3D. The latter is modified in order to profit from the dual mesh employed by the projection algorithm and the derivatives involved in the diffusion term are discretized using a Galerkin approach. The accuracy and stability analysis of the new method are carried out for the advection-diffusion-reaction equation. Within the projection stage the pressure correction is computed by a piecewise linear finite element method. Numerical results are presented, aimed at verifying the formal order of accuracy of the scheme and to assess the performance of the method on several realistic test problems.
Enhancement of observability and protection of smart power system
NASA Astrophysics Data System (ADS)
Siddique, Abdul Hasib
It is important for a modern power grid to be smarter in order to provide a reliable and sustainable supply of electricity. The traditional way of receiving data from a wired system is a very old and outdated technology. For a quicker and better response from the electric system, it is important to look at wireless systems as a feasible option. In order to enhance observability and protection it is important to integrate wireless technology with the modern power system. In this thesis, a wireless network based architecture for wide area monitoring and an alternate method for performing current measurement for protection of generators and motors have been adopted. There are basically two parts to this project. The first part deals with wide area monitoring of the power system and the second part focuses on the application of wireless technology from the protection point of view. A number of wireless methods have been adopted in both parts; these include Zigbee, analog transmission (both AM and FM) and digital transmission. The main aim of our project was to propose a cost-effective wide area monitoring and protection method which will enhance the observability and stability of the power grid. A new concept of wireless integration in the power protection system has been implemented in this thesis work.
Direct discriminant locality preserving projection with Hammerstein polynomial expansion.
Chen, Xi; Zhang, Jiashu; Li, Defang
2012-12-01
Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers the following problems: 1) larger computational burden; 2) no explicit mapping functions in KDLPP, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) KDLPP cannot obtain optimal discriminant vectors, which exceedingly optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inverse, which extracts the optimal discriminant vectors for DLPP without larger computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.
A preliminary compressible second-order closure model for high speed flows
NASA Technical Reports Server (NTRS)
Speziale, Charles G.; Sarkar, Sutanu
1989-01-01
A preliminary version of a compressible second-order closure model that was developed in connection with the National Aero-Space Plane Project is presented. The model requires the solution of transport equations for the Favre-averaged Reynolds stress tensor and dissipation rate. Gradient transport hypotheses are used for the Reynolds heat flux, mass flux, and turbulent diffusion terms. Some brief remarks are made about the direction of future research to generalize the model.
Incompressible flow simulations on regularized moving meshfree grids
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2017-11-01
A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
NASA Technical Reports Server (NTRS)
Mickens, Ronald E.
1987-01-01
It is shown that a discrete multi-time method can be constructed to obtain approximations to the periodic solutions of a special class of second-order nonlinear difference equations containing a small parameter. Three examples illustrating the method are presented.
Resource-Constrained Project Scheduling Under Uncertainty: Models, Algorithms and Applications
2014-11-10
Associated publications: "Make-to-Order (MTO) Production Planning using Bayesian Updating," International Journal of Production Economics (04 2014), Norman Keith Womer, Haitao…; "Made-to-Order Production Scheduling using Bayesian Updating," working paper, under second-round review in the International Journal of Production Economics.
Second order upwind Lagrangian particle method for Euler equations
Samulyak, Roman; Chen, Hsin-Chiang; Yu, Kwangmin
2016-06-01
A new second order upwind Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface / multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) an upwind second-order particle-based algorithm with limiter, providing accuracy and long-term stability, and (c) accurate resolution of states at free interfaces. In conclusion, numerical verification tests demonstrating the convergence order for fixed domain and free surface problems are presented.
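As a concrete illustration of the weighted-least-squares derivative approximation named in the abstract, the sketch below fits a local linear polynomial to scattered neighbour values and reads off the gradient. It is a toy example under assumed choices (Gaussian weights, linear basis, made-up sample points), not the authors' algorithm.

```python
# Toy sketch (not the authors' code): gradient estimation at a particle from
# scattered neighbour values by a weighted least-squares linear fit.
import numpy as np

def wls_gradient(xc, yc, xs, ys, fs, h):
    """Fit f ~ c0 + c1*(x-xc) + c2*(y-yc) by weighted least squares; return (c1, c2)."""
    dx, dy = xs - xc, ys - yc
    w = np.sqrt(np.exp(-(dx**2 + dy**2) / h**2))     # square roots of Gaussian weights
    A = np.column_stack([np.ones_like(dx), dx, dy])
    coef, *_ = np.linalg.lstsq(A * w[:, None], fs * w, rcond=None)
    return coef[1], coef[2]

rng = np.random.default_rng(0)
xs = rng.uniform(-0.1, 0.1, 50)
ys = rng.uniform(-0.1, 0.1, 50)
fs = np.sin(xs) * np.cos(ys)            # exact gradient at the origin is (1, 0)
print(wls_gradient(0.0, 0.0, xs, ys, fs, h=0.1))
```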
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stinis, Panagiotis
We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability as expected.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod
2010-04-01
For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a high-order instantaneous moments based method to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piece-wise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.
PROJECTING THE BIOLOGICAL CONDITION OF STREAMS UNDER ALTERNATIVE SCENARIOS OF HUMAN LAND USE
We present empirical models for estimating the status of fish and aquatic invertebrate communities in all second to fourth-order streams (1:100,000 scale; total stream length = 6476 km) throughout the Willamette River Basin, Oregon. The models project fish and invertebrate status...
An Integrated Laboratory Project in NMR Spectroscopy.
ERIC Educational Resources Information Center
Hudson, Reggie L.; Pendley, Bradford D.
1988-01-01
Describes an advanced NMR project that can be done with a 60-MHz continuous-wave proton spectrometer. Points out the main purposes are to give students experience in second-order NMR analysis, the simplification of spectra by raising the frequency, and the effect of non-hydrogen nuclei on proton resonances. (MVL)
Research on early-warning index of the spatial temperature field in concrete dams.
Yang, Guang; Gu, Chongshi; Bao, Tengfei; Cui, Zhenming; Kan, Kan
2016-01-01
Warning indicators for the dam body's temperature are required for the real-time monitoring of the service conditions of concrete dams to ensure safety and normal operations. Traditional warning theories target a single point, which has limitations, and scientific warning theories for the global behavior of the temperature field are lacking. In this paper, first, because the behavior of the temperature field exhibits regional dissimilarity in 3D space, the temperature field was divided into regions through the Ward spatial clustering method. Second, the degree of order and degree of disorder of the temperature monitoring points were defined by a probability method. Third, the weight values of the monitoring points of each region were determined via projection pursuit. Fourth, a temperature entropy expression that can describe the degree of order of the spatial temperature field in concrete dams was established. Fifth, the early-warning index of temperature entropy was set up according to the calculated sequence of temperature entropy values. Finally, project cases verified the feasibility of the proposed theories. The early-warning index of temperature entropy is conducive to improving early-warning ability and safety management levels during the operation of high concrete dams.
ERIC Educational Resources Information Center
Salmona Madriñan, Mara
2014-01-01
This action research project was carried out in order to identify the role of first language in the second-language classroom. This study was conducted in a Colombian international school with an English immersion program for kindergarten students attending their first year of school. The purpose of this study was to identify if the use of the…
Method of surface error visualization using laser 3D projection technology
NASA Astrophysics Data System (ADS)
Guo, Lili; Li, Lijuan; Lin, Xuezhu
2017-10-01
In the manufacture of large components in the aerospace, automobile, and shipbuilding industries, important molds or stamped metal plates require precisely formed surfaces, which usually need to be verified and, if necessary, corrected and reprocessed. To make correction of the machined surface more convenient, this paper proposes a method based on a laser 3D projection system. The method borrows the contour representation used in terrain maps to display, directly on the measured surface, the deviation between the actually measured data and the theoretical mathematical model (CAD). First, the machined surface is measured to obtain point cloud data and form a triangular mesh. Second, through a coordinate transformation, the point cloud data are aligned to the theoretical model and the three-dimensional deviation is calculated; according to the sign (positive or negative) and magnitude of the deviation, colored deviation bands denote the three-dimensional deviation. Then, three-dimensional contour lines are drawn to represent each deviation band, creating the projection files. Finally, the projection files are imported into the laser projector and the contour lines are projected onto the machined surface at 1:1 scale in the form of a laser beam; by comparing the full-color 3D deviation map with the projected pattern, the deviations can be located and corrected quantitatively to meet the processing precision requirements. The method clearly displays the trend of the machined surface deviation.
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri
2018-04-01
In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second-order of accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second order of accuracy on smooth flows, maintaining an essentially non-oscillatory behavior on discontinuous profiles, general robustness ensuring physical admissibility of the numerical solution, and precision where appropriate.
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas
2017-01-01
The divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation (DEC-RI-MP2) theory method introduced in Baudin et al. [J. Chem. Phys. 144, 054102 (2016)] is significantly improved by introducing the Laplace transform of the orbital energy denominator in order to construct the doubles amplitudes directly in the local basis. Furthermore, this paper introduces the auxiliary reduction procedure, which reduces the set of auxiliary functions employed in the individual fragments. The resulting Laplace-transformed divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation method is applied to the insulin molecule, where we obtain a speedup by a factor of 9.5 compared to the DEC-RI-MP2 method.
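For context, the Laplace-transform trick referred to above rests on the elementary identity for a positive orbital-energy denominator, which a short numerical quadrature turns into a sum of separable exponentials; the working equations of the paper are of course more involved than this schematic form:

$$\frac{1}{\varepsilon_a+\varepsilon_b-\varepsilon_i-\varepsilon_j}
=\int_0^\infty e^{-(\varepsilon_a+\varepsilon_b-\varepsilon_i-\varepsilon_j)\,t}\,dt
\;\approx\;\sum_{q=1}^{n_q} w_q\, e^{-(\varepsilon_a+\varepsilon_b-\varepsilon_i-\varepsilon_j)\,t_q}.$$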
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III
2004-01-01
This final report documents the accomplishments of the work of this project. 1) The incremental-iterative (II) form of the reverse-mode (adjoint) method for computing first-order (FO) aerodynamic sensitivity derivatives (SDs) has been successfully implemented and tested in a 2D CFD code (called ANSERS) using the reverse-mode capability of ADIFOR 3.0. These results compared very well with similar SDs computed via a black-box (BB) application of the reverse-mode capability of ADIFOR 3.0, and also with similar SDs calculated via the method of finite differences. 2) Second-order (SO) SDs have been implemented in the 2D ANSERS code using the very efficient strategy that was originally proposed (but not previously tested) in Reference 3, Appendix A. Furthermore, these SO SDs have been validated for accuracy and computational efficiency. 3) Studies were conducted in quasi-1D and 2D concerning the smoothness (or lack of smoothness) of the FO and SO SDs for flows with shock waves. The phenomenon is documented in the publications of this study (listed subsequently); however, the specific numerical mechanism responsible for this lack of smoothness was not discovered. 4) The FO and SO derivatives for quasi-1D and 2D flows were applied to predict aerodynamic design uncertainties and were also applied in robust design optimization studies.
Second-order small disturbance theory for hypersonic flow over power-law bodies. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Townsend, J. C.
1974-01-01
A mathematical method for determining the flow field about power-law bodies in hypersonic flow conditions is developed. The second-order solutions, which reflect the effects of the second-order terms in the equations, are obtained by applying the method of small perturbations in terms of body slenderness parameter to the zeroth-order solutions. The method is applied by writing each flow variable as the sum of a zeroth-order and a perturbation function, each multiplied by the axial variable raised to a power. The similarity solutions are developed for infinite Mach number. All results obtained are for no flow through the body surface (as a boundary condition), but the derivation indicates that small amounts of blowing or suction through the wall can be accommodated.
2006-10-08
FINAL REPORT to the Air Force Office of Scientific Research (AFOSR). Project title: Influence of Surface Roughness on the Second Order Transport of… A large amount of research has been performed to quantify the effects of Mach number, roughness, and wall curvature on turbulent boundary layers. However… [Figure 3: (a) A. D. Smith high-pressure storage tank; (b) Morin B series actuator controlling Virgo Engineers trunnion-mounted ball valve; (c) …]
Second-order singular perturbative theory for gravitational lenses
NASA Astrophysics Data System (ADS)
Alard, C.
2018-03-01
The extension of the singular perturbative approach to the second order is presented in this paper. The general expansion to the second order is derived. The second-order expansion is considered as a small correction to the first-order expansion. Using this approach, it is demonstrated that in practice the second-order expansion is reducible to a first-order expansion via a redefinition of the first-order perturbative fields. Even if in usual applications the second-order correction is small, the reducibility of the second-order expansion to the first-order expansion indicates a potential degeneracy issue. In general, this degeneracy is hard to break. A useful and simple second-order approximation is the thin source approximation, which offers a direct estimation of the correction. The practical application of the corrections derived in this paper is illustrated by using an elliptical NFW lens model. The second-order perturbative expansion provides a noticeable improvement, even for the simplest case of the thin source approximation. To conclude, it is clear that for accurate modelling of gravitational lenses using the perturbative method the second-order perturbative expansion should be considered. In particular, an evaluation of the degeneracy due to the second-order term should be performed, for which the thin source approximation is particularly useful.
Adaptive interference cancel filter for evoked potential using high-order cumulants.
Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei
2004-01-01
This paper presents evoked potential (EP) processing using an adaptive interference cancel (AIC) filter with second- and higher-order cumulants. In the conventional ensemble averaging method, experiments have to be conducted repetitively to record the required data. Recently, the use of the AIC structure with second-order statistics in processing EPs has proved more efficient than the traditional averaging method, but it is sensitive to both the reference signal statistics and the choice of step size. Thus, we propose a higher-order-statistics-based AIC method to improve on these disadvantages. This study was conducted on somatosensory EPs corrupted with EEG. A gradient-type algorithm is used in the AIC method. Comparisons of the AIC filter based on second-, third-, and fourth-order statistics are also presented in this paper. We observed that the AIC filter with third-order statistics has better convergence performance for EP processing and is not sensitive to the selection of the step size and the reference input.
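For orientation, the sketch below shows a conventional gradient-type adaptive interference canceller built on second-order statistics only (ordinary LMS), i.e. the baseline the abstract compares against; the cumulant-based updates proposed in the paper are not reproduced here, and the signal names and parameters are placeholders.

```python
# Baseline sketch: second-order-statistics (LMS) adaptive interference canceller.
import numpy as np

def lms_canceller(primary, reference, n_taps=8, mu=0.01):
    """primary = evoked potential + interference; reference = correlated interference.
    Returns the error signal, i.e. the cleaned EP estimate."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # tap-delay-line input
        e = primary[n] - w @ x              # cancel the estimated interference
        w += 2.0 * mu * e * x               # LMS (stochastic gradient) update
        cleaned[n] = e
    return cleaned

t = np.arange(2000)
interference = np.sin(0.1 * t)                      # toy EEG-like background
ep = 0.2 * np.exp(-((t - 1000) / 50.0) ** 2)        # toy evoked potential
print(np.abs(lms_canceller(ep + interference, interference) - ep).mean())
```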
Decentralized Quasi-Newton Methods
NASA Astrophysics Data System (ADS)
Eisen, Mark; Mokhtari, Aryan; Ribeiro, Alejandro
2017-05-01
We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems. The D-BFGS method is of interest in problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not readily available, making second order decentralized methods impossible. D-BFGS is a fully distributed algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition. We additionally provide a formulation of the algorithm in asynchronous settings. Convergence of D-BFGS is established formally in both the synchronous and asynchronous settings and strong performance advantages relative to first order methods are shown numerically.
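The secant condition mentioned above is the defining ingredient of BFGS-type methods. As a point of reference, here is the classical centralized BFGS inverse-Hessian update, which satisfies H_{k+1} y_k = s_k; D-BFGS itself enforces an analogous condition at each node using only neighbourhood information, which is not shown here.

```python
# Reference sketch: classical BFGS inverse-Hessian update (secant condition H y = s).
import numpy as np

def bfgs_update(H, s, y):
    """s = x_{k+1} - x_k, y = grad f(x_{k+1}) - grad f(x_k)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# quick algebraic check of the secant condition on random data
rng = np.random.default_rng(0)
s, y = rng.standard_normal(5), rng.standard_normal(5)
print(np.allclose(bfgs_update(np.eye(5), s, y) @ y, s))   # True
```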
Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data
NASA Astrophysics Data System (ADS)
Du, L.; Zhong, R.; Sun, H.; Wu, Q.
2017-09-01
An automated method for tunnel deformation monitoring using high-density point cloud data is presented. First, the 3D point cloud data are projected onto the XOY plane to form a two-dimensional surface; the projection point set of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm. The projection point set of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface; Uxoy and Uyoz together form the 3D central axis. Second, the buffer of each cross section is calculated by the k-nearest-neighbor algorithm, and the initial cross-sectional point set is quickly constructed by a projection method. Finally, the cross sections are denoised and the section lines are fitted using the method of iterative ellipse fitting. In order to improve the accuracy of the cross section, a fine adjustment method is proposed to rotate the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section over directions from 0 to 360 degrees is calculated. The results show that the cross sections change from regular circles to flattened circles due to the great pressure at the top of the tunnel.
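As a toy illustration of the section-fitting step, the sketch below performs a plain (non-iterative) least-squares conic fit to noisy section points via an SVD null vector; the paper's iterative ellipse fitting with denoising and plane fine adjustment is more elaborate, and the synthetic data here are made up.

```python
# Toy sketch: least-squares conic fit a x^2 + b xy + c y^2 + d x + e y + f = 0.
import numpy as np

def fit_conic(x, y):
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]                      # coefficients (a, b, c, d, e, f), up to scale

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
x = 3.0 * np.cos(theta) + 0.01 * rng.standard_normal(200)   # ellipse, semi-axes 3 and 2
y = 2.0 * np.sin(theta) + 0.01 * rng.standard_normal(200)
coef = fit_conic(x, y)
print(coef / coef[0])                  # ~ (1, 0, 2.25, 0, 0, -9) for x^2/9 + y^2/4 = 1
```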
NASA Astrophysics Data System (ADS)
Wälz, Gero; Kats, Daniel; Usvyat, Denis; Korona, Tatiana; Schütz, Martin
2012-11-01
Linear-response methods, based on the time-dependent variational coupled-cluster or the unitary coupled-cluster model, and truncated at the second order according to the Møller-Plesset partitioning, i.e., the TD-VCC[2] and TD-UCC[2] linear-response methods, are presented and compared. For both of these methods a Hermitian eigenvalue problem has to be solved to obtain excitation energies and state eigenvectors. The excitation energies thus are guaranteed always to be real valued, and the eigenvectors are mutually orthogonal, in contrast to response theories based on “traditional” coupled-cluster models. It turned out that the TD-UCC[2] working equations for excitation energies and polarizabilities are equivalent to those of the second-order algebraic diagrammatic construction scheme ADC(2). Numerical tests are carried out by calculating TD-VCC[2] and TD-UCC[2] excitation energies and frequency-dependent dipole polarizabilities for several test systems and by comparing them to the corresponding values obtained from other second- and higher-order methods. It turns out that the TD-VCC[2] polarizabilities in the frequency regions away from the poles are of a similar accuracy as for other second-order methods, as expected from the perturbative analysis of the TD-VCC[2] polarizability expression. On the other hand, the TD-VCC[2] excitation energies are systematically too low relative to other second-order methods (including TD-UCC[2]). On the basis of these results and an analysis presented in this work, we conjecture that the perturbative expansion of the Jacobian converges more slowly for the TD-VCC formalism than for TD-UCC or for response theories based on traditional coupled-cluster models.
Cooperative Bi-Literacy: Parents, Students, and Teachers Read to Transform
ERIC Educational Resources Information Center
Rodriguez-Valls, Fernando
2009-01-01
Thousands of students in California learn English as a second language in schools that utilize exclusively monolingual--English Only--literacy programs. With such programs students do not have the opportunity to use the knowledge of their first language in order to acquire and master their second language. The project of cooperative bi-literacy…
Processing Relative Clauses in Chinese as a Second Language
ERIC Educational Resources Information Center
Xu, Yi
2014-01-01
This project investigates second language (L2) learners' processing of four types of Chinese relative clauses crossing extraction types and demonstrative-classifier (DCl) positions. Using a word order judgment task with a whole-sentence reading technique, the study also discusses how psycholinguistic theories bear explanatory power in L2 data. An…
ERIC Educational Resources Information Center
Carlson, Aaron M.; McPhail, Ellen D.; Rodriguez, Vilmarie; Schroeder, Georgene; Wolanskyj, Alexandra P.
2014-01-01
Instruction in hematopathology at Mayo Medical School has evolved from instructor-guided direct inspection under the light microscope (laboratory method), to photomicrographs of glass slides with classroom projection (projection method). These methods have not been compared directly to date. Forty-one second-year medical students participated in…
USDA-ARS?s Scientific Manuscript database
Adaptive waveform interpretation with Gaussian filtering (AWIGF) and second order bounded mean oscillation operator Z square 2(u,t,r) are TDR analysis methods based on second order differentiation. AWIGF was originally designed for relatively long probe (greater than 150 mm) TDR waveforms, while Z s...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw
Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncation error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.
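The near-linear complexity claimed above comes from evaluating the convolution with FFTs. The generic demonstration below shows the mechanics on a 1D array; the construction of the actual polar-coordinate Poisson kernel, including the treatment of the singular cell, is specific to the paper and not reproduced here.

```python
# Generic illustration: a discrete linear convolution evaluated with FFTs in O(N log N).
import numpy as np

def fft_convolve(kernel, source):
    n = len(kernel) + len(source) - 1                 # full linear-convolution length
    return np.fft.irfft(np.fft.rfft(kernel, n) * np.fft.rfft(source, n), n)

kernel = np.array([1.0, 2.0, 3.0])
source = np.array([0.5, -1.0, 4.0, 2.0])
print(np.allclose(fft_convolve(kernel, source), np.convolve(kernel, source)))  # True
```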
Effects of Second-Order Hydrodynamics on a Semisubmersible Floating Offshore Wind Turbine: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayati, I.; Jonkman, J.; Robertson, A.
2014-07-01
The objective of this paper is to assess the second-order hydrodynamic effects on a semisubmersible floating offshore wind turbine. Second-order hydrodynamics induce loads and motions at the sum- and difference-frequencies of the incident waves. These effects have often been ignored in offshore wind analysis, under the assumption that they are significantly smaller than first-order effects. The sum- and difference-frequency loads can, however, excite eigenfrequencies of the system, leading to large oscillations that strain the mooring system or vibrations that cause fatigue damage to the structure. Observations of supposed second-order responses in wave-tank tests performed by the DeepCwind consortium at the MARIN offshore basin suggest that these effects might be more important than originally expected. These observations inspired interest in investigating how second-order excitation affects floating offshore wind turbines and whether second-order hydrodynamics should be included in offshore wind simulation tools like FAST in the future. In this work, the effects of second-order hydrodynamics on a floating semisubmersible offshore wind turbine are investigated. Because FAST is currently unable to account for second-order effects, a method to assess these effects was applied in which linearized properties of the floating wind system derived from FAST (including the 6x6 mass and stiffness matrices) are used by WAMIT to solve the first- and second-order hydrodynamics problems in the frequency domain. The method has been applied to the OC4-DeepCwind semisubmersible platform, supporting the NREL 5-MW baseline wind turbine. The loads and response of the system due to the second-order hydrodynamics are analysed and compared to first-order hydrodynamic loads and induced motions in the frequency domain. Further, the second-order loads and induced response data are compared to the loads and motions induced by aerodynamic loading as solved by FAST.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beste, Ariana; Vazquez-Mayagoitia, Alvaro; Ortiz, J. Vincent
2013-01-01
A direct method (D-Delta-MBPT(2)) to calculate second-order ionization potentials (IPs), electron affinities (EAs), and excitation energies is developed. The Delta-MBPT(2) method is defined as the correlated extension of the Delta-HF method. Energy differences are obtained by integrating the energy derivative with respect to occupation numbers over the appropriate parameter range. This is made possible by writing the second-order energy as a function of the occupation numbers. Relaxation effects are fully included at the SCF level. This is in contrast to linear response theory, which makes the D-Delta-MBPT(2) applicable not only to singly excited but also to higher excited states. We show the relationship of the D-Delta-MBPT(2) method for IPs and EAs to a second-order approximation of the effective Fock-space coupled-cluster Hamiltonian and a second-order electron propagator method. We also discuss the connection between the D-Delta-MBPT(2) method for excitation energies and the CIS-MP2 method. Finally, as a proof of principle, we apply our method to calculate ionization potentials and excitation energies of some small molecules. For IPs, the Delta-MBPT(2) results compare well to the second-order solution of the Dyson equation. For excitation energies, the deviation from EOM-CCSD increases when correlation becomes more important. When using the numerical integration technique, we encounter difficulties that prevented us from reaching the Delta-MBPT(2) values. Most importantly, relaxation beyond the Hartree-Fock level is significant and needs to be included in future research.
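Schematically, the occupation-number integration described above can be written as follows, with $n$ the occupation number of the orbital being ionized or attached; this is only a restatement of the idea in the abstract, not the paper's working equations:

$$\Delta E^{(2)} \;=\; E^{(2)}(n{=}1)-E^{(2)}(n{=}0)\;=\;\int_{0}^{1}\frac{\partial E^{(2)}(n)}{\partial n}\,\mathrm{d}n .$$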
Time domain reflectometry waveform analysis with second order bounded mean oscillation
USDA-ARS?s Scientific Manuscript database
Tangent-line methods and adaptive waveform interpretation with Gaussian filtering (AWIGF) have been proposed for determining reflection positions of time domain reflectometry (TDR) waveforms. However, the accuracy of those methods is limited for short probe TDR sensors. Second order bounded mean osc...
Valle, Annalisa; Massaro, Davide; Castelli, Ilaria; Sangiuliano Intra, Francesca; Lombardi, Elisabetta; Bracaglia, Edoardo; Marchetti, Antonella
2016-01-01
Mentalization research focuses on different aspects of this topic, highlighting individual differences in mentalizing and proposing intervention programs for children and adults to increase this ability. The "Thought in Mind Project" (TiM Project) provides training targeted at adults (teachers or parents) to increase their mentalization and, consequently, to obtain mentalization improvements in children. The present research aimed to explore, for the first time, the potential of a TiM Project training for teachers to enhance the mentalizing of an adult who would then interact with children as their teacher. For this reason, two teachers, similar in meta-cognitive and meta-emotional skills, and their classes (N = 46) were randomly assigned to the training or control condition. In the first case, the teacher participated in training on implementing the promotion of mentalizing in everyday school teaching strategies; in the second case, the teacher participated in a control activity, similar to the training in scheduling and methods but without promoting the implementation of mentalization (in both conditions, two meetings lasting about 3 h at the beginning of the school year and two supervisions during the school year were conducted). The children were tested with tasks assessing several aspects of mentalization (second- and third-order false belief understanding, Strange Stories, Reading the Mind in the Eyes, Mentalizing Task) both before and after the teacher participated in the TiM or control training (i.e., at the beginning and at the end of the school year). The results showed that, although some measured components of mentalization progressed over time, only the TiM Project training group significantly improved in third-order false belief understanding and changed, to a greater extent than the control group, in two of the three components of the Mentalizing Task. This evidence is promising for the idea that the creation of a mentalizing community promotes the mentalization abilities of its members.
ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations
NASA Astrophysics Data System (ADS)
Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil
2018-04-01
In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this type of equation, and the results are compared with the existing exact solutions. Thus, the Adomian decomposition method can be the best alternative method for solving linear second-order Fredholm integro-differential equations: it converges to the exact solution quickly and at the same time reduces the computational work for solving the equation. The results obtained by the ADM show its ability and efficiency for solving these equations.
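To make the ADM recursion concrete, the sketch below applies it to a constructed toy problem (the kernel, forcing, and exact solution are chosen here for illustration and are not taken from the paper), using SymPy instead of Maple.

```python
# Toy ADM recursion for y''(x) = f(x) + int_0^1 K(x,t) y(t) dt, y(0)=0, y'(0)=1.
# K = x*t and f = -x/3 are constructed so that the exact solution is y(x) = x.
import sympy as sp

x, t, s, u = sp.symbols('x t s u')
K = x * t
f = -x / 3

def L_inv(expr):
    """Inverse of d^2/dx^2 with zero initial data: iterated integral from 0."""
    return sp.integrate(sp.integrate(expr.subs(x, u), (u, 0, s)), (s, 0, x))

terms = [x + L_inv(f)]                                # y0 = y(0) + y'(0) x + L^{-1} f
for _ in range(3):                                    # a few ADM correction terms
    fredholm = sp.integrate(K * terms[-1].subs(x, t), (t, 0, 1))
    terms.append(L_inv(fredholm))                     # y_{n+1} = L^{-1}[ int K y_n dt ]

print(sp.expand(sum(terms)))                          # partial sums approach y = x
```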
NASA Astrophysics Data System (ADS)
Brekke, L. D.
2009-12-01
The presentation highlights recent methods carried out by Reclamation to incorporate climate change and variability information into water supply assumptions for longer-term planning. It also highlights limitations of these methods and possible method adjustments that might be made to address these limitations. Reclamation was established more than one hundred years ago with a mission centered on the construction of irrigation and hydropower projects in the Western United States. Reclamation's mission has evolved since its creation to include other activities, including municipal and industrial water supply projects, ecosystem restoration, and the protection and management of water supplies. Reclamation continues to explore ways to better address mission objectives, often considering proposals to develop new infrastructure and/or modify long-term criteria for operations. Such studies typically feature operations analysis to disclose the benefits and effects of a given proposal, which are sensitive to assumptions made about future water supplies, water demands, and operating constraints. Development of these assumptions requires consideration of more fundamental future drivers such as land use, demographics, and climate. On the matter of establishing planning assumptions for water supplies under climate change, Reclamation has applied several methods. This presentation highlights two activities: the first focuses on potential changes in hydroclimate frequencies, and the second on potential changes in hydroclimate period-statistics. The first activity took place in the Colorado River Basin, where there was interest in the interarrival possibilities of drought and surplus events of varying severity, relevant to proposals on new criteria for handling lower basin shortages. The second activity occurred in California's Central Valley, where stakeholders were interested in how projected climate change possibilities translated into changes in hydrologic and water supply statistics relevant to a long-term federal Endangered Species Act consultation. Projected climate change possibilities were characterized by surveying a large ensemble of climate projections for changes in period climate-statistics and then selecting a small set of projections featuring a bracketing set of period-changes relative to those from the complete ensemble. Although both methods served the needs of their respective planning activities, each has limited applicability for other planning activities. First, each method addresses only one climate change aspect and not the other; some planning activities may need to consider potential changes in both period-statistics and frequencies. Second, neither method addresses CMIP3-projected changes in climate variability: the first method bases frequency possibilities on historical information, while the second method only surveys CMIP3 projections for changes in the period-mean and then superimposes those changes on historical variability. Third, artifacts of the CMIP3 design lead to interpretation challenges when implementing the second method (e.g., inconsistent projection initialization, model-dependent expressions of multi-decadal variability). The presentation summarizes these issues and also potential method adjustments to address them when defining planning assumptions for water supplies.
General relaxation schemes in multigrid algorithms for higher order singularity methods
NASA Technical Reports Server (NTRS)
Oskam, B.; Fray, J. M. J.
1981-01-01
Relaxation schemes based on approximate and incomplete factorization technique (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and comparison with similar results from the second kind of equations are a novel item. Application of the MD algorithm shows convergence to the level of truncation error of a second order accurate panel method.
On reinitializing level set functions
NASA Astrophysics Data System (ADS)
Min, Chohong
2010-04-01
In this paper, we consider reinitializing level set functions through the equation ϕ_t + sgn(ϕ₀)(‖∇ϕ‖ − 1) = 0 [16]. The method of Russo and Smereka [11] is adopted for the spatial discretization of the equation. The spatial discretization is, simply speaking, the second-order ENO finite difference with subcell resolution near the interface. Our main interest is in the temporal discretization of the equation. We compare three temporal discretizations: the second-order Runge-Kutta method, the forward Euler method, and a Gauss-Seidel iteration of the forward Euler method. The fact that the time in the equation is fictitious suggests the hypothesis that all the temporal discretizations lead to the same stationary state. The fact that the absolute stability region of the forward Euler method is not wide enough to include all the eigenvalues of the linearized semi-discrete system of the second-order ENO spatial discretization suggests another hypothesis: that the forward Euler temporal discretization should trigger numerical instability. Our results in this paper contradict both hypotheses. The Runge-Kutta and Gauss-Seidel methods attain second-order accuracy, and the forward Euler method converges with an order between one and two. Examining all their properties, we conclude that the Gauss-Seidel method is the best among the three. Compared to the Runge-Kutta method, it is twice as fast and requires half the memory for the same accuracy.
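A minimal 1D sketch of the comparison discussed above is given below. It uses plain Godunov upwinding and a smoothed sign function, omits the subcell-resolution fix of Russo and Smereka, and is only meant to show how the forward Euler and second-order Runge-Kutta (Heun) pseudo-time marches are set up; the grid, test profile, and step counts are made up.

```python
# 1D sketch of the reinitialization equation phi_t + sgn(phi0)(|phi_x| - 1) = 0.
import numpy as np

N = 201
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
phi0 = x**2 - 0.25                         # zero crossings at x = +/- 0.5, badly scaled
sgn = phi0 / np.sqrt(phi0**2 + dx**2)      # smoothed sign function

def rhs(phi):
    a = np.empty_like(phi); b = np.empty_like(phi)
    a[1:] = (phi[1:] - phi[:-1]) / dx      # backward differences D^-
    b[:-1] = (phi[1:] - phi[:-1]) / dx     # forward differences  D^+
    a[0], b[-1] = a[1], b[-2]              # crude one-sided closure at the ends
    grad = np.where(sgn > 0,
                    np.sqrt(np.maximum(np.maximum(a, 0)**2, np.minimum(b, 0)**2)),
                    np.sqrt(np.maximum(np.minimum(a, 0)**2, np.maximum(b, 0)**2)))
    return -sgn * (grad - 1.0)

def euler(phi, dt):
    return phi + dt * rhs(phi)

def heun(phi, dt):
    phi1 = phi + dt * rhs(phi)
    return 0.5 * (phi + phi1 + dt * rhs(phi1))

exact = np.abs(x) - 0.5                    # signed distance to the interface points
for name, step in [("forward Euler", euler), ("RK2 (Heun)   ", heun)]:
    phi, dt = phi0.copy(), 0.5 * dx
    for _ in range(800):                   # march toward a discrete steady state
        phi = step(phi, dt)
    print(name, "max error vs signed distance:", np.abs(phi - exact).max())
```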
NASA Astrophysics Data System (ADS)
Ping, Ping; Zhang, Yu; Xu, Yixian; Chu, Risheng
2016-12-01
In order to improve the perfectly matched layer (PML) efficiency in viscoelastic media, we first propose a split multi-axial PML (M-PML) and an unsplit convolutional PML (C-PML) for the second-order viscoelastic wave equations with the displacement as the only unknown. The advantage of these formulations is that it is easy and efficient to revise existing codes of the second-order spectral element method (SEM) or finite-element method (FEM) with absorbing boundaries in a uniform equation, and they are more economical than the auxiliary differential equation PML. Three models which easily suffer from late-time instabilities are considered to validate our approaches. By comparing the absorption efficiency and stability of the M-PML and the C-PML in long-time simulations, it can be concluded that: (1) for an isotropic viscoelastic medium with a high Poisson's ratio, the C-PML will be a sufficient choice for long-time simulation because of its weak reflections and superior stability; (2) unlike the M-PML with a high-order damping profile, the M-PML with a second-order damping profile loses its stability in long-time simulation for an isotropic viscoelastic medium; (3) in an anisotropic viscoelastic medium, the C-PML suffers from instabilities, while the M-PML with a second-order damping profile can be a better choice for its superior stability and more acceptable weak reflections than the M-PML with a high-order damping profile. The comparative analysis of the developed methods offers meaningful guidance for long-time seismic wave modeling with second-order viscoelastic wave equations.
Finite difference and Runge-Kutta methods for solving vibration problems
NASA Astrophysics Data System (ADS)
Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi
2017-11-01
The vibration of a storey building can be modelled as a system of second-order ordinary differential equations. If the number of floors of the building is large, the result is a large-scale system of second-order ordinary differential equations. Such a large-scale system is difficult to solve, and even when it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large-scale systems of second-order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
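A small sketch of such a comparison is given below for a made-up 3-storey shear-building model M x'' + K x = 0: the explicit central-difference scheme is compared with Heun's method applied to the equivalent first-order system, with the matrix exponential of the linear system as the reference. This illustrates the setup only, not the paper's experiments.

```python
# Toy comparison: central difference vs Heun on M x'' + K x = 0.
import numpy as np
from scipy.linalg import expm

M = np.eye(3)
K = 50.0 * np.array([[ 2.0, -1.0,  0.0],
                     [-1.0,  2.0, -1.0],
                     [ 0.0, -1.0,  1.0]])
Minv_K = np.linalg.solve(M, K)
x0 = np.array([0.01, 0.02, 0.03]); v0 = np.zeros(3)

A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-Minv_K,          np.zeros((3, 3))]])
z0 = np.concatenate([x0, v0])

def central_difference(dt, n):
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * (-Minv_K @ x0)   # starting procedure
    x = x0.copy()
    for _ in range(n):
        x, x_prev = 2 * x - x_prev + dt**2 * (-Minv_K @ x), x
    return x

def heun(dt, n):
    z = z0.copy()
    for _ in range(n):
        z = z + 0.5 * dt * (A @ z + A @ (z + dt * (A @ z)))
    return z[:3]

T, n = 2.0, 2000
dt = T / n
x_exact = (expm(A * T) @ z0)[:3]
print("central difference error:", np.linalg.norm(central_difference(dt, n) - x_exact))
print("Heun error              :", np.linalg.norm(heun(dt, n) - x_exact))
```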
Localized waves in three-component coupled nonlinear Schrödinger equation
NASA Astrophysics Data System (ADS)
Xu, Tao; Chen, Yong
2016-09-01
We study the generalized Darboux transformation for the three-component coupled nonlinear Schrödinger equation. First- and second-order localized waves are obtained by this technique. For the first-order localized waves, we obtain interactional solutions between a first-order rogue wave and a dark soliton or a bright soliton, respectively. Meanwhile, interactional solutions between a breather and a first-order rogue wave are also given. For the second-order localized waves, a dark-bright soliton together with a second-order rogue wave is presented in the first component, and two bright solitons together with a second-order rogue wave are obtained in the other two components. In addition, we observe a second-order rogue wave together with a breather in all three components. Moreover, by increasing the absolute values of two free parameters, the nonlinear waves merge with each other distinctly. These results further reveal the interesting dynamic structures of localized waves in the three-component coupled system. Project supported by the Global Change Research Program of China (Grant No. 2015CB953904), the National Natural Science Foundation of China (Grant Nos. 11275072 and 11435005), the Doctoral Program of Higher Education of China (Grant No. 20120076110024), the Network Information Physics Calculation of Basic Research Innovation Research Group of China (Grant No. 61321064), and Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things, China (Grant No. ZF1213).
NASA Astrophysics Data System (ADS)
Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan
2016-12-01
For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from a discontinuous piecewise linear flow distribution around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time stepping method has recently been proposed in the design of a fourth-order time accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of the computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function. However, a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations due to the change of the governing equations from hyperbolic to parabolic type and the initial interface discontinuity. This problem is particularly pronounced for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with hyperbolic transport and a relaxation source term. The time-dependent GKS flux function provides a dynamic process of evolution from kinetic-scale particle free transport to hydrodynamic-scale wave propagation, which provides the physics for the non-equilibrium numerical shock structure construction and the near-equilibrium NS solution. As a result, with the implementation of the fifth-order WENO initial reconstruction, in the smooth region the current two-stage GKS provides an accuracy of O((Δx)^5, (Δt)^4) for the Euler equations, and O((Δx)^5, τ²Δt) for the NS equations, where τ is the time between particle collisions. Many numerical tests, including difficult ones for Navier-Stokes solvers, have been used to validate the current method. Accurate numerical solutions can be obtained from the high-Reynolds-number boundary layer to hypersonic viscous heat-conducting flow. Following the two-stage time-stepping framework, the third-order GKS flux function can be used as well to construct a fifth-order method with the use of both first-order and second-order time derivatives of the flux function. The use of a time-accurate flux function may have great advantages for the development of higher-order CFD methods.
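The two-stage fourth-order time stepping referred to above can be checked on a scalar ODE u' = L(u), where the time derivative of L is available exactly as L'(u)L(u). The sketch below uses the commonly cited form of the two-stage update (the coefficients may be written differently in the cited reference) and verifies fourth-order convergence on a toy problem.

```python
# Toy check of a two-stage Lax-Wendroff-type fourth-order time stepping on u' = -u^2.
import numpy as np

L  = lambda u: -u**2
Lt = lambda u: -2.0 * u * L(u)          # d/dt L(u) = L'(u) u' = L'(u) L(u)

def step(u, dt):
    u_star = u + 0.5 * dt * L(u) + 0.125 * dt**2 * Lt(u)
    return u + dt * L(u) + dt**2 / 6.0 * (Lt(u) + 2.0 * Lt(u_star))

def solve(n_steps, T=1.0, u0=1.0):
    u, dt = u0, T / n_steps
    for _ in range(n_steps):
        u = step(u, dt)
    return u

exact = 0.5                              # u(t) = 1/(1+t), so u(1) = 1/2
errors = [abs(solve(n) - exact) for n in (10, 20, 40)]
print([errors[i] / errors[i + 1] for i in range(2)])   # ratios close to 16 (4th order)
```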
Paul, David L; McDaniel, Reuben R
2016-04-26
Very few telemedicine projects in medically underserved areas have been sustained over time. This research furthers understanding of telemedicine service sustainability by examining teleconsultation projects from the perspective of healthcare providers. The drivers influencing healthcare providers' continued participation in teleconsultation projects, and how projects can be designed to effectively and efficiently address these drivers, are examined. Case studies of fourteen teleconsultation projects that were part of two health sciences center (HSC) based telemedicine networks were utilized. Semi-structured interviews of 60 key informants (clinicians, administrators, and IT professionals) involved in teleconsultation projects were the primary data collection method. Two key drivers influenced providers' continued participation. The first was severe time constraints. The second was remote site healthcare providers' (RSHCPs') sense of professional isolation. Two design steps to address these were identified. One involved implementing relatively simple technology and process solutions to make participation convenient. The more critical and difficult design step focused on designing teleconsultation projects for collaborative, active learning. This learning empowered participating RSHCPs by leveraging HSC specialists' expertise. In order to increase sustainability, the fundamental purpose of teleconsultation projects needs to be re-conceptualized. Doing so requires HSC specialists and RSHCPs to assume new roles and highlights the importance of trust. By implementing these design steps, healthcare delivery in medically underserved areas can be positively impacted.
Solving ay'' + by' + cy = 0 with a Simple Product Rule Approach
ERIC Educational Resources Information Center
Tolle, John
2011-01-01
When elementary ordinary differential equations (ODEs) of first and second order are included in the calculus curriculum, second-order linear constant coefficient ODEs are typically solved by a method more appropriate to differential equations courses. This method involves the characteristic equation and its roots, complex-valued solutions, and…
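For reference, the "typical" characteristic-equation route mentioned above reduces to finding the roots of a r² + b r + c = 0; the article itself advocates a product-rule alternative, which is not shown here.

```python
# Reference sketch: roots of the characteristic equation of a y'' + b y' + c y = 0.
import numpy as np

def characteristic_roots(a, b, c):
    return np.roots([a, b, c])

print(characteristic_roots(1, -3, 2))  # [2, 1]      -> y = C1 e^{2x} + C2 e^{x}
print(characteristic_roots(1, 2, 5))   # -1 +/- 2j   -> y = e^{-x}(C1 cos 2x + C2 sin 2x)
```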
Empirical studies of design software: Implications for software engineering environments
NASA Technical Reports Server (NTRS)
Krasner, Herb
1988-01-01
The empirical studies team of MCC's Design Process Group conducted three studies in 1986-87 in order to gather data on professionals designing software systems in a range of situations. The first study (the Lift Experiment) used thinking-aloud protocols in a controlled laboratory setting to study the cognitive processes of individual designers. The second study (the Object Server Project) involved the observation, videotaping, and data collection of a design team on a medium-sized development project over several months in order to study team dynamics. The third study (the Field Study) involved interviews with personnel from 19 large development projects in the MCC shareholder companies in order to study how the process of design is affected by organizational and project behavior. The focus of this report is on key observations of the design process (at several levels) and their implications for the design of environments.
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and '80s. Though Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, as they need an infinite number of steps to conclude about the exact stability of the filters, and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order Non-Symmetric Half-Plane (NSHP) filters. Enough examples are given for both these types of filters as well as for some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, thus showing that our method is sufficiently accurate.
High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza R.; Nishikawa, Hiroaki
2014-01-01
In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection- diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergences over each physical time step, with typically less than five Newton iterations, were shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit-residual-equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed; 1) reformulation of the source terms with their divergence forms, and 2) correction to the trapezoidal rule for the source term discretization. Third-, fourth, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and the second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. The numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the same order of accuracy of the proposed RD schemes with rapid convergence over each physical time step, typically less than ten Newton iterations.
Designing optimal universal pulses using second-order, large-scale, non-linear optimization
NASA Astrophysics Data System (ADS)
Anand, Christopher Kumar; Bain, Alex D.; Curtis, Andrew Thomas; Nie, Zhenghua
2012-06-01
Recently, RF pulse design using first-order and quasi-second-order pulses has been actively investigated. We present a full second-order design method capable of incorporating relaxation and inhomogeneity in B0 and B1. Our model is formulated as a generic optimization problem, making it easy to incorporate diverse pulse sequence features. To tame the computational cost, we present a method of calculating second derivatives in at most a constant multiple of the first-derivative calculation time; this is further accelerated by using symbolic solutions of the Bloch equations. We illustrate the relative merits and performance of quasi-Newton and full second-order optimization with a series of examples, showing that even a pulse already optimized using other methods can be visibly improved. To be useful in CPMG experiments, a universal refocusing pulse should be independent of the delay time and insensitive to the relaxation time and RF inhomogeneity. We design such a pulse and show that, using it, we can obtain reliable R2 measurements for offsets within ±γB1. Finally, we compare our optimal refocusing pulse with other published refocusing pulses by performing CPMG experiments.
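For context, the Bloch equations that the symbolic solutions above refer to can also be integrated numerically, as in the sketch below; the sign convention, field strength, and relaxation times are assumptions for a toy on-resonance pulse, not the paper's setup.

```python
# Toy numerical integration of the Bloch equations with relaxation (rotating frame).
import numpy as np

def bloch_rhs(M, w1x, w1y, dw, T1, T2, M0=1.0):
    """dM/dt for M = (Mx, My, Mz) under RF (w1x, w1y) and offset dw, with T1/T2
    relaxation; one common sign convention, fields in rad/s, times in s."""
    Mx, My, Mz = M
    return np.array([My * dw  - Mz * w1y - Mx / T2,
                     Mz * w1x - Mx * dw  - My / T2,
                     Mx * w1y - My * w1x - (Mz - M0) / T1])

def rk4_step(M, dt, *args):
    k1 = bloch_rhs(M, *args)
    k2 = bloch_rhs(M + 0.5 * dt * k1, *args)
    k3 = bloch_rhs(M + 0.5 * dt * k2, *args)
    k4 = bloch_rhs(M + dt * k3, *args)
    return M + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

M = np.array([0.0, 0.0, 1.0])                     # start at thermal equilibrium
w1 = 2 * np.pi * 250.0                            # constant on-resonance RF amplitude
for _ in range(1000):                             # 1 ms pulse => ~90 degree nutation
    M = rk4_step(M, 1e-6, w1, 0.0, 0.0, 1.0, 0.1)
print(M)                                          # ~ (0, 1, 0) up to slight relaxation
```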
Component model reduction via the projection and assembly method
NASA Technical Reports Server (NTRS)
Bernard, Douglas E.
1989-01-01
The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low-order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced-order component models to meet system-level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full-order system model, performing model reduction at the system level using system-level requirements, and then projecting the desired modes onto the components for component-level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.
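A generic sketch of the modal-projection step underlying this kind of reduction is shown below: retain a few system eigenvectors Φ and form Φᵀ M Φ and Φᵀ K Φ. The component-level projection of the selected system modes, which is the distinctive part of the projection and assembly method, is described in the paper and not reproduced here; the chain model below is made up.

```python
# Generic sketch: model reduction by projecting M and K onto retained system modes.
import numpy as np
from scipy.linalg import eigh

n, r = 20, 4                                          # full and reduced model orders
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # spring-chain stiffness
M = np.eye(n)

w2, V = eigh(K, M)                 # K v = w^2 M v, eigenvalues ascending, V^T M V = I
Phi = V[:, :r]                     # retain the r lowest-frequency modes
Mr = Phi.T @ M @ Phi               # reduced mass matrix (identity here)
Kr = Phi.T @ K @ Phi               # reduced stiffness matrix (diagonal of kept w^2)
print(np.allclose(np.diag(Kr), w2[:r]))   # True: retained frequencies captured exactly
```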
Software Program: Software Management Guidebook
NASA Technical Reports Server (NTRS)
1996-01-01
The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.
Transsynaptic Mapping of Second-Order Taste Neurons in Flies by trans-Tango.
Talay, Mustafa; Richman, Ethan B; Snell, Nathaniel J; Hartmann, Griffin G; Fisher, John D; Sorkaç, Altar; Santoyo, Juan F; Chou-Freed, Cambria; Nair, Nived; Johnson, Mark; Szymanski, John R; Barnea, Gilad
2017-11-15
Mapping neural circuits across defined synapses is essential for understanding brain function. Here we describe trans-Tango, a technique for anterograde transsynaptic circuit tracing and manipulation. At the core of trans-Tango is a synthetic signaling pathway that is introduced into all neurons in the animal. This pathway converts receptor activation at the cell surface into reporter expression through site-specific proteolysis. Specific labeling is achieved by presenting a tethered ligand at the synapses of genetically defined neurons, thereby activating the pathway in their postsynaptic partners and providing genetic access to these neurons. We first validated trans-Tango in the Drosophila olfactory system and then implemented it in the gustatory system, where projections beyond the first-order receptor neurons are not fully characterized. We identified putative second-order neurons within the sweet circuit that include projection neurons targeting known neuromodulation centers in the brain. These experiments establish trans-Tango as a flexible platform for transsynaptic circuit analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
Oscillator strengths, first-order properties, and nuclear gradients for local ADC(2).
Schütz, Martin
2015-06-07
We describe theory and implementation of oscillator strengths, orbital-relaxed first-order properties, and nuclear gradients for the local algebraic diagrammatic construction scheme through second order. The formalism is derived via time-dependent linear response theory based on a second-order unitary coupled cluster model. The implementation presented here is a modification of our previously developed algorithms for Laplace transform based local time-dependent coupled cluster linear response (CC2LR); the local approximations thus are state specific and adaptive. The symmetry of the Jacobian leads to considerable simplifications relative to the local CC2LR method; as a result, a gradient evaluation is about four times less expensive. Test calculations show that in geometry optimizations, usually very similar geometries are obtained as with the local CC2LR method (provided that a second-order method is applicable). As an exemplary application, we performed geometry optimizations on the low-lying singlet states of chlorophyllide a.
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual-isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of object background compared to alternative reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of reconstructed images and activity estimated using Sim-OSEM. In order to quantify the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
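The core of such an approach can be illustrated with a generic MLEM update in which an additive scatter (crosstalk) estimate enters the forward model. This is only a minimal sketch with assumed toy data, not the authors' Monte Carlo-based Sim-OSEM (which also uses ordered subsets):
    # Generic MLEM update with an additive scatter estimate in the forward model.
    import numpy as np

    def mlem_with_scatter(A, y, s, n_iter=50):
        """A: system matrix (bins x voxels), y: measured counts,
        s: estimated scatter/crosstalk counts added to the forward projection."""
        x = np.ones(A.shape[1])                    # uniform initial image
        sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
        for _ in range(n_iter):
            fwd = A @ x + s                        # forward model includes scatter
            ratio = y / np.maximum(fwd, 1e-12)     # avoid division by zero
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x

    # toy usage: 2 detector bins, 3 voxels, hypothetical crosstalk estimate s
    A = np.array([[1.0, 0.5, 0.0],
                  [0.0, 0.5, 1.0]])
    x_true = np.array([2.0, 1.0, 3.0])
    s = np.array([0.2, 0.1])
    y = A @ x_true + s
    print(mlem_with_scatter(A, y, s))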
ERIC Educational Resources Information Center
Mohanty, R. K.; Arora, Urvashi
2002-01-01
Three-level implicit finite difference methods of order four are discussed for the numerical solution of the mildly quasi-linear second-order hyperbolic equation A(x, t, u)u_xx + 2B(x, t, u)u_xt + C(x, t, u)u_tt = f(x, t, u, u_x, u_t), 0 < x < 1, t > 0, subject to…
Transient analysis of an adaptive system for optimization of design parameters
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
Averaging methods are applied to analyzing and optimizing the transient response associated with the direct adaptive control of an oscillatory second-order minimum-phase system. The analytical design methods developed for a second-order plant can be applied with some approximation to a MIMO flexible structure having a single dominant mode.
Between science and fiction: notes on the demography of the Holocaust.
DellaPergola, S
1996-01-01
The quantitative effects of the Holocaust have been the subject of much discussion and speculation, but rarely have they been seriously scrutinized using demographic methods. The first part of this article outlines the major factors that need to be fully examined in order to assess the short- and long-term effects of the Shoah on the Jewish population. The second section cautiously offers demographic projections for the Jewish population had the Shoah never happened. The obviously speculative analysis presented here is open to many assumptions beyond those suggested by the author in this paper. The results of alternative projections reveal that, because of the generations that were not born, high wartime child mortality, and the present-day aging of the Jewish population, demographic losses continue to extend far beyond the figure of six million.
Hagen, Tobias
2018-03-09
Purpose: During 2009-2013 a pilot project was carried out in Zurich which aimed to increase the income of disability insurance (DI) benefit recipients in order to reduce their entitlement to DI benefits. The project consisted of placement coaching carried out by a private company that specialized in this field. It was exceptional in three respects: firstly, it did not include any formal training and/or medical aid; secondly, the coaches did not have the possibility of providing additional financial incentives or sanctioning lack of effort; and thirdly, due to performance bonuses, the company had incentives not only to bring the participants into (higher paid) work, but also to keep them there for 52 weeks. This paper estimates the medium-run effects of the pilot project and assesses the net benefit for the Swiss social security system. Methods: Different propensity score matching estimators are applied to administrative longitudinal data in order to construct suitable control groups. Results: The estimates indicate a reduction in DI benefits and an increase in income even in the medium run. A simple cost-benefit analysis suggests that the pilot project was a profitable investment for the social security system. Conclusion: Given a healthy labor market, it seems possible to enhance the employment prospects of disabled persons with a relatively inexpensive intervention that does not include any explicit investments in human capital.
Moment stability for a predator-prey model with parametric dichotomous noises
NASA Astrophysics Data System (ADS)
Jin, Yan-Fei
2015-06-01
In this paper, we investigate the solution moment stability for a Harrison-type predator-prey model with parametric dichotomous noises. Using the Shapiro-Loginov formula, the equations for the first-order and second-order moments are obtained and the corresponding stable conditions are given. It is found that the solution moment stability depends on the noise intensity and correlation time of noise. The first-order and second-order moments become unstable with the decrease of correlation time. That is, the dichotomous noise can improve the solution moment stability with respect to Gaussian white noise. Finally, some numerical results are presented to verify the theoretical analyses. Project supported by the National Natural Science Foundation of China (Grant No. 11272051).
A second order discontinuous Galerkin fast sweeping method for Eikonal equations
NASA Astrophysics Data System (ADS)
Li, Fengyan; Shu, Chi-Wang; Zhang, Yong-Tao; Zhao, Hongkai
2008-09-01
In this paper, we construct a second order fast sweeping method with a discontinuous Galerkin (DG) local solver for computing viscosity solutions of a class of static Hamilton-Jacobi equations, namely the Eikonal equations. Our piecewise linear DG local solver is built on a DG method developed recently [Y. Cheng, C.-W. Shu, A discontinuous Galerkin finite element method for directly solving the Hamilton-Jacobi equations, Journal of Computational Physics 223 (2007) 398-415] for the time-dependent Hamilton-Jacobi equations. The causality property of Eikonal equations is incorporated into the design of this solver. The resulting local nonlinear system in the Gauss-Seidel iterations is a simple quadratic system and can be solved explicitly. The compactness of the DG method and the fast sweeping strategy lead to fast convergence of the new scheme for Eikonal equations. Extensive numerical examples verify efficiency, convergence and second order accuracy of the proposed method.
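For readers unfamiliar with the fast sweeping idea itself, the classical first-order finite-difference version for |grad u| = f, with Gauss-Seidel sweeps in alternating orderings, can be sketched as follows. This illustrates only the sweeping strategy; the paper's piecewise-linear DG local solver is not reproduced:
    # Classical first-order finite-difference fast sweeping for |grad u| = f.
    import numpy as np

    def fast_sweep(f, h, source, n_sweeps=4):
        n = f.shape[0]
        u = np.full(f.shape, 1e10)            # large value stands in for infinity
        u[source] = 0.0
        orders = [(range(n), range(n)), (range(n-1, -1, -1), range(n)),
                  (range(n-1, -1, -1), range(n-1, -1, -1)), (range(n), range(n-1, -1, -1))]
        for _ in range(n_sweeps):
            for I, J in orders:               # four alternating sweep directions
                for i in I:
                    for j in J:
                        a = min(u[i-1, j] if i > 0 else 1e10, u[i+1, j] if i < n-1 else 1e10)
                        b = min(u[i, j-1] if j > 0 else 1e10, u[i, j+1] if j < n-1 else 1e10)
                        if abs(a - b) >= f[i, j] * h:
                            ubar = min(a, b) + f[i, j] * h
                        else:
                            ubar = 0.5 * (a + b + np.sqrt(2 * f[i, j]**2 * h**2 - (a - b)**2))
                        u[i, j] = min(u[i, j], ubar)   # monotone Godunov update
        return u

    n, h = 41, 1.0 / 40
    u = fast_sweep(np.ones((n, n)), h, source=(20, 20))
    # u approximates the distance to the grid centre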
NASA Astrophysics Data System (ADS)
Syrakos, Alexandros; Varchanis, Stylianos; Dimakopoulos, Yannis; Goulas, Apostolos; Tsamopoulos, John
2017-12-01
Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the gradient operator has received less attention despite its fundamental importance with regard to the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or Green-Gauss) scheme and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively. This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes are then used within a FVM to solve a simple diffusion equation on unstructured grids generated by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement. On the other hand, use of the LS gradient leads to second-order accurate results, as does the use of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the common DT gradient consistent at almost no extra cost. The numerical tests are performed using both an in-house code and the popular public domain partial differential equation solver OpenFOAM.
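The least-squares gradient discussed above is easy to state concretely: the cell-centre gradient is the vector g that best fits the neighbour value differences in a least-squares sense, and for a linear field it is exact on any stencil. A minimal sketch on an assumed irregular stencil (our own illustration, not the authors' code, and without the corrected Green-Gauss variants):
    # Unweighted least-squares (LS) cell-centre gradient on an arbitrary stencil.
    import numpy as np

    def ls_gradient(xc, phic, xn, phin):
        """xc: cell centre (2,), phic: value there; xn: (k,2) neighbour centres."""
        d = xn - xc                      # displacement vectors to neighbours
        dphi = phin - phic               # value differences
        g, *_ = np.linalg.lstsq(d, dphi, rcond=None)   # solve d @ g ~= dphi
        return g

    rng = np.random.default_rng(0)
    xc = np.array([0.3, 0.7])
    xn = xc + rng.normal(scale=0.1, size=(6, 2))            # skewed, irregular stencil
    phi = lambda x: 2.0 + 3.0 * x[..., 0] - 1.5 * x[..., 1]  # linear test field
    print(ls_gradient(xc, phi(xc), xn, phi(xn)))             # -> approx [3.0, -1.5]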
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofeng, E-mail: xfyang@math.sc.edu; Han, Daozhi, E-mail: djhan@iu.edu
2017-02-01
In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formulas (BDF2) and the second order Crank–Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We prove that all three schemes are unconditionally energy stable rigorously. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
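The three time discretizations named above can be illustrated on the scalar test problem y' = lambda*y. This is a generic sketch; the implicit Euler variant, the Crank-Nicolson start-up step for BDF2 and the parameters are our assumptions and have nothing to do with the phase field crystal discretization itself:
    # Backward Euler, Crank-Nicolson and BDF2 on y' = lam*y, y(0) = 1.
    import numpy as np

    lam, dt, T = -2.0, 0.1, 1.0
    n = int(T / dt)

    def backward_euler():
        y = 1.0
        for _ in range(n):
            y = y / (1 - dt * lam)        # (y_{k+1} - y_k)/dt = lam*y_{k+1}
        return y

    def crank_nicolson():
        y = 1.0
        for _ in range(n):
            y = y * (1 + 0.5 * dt * lam) / (1 - 0.5 * dt * lam)
        return y

    def bdf2():
        y0 = 1.0
        y1 = y0 * (1 + 0.5 * dt * lam) / (1 - 0.5 * dt * lam)   # CN start-up step
        for _ in range(n - 1):
            # (3 y_{k+1} - 4 y_k + y_{k-1}) / (2 dt) = lam * y_{k+1}
            y0, y1 = y1, (4 * y1 - y0) / (3 - 2 * dt * lam)
        return y1

    exact = np.exp(lam * T)
    print(backward_euler() - exact, crank_nicolson() - exact, bdf2() - exact)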
Fourth order difference methods for hyperbolic IBVP's
NASA Technical Reports Server (NTRS)
Gustafsson, Bertil; Olsson, Pelle
1994-01-01
Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the non-linear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case also an entropy condition is fulfilled. In particular we shall demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD-method. The results show that the fourth order methods are the only ones that give good results for all the considered test problems.
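Two of the ingredients named above, an explicit fourth-order difference operator in space and the third-order TVD Runge-Kutta time discretization, can be combined in a few lines for the periodic linear advection equation. This is a sketch under assumed periodic boundaries; the paper's boundary closures, compact operators and filters are not reproduced:
    # Linear advection u_t + u_x = 0 with fourth-order central differences in
    # space and third-order TVD Runge-Kutta (Shu-Osher) in time, periodic grid.
    import numpy as np

    N, L = 200, 2 * np.pi
    h = L / N
    x = np.arange(N) * h
    dt = 0.4 * h                      # well inside the stability limit
    u = np.sin(x)

    def dudx(u):
        # fourth-order central difference, periodic via np.roll
        return (-np.roll(u, -2) + 8 * np.roll(u, -1)
                - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * h)

    def rhs(u):
        return -dudx(u)

    for _ in range(int(1.0 / dt)):
        u1 = u + dt * rhs(u)                          # TVD-RK3 stage 1
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))    # stage 2
        u = u / 3 + 2.0 / 3.0 * (u2 + dt * rhs(u2))   # stage 3
    # after time t, u approximates the exact solution sin(x - t)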
Time accurate application of the MacCormack 2-4 scheme on massively parallel computers
NASA Technical Reports Server (NTRS)
Hudson, Dale A.; Long, Lyle N.
1995-01-01
Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
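The kind of added artificial viscosity discussed above can be illustrated with blended second- and fourth-difference damping terms on a periodic grid. This is a simplified sketch: the coefficients eps2 and eps4 are assumed constants, whereas practical Jameson-type dissipation switches them on and off with a solution-based sensor:
    # Blended second- and fourth-difference artificial dissipation, periodic grid.
    import numpy as np

    def artificial_dissipation(u, eps2=0.05, eps4=0.005):
        d2 = np.roll(u, -1) - 2 * u + np.roll(u, 1)                 # second difference
        d4 = (np.roll(u, -2) - 4 * np.roll(u, -1) + 6 * u
              - 4 * np.roll(u, 1) + np.roll(u, 2))                  # fourth difference
        return eps2 * d2 - eps4 * d4      # added to the right-hand side each step

    # usage: u_new = u + dt * (physical_rhs(u) + artificial_dissipation(u))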
TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: The conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in dose increase as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections by preserving structural information. Methods: We first reconstruct a CT image on the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the CT reconstruction mean errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves a better spatial resolution. With basis materials of Iodine and Teflon, our method on 20 projections obtains similar quality of decomposed material images compared with FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections and therefore scan time in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density with similar quality compared with two full scans. Our future work includes more phantom studies to validate the performance of our method.
Bai, Xiao-ping; Zhang, Xi-wei
2013-01-01
Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be considered to find the optimum scheme. To address this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index and integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and carrying out analysis and decision making. The presented method can offer a valuable reference for risk computation in building construction projects.
NASA Technical Reports Server (NTRS)
Tsang, Leung; Chan, Chi Hou; Kong, Jin Au; Joseph, James
1992-01-01
Complete polarimetric signatures of a canopy of dielectric cylinders overlying a homogeneous half space are studied with the first and second order solutions of the vector radiative transfer theory. The vector radiative transfer equations contain a general nondiagonal extinction matrix and a phase matrix. The energy conservation issue is addressed by calculating the elements of the extinction matrix and the elements of the phase matrix in a manner that is consistent with energy conservation. Two methods are used. In the first method, the surface fields and the internal fields of the dielectric cylinder are calculated by using the fields of an infinite cylinder. The phase matrix is calculated and the extinction matrix is calculated by summing the absorption and scattering to ensure energy conservation. In the second method, the method of moments is used to calculate the elements of the extinction and phase matrices. The Mueller matrix based on the first order and second order multiple scattering solutions of the vector radiative transfer equation are calculated. Results from the two methods are compared. The vector radiative transfer equations, combined with the solution based on method of moments, obey both energy conservation and reciprocity. The polarimetric signatures, copolarized and depolarized return, degree of polarization, and phase differences are studied as a function of the orientation, sizes, and dielectric properties of the cylinders. It is shown that second order scattering is generally important for vegetation canopy at C band and can be important at L band for some cases.
On the accuracy of Whitham's method. [for steady ideal gas flow past cones
NASA Technical Reports Server (NTRS)
Zahalak, G. I.; Myers, M. K.
1974-01-01
The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.
NASA Astrophysics Data System (ADS)
Sajjadi, S. Maryam; Abdollahi, Hamid; Rahmanian, Reza; Bagheri, Leila
2016-03-01
A rapid, simple and inexpensive method using fluorescence spectroscopy coupled with multi-way methods for the determination of aflatoxins B1 and B2 in peanuts has been developed. In this method, aflatoxins are extracted with a mixture of water and methanol (90:10) and then monitored by fluorescence spectroscopy producing EEMs. Although the combination of EEMs and multi-way methods is commonly used to determine analytes in complex chemical systems with unknown interference(s), the rank overlap problem in excitation and emission profiles may restrict the application of this strategy. If there is rank overlap in one mode, several three-way algorithms, such as PARAFAC under some constraints, can resolve this kind of data successfully. However, the analysis of EEM data is impossible when some species have rank overlap in both modes, because the information in the data matrix is then equivalent to zero-order data for those species, which is the case in our study. Aflatoxins B1 and B2 have the same shape of spectral profiles in both excitation and emission modes, and we propose creating third-order data for each sample using the solvent as a new additional selectivity mode. This third-order data is, in turn, converted to second-order data by augmentation, which restores the second-order advantage lost in the original EEMs. The three-way data are constructed by stacking the augmented data along the third mode and then analyzed by two powerful second-order calibration methods (BLLS-RBL and PARAFAC) to quantify the analytes in four kinds of peanut samples. The results of both methods are in good agreement and reasonable recoveries are obtained.
Warsame, Abukar; Borg, Lena; Lind, Hans
2013-01-01
The aim of this paper is to argue for a number of statements about what is important for a client to do in order to improve quality in new infrastructure projects, with a focus on procurement and organizational issues. The paper synthesizes theoretical and empirical results concerning organizational performance, especially the role of the client in the quality of a project. The theoretical framework used is contract theory and transaction cost theory, where assumptions about rationality and self-interest are made and where incentive problems, asymmetric information, and moral hazard are central concepts. It is argued that the choice of procurement type will not be a crucial factor. There is no procurement method that guarantees better quality than another. We argue that given the right conditions all procurement methods can give good results, and given the wrong conditions, all of them can lead to low quality. What is crucial is how the client organization manages knowledge and the incentives for the members of the organization. This can be summarized as “organizational culture.” One way to improve knowledge and create incentives is to use independent second opinions in a systematic way. PMID:24250274
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate a threshold associated with the absolute phase values to determine the fringe order error, which makes the approach more reliable and avoids a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
Gao, Yingjie; Zhang, Jinhai; Yao, Zhenxing
2015-12-01
The complex frequency shifted perfectly matched layer (CFS-PML) can improve the absorbing performance of the PML for nearly grazing incident waves. However, traditional PML and CFS-PML are based on first-order wave equations; thus, they are not suitable for the second-order wave equation. In this paper, an implementation of CFS-PML for the second-order wave equation is presented using auxiliary differential equations. This method is free of both convolution calculations and third-order temporal derivatives. As an unsplit CFS-PML, it retains the improved absorption of nearly grazing incident waves. Numerical experiments show that it has better absorption than typical PML implementations based on the second-order wave equation.
FY2007 Laboratory Directed Research and Development Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, W W; Sketchley, J A; Kotta, P R
The Laboratory Directed Research and Development (LDRD) annual report for fiscal year 2007 (FY07) provides a summary of LDRD-funded projects for the fiscal year and consists of two parts: (1) an introduction to the LDRD Program, the LDRD portfolio-management process, program statistics for the year, and highlights of accomplishments for the year; and (2) a summary of each project, submitted by the principal investigator. Project summaries include the scope, motivation, goals, relevance to Department of Energy (DOE)/National Nuclear Security Administration (NNSA) and Lawrence Livermore National Laboratory (LLNL) mission areas, the technical progress achieved in FY07, and a list of publications that resulted from the research in FY07. Summaries are organized in sections by research category (in alphabetical order). Within each research category, the projects are listed in order of their LDRD project category: Strategic Initiative (SI), Exploratory Research (ER), Laboratory-Wide Competition (LW), and Feasibility Study (FS). Within each project category, the individual project summaries appear in order of their project tracking code, a unique identifier that consists of three elements. The first is the fiscal year the project began, the second represents the project category, and the third identifies the serial number of the proposal for that fiscal year.
Nonlinear dimensionality reduction of electroencephalogram (EEG) for Brain Computer interfaces.
Teli, Mohammad Nayeem; Anderson, Charles
2009-01-01
Patterns in electroencephalogram (EEG) signals are analyzed for a Brain Computer Interface (BCI). An important aspect of this analysis is the work on transformations of high dimensional EEG data to low dimensional spaces in which we can classify the data according to mental tasks being performed. In this research we investigate how a Neural Network (NN) in an auto-encoder with bottleneck configuration can find such a transformation. We implemented two approximate second-order methods to optimize the weights of these networks, because the more common first-order methods are very slow to converge for networks like these with more than three layers of computational units. The resulting non-linear projections of time embedded EEG signals show interesting separations that are related to tasks. The bottleneck networks do indeed discover nonlinear transformations to low-dimensional spaces that capture much of the information present in EEG signals. However, the resulting low-dimensional representations do not improve classification rates beyond what is possible using Quadratic Discriminant Analysis (QDA) on the original time-lagged EEG.
[Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (1)].
Murase, Kenya
2014-01-01
The utilization of differential equations, and methods for solving them, in medical physics is presented. First, the basic concept and the kinds of differential equations are reviewed. Second, separable differential equations and well-known first-order and second-order differential equations are introduced, and the methods for solving them are described together with several examples. In the next issue, the symbolic and series expansion methods for solving differential equations will be mainly introduced.
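As a concrete illustration of the two equation classes mentioned, a separable first-order decay equation and a constant-coefficient second-order oscillator equation can be solved symbolically in a few lines. The choice of SymPy and of the example equations is ours, not the article's:
    # Symbolic solution of a first-order and a second-order ODE with SymPy.
    import sympy as sp

    t = sp.symbols('t')
    k, w = sp.symbols('k omega', positive=True)
    y = sp.Function('y')

    # first order: dy/dt = -k*y  (e.g. washout / radioactive decay)
    sol1 = sp.dsolve(sp.Eq(y(t).diff(t), -k * y(t)), y(t))
    # second order: y'' + omega**2 * y = 0  (harmonic oscillator)
    sol2 = sp.dsolve(sp.Eq(y(t).diff(t, 2) + w**2 * y(t), 0), y(t))
    print(sol1)   # y(t) = C1*exp(-k*t)
    print(sol2)   # y(t) = C1*sin(omega*t) + C2*cos(omega*t)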
Impact of agile methodologies on team capacity in automotive radio-navigation projects
NASA Astrophysics Data System (ADS)
Prostean, G.; Hutanu, A.; Volker, S.
2017-01-01
The development processes used in automotive radio-navigation projects are constantly under pressure to adapt. While the software development models are based on automotive production processes, the integration of peripheral components into an automotive system triggers a high number of requirement modifications. The use of traditional development models in the automotive industry pushes a team's development capacity to its limits. The root cause lies in the inflexibility of current processes and their limited adaptability. This paper addresses a new project management approach for the development of radio-navigation projects. Understanding the weaknesses of currently used models helped us develop and integrate agile methodologies into the traditional development model structure. In the first part we focus on change management methods to reduce the inflow of requests for change. Established change management risk analysis processes enable project management to judge the impact of a requirement change and also give the project time to implement some changes. However, in big automotive radio-navigation projects the time saved is not enough to implement the large number of changes submitted to the project. In the second part of this paper we focus on increasing team capacity by integrating agile methodologies into the traditional model at critical project phases. The overall objective of this paper is to demonstrate the need for process adaptation in order to resolve project team capacity bottlenecks.
A New Factorisation of a General Second Order Differential Equation
ERIC Educational Resources Information Center
Clegg, Janet
2006-01-01
A factorisation of a general second order ordinary differential equation is introduced from which the full solution to the equation can be obtained by performing two integrations. The method is compared with traditional methods for solving these types of equations. It is shown how the Green's function can be derived directly from the factorisation…
Speaker normalization and adaptation using second-order connectionist networks.
Watrous, R L
1993-01-01
A method for speaker normalization and adaptation using connectionist networks is developed. A speaker-specific linear transformation of observations of the speech signal is computed using second-order network units. Classification is accomplished by a multilayer feedforward network that operates on the normalized speech data. The network is adapted for a new talker by modifying the transformation parameters while leaving the classifier fixed. This is accomplished by backpropagating the classification error through the classifier to the second-order transformation units. This method was evaluated on the classification of ten vowels for 76 speakers using the first two formant values of the Peterson-Barney data. The results suggest that rapid speaker adaptation resulting in high classification accuracy can be accomplished by this method.
NASA Astrophysics Data System (ADS)
Nonaka, Andrew; Day, Marcus S.; Bell, John B.
2018-01-01
We present a numerical approach for low Mach number combustion that conserves both mass and energy while remaining on the equation of state to a desired tolerance. We present both unconfined and confined cases, where in the latter the ambient pressure changes over time. Our overall scheme is a projection method for the velocity coupled to a multi-implicit spectral deferred corrections (SDC) approach to integrate the mass and energy equations. The iterative nature of SDC methods allows us to incorporate a series of pressure discrepancy corrections naturally that lead to additional mass and energy influx/outflux in each finite volume cell in order to satisfy the equation of state. The method is second order, and satisfies the equation of state to a desired tolerance with increasing iterations. Motivated by experimental results, we test our algorithm on hydrogen flames with detailed kinetics. We examine the morphology of thermodiffusively unstable cylindrical premixed flames in high-pressure environments for confined and unconfined cases. We also demonstrate that our algorithm maintains the equation of state for premixed methane flames and non-premixed dimethyl ether jet flames.
Passive Magnetic Bearing With Ferrofluid Stabilization
NASA Technical Reports Server (NTRS)
Jansen, Ralph; DiRusso, Eliseo
1996-01-01
A new class of magnetic bearings is shown to exist analytically and is demonstrated experimentally. This class of magnetic bearings utilizes a ferrofluid/solid magnet interaction to stabilize the axial degree of freedom of a permanent magnet radial bearing. Twenty-six permanent magnet bearing designs and twenty-two ferrofluid stabilizer designs are evaluated. Two types of radial bearing designs are tested to determine their force and stiffness using two methods. The first method is based on the use of frequency measurements to determine stiffness by means of an analytical model. The second method consists of loading the system and measuring displacement in order to measure stiffness. Two ferrofluid stabilizers are tested and force-displacement curves are measured. Two experimental test fixtures are designed and constructed in order to conduct the stiffness testing. Polynomial models of the data are generated and used to design the bearing prototype. The prototype was constructed, tested, and shown to be stable. Further testing shows the possibility of using this technology for vibration isolation. The project successfully demonstrated the viability of the passive magnetic bearing with ferrofluid stabilization both experimentally and analytically.
Xu, Enhua; Zhao, Dongbo; Li, Shuhua
2015-10-13
A multireference second order perturbation theory based on a complete active space configuration interaction (CASCI) function or density matrix renormalized group (DMRG) function has been proposed. This method may be considered as an approximation to the CAS/A approach with the same reference, in which the dynamical correlation is simplified with blocked correlated second order perturbation theory based on the generalized valence bond (GVB) reference (GVB-BCPT2). This method, denoted as CASCI-BCPT2/GVB or DMRG-BCPT2/GVB, is size consistent and has a similar computational cost as the conventional second order perturbation theory (MP2). We have applied it to investigate a number of problems of chemical interest. These problems include bond-breaking potential energy surfaces in four molecules, the spectroscopic constants of six diatomic molecules, the reaction barrier for the automerization of cyclobutadiene, and the energy difference between the monocyclic and bicyclic forms of 2,6-pyridyne. Our test applications demonstrate that CASCI-BCPT2/GVB can provide comparable results with CASPT2 (second order perturbation theory based on the complete active space self-consistent-field wave function) for systems under study. Furthermore, the DMRG-BCPT2/GVB method is applicable to treat strongly correlated systems with large active spaces, which are beyond the capability of CASPT2.
2007-08-01
Figure: infinite plate with a hole, sequence of meshes produced by h-refinement. Refinement strategies are recalled with an emphasis on k-refinement. In Section 3, the use of high-order NURBS within a projection technique is studied in the geometrically linear case with a B̄ method, in order to investigate the choice of approximation and projection spaces with NURBS.
Given a one-step numerical scheme, on which ordinary differential equations is it exact?
NASA Astrophysics Data System (ADS)
Villatoro, Francisco R.
2009-01-01
A necessary condition for a (non-autonomous) ordinary differential equation to be exactly solved by a one-step, finite difference method is that the principal term of its local truncation error be null. A procedure to determine some ordinary differential equations exactly solved by a given numerical scheme is developed. Examples of differential equations exactly solved by the explicit Euler, implicit Euler, trapezoidal rule, second-order Taylor, third-order Taylor, van Niekerk's second-order rational, and van Niekerk's third-order rational methods are presented.
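For the explicit Euler method, the condition in question is easy to check symbolically: the principal local truncation error term is (h^2/2) y'' = (h^2/2)(f_t + f f_y), so an ODE y' = f(t, y) can only be solved exactly if this expression vanishes identically. A small sketch of that check (the example right-hand sides are our own assumptions, not cases taken from the paper):
    # Principal local truncation error term of explicit Euler for y' = f(t, y).
    import sympy as sp

    t, y, c, k = sp.symbols('t y c k')

    def euler_principal_lte_term(f):
        # y'' = f_t + f * f_y along solutions of y' = f(t, y)
        return sp.simplify(sp.diff(f, t) + f * sp.diff(f, y))

    print(euler_principal_lte_term(c))        # 0       -> y' = c is solved exactly
    print(euler_principal_lte_term(-k * y))   # k**2*y  -> not solved exactly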
Valle, Annalisa; Massaro, Davide; Castelli, Ilaria; Sangiuliano Intra, Francesca; Lombardi, Elisabetta; Bracaglia, Edoardo; Marchetti, Antonella
2016-01-01
Mentalization research focuses on different aspects of this topic, highlighting individual differences in mentalizing and proposing intervention programs for children and adults to increase this ability. The “Thought in Mind Project” (TiM Project) provides training targeted at adults (teachers or parents) to increase their mentalization and, consequently, to obtain mentalization improvements in children. The present research aimed to explore, for the first time, the potential of TiM Project training for teachers with regard to enhancing the mentalizing of an adult who would interact with children as a teacher. For this purpose, two teachers, similar in meta-cognitive and meta-emotional skills, and their classes (N = 46) were randomly assigned to the training or control condition. In the first case, the teacher participated in training on promoting mentalizing in everyday school teaching strategies; in the second case, the teacher participated in a control activity, similar to the training in scheduling and methods but without promoting mentalization (in both conditions, two meetings lasting about 3 h at the beginning of the school year and two supervision sessions during the school year were conducted). The children were tested with tasks assessing several aspects of mentalization (second- and third-order false belief understanding, Strange Stories, Reading the Mind in the Eyes, Mentalizing Task) both before and after the teacher participated in the TiM or control training (i.e., at the beginning and at the end of the school year). The results showed that, although some measured components of mentalization progressed over time, only the TiM Project training group significantly improved in third-order false belief understanding and changed, to a greater extent than the control group, in two of the three components of the Mentalizing Task. This evidence is promising for the idea that creating a mentalizing community promotes the mentalization abilities of its members. PMID:27630586
Integrating physical and genetic maps: from genomes to interaction networks
Beyer, Andreas; Bandyopadhyay, Sourav; Ideker, Trey
2009-01-01
Physical and genetic mapping data have become as important to network biology as they once were to the Human Genome Project. Integrating physical and genetic networks currently faces several challenges: increasing the coverage of each type of network; establishing methods to assemble individual interaction measurements into contiguous pathway models; and annotating these pathways with detailed functional information. A particular challenge involves reconciling the wide variety of interaction types that are currently available. For this purpose, recent studies have sought to classify genetic and physical interactions along several complementary dimensions, such as ordered versus unordered, alleviating versus aggravating, and first versus second degree. PMID:17703239
Li, Zhilin; Ji, Haifeng; Chen, Xiaohong
2016-01-01
A new augmented method is proposed for elliptic interface problems with a piecewise variable coefficient that has a finite jump across a smooth interface. The main motivation is not only to obtain a second order accurate solution but also a second order accurate gradient on each side of the interface. The key to the new method is to introduce the jump in the normal derivative of the solution as an augmented variable and rewrite the interface problem as a new PDE that consists of a leading Laplacian operator plus lower order derivative terms near the interface. In this way, the leading second order derivative jump relations are independent of the jump in the coefficient, which appears only in the lower order terms after the scaling. An upwind type discretization is used for the finite difference discretization at the irregular grid points near or on the interface so that the resulting coefficient matrix is an M-matrix. A multigrid solver is used to solve the linear system of equations and the GMRES iterative method is used to solve for the augmented variable. Second order convergence for the solution and the gradient on each side of the interface is also proved in this paper. Numerical examples for general elliptic interface problems have confirmed the theoretical analysis and the efficiency of the new method. PMID:28983130
Mori-Zwanzig theory for dissipative forces in coarse-grained dynamics in the Markov limit
NASA Astrophysics Data System (ADS)
Izvekov, Sergei
2017-01-01
We derive alternative Markov approximations for the projected (stochastic) force and memory function in the coarse-grained (CG) generalized Langevin equation, which describes the time evolution of the center-of-mass coordinates of clusters of particles in the microscopic ensemble. This is done with the aid of the Mori-Zwanzig projection operator method based on the recently introduced projection operator [S. Izvekov, J. Chem. Phys. 138, 134106 (2013), 10.1063/1.4795091]. The derivation exploits the "generalized additive fluctuating force" representation to which the projected force reduces in the adopted projection operator formalism. For the projected force, we present a first-order time expansion which correctly extends the static fluctuating force ansatz with the terms necessary to maintain the required orthogonality of the projected dynamics in the Markov limit to the space of CG phase variables. The approximant of the memory function correctly accounts for the momentum dependence in the lowest (second) order and indicates that such a dependence may be important in the CG dynamics approaching the Markov limit. In the case of CG dynamics with a weak dependence of the memory effects on the particle momenta, the expression for the memory function presented in this work is applicable to non-Markov systems. The approximations are formulated in a propagator-free form allowing their efficient evaluation from the microscopic data sampled by standard molecular dynamics simulations. A numerical application is presented for a molecular liquid (nitromethane). With our formalism we do not observe the "plateau-value problem" if the friction tensors for dissipative particle dynamics (DPD) are computed using the Green-Kubo relation. Our formalism provides a consistent bottom-up route for hierarchical parametrization of DPD models from atomistic simulations.
NASA Astrophysics Data System (ADS)
Coco, Armando; Russo, Giovanni
2018-05-01
In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method consists of a finite-difference method on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and is second order accurate in the solution and the gradient itself. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned on grid points that are close to the interface: a real value, that represents the numerical solution on that grid point, and a ghost value, that represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficient. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative and therefore suitable stencils must be chosen to discretize interface conditions in order to achieve second order accuracy in the solution and its gradient. A proper treatment of the interface conditions will allow the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jump in the coefficients: order of accuracy, monotonicity of the errors and good convergence factor are maintained by the scheme.
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
ERIC Educational Resources Information Center
Guadalupe, Deana R.
Integrated Methods for Pupils to Reinforce Occupational and Vocational Effectiveness (Project IMPROVE) was a federally funded project in its second year of operation in two Manhattan (New York) high schools in 1992-93. It served limited-English-proficient students, 186 Latino and 13 Asian-American, in grades 9-12. Students received instruction in…
A second-order shock-expansion method applicable to bodies of revolution near zero lift
NASA Technical Reports Server (NTRS)
1957-01-01
A second-order shock-expansion method applicable to bodies of revolution is developed by the use of the predictions of the generalized shock-expansion method in combination with characteristics theory. Equations defining the zero-lift pressure distributions and the normal-force and pitching-moment derivatives are derived. Comparisons with experimental results show that the method is applicable at values of the similarity parameter, the ratio of free-stream Mach number to nose fineness ratio, from about 0.4 to 2.
Numerical solution of second order ODE directly by two point block backward differentiation formula
NASA Astrophysics Data System (ADS)
Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini
2015-12-01
A direct two-point block backward differentiation formula (BBDF2) for solving second-order ordinary differential equations (ODEs) is presented in this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of integration. The derived method is implemented using a fixed step size, and the numerical results demonstrate the advantage of the direct method compared to the reduction method.
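The "direct versus reduction" comparison can be illustrated on the special case y'' = f(t, y): a direct two-step central-difference (Störmer) scheme advances the second-order equation as it stands, while the reduction approach first converts it into a first-order system. This sketch is our own illustration; the block BBDF2 coefficients of the paper are not reproduced, and the explicit Euler reduction is used only as a simple baseline:
    # Direct central-difference (Stormer) scheme versus reduction to a system.
    import numpy as np

    f = lambda t, y: -y                      # test problem y'' = -y, y(0)=1, y'(0)=0
    h, N = 0.01, 1000

    # direct: y_{n+1} = 2 y_n - y_{n-1} + h^2 f(t_n, y_n)
    y0, y1 = 1.0, 1.0 - 0.5 * h**2           # Taylor start-up for the second point
    for n in range(1, N):
        y0, y1 = y1, 2 * y1 - y0 + h**2 * f(n * h, y1)

    # reduction: z = (y, y') with explicit Euler (baseline for comparison only)
    z = np.array([1.0, 0.0])
    for n in range(N):
        z = z + h * np.array([z[1], f(n * h, z[0])])

    print(y1, z[0], np.cos(N * h))           # the direct scheme is markedly closer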
Grant, A; Biley, F C; Leigh-Phippard, H; Walker, H
2012-12-01
This paper is the second part of a two-article practice development report. It builds on the first part by introducing and discussing a Writing for Recovery practice development project, conducted at two UK sites. The paper begins by briefly describing the project within the context of helping mental health users, carers and survivors develop skills in creative writing in order to engage in the process of narrative re-storying in line with preferred identity. A selective overview of broad and focal background literature relevant to the project is then provided in order to position it within a values-based mental health nursing practice. Following this, the specific plan for running the project is briefly summarized, covering actual and anticipated ethical issues. The paper ends with a discussion of dissemination aims. © 2012 Blackwell Publishing.
NASA Astrophysics Data System (ADS)
Iqbal, Mohsin; Duivenvoorden, Kasper; Schuch, Norbert
2018-05-01
We use projected entangled pair states (PEPS) to study topological quantum phase transitions. The local description of topological order in the PEPS formalism allows us to set up order parameters which measure condensation and deconfinement of anyons and serve as substitutes for conventional order parameters. We apply these order parameters, together with anyon-anyon correlation functions and some further probes, to characterize topological phases and phase transitions within a family of models based on a Z4 symmetry, which contains Z4 quantum double, toric code, double semion, and trivial phases. We find a diverse phase diagram which exhibits a variety of different phase transitions of both first and second order which we comprehensively characterize, including direct transitions between the toric code and the double semion phase.
Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus
NASA Astrophysics Data System (ADS)
Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta
2006-11-01
The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Plank equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Plank operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and a better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
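The predictor-corrector idea for the underlying SDE can be shown in its simplest scalar form with additive noise, dX = a(X) dt + b dW: the step is first predicted with Euler-Maruyama and the drift is then corrected with a trapezoidal average. This is a generic sketch with an assumed double-well drift, not the importance-sampled DMC propagator used for the helium clusters:
    # Euler-Maruyama versus a drift predictor-corrector step for a scalar Ito SDE
    # with additive noise, dX = a(X) dt + b dW.
    import numpy as np

    a = lambda x: -x**3 + x          # assumed drift (double well), for illustration
    b, dt, N = 0.3, 0.01, 10000
    rng = np.random.default_rng(1)

    x_em, x_pc = 0.1, 0.1
    for _ in range(N):
        dW = rng.normal(scale=np.sqrt(dt))
        # Euler-Maruyama
        x_em = x_em + a(x_em) * dt + b * dW
        # predictor-corrector: trapezoidal correction of the drift, same noise
        x_star = x_pc + a(x_pc) * dt + b * dW
        x_pc = x_pc + 0.5 * (a(x_pc) + a(x_star)) * dt + b * dW
    print(x_em, x_pc)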
A lattice Boltzmann model for the Burgers-Fisher equation.
Zhang, Jianying; Yan, Guangwu
2010-06-01
A lattice Boltzmann model is developed for the one- and two-dimensional Burgers-Fisher equation based on the method of the higher-order moment of equilibrium distribution functions and a series of partial differential equations on different time scales. In order to obtain the two-dimensional Burgers-Fisher equation, the vector sigma(j) has been used. In order to overcome the drawback of "error rebound," a new assumption for the additional distribution is presented, in which two additional terms, of first and second order respectively, are used. Comparisons with the results obtained by other methods reveal that the numerical solutions obtained by the proposed method converge to the exact solutions. The model under the new assumption gives better results than that with the second-order assumption. (c) 2010 American Institute of Physics.
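For readers who want a reference solution to compare a lattice Boltzmann result against, a plain explicit finite-difference solver for the one-dimensional equation, taken here in the commonly used form u_t + alpha*u*u_x = u_xx + beta*u*(1 - u), fits in a few lines. This is only an assumed reference scheme, not the lattice Boltzmann model constructed in the paper:
    # Explicit finite-difference reference solver for 1D Burgers-Fisher.
    import numpy as np

    alpha, beta = 1.0, 1.0
    N, L, T = 200, 10.0, 1.0
    h = L / N
    dt = 0.2 * h**2                             # explicit diffusion stability limit
    x = np.linspace(0.0, L, N)
    u = 0.5 * (1.0 - np.tanh(x - 0.5 * L))      # smooth front initial condition

    for _ in range(int(T / dt)):
        ux = np.zeros_like(u); uxx = np.zeros_like(u)
        ux[1:-1] = (u[2:] - u[:-2]) / (2 * h)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        u = u + dt * (-alpha * u * ux + uxx + beta * u * (1 - u))
        u[0], u[-1] = 1.0, 0.0                  # simple Dirichlet boundary values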
ERIC Educational Resources Information Center
Ndemanu, Michael Takafor
2012-01-01
An online epistolary project was conducted with Cameroonian French-speaking students in order to boost English language learning. The project involved email exchanges (in English) between a small group of students from Cameroon and Canada, and it was coordinated by their teachers in both countries. At the end of the study, student emails were…
Solution of second order quasi-linear boundary value problems by a wavelet method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn
2015-03-10
A wavelet Galerkin method based on expansions of Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one is about nonlinear heat conduction and the other is on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. Through comparison with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach orders of 5.8.
Second-order Born calculation of coplanar symmetric (e, 2e) process on Mg
NASA Astrophysics Data System (ADS)
Zhang, Yong-Zhi; Wang, Yang; Zhou, Ya-Jun
2014-06-01
The second-order distorted wave Born approximation (DWBA) method is employed to investigate the triple differential cross sections (TDCS) of coplanar doubly symmetric (e, 2e) collisions for magnesium at excess energies of 6 eV-20 eV. Compared with the standard first-order DWBA calculations, the inclusion of the second-order Born term in the scattering amplitude improves the degree of agreement with experiments, especially in the backward scattering region of the TDCS. This indicates that the present second-order Born term is capable of giving a reasonable correction to the DWBA model in the study of coplanar symmetric (e, 2e) problems for a two-valence-electron target in the low energy range.
New robust bilinear least squares method for the analysis of spectral-pH matrix data.
Goicoechea, Héctor C; Olivieri, Alejandro C
2005-07-01
A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator
NASA Astrophysics Data System (ADS)
Wu, Baisheng; Liu, Weijia; Lim, C. W.
2017-07-01
A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way, and it has a significantly faster convergence rate.
Multilabel user classification using the community structure of online networks.
Rizos, Georgios; Papadopoulos, Symeon; Kompatsiaris, Yiannis
2017-01-01
We study the problem of semi-supervised, multi-label user classification of networked data in the online social platform setting. We propose a framework that combines unsupervised community extraction and supervised, community-based feature weighting before training a classifier. We introduce Approximate Regularized Commute-Time Embedding (ARCTE), an algorithm that projects the users of a social graph onto a latent space, but instead of packing the global structure into a matrix of predefined rank, as many spectral and neural representation learning methods do, it extracts local communities for all users in the graph in order to learn a sparse embedding. To this end, we employ an improvement of personalized PageRank algorithms for searching locally in each user's graph structure. Then, we perform supervised community feature weighting in order to boost the importance of highly predictive communities. We assess our method's performance on the problem of user classification by performing an extensive comparative study among various recent methods based on graph embeddings. The comparison shows that ARCTE significantly outperforms the competition in almost all cases, achieving up to 35% relative improvement compared to the second best competing method in terms of F1-score.
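Since the local community extraction in ARCTE builds on personalized PageRank, a minimal sketch of the underlying power iteration may help fix ideas. This is a generic, dense-matrix illustration with assumed parameter names (alpha, tol, max_iter); the paper's improved algorithm operates on sparse graphs and differs in detail.

import numpy as np

def personalized_pagerank(A, seed, alpha=0.85, tol=1e-10, max_iter=1000):
    # Power iteration for the personalized PageRank vector of a seed node.
    n = A.shape[0]
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0                       # avoid division by zero for isolated nodes
    P = A / deg[:, None]                      # row-stochastic transition matrix
    e = np.zeros(n)
    e[seed] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_next = alpha * (P.T @ r) + (1.0 - alpha) * e
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

# Tiny usage example on a 4-node undirected graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
r = personalized_pagerank(A, seed=0)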
NASA Astrophysics Data System (ADS)
Khe Sun, Pak; Vorona-Slivinskaya, Lubov; Voskresenskay, Elena
2017-10-01
The article highlights the necessity of a complex approach to assessing the economic security of municipalities, one which considers the specifics of municipal management. The approach allows the economic security level of municipalities to be compared, but it does not describe parameter differences between the compared municipalities. Therefore, a second method is suggested: the parameter rank order method. Applying these methods made it possible to identify the leaders and laggards in economic security among the municipalities and to rank all economic security parameters according to their significance level. A complex assessment of the economic security of municipalities, based on the combination of the two approaches, allowed the security level to be assessed more accurately. In order to assure economic security and equalize its threshold values, special attention should be paid to transportation system development in municipalities. Strategic aims of projects in the area of transportation infrastructure development in municipalities include the following: contributing to the creation and development of transportation logistics and manufacturing-transport complexes, developing transportation infrastructure with account taken of the internal and external functions of the region, developing public transport, improving transport security, and reducing its negative influence on the environment.
Theory and Practice Meets in Industrial Process Design -Educational Perspective-
NASA Astrophysics Data System (ADS)
Aramo-Immonen, Heli; Toikka, Tarja
Software engineers should see themselves as business process designers in enterprise resource planning (ERP) system re-engineering projects. Software engineers and managers should engage in design dialogue. The objective of this paper is to discuss the motives for studying design research in connection with management education, in order to envision and understand the soft human issues in the management context. A second goal is to develop means of practicing social skills between designers and managers. This article explores the affective components of design thinking in the industrial management domain. The conceptual part of this paper discusses the concepts of network and project economy, creativity, communication, the use of metaphors, and design thinking. Finally, an empirical research plan is introduced, together with the first empirical results from design-method experiments among multi-disciplinary groups of master-level students of industrial engineering and management and of software engineering.
Hybrid DG/FV schemes for magnetohydrodynamics and relativistic hydrodynamics
NASA Astrophysics Data System (ADS)
Núñez-de la Rosa, Jonatan; Munz, Claus-Dieter
2018-01-01
This paper presents a high-order hybrid discontinuous Galerkin/finite volume scheme for solving the equations of magnetohydrodynamics (MHD) and of relativistic hydrodynamics (SRHD) on quadrilateral meshes. In this approach, an arbitrary high-order discontinuous Galerkin spectral element (DG) method is combined with a finite volume (FV) scheme for the spatial discretization in order to simulate complex flow problems involving strong shocks. For the time discretization, a fourth-order strong-stability-preserving Runge-Kutta method is used. In the proposed hybrid scheme, a shock indicator is computed at the beginning of each Runge-Kutta stage in order to flag those elements containing shock waves or discontinuities. The DG solution in these troubled elements at the current time step is then projected onto a subdomain composed of finite volume subcells. The DG operator is subsequently applied to the unflagged elements, which are in principle oscillation-free, while the troubled elements are evolved with a robust second-/third-order FV operator. With this approach we are able to numerically simulate very challenging problems in the context of MHD and SRHD in one and two space dimensions and with very high-order polynomials. We perform convergence tests and present a comprehensive one- and two-dimensional test bench for both equation systems, focusing on problems with strong shocks. The presented hybrid approach shows that numerical schemes of very high order of accuracy are able to simulate these complex flow problems in an efficient and robust manner.
Design and development of second order MEMS sound pressure gradient sensor
NASA Astrophysics Data System (ADS)
Albahri, Shehab
The design and development of a second-order MEMS sound pressure gradient sensor is presented in this dissertation. Inspired by the directional hearing ability of the parasitoid fly Ormia ochracea, a novel first-order directional microphone that mimics the mechanical structure of the fly's ears and detects the sound pressure gradient has been developed. While first-order directional microphones can be very beneficial in a large number of applications, there is great potential for remarkable improvements in performance through the use of second-order systems. The second-order directional microphone is able to provide a theoretical improvement in signal-to-noise ratio (SNR) of 9.5 dB, compared to the first-order system, whose maximum SNR is 6 dB. Although the second-order microphone is more sensitive to the angle of incidence of the sound, the nature of the design and fabrication process introduces various factors that can degrade its performance. The first Ormia ochracea second-order directional microphone was designed in 2004 and fabricated in 2006 at Binghamton University. The results from the tested parts indicate that this microphone performs mostly as an omnidirectional microphone. In this work, the previous design is reexamined and analyzed to explain the unexpected results. A more sophisticated tool implementing the finite element package ANSYS is used to examine the response of the previous design. This new tool is used to study factors that had been ignored in the previous design, mainly response mismatch and fabrication uncertainty. A continuous model based on Hamilton's principle is introduced to verify the results obtained with the new method. Both models agree well and suggest a new way of optimizing the second-order directional microphone through geometrical manipulation. This work also introduces a new fabrication process flow to increase the fabrication yield. The newly suggested method uses the layered shell analysis in ANSYS. The developed models simulate the fabricated chips at different stages, with the stress in each layer introduced using thermal loading. The results indicate a new fabrication process flow that increases the rigidity of the composite layers and counters the deformation caused by the high stress in the thermal oxide layer.
Review Article: Recent Publications on Research Methods in Second Language Acquisition
ERIC Educational Resources Information Center
Ionin, Tania
2013-01-01
The central goal of the field of second language acquisition (SLA) is to describe and explain how second language learners acquire the target language. In order to achieve this goal, SLA researchers work with second language data, which can take a variety of forms, including (but not limited to) such commonly used methods as naturalistic…
A novel method for predicting the power outputs of wave energy converters
NASA Astrophysics Data System (ADS)
Wang, Yingguang
2018-03-01
This paper focuses on realistically predicting the power outputs of wave energy converters operating in shallow-water nonlinear waves. A heaving two-body point absorber is used as a specific calculation example, and the generated power of the point absorber has been predicted by using a novel method (a nonlinear simulation method) that incorporates a second-order random wave model into a nonlinear dynamic filter. It is demonstrated that the second-order random wave model in this article can be used to generate irregular waves with realistic crest-trough asymmetries, and consequently more accurate generated power can be predicted by subsequently solving the nonlinear dynamic filter equation with the nonlinearly simulated second-order waves as inputs. The research findings demonstrate that the novel nonlinear simulation method in this article can serve as a robust tool for ocean engineers in the design, analysis and optimization of wave energy converters.
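For intuition about what a second-order wave correction adds, the sketch below evaluates the classical second-order Stokes expansion for a single regular deep-water wave, whose bound harmonic sharpens crests and flattens troughs. The paper's model is a second-order random (irregular) wave model for shallow water, so this is only a simplified, assumed special case with illustrative amplitude and period.

import numpy as np

# Deep-water Stokes wave to second order: eta = a*cos(theta) + 0.5*k*a^2*cos(2*theta)
g = 9.81
a, T = 1.0, 8.0                              # amplitude [m], period [s] (illustrative)
omega = 2 * np.pi / T
k = omega**2 / g                             # deep-water dispersion relation
t = np.linspace(0, 3 * T, 600)
theta = -omega * t                           # phase at a fixed point x = 0
eta1 = a * np.cos(theta)                     # linear (first-order) component
eta2 = 0.5 * k * a**2 * np.cos(2 * theta)    # second-order bound harmonic
eta = eta1 + eta2                            # crests sharpened, troughs flattened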
Accuracy of Orthomosaic Generated by Different Methods in Example of UAV Platform MUST Q
NASA Astrophysics Data System (ADS)
Liba, N.; Berg-Jürgens, J.
2015-11-01
The development of photogrammetry has reached a new level due to the use of unmanned aerial vehicles (UAVs). In Estonia, the main uses of UAVs are monitoring overhead power lines for energy companies, monitoring fields in agriculture, and estimating stockpile use in mining. The project was carried out by order of the City of Tartu for future road construction. This research studies the automation of aerial image processing from the UAV platform MUST Q and the reduction of time spent on ground control points (GCPs). For that purpose, two projects were created with the Pix4D software: the first was processed automatically without GCPs, while the second used GCPs, with all processing again done automatically. As a result, two orthomosaics with a pixel size of 5 cm were composed. The projects were required to ensure an accuracy limit of three times the pixel size. The most accurate project proved to be the one using ground control points for levelling; it remained within the allowed error limit, with an orthomosaic accuracy of 0.132 m. The project that did not use ground control points had an accuracy of 1.417 m.
Thermal management of VECSELs by front surface direct liquid cooling
NASA Astrophysics Data System (ADS)
Smyth, Conor J. C.; Mirkhanov, Shamil; Quarterman, Adrian H.; Wilcox, Keith G.
2016-03-01
Efficient thermal management is vital for VECSELs, affecting the output power and several other aspects of device performance. Presently there exist two distinct methods of effective thermal management, each with its own merits and disadvantages. Substrate removal of the VECSEL gain chip has proved a successful method in devices emitting at a wavelength near 1 μm. However, for other wavelengths the substrate removal technique has proved less effective, primarily due to the thermal impedance of the distributed Bragg reflectors. The second method of thermal management involves the use of crystalline heat spreaders bonded to the gain chip surface. Although this is an effective thermal management scheme, its disadvantages are additional loss and an etalon effect that filters the gain spectrum, making mode locking more difficult and normally resulting in multiple peaks in the spectrum. Both methods have considerable disadvantages related to heat-spreader cost and sample processing. For these reasons a proposed alternative, front-surface liquid cooling, has been investigated in this project. Direct liquid cooling involves flowing a temperature-controlled liquid over the sample's surface. In this project, COMSOL was used to model surface liquid cooling of a VECSEL sample in order to investigate and compare its potential thermal management with current standard techniques. Based on the modelling, experiments were carried out in order to evaluate the performance of the technique. While the modelling suggests that this is potentially a mid-performance, low-cost alternative to existing techniques, experimental measurements to date do not reflect the performance predicted by the modelling.
Yokoyama, Yoshie; Jelenkovic, Aline; Sund, Reijo; Sung, Joohon; Hopper, John L; Ooki, Syuichi; Heikkilä, Kauko; Aaltonen, Sari; Tarnoki, Adam D; Tarnoki, David L; Willemsen, Gonneke; Bartels, Meike; van Beijsterveldt, Toos C E M; Saudino, Kimberly J; Cutler, Tessa L; Nelson, Tracy L; Whitfield, Keith E; Wardle, Jane; Llewellyn, Clare H; Fisher, Abigail; He, Mingguang; Ding, Xiaohu; Bjerregaard-Andersen, Morten; Beck-Nielsen, Henning; Sodemann, Morten; Song, Yun-Mi; Yang, Sarah; Lee, Kayoung; Jeong, Hoe-Uk; Knafo-Noam, Ariel; Mankuta, David; Abramson, Lior; Burt, S Alexandra; Klump, Kelly L; Ordoñana, Juan R; Sánchez-Romera, Juan F; Colodro-Conde, Lucia; Harris, Jennifer R; Brandt, Ingunn; Nilsen, Thomas Sevenius; Craig, Jeffrey M; Saffery, Richard; Ji, Fuling; Ning, Feng; Pang, Zengchang; Dubois, Lise; Boivin, Michel; Brendgen, Mara; Dionne, Ginette; Vitaro, Frank; Martin, Nicholas G; Medland, Sarah E; Montgomery, Grant W; Magnusson, Patrik K E; Pedersen, Nancy L; Aslan, Anna K Dahl; Tynelius, Per; Haworth, Claire M A; Plomin, Robert; Rebato, Esther; Rose, Richard J; Goldberg, Jack H; Rasmussen, Finn; Hur, Yoon-Mi; Sørensen, Thorkild I A; Boomsma, Dorret I; Kaprio, Jaakko; Silventoinen, Karri
2016-04-01
We analyzed birth order differences in means and variances of height and body mass index (BMI) in monozygotic (MZ) and dizygotic (DZ) twins from infancy to old age. The data were derived from the international CODATwins database. The total number of height and BMI measures from 0.5 to 79.5 years of age was 397,466. As expected, first-born twins had greater birth weight than second-born twins. With respect to height, first-born twins were slightly taller than second-born twins in childhood. After adjusting the results for birth weight, the birth order differences decreased and were no longer statistically significant. First-born twins had greater BMI than the second-born twins over childhood and adolescence. After adjusting the results for birth weight, birth order was still associated with BMI until 12 years of age. No interaction effect between birth order and zygosity was found. Only limited evidence was found that birth order influenced variances of height or BMI. The results were similar among boys and girls and also in MZ and DZ twins. Overall, the differences in height and BMI between first- and second-born twins were modest even in early childhood, while adjustment for birth weight reduced the birth order differences but did not remove them for BMI.
Filtered back-projection algorithm for Compton telescopes
Gunter, Donald L [Lisle, IL]
2008-03-18
A method for the conversion of Compton camera data into a 2D image of the incident-radiation flux on the celestial sphere includes detecting coincident gamma radiation flux arriving from various directions on the 2-sphere. These events are mapped by back-projection onto the 2-sphere to produce a convolution integral that is subsequently stereographically projected onto a 2-plane to produce a second convolution integral, which is deconvolved by the Fourier method to produce an image that is then projected onto the 2-sphere.
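One building block of this pipeline, the stereographic projection from the 2-sphere to a plane, is simple enough to sketch; the back-projection and Fourier deconvolution steps are not shown. The function name and conventions below (projection from the north pole onto the plane z = 0) are assumptions for illustration.

import numpy as np

def stereographic(points):
    # Project unit-sphere points (x, y, z) from the north pole onto the plane z = 0:
    # (X, Y) = (x / (1 - z), y / (1 - z)); points near z = 1 map far from the origin.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.column_stack((x / (1.0 - z), y / (1.0 - z)))

# Usage on random points of the unit sphere, avoiding the projection pole
rng = np.random.default_rng(0)
v = rng.standard_normal((1000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
v = v[v[:, 2] < 0.99]
xy = stereographic(v)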
[Progress and Future of the Training Plan for Cancer Professionals - Looking Back for 10 Years].
Matsuura, Nariaki
2017-06-01
In order to increase the number of cancer professionals in Japan, the first phase of the training plan for cancer professionals was carried out over 5 years, from 2007 to 2011, and the second over 5 years from 2012 to 2016. Ninety-five universities in 18 hubs participated in the first phase and 100 universities in 15 hubs in the second. 2,590 graduate students were trained in the first phase and 2,319 students over the first 3 years of the second phase. Although the number of cancer professionals increased after the start of this project, it is still only half of the target, and further efforts are required. From 2017, the new training plan for cancer professionals will start its third phase, and various professionals, such as genome medicine professionals, rare cancer professionals, pediatric cancer professionals, and those addressing life-stage problems in cancer patients, will be educated.
Does Project-Based Learning Enhance Iranian EFL Learners' Vocabulary Recall and Retention?
ERIC Educational Resources Information Center
Shafaei, Azadeh; Rahim, Hajar Abdul
2015-01-01
Vocabulary knowledge is an integral part of second/foreign language learning. Thus, using teaching methods that can help learners retain and expand their vocabulary knowledge is necessary to facilitate the language learning process. The current research investigated the effectiveness of an interactive classroom method, known as Project-Based…
NASA Astrophysics Data System (ADS)
Wang, Tonghe; Zhu, Lei
2016-09-01
Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme, which requires one full scan and a second sparse-view scan, for potential reductions in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels; this similarity is assumed unchanged on the second CT image, since the DECT scans are performed on the same object. The second CT image, from reduced projections, is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered version under the similarity matrix, subject to a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix used for CT reconstruction, we refer to the algorithm as structure-preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms and is compared with the filtered back-projection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR), and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise standard deviation by an order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves on TVR in reconstruction accuracy for a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR in spatial resolution at 50- and 20-view scans, with the frequency at a modulation transfer function value of 10% higher by an average factor of 4. Compared with the 20-view-scan PICCS result, the SPIR image has 7 times lower noise standard deviation with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.
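A minimal sketch of the kind of bilateral similarity weights such a method computes from the first full-scan image is given below; one call returns the normalized weights linking pixel (i, j) to its neighbours. The function name, window radius and the two Gaussian widths are illustrative assumptions, and the iterative TV-minimization update of SPIR itself is not reproduced here.

import numpy as np

def bilateral_weights(img, i, j, radius=3, sigma_s=2.0, sigma_r=0.05):
    # Similarity weights of pixel (i, j) to its neighbours in a reference image,
    # combining a spatial Gaussian and an intensity (range) Gaussian.
    i0, i1 = max(0, i - radius), min(img.shape[0], i + radius + 1)
    j0, j1 = max(0, j - radius), min(img.shape[1], j + radius + 1)
    patch = img[i0:i1, j0:j1]
    ii, jj = np.mgrid[i0:i1, j0:j1]
    w_spatial = np.exp(-((ii - i) ** 2 + (jj - j) ** 2) / (2 * sigma_s ** 2))
    w_range = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
    w = w_spatial * w_range
    return w / w.sum()                        # normalized row of the similarity matrix

img = np.random.default_rng(0).random((64, 64))
w = bilateral_weights(img, 32, 32)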
Numerical investigation of sixth order Boussinesq equation
NASA Astrophysics Data System (ADS)
Kolkovska, N.; Vucheva, V.
2017-10-01
We propose a family of conservative finite difference schemes for the Boussinesq equation with sixth-order dispersion terms. The schemes are second-order accurate. The method is conditionally stable, with a mild restriction τ = O(h) on the step sizes. Numerical tests are performed for quadratic and cubic nonlinearities. The numerical experiments show second-order convergence of the discrete solution to the exact one.
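For reference, standard second-order central-difference stencils for the even spatial derivatives that appear in such an equation are shown below; this is the generic form of second-order approximation, and the authors' exact conservative discretization may differ.

\begin{aligned}
u_{xx}(x_i) &\approx \frac{u_{i+1}-2u_i+u_{i-1}}{h^2},\\
u_{xxxx}(x_i) &\approx \frac{u_{i+2}-4u_{i+1}+6u_i-4u_{i-1}+u_{i-2}}{h^4},\\
u_{xxxxxx}(x_i) &\approx \frac{u_{i+3}-6u_{i+2}+15u_{i+1}-20u_i+15u_{i-1}-6u_{i-2}+u_{i-3}}{h^6},
\end{aligned}
\qquad \text{each with error } O(h^2).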
Unconditionally stable finite-difference time-domain methods for modeling the Sagnac effect
NASA Astrophysics Data System (ADS)
Novitski, Roman; Scheuer, Jacob; Steinberg, Ben Z.
2013-02-01
We present two unconditionally stable finite-difference time-domain (FDTD) methods for modeling the Sagnac effect in rotating optical microsensors. The methods are based on the implicit Crank-Nicolson scheme, adapted to hold in the rotating system's reference frame; we call them the rotating Crank-Nicolson (RCN) methods. The first method (RCN-2) is second-order accurate in space, whereas the second method (RCN-4) is fourth-order accurate. Both methods are second-order accurate in time. We show that the RCN-4 scheme is more accurate and has better dispersion isotropy. The numerical results show good correspondence with the expression for the classical Sagnac resonant frequency splitting when the group refractive indices of the resonant modes of a microresonator are used. We also show that the numerical results are consistent with the perturbation theory for rotating degenerate microcavities. We apply our method to simulate the effect of rotation on an entire coupled resonator optical waveguide (CROW) consisting of a set of coupled microresonators. Preliminary results validate the formation of a rotation-induced gap at the center of the transfer function of a CROW.
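The Crank-Nicolson building block underlying such schemes can be sketched on a simpler model problem. The Python snippet below applies the standard Crank-Nicolson update (I - Δt/2 L)u^{n+1} = (I + Δt/2 L)u^n to 1D diffusion with Dirichlet boundaries; it illustrates the unconditional stability and second-order time accuracy of the scheme but not the rotating-frame (RCN) adaptation itself. All parameter values are illustrative.

import numpy as np

# Crank-Nicolson for u_t = D u_xx on (0, 1) with zero Dirichlet boundary conditions.
N, D, dt, steps = 100, 1.0, 1e-3, 200
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
L = D / h**2 * (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
I = np.eye(N)
A = I - 0.5 * dt * L                     # implicit (left-hand) operator
B = I + 0.5 * dt * L                     # explicit (right-hand) operator
u = np.sin(np.pi * x)                    # initial condition
for _ in range(steps):
    u = np.linalg.solve(A, B @ u)        # one unconditionally stable CN step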
Application of the Finite Element Method in Atomic and Molecular Physics
NASA Technical Reports Server (NTRS)
Shertzer, Janine
2007-01-01
The finite element method (FEM) is a numerical algorithm for solving second-order differential equations. It has been successfully used to solve many problems in atomic and molecular physics, including bound-state and scattering calculations. To illustrate the diversity of the method, we present here details of two applications. First, we calculate the non-adiabatic dipole polarizability of Hi by directly solving the first- and second-order equations of perturbation theory with FEM. In the second application, we calculate the scattering amplitude for e-H scattering (without partial wave analysis) by reducing the Schrodinger equation to a set of integro-differential equations, which are then solved with FEM.
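As a minimal illustration of the finite element method for a second-order equation, the sketch below assembles and solves the piecewise-linear FEM system for -u'' = f on (0, 1) with homogeneous Dirichlet conditions. It is a generic textbook example, not the atomic-physics calculations described above; the load vector uses simple nodal quadrature.

import numpy as np

# Linear finite elements for -u'' = f on (0, 1), u(0) = u(1) = 0.
N = 50                                   # number of interior nodes
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
f = lambda s: np.pi**2 * np.sin(np.pi * s)   # chosen so the exact solution is sin(pi x)
# Stiffness matrix for piecewise-linear hat functions
K = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / h
F = h * f(x)                             # load vector via nodal (trapezoidal) quadrature
u = np.linalg.solve(K, F)                # nodal values approximating sin(pi x)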
NASA Technical Reports Server (NTRS)
Barker, R. E., Jr.; Campbell, K. W.
1985-01-01
The applicability of classical nucleation theory to second (and higher) order thermodynamic transitions in the Ehrenfest sense has been investigated and expressions have been derived upon which the qualitative and quantitative success of the basic approach must ultimately depend. The expressions describe the effect of temperature undercooling, hydrostatic pressure, and tensile stress upon the critical parameters, the critical nucleus size, and critical free energy barrier, for nucleation in a thermodynamic transition of any general order. These expressions are then specialized for the case of first and second order transitions. The expressions for the case of undercooling are then used in conjunction with literature data to estimate values for the critical quantities in a system undergoing a pseudo-second order transition (the glass transition in polystyrene). Methods of estimating the interfacial energy gamma in systems undergoing a first and second order transition are also discussed.
Learning in a World of Change: Methods and Approaches in the Classroom.
ERIC Educational Resources Information Center
Richardson, Robin
1979-01-01
Recommends that teachers use a curriculum development project (the World Studies Project) to help students increase their understanding of global affairs such as human rights, economic order, disarmament, the world environment, and the law of the sea. Activities and objectives of the project are presented and ordering information for additional…
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. Therefore, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by a successive iterative restoration method in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Small region of interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts, while slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Both alternatives among the iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and the small ROI setting improved the performance.
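The multiplicative update at the heart of such statistical reconstructions can be sketched compactly. The function below implements a basic ML-EM iteration for y ≈ Ax with a dense system matrix; OS-EM accelerates this by cycling through ordered subsets of the projection rows. Matrix sizes, iteration count and the small safeguards against division by zero are assumptions for illustration.

import numpy as np

def mlem(A, y, n_iter=50):
    # Basic ML-EM multiplicative update for y ~= A x with nonnegative x.
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                 # sensitivity image (column sums of A)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)         # measured / forward-projected data
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((80, 40))                             # toy system (projection) matrix
x_true = rng.random(40)
y = A @ x_true                                       # noiseless synthetic projections
x_rec = mlem(A, y, n_iter=200)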
Using Primary Literature to Teach Science Literacy to Introductory Biology Students
Krontiris-Litowitz, Johanna
2013-01-01
Undergraduate students struggle to read the scientific literature and educators have suggested that this may reflect deficiencies in their science literacy skills. In this two-year study we develop and test a strategy for using the scientific literature to teach science literacy skills to novice life science majors. The first year of the project served as a preliminary investigation in which we evaluated student science literacy skills, created a set of science literacy learning objectives aligned with Bloom’s taxonomy, and developed a set of homework assignments that used peer-reviewed articles to teach science literacy. In the second year of the project the effectiveness of the assignments and the learning objectives were evaluated. Summative student learning was evaluated in the second year on a final exam. The mean score was 83.5% (±20.3%) and there were significant learning gains (p < 0.05) in seven of nine of science literacy skills. Project data indicated that even though students achieved course-targeted lower-order science literacy objectives, many were deficient in higher-order literacy skills. Results of this project suggest that building scientific literacy is a continuing process which begins in first-year science courses with a set of fundamental skills that can serve the progressive development of literacy skills throughout the undergraduate curriculum. PMID:23858355
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
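The first-order moment propagation idea can be illustrated on a toy function with analytic sensitivity derivatives and checked against Monte Carlo sampling, mirroring the comparison described above; the function, means and standard deviations below are assumed purely for illustration and have nothing to do with the Euler CFD code.

import numpy as np

rng = np.random.default_rng(0)
f = lambda x1, x2: x1 * np.exp(x2)                    # toy output function
mu = np.array([2.0, 0.3])                             # input means
sigma = np.array([0.1, 0.05])                         # input standard deviations
# analytic first-order sensitivity derivatives evaluated at the mean
df = np.array([np.exp(mu[1]), mu[0] * np.exp(mu[1])])
var_first_order = np.sum((df * sigma) ** 2)           # linearized variance estimate
# Monte Carlo check of the approximation
samples = rng.normal(mu, sigma, size=(200_000, 2))
var_mc = f(samples[:, 0], samples[:, 1]).var()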
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear ROMs to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete vs. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.
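As background for readers unfamiliar with projection-based reduction, the sketch below builds a linear POD-Galerkin ROM for du/dt = Au from snapshots: collect full-order states, take a truncated SVD as the basis V, and evolve the reduced operator VᵀAV. The GNAT, structure-preserving and ROMES techniques in the report go well beyond this, so treat it only as a minimal, assumed illustration of the projection step.

import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable full-order operator
u0 = rng.standard_normal(n)
dt, steps = 1e-2, 400

U = np.empty((n, steps))
u = u0.copy()
for j in range(steps):                  # collect full-order snapshots (forward Euler for brevity)
    u = u + dt * (A @ u)
    U[:, j] = u

V = np.linalg.svd(U, full_matrices=False)[0][:, :k]    # POD basis of rank k
Ar = V.T @ A @ V                         # reduced operator
ur = V.T @ u0                            # reduced initial condition
for j in range(steps):                   # evolve the k-dimensional ROM
    ur = ur + dt * (Ar @ ur)
u_rom = V @ ur                           # lift back to the full space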
A mixed-order nonlinear diffusion compressed sensing MR image reconstruction.
Joy, Ajin; Paul, Joseph Suresh
2018-03-07
The aim is to avoid the formation of staircase artifacts in nonlinear diffusion-based MR image reconstruction without compromising computational speed. Whereas second-order diffusion encourages the evolution of pixel neighborhoods toward uniform intensities, fourth-order diffusion considers a smooth region to be not necessarily a region of uniform intensity but possibly a planar region. Therefore, a controlled application of the fourth-order diffusivity function is used to encourage second-order diffusion to reconstruct the smooth regions of the image as planes rather than groups of blocks, while not being strong enough to introduce the undesirable speckle effect. The proposed method is compared with second- and fourth-order nonlinear diffusion reconstruction, total variation (TV), total generalized variation, and higher-degree TV using in vivo data sets at different undersampling levels, with application to dictionary-learning-based reconstruction. It is observed that the proposed technique preserves sharp boundaries in the image while preventing the formation of staircase artifacts in regions of smoothly varying pixel intensities. It also shows reduced error measures compared with second-order nonlinear diffusion reconstruction or TV, and converges faster than TV-based methods. Because nonlinear diffusion is known to be an effective alternative to TV for edge-preserving reconstruction, the crucial aspect of staircase artifact removal is addressed. The reconstruction is found to be stable over the experimentally determined range of the fourth-order regularization parameter and therefore does not introduce a parameter search. Hence, the computational simplicity of second-order diffusion is retained.
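A single explicit step of second-order nonlinear (edge-stopping) diffusion, the building block whose staircase tendency motivates the mixed-order approach, can be sketched in the Perona-Malik form as below; the diffusivity function, time step and contrast parameter are illustrative assumptions, and the paper's controlled fourth-order term is not included.

import numpy as np

def perona_malik_step(u, dt=0.1, kappa=0.1):
    # One explicit step of second-order nonlinear (edge-stopping) diffusion.
    gN = np.roll(u, -1, 0) - u
    gS = np.roll(u, 1, 0) - u
    gE = np.roll(u, -1, 1) - u
    gW = np.roll(u, 1, 1) - u
    c = lambda g: np.exp(-(g / kappa) ** 2)     # diffusivity decreases across strong edges
    return u + dt * (c(gN) * gN + c(gS) * gS + c(gE) * gE + c(gW) * gW)

u = np.random.default_rng(0).random((64, 64))
for _ in range(20):
    u = perona_malik_step(u)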
Methods and Strategies: Derby Design Day
ERIC Educational Resources Information Center
Kennedy, Katheryn
2013-01-01
In this article the author describes the "Derby Design Day" project--a project that paired high school honors physics students with second-grade children for a design challenge and competition. The overall project goals were to discover whether collaboration in a design process would: (1) increase an interest in science; (2) enhance the…
A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.
Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang
2013-01-01
We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then treat one second-order equation with a finite element method and handle the other second-order equation with a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain optimal a priori error estimates in the L² and H¹ norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)² norm for its gradient χ = ∇u, for both the semi-discrete and fully discrete schemes.
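In the commonly used form of the EFK equation, with nonlinearity u^3 - u and a positive parameter γ, the decomposition into two second-order equations via the auxiliary variable w = -Δu reads as follows (a sketch of the splitting only; the paper's precise weak formulation should be consulted for details):

u_t + \gamma\,\Delta^{2}u - \Delta u + u^{3} - u = 0
\quad\Longrightarrow\quad
\begin{cases}
w = -\Delta u,\\
u_t - \gamma\,\Delta w + w + u^{3} - u = 0.
\end{cases}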
NASA Astrophysics Data System (ADS)
Voytishek, Anton V.; Shipilov, Nikolay M.
2017-11-01
In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for the approximation of a solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems involving a Fredholm integral equation of the second kind, it is expedient to use not the mesh algorithms but the projection and projection-mesh randomized algorithms.
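To see the structure of a mesh (quadrature) method that requires kernel values at fixed points, the deterministic Nystrom sketch below solves phi(x) = f(x) + lambda * integral of K(x, y) phi(y) dy on [0, 1] with Gauss-Legendre nodes; the kernel, right-hand side and lambda are assumed for illustration. A smooth kernel is deliberately chosen, since, as noted above, kernels with integrable singularities are precisely the case where such point evaluations break down and randomized projection-type algorithms are preferred.

import numpy as np

lam = 0.5
K = lambda x, y: np.exp(-np.abs(x - y))        # illustrative smooth kernel
f = lambda x: np.ones_like(x)                  # illustrative right-hand side
n = 200
x, w = np.polynomial.legendre.leggauss(n)      # Gauss-Legendre nodes/weights on [-1, 1]
x = 0.5 * (x + 1.0)                            # map nodes to [0, 1]
w = 0.5 * w                                    # rescale weights accordingly
A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
phi = np.linalg.solve(A, f(x))                 # solution values at the quadrature nodes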
Li, Zhilin; Xiao, Li; Cai, Qin; Zhao, Hongkai; Luo, Ray
2015-08-15
In this paper, a new Navier-Stokes solver based on a finite difference approximation is proposed to solve incompressible flows on irregular domains with open, traction, and free boundary conditions, which can be applied to simulations of fluid-structure interaction, the implicit solvent model for biomolecular applications, and other free boundary or interface problems. For some problems of this type, the projection method and the augmented immersed interface method (IIM) do not work well or do not work at all. The proposed new Navier-Stokes solver is based on the local pressure boundary method and a semi-implicit augmented IIM. A fast Poisson solver can be used in our algorithm, which gives us the potential for developing fast overall solvers in the future. The time discretization is based on a second-order multi-step method. Numerical tests with exact solutions are presented to validate the accuracy of the method. An application to fluid-structure interaction between an incompressible fluid and a compressible gas bubble is also presented.
An almost symmetric Strang splitting scheme for nonlinear evolution equations.
Einkemmer, Lukas; Ostermann, Alexander
2014-07-01
In this paper we consider splitting methods for the time integration of parabolic and certain classes of hyperbolic partial differential equations, where one partial flow cannot be computed exactly. Instead, we use a numerical approximation based on the linearization of the vector field. This is of interest in applications, as it allows us to apply splitting methods to a wider class of problems from the sciences. However, in the situation described, the classic Strang splitting scheme, while still being a method of second order, is no longer symmetric. This, in turn, implies that the construction of higher-order methods by composition is limited to order three only. To remedy this situation, based on previous work in the context of ordinary differential equations, we construct a class of Strang splitting schemes that are symmetric up to a desired order. We show rigorously that, under suitable assumptions on the nonlinearity, these methods are of second order and can then be used to construct higher-order methods by composition. In addition, we illustrate the theoretical results by conducting numerical experiments for the Brusselator system and the KdV equation.
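A minimal sketch of the Strang composition itself, for u' = Au + g(u) with a diagonal A (so its flow is exact) and a second-order midpoint approximation of the nonlinear flow, is shown below; the coefficients and nonlinearity are assumptions for illustration, and the paper's modified, almost-symmetric variants with linearized partial flows are not reproduced.

import numpy as np

a = np.array([-1.0, -2.0])                 # diagonal of A
g = lambda u: -u**3                        # illustrative nonlinearity

def flow_A(u, tau):                        # exact flow of u' = A u (A diagonal)
    return np.exp(a * tau) * u

def flow_B(u, tau):                        # second-order midpoint step for u' = g(u)
    return u + tau * g(u + 0.5 * tau * g(u))

def strang_step(u, dt):                    # A half-step, B full step, A half-step
    return flow_A(flow_B(flow_A(u, 0.5 * dt), dt), 0.5 * dt)

u, dt = np.array([1.0, 0.5]), 1e-2
for _ in range(1000):
    u = strang_step(u, dt)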
TWRS Retrieval and Storage Mission and Immobilized Low Activity Waste (ILAW) Disposal Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
BURBANK, D.A.
This project plan has a twofold purpose. First, it provides a waste stream project plan specific to the River Protection Project (RPP) (formerly the Tank Waste Remediation System [TWRS] Project) Immobilized Low-Activity Waste (LAW) Disposal Subproject for the Washington State Department of Ecology (Ecology) that meets the requirements of Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement) Milestone M-90-01 (Ecology et al. 1994) and is consistent with the project plan content guidelines found in Section 11.5 of the Tri-Party Agreement action plan (Ecology et al. 1998). Second, it provides an upper-tier document that can be used as the basis for future subproject line-item construction management plans. The planning elements for the construction management plans are derived from applicable U.S. Department of Energy (DOE) planning guidance documents (DOE Orders 4700.1 [DOE 1992] and 430.1 [DOE 1995a]). The format and content of this project plan are designed to accommodate the requirements mentioned by the Tri-Party Agreement and the DOE orders. A cross-check matrix is provided in Appendix A to explain where in the plan the project planning elements required by Section 11.5 of the Tri-Party Agreement are addressed.
WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS
MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN
2013-01-01
Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibility in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high-order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second-order elliptic interface problems. High order of convergence is numerically confirmed in both the L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to solving many interface problems in which the solution possesses a certain singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second-order numerical methods that work well for problems with low regularity in the solution. The best known numerical scheme in the literature is of order O(h) to O(h^1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h^1.75) to O(h^2) in the L∞ norm for C1 or Lipschitz continuous interfaces associated with a C1 or H2 continuous solution. PMID:24072935
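For orientation, in the notation commonly used for weak Galerkin methods, a weak function v = {v_0, v_b} on an element T (interior value v_0, boundary value v_b) has a discrete weak gradient defined elementwise by integration by parts, as sketched below; the polynomial degree r is a generic placeholder and the specific spaces used in this paper may differ.

(\nabla_w v,\,\mathbf{q})_T \;=\; -\,(v_0,\,\nabla\!\cdot\mathbf{q})_T \;+\; \langle v_b,\,\mathbf{q}\cdot\mathbf{n}\rangle_{\partial T}
\qquad \forall\,\mathbf{q}\in [P_r(T)]^d .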
A numerical solution of a singular boundary value problem arising in boundary layer theory.
Hu, Jiancheng
2016-01-01
In this paper, a second-order nonlinear singular boundary value problem is presented which is equivalent to the well-known Falkner-Skan equation. The one-dimensional third-order boundary value problem on the interval [Formula: see text] is equivalently transformed into a second-order boundary value problem on the finite interval [Formula: see text]. The finite difference method is used to solve the singular boundary value problem, and the amount of computational effort is significantly less than that of other numerical methods. The numerical solutions obtained by the finite difference method agree with those obtained by previous authors.
A novel unsplit perfectly matched layer for the second-order acoustic wave equation.
Ma, Youneng; Yu, Jinhua; Wang, Yuanyuan
2014-08-01
When solving acoustic field equations by numerical approximation techniques, absorbing boundary conditions (ABCs) are widely used to truncate the simulation to a finite space. The perfectly matched layer (PML) technique has exhibited excellent absorbing efficiency as an ABC for the acoustic wave equation formulated as a first-order system. However, as the PML was originally designed for the first-order equation system, it cannot be applied to the second-order equation system directly. In this article, we aim to extend the unsplit PML to the second-order equation system. We developed an efficient unsplit implementation of the PML for the second-order acoustic wave equation based on an auxiliary-differential-equation (ADE) scheme. The proposed method can benefit the use of PML in simulations based on second-order equations. Compared with existing PMLs, it has a simpler implementation and requires less extra storage. Numerical results from finite-difference time-domain models are provided to illustrate the validity of the approach.
Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna
2016-09-13
Here, we consider solid-state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work with an existing model that couples the classical Cahn-Hilliard model with Toupin's theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large-scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from the analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of the postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10^-3 seconds, whereas achieving comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
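The recalibration idea is easy to sketch for a single leaky integrate-and-fire neuron: take a second-order (Heun) step, locate the threshold crossing by linear interpolation, and restart from the reset potential for the remainder of the step. The parameter values and the first-order treatment of the post-reset remainder below are simplifying assumptions for illustration only.

tau_m, v_th, v_reset, I_ext = 20.0, 1.0, 0.0, 1.2   # illustrative parameters
f = lambda v: (-v + I_ext) / tau_m                  # membrane dynamics dv/dt

def lif_step(v, dt):
    k1 = f(v)
    k2 = f(v + dt * k1)
    v_new = v + 0.5 * dt * (k1 + k2)                # second-order Heun (RK2) update
    if v_new >= v_th:                               # threshold crossed during this step
        t_spike = dt * (v_th - v) / (v_new - v)     # linear interpolation of the spike time
        v_new = v_reset + (dt - t_spike) * f(v_reset)   # recalibrate from the reset value
        return v_new, t_spike
    return v_new, None

v, dt, t, spike_times = 0.0, 0.1, 0.0, []
for _ in range(2000):
    v, ts = lif_step(v, dt)
    if ts is not None:
        spike_times.append(t + ts)
    t += dt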
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Jovanca J.; Bishop, Joseph E.
2013-11-01
This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in 2012 under the mentorship of Joe Bishop. The work was a two-part endeavor focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model concrete, a heterogeneous quasi-brittle material. In the first part of the project, LDPM was compared to the Karagozian and Case concrete model (K&C) used in Presto, an explicit-dynamics finite-element code developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four cube specimen sizes, three-point-bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of the project focused on the application of LDPM to simulate projectile perforation of an ultra-high-performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.
NASA Astrophysics Data System (ADS)
Resita Arum, Sari; A, Suparmi; C, Cari
2016-01-01
The Dirac equation for the Eckart potential and the trigonometric Manning-Rosen potential with exact spin symmetry is solved using an asymptotic iteration method. The combination of the two potentials is substituted into the Dirac equation, and the variables are then separated into radial and angular parts. The Dirac equation is solved by using an asymptotic iteration method that can reduce the second-order differential equation into a differential equation with substitution variables of hypergeometric type. The relativistic energy is calculated using Matlab 2011. This study is limited to the case of spin symmetry. With the asymptotic iteration method, the energy spectra of the relativistic equation and the equation for the orbital quantum number l can be obtained, the two being interrelated through the quantum numbers. The energy spectrum is also solved numerically using Matlab, where increasing the radial quantum number nr causes the energy to decrease. The radial part and the angular part of the wave function are expressed as hypergeometric functions and visualized with Matlab 2011. The results show that the perturbation by the combination of the Eckart potential and the trigonometric Manning-Rosen potential can change the radial and angular parts of the wave function. Project supported by the Higher Education Project (Grant No. 698/UN27.11/PN/2015).
NASA Astrophysics Data System (ADS)
Simos, T. E.
2017-11-01
A family of four-stage, high-algebraic-order embedded explicit six-step methods for the numerical solution of second-order initial- or boundary-value problems with periodical and/or oscillating solutions is studied in this paper. The free parameters of the new proposed methods are calculated by solving the linear system of equations which is produced by requesting the vanishing of the phase-lag of the methods and the vanishing of the phase-lag's derivatives of the schemes. For the new obtained methods we investigate: • the local truncation error (LTE) of the methods; • the asymptotic form of the LTE obtained using the radial Schrödinger equation as a model problem; • the comparison of the asymptotic forms of the LTEs for several methods of the same family, which leads to conclusions on the efficiency of each method of the family; • the stability and the interval of periodicity of the obtained methods of the new family of embedded finite difference pairs; • the applications of the new obtained family of embedded finite difference pairs to the numerical solution of several second-order problems, such as the radial Schrödinger equation and astronomical problems. The above applications lead to conclusions on the efficiency of the methods of the new family of embedded finite difference pairs.
Three dimensional microelectrode system for dielectrophoresis
Dehlinger, Dietrich A; Rose, Klint A; Shusteff, Maxim; Bailey, Christopher G; Mariella, Jr., Raymond P
2014-12-02
A dielectrophoresis method for separating particles from a sample, including a dielectrophoresis channel, the dielectrophoresis channel having a central axis, a bottom, a top, a first side, and a second side; a first mesa projecting into the dielectrophoresis channel from the bottom and extending from the first side across the dielectrophoresis channel to the second side, the first mesa extending at an angle to the central axis of the dielectrophoresis channel; a second mesa projecting into the dielectrophoresis channel from the bottom and extending from the first side across the dielectrophoresis channel to the second side, the second mesa parallel to said first mesa; a space between at least one of the first electrode and the second side or the second electrode and the second side; and a gap between the first electrode and the second electrode, and pumping a recovery fluid through said gap between said first electrode and into said space between at least one of said first mesa and said second side or said second mesa and said second side.
NASA Technical Reports Server (NTRS)
Megier, J. (Principal Investigator)
1976-01-01
The author has identified the following significant results. Some qualitative results were obtained out of the experiment of reflectance measurements under greenhouse conditions. An effort was made to correlate phenological stages, production, and radiometric measurements. It was found that the first order effect of exposure variability to sun irradiation is responsible for different rice productivity classes. Effects of rice variety and fertilization become second order, because they are completely masked by the first order effects.
Development of a Catalytic Wet Air Oxidation Method to Produce Feedstock Gases from Waste Polymers
NASA Technical Reports Server (NTRS)
Kulis, Michael J.; Guerrero-Medina, Karen J.; Hepp, Aloysius F.
2012-01-01
Given the high cost of space launch, the repurposing of biological and plastic wastes to reduce the need for logistical support during long distance and long duration space missions has long been recognized as a high priority. Described in this paper are the preliminary efforts to develop a wet air oxidation system in order to produce fuels from waste polymers. Preliminary results of partial oxidation in near supercritical water conditions are presented. Inherent corrosion and salt precipitation are discussed as system design issues for a thorough assessment of a second generation wet air oxidation system. This work is currently being supported by the In-Situ Resource Utilization Project.
ERIC Educational Resources Information Center
Phelps, Amy L.; Dostilio, Lina
2008-01-01
The present study addresses the efficacy of using service-learning methods to meet the GAISE guidelines (http://www.amstat.org/education/gaise/GAISECollege.htm) in a second business statistics course and further explores potential advantages of assigning a service-learning (SL) project as compared to the traditional statistics project assignment.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imhoff, Seth D.
LANL was approached to provide material and design guidance for a fan-shaped fuel element. A total of at least three castings were planned. The first casting is a simple billet mold to be made from high carbon DU-10Mo charge material. The second and third castings are for optimization of the actual fuel plate mold. The experimental scope for optimization is only broad enough for a second iteration of the mold design. It is important to note that partway through FY17, this project was cancelled by the sponsor. This report is being written in order to capture the knowledge gained should this project resume at a later date.
Tang, Chen; Han, Lin; Ren, Hongwei; Zhou, Dongjian; Chang, Yiming; Wang, Xiaohang; Cui, Xiaolong
2008-10-01
We derive the second-order oriented partial-differential equations (PDEs) for denoising in electronic-speckle-pattern interferometry fringe patterns from two points of view. The first is based on variational methods, and the second is based on controlling diffusion direction. Our oriented PDE models make the diffusion along only the fringe orientation. The main advantage of our filtering method, based on oriented PDE models, is that it is very easy to implement compared with the published filtering methods along the fringe orientation. We demonstrate the performance of our oriented PDE models via application to two computer-simulated and experimentally obtained speckle fringes and compare with related PDE models.
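A generic way to realize diffusion that acts only along a locally estimated fringe orientation is sketched below (the structure-tensor orientation estimate and the explicit update are assumptions made for illustration; they are not the specific oriented-PDE models derived in the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_diffusion(img, n_iter=50, dt=0.1, sigma=2.0):
    """Illustrative second-order oriented diffusion: estimate the local fringe
    orientation from a smoothed structure tensor and diffuse only along that
    orientation, u_t = u_ee with e the unit vector along the fringes."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux, uy = np.gradient(u)
        # Structure tensor, smoothed for a robust orientation estimate.
        Jxx = gaussian_filter(ux * ux, sigma)
        Jxy = gaussian_filter(ux * uy, sigma)
        Jyy = gaussian_filter(uy * uy, sigma)
        # Dominant gradient direction, rotated by 90 degrees to point along the fringes.
        theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy) + 0.5 * np.pi
        c, s = np.cos(theta), np.sin(theta)
        uxx, uxy = np.gradient(ux)
        uyx, uyy = np.gradient(uy)
        # Second directional derivative along (c, s).
        u_ee = c * c * uxx + c * s * (uxy + uyx) + s * s * uyy
        u += dt * u_ee
    return u

# Example: smooth a synthetic noisy fringe pattern.
x, y = np.meshgrid(np.linspace(0, 6 * np.pi, 256), np.linspace(0, 6 * np.pi, 256))
fringes = np.cos(x + 0.5 * y) + 0.3 * np.random.randn(256, 256)
smoothed = oriented_diffusion(fringes)
```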
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun
1993-01-01
The conventional method of imposing time dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle, (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.
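The difference in bookkeeping between the conventional stage-wise imposition and method (1) can be sketched on the linear advection equation (the first-order upwind interior operator and the "hold the start-of-step inflow value during the stages" realization of method (1) are simplifying assumptions; the sketch shows only where the boundary data enters, not the accuracy results of the paper):

```python
import numpy as np

# u_t + a u_x = 0 on [0, 1], inflow data g(t) at x = 0, classical RK4 in time.
a, N, dt, nsteps = 1.0, 100, 0.005, 100
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]
g = lambda t: np.sin(2.0 * np.pi * t)

def rhs(u):
    du = np.zeros_like(u)
    du[1:] = -a * (u[1:] - u[:-1]) / dx      # first-order upwind in the interior
    return du

def rk4_step(u, t, stage_bc):
    def stage(v, ts):
        v = v.copy()
        v[0] = stage_bc(ts)                  # inflow value seen by this stage
        return rhs(v)
    k1 = stage(u, t)
    k2 = stage(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = stage(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = stage(u + dt * k3, t + dt)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def advance(u, t, mode):
    if mode == "conventional":
        u = rk4_step(u, t, stage_bc=g)       # exact data imposed at every stage time
    else:
        # Method (1): keep the start-of-step inflow value during the stages and
        # impose the exact boundary condition once, at the end of the RK cycle.
        u = rk4_step(u, t, stage_bc=lambda ts: u[0])
    u[0] = g(t + dt)
    return u

for mode in ("conventional", "end_of_cycle"):
    u, t = g(-x / a), 0.0
    for _ in range(nsteps):
        u = advance(u, t, mode)
        t += dt
    print(f"{mode:>13s}: max error {np.max(np.abs(u - g(t - x / a))):.3e}")
```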
The research on multi-projection correction based on color coding grid array
NASA Astrophysics Data System (ADS)
Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu
2017-10-01
Multi-channel projection systems suffer from drawbacks such as poor timeliness and considerable manual intervention. In order to solve these problems, this paper proposes a multi-projector correction technology based on a color-coded grid array. Firstly, a color structured-light stripe is generated using De Bruijn sequences, and the feature information of the color structured-light stripe image is meshed. We take the colored grid intersections of the mesh as circle centers and build white solid circles as the feature sample set of the projected images, so that the constructed feature sample set has both perceptual localization and good noise immunity. Secondly, we establish the subpixel geometric mapping relationship between the projection screen and the individual projectors by using structured-light encoding and decoding based on the color array, and this geometric mapping is used to solve the homography matrix of each projector. Lastly, because the brightness inconsistency in the overlap area of the multi-channel projection causes serious interference, so that the corrected image does not fit the observer's visual needs, we obtain a projection display image with visual consistency by using a luminance fusion correction algorithm. The experimental results show that this method not only effectively solves the distortion of the multi-projection screen and the luminance interference in overlapping regions, but also improves the calibration efficiency of the multi-channel projection system and reduces the maintenance cost of intelligent multi-projection systems.
Project Approach: Teaching. Second Edition.
ERIC Educational Resources Information Center
Ho, Rose
The primary objective of the action research chronicled (in English and Chinese) in this book was to shift the teaching method used by preschool teachers in Hong Kong from a teacher-directed mode by training them to use the Project Approach. The secondary objective was to measure children's achievement while using the Project Approach, focusing on…
NASA Astrophysics Data System (ADS)
Vermeire, B. C.; Witherden, F. D.; Vincent, P. E.
2017-04-01
First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier-Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor-Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost e.g. going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75× and 3× error reduction respectively for the same or reduced cost, and STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.
Zuo, Hao-yi; Gao, Jie; Yang, Jing-guo
2007-03-01
A new method to enhance the intensity of the different orders of Stokes lines of SRS by using mixed dye fluorescence is reported. The Stokes lines from the second order to the fifth order of CCl4 were enhanced by the fluorescence of mixed R6G and RB solutions in three different proportions of 20:2, 20:13 and 20:40 (R6G:RB), respectively. It is considered that the Stokes lines from the second order to the fifth order are near the fluorescence peaks of the three mixed solutions and far from the absorption peaks of R6G and RB, so the enhancement effect dominates the absorption effect; as a result, these Stokes lines are enhanced. On the contrary, the first-order Stokes line is near the absorption peak of RB and far from the fluorescence peaks of the mixed solutions, which leads to the weakening of this Stokes line. It is also reported that the first-order, second-order and third-order Stokes lines of benzene were enhanced by the fluorescence of mixed solutions of R6G and DCM with different proportions. The potential application of this method is forecasted.
Research on cross - Project software defect prediction based on transfer learning
NASA Astrophysics Data System (ADS)
Chen, Ya; Ding, Xiaoming
2018-04-01
To address the two challenges in cross-project software defect prediction, namely the distribution differences between the source-project and target-project datasets and the class imbalance in the data, we propose a cross-project software defect prediction method based on transfer learning, named NTrA. Firstly, the class imbalance of the source-project data is resolved using the Augmented Neighborhood Cleaning Algorithm. Secondly, the data gravity method is used to assign different weights on the basis of the attribute similarity of the source-project and target-project data. Finally, a defect prediction model is constructed using the TrAdaBoost algorithm. Experiments were conducted using NASA and SOFTLAB data from a published PROMISE dataset. The results show that the method achieves good values of recall and F-measure and good prediction results.
Mullins, C Daniel; Wang, Junling; Cooke, Jesse L; Blatt, Lisa; Baquet, Claudia R
2004-01-01
Projecting future breast cancer treatment expenditure is critical for budgeting purposes, medical decision making and the allocation of resources in order to maximise the overall impact on health-related outcomes of care. Currently, both longitudinal and cross-sectional methodologies are used to project the economic burden of cancer. This pilot study examined the differences in estimates that were obtained using these two methods, focusing on Maryland, US Medicaid reimbursement data for chemotherapy and prescription drugs for the years 1999-2000. Two different methodologies for projecting life cycles of cancer expenditure were considered. The first examined expenditure according to chronological time (calendar quarter) for all cancer patients in the database in a given quarter. The second examined only the most recent quarter and constructed a hypothetical expenditure life cycle by taking into consideration the number of quarters since the respective patient had her first claim. We found different average expenditures using the same data and over the same time period. The longitudinal measurement had less extreme peaks and troughs, and yielded average expenditure in the final period that was 60% higher than that produced using the cross-sectional analysis; however, the longitudinal analysis had intermediate periods with significantly lower estimated expenditure than the cross-sectional data. These disparate results signify that each of the methods has merit. The longitudinal method tracks changes over time while the cross-sectional approach reflects more recent data, e.g. current practice patterns. Thus, this study reiterates the importance of considering the methodology when projecting future cancer expenditure.
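The two projection methodologies can be contrasted on a toy claims table (hypothetical data and column names, not the Maryland Medicaid records):

```python
import pandas as pd

claims = pd.DataFrame({
    "patient": ["A", "A", "A", "B", "B", "C"],
    "quarter": ["1999Q1", "1999Q2", "1999Q3", "1999Q3", "1999Q4", "2000Q1"],
    "spend":   [1200.0, 800.0, 300.0, 2000.0, 900.0, 1500.0],
})
claims["quarter"] = pd.PeriodIndex(claims["quarter"], freq="Q")

# Cross-sectional view: average expenditure per calendar quarter over all
# patients observed in that quarter.
cross_sectional = claims.groupby("quarter")["spend"].mean()

# Longitudinal view: align each patient on the quarter of her first claim and
# average by quarters since that first claim, building an expenditure life cycle.
ordinal = claims["quarter"].apply(lambda p: p.ordinal)
claims["quarters_since_first"] = ordinal - ordinal.groupby(claims["patient"]).transform("min")
longitudinal = claims.groupby("quarters_since_first")["spend"].mean()

print(cross_sectional)
print(longitudinal)
```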
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
2015-06-22
We construct these schemes based on the Low-Diffusion-A and the Streamwise-Upwind-Petrov-Galerkin methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Cheong R.
The structural changes of kinetic Alfvén solitary waves (KASWs) due to higher-order terms are investigated. While the first-order differential equation for KASWs provides the dispersion relation for kinetic Alfvén waves, the second-order differential equation describes the structural changes of the solitary waves due to higher-order nonlinearity. The reductive perturbation method is used to obtain the second-order and third-order partial differential equations; then, Kodama and Taniuti's technique [J. Phys. Soc. Jpn. 45, 298 (1978)] is applied in order to remove the secularities in the third-order differential equations and derive a linear second-order inhomogeneous differential equation. The solution to this new second-order equation indicates that, as the amplitude increases, the hump-type Korteweg-de Vries solution is concentrated more around the center position of the soliton and that dip-type structures form near the two edges of the soliton. This result has a close relationship with the interpretation of the complex KASW structures observed in space with satellites.
Importance of curvature evaluation scale for predictive simulations of dynamic gas-liquid interfaces
NASA Astrophysics Data System (ADS)
Owkes, Mark; Cauble, Eric; Senecal, Jacob; Currie, Robert A.
2018-07-01
The effect of the scale used to compute the interfacial curvature on the prediction of dynamic gas-liquid interfaces is investigated. A new interface curvature calculation methodology referred to herein as the Adjustable Curvature Evaluation Scale (ACES) is proposed. ACES leverages a weighted least squares regression to fit a polynomial through points computed on the volume-of-fluid representation of the gas-liquid interface. The interface curvature is evaluated from this polynomial. Varying the least squares weight with distance from the location where the curvature is being computed adjusts the scale on which the curvature is evaluated. ACES is verified using canonical static test cases and compared against second- and fourth-order height function methods. Simulations of dynamic interfaces, including a standing wave and oscillating droplet, are performed to assess the impact of the curvature evaluation scale for predicting interface motions. ACES and the height function methods are combined with two different unsplit geometric volume-of-fluid (VoF) schemes that define the interface on meshes with different levels of refinement. We find that the results depend significantly on curvature evaluation scale. Particularly, the ACES scheme with a properly chosen weight function is accurate, but fails when the scale is too small or too large. Surprisingly, the second-order height function method is more accurate than the fourth-order variant for the dynamic tests even though the fourth-order method performs better for static interfaces. Comparing the curvature evaluation scale of the second- and fourth-order height function methods, we find the second-order method is closer to the optimum scale identified with ACES. This result suggests that the curvature scale is driving the accuracy of the dynamics. This work highlights the importance of studying numerical methods with realistic (dynamic) test cases and shows that the interactions of the various discretizations are as important as the accuracy of any one part of the discretization.
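The core idea, curvature read from a weighted polynomial fit whose weight decay sets the evaluation scale, can be sketched as follows (the Gaussian weight function, the local coordinate frame, and the interface-point sampling are assumptions for illustration, not the exact ACES construction):

```python
import numpy as np

def weighted_fit_curvature(points, x0, y0, scale):
    """Fit y = a + b*x + c*x**2 through interface points expressed in a local
    frame centred at (x0, y0), with Gaussian weights that decay with distance,
    so that `scale` controls how local the fit (and hence the curvature) is."""
    x = points[:, 0] - x0
    y = points[:, 1] - y0
    w = np.exp(-(x**2 + y**2) / (2.0 * scale**2))   # hypothetical weight function
    A = np.column_stack([np.ones_like(x), x, x**2])
    # Weighted least squares: minimise ||W (A p - y)|| with W = diag(w).
    p = np.linalg.lstsq(A * w[:, None], w * y, rcond=None)[0]
    a, b, c = p
    return 2.0 * c / (1.0 + b**2) ** 1.5            # curvature of y(x) at x = 0

# Points sampled from a circle of radius 2 (exact curvature 0.5), plus noise.
theta = np.linspace(-0.5, 0.5, 21)
pts = np.column_stack([2.0 * np.sin(theta), 2.0 - 2.0 * np.cos(theta)])
pts += 0.002 * np.random.randn(*pts.shape)
for scale in (0.2, 0.5, 2.0):
    print(scale, weighted_fit_curvature(pts, 0.0, 0.0, scale))
```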
Probing Graphene χ(2) Using a Gold Photon Sieve.
Lobet, Michaël; Sarrazin, Michaël; Cecchet, Francesca; Reckinger, Nicolas; Vlad, Alexandru; Colomer, Jean-François; Lis, Dan
2016-01-13
Nonlinear second harmonic optical activity of graphene covering a gold photon sieve was determined for different polarizations. The photon sieve consists of a subwavelength gold nanohole array placed on glass. It combines the benefits of efficient light trapping and surface plasmon propagation to unravel different elements of the graphene second-order susceptibility χ(2). Those elements efficiently contribute to second harmonic generation. In fact, the graphene-coated photon sieve produces a second harmonic intensity at least two orders of magnitude higher compared with a bare, flat gold layer, with an order of magnitude coming from the plasmonic effect of the photon sieve; the remaining enhancement arises from the graphene layer itself. The measured second harmonic generation yield, supplemented by semianalytical computations, provides an original method to constrain the graphene χ(2) elements. The values obtained are |d31 + d33| ≤ 8.1 × 10³ pm²/V and |d15| ≤ 1.4 × 10⁶ pm²/V for a second harmonic signal at 780 nm. This original method can be applied to any kind of 2D material covering such a plasmonic structure.
Assessing Equating Results on Different Equating Criteria
ERIC Educational Resources Information Center
Tong, Ye; Kolen, Michael
2005-01-01
The performance of three equating methods--the presmoothed equipercentile method, the item response theory (IRT) true score method, and the IRT observed score method--were examined based on three equating criteria: the same distributions property, the first-order equity property, and the second-order equity property. The magnitude of the…
NASA Astrophysics Data System (ADS)
Han, Shenchao; Yang, Yanchun; Liu, Yude; Zhang, Peng; Li, Siwei
2018-01-01
It is effective to reduce haze in winter by changing the distributed heat supply system. Thus, studies on a comprehensive index system and a scientific evaluation method for distributed heat supply projects are essential. Firstly, the influencing factors of the heating modes are researched, and an index system with multiple dimensions, including economic, environmental, risk and flexibility aspects, is built, with all indexes quantified. Secondly, a comprehensive evaluation method based on AHP is put forward to analyze the proposed multiple and comprehensive index system. Lastly, a case study suggests that supplying heat with electricity has great advantages and promotional value. The comprehensive index system for distributed heat supply projects and the evaluation method in this paper can evaluate distributed heat supply projects effectively and provide scientific support for choosing distributed heating projects.
Research of infrared laser based pavement imaging and crack detection
NASA Astrophysics Data System (ADS)
Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang
2013-08-01
Road crack detection is seriously affected by many factors in actual applications, such as shadows, road signs, oil stains, high-frequency noise and so on. Due to these factors, current crack detection methods cannot distinguish cracks in complex scenes. In order to solve this problem, a novel method based on infrared laser pavement imaging is proposed. Firstly, a single-sensor laser pavement imaging system is adopted to obtain pavement images, in which a high-power laser line projector is used to suppress various shadows. Secondly, a crack extraction algorithm that intelligently merges multiple features is proposed to extract crack information. In this step, the non-negative feature and the contrast feature are used to extract the basic crack information, and circular projection based on the linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments has been performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
Optimal second order sliding mode control for nonlinear uncertain systems.
Das, Madhulika; Mahanta, Chitralekha
2014-07-01
In this paper, a chattering free optimal second order sliding mode control (OSOSMC) method is proposed to stabilize nonlinear systems affected by uncertainties. The nonlinear optimal control strategy is based on the control Lyapunov function (CLF). For ensuring robustness of the optimal controller in the presence of parametric uncertainty and external disturbances, a sliding mode control scheme is realized by combining an integral and a terminal sliding surface. The resulting second order sliding mode can effectively reduce chattering in the control input. Simulation results confirm the supremacy of the proposed optimal second order sliding mode control over some existing sliding mode controllers in controlling nonlinear systems affected by uncertainty. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
The influence of client brief and change order in construction project
NASA Astrophysics Data System (ADS)
Mahat, N. A. A.; Adnan, H.
2018-02-01
A construction brief is a statement of needs regarding the intentions and objectives of a project. The briefing process is the preliminary stage of the design process, and successful briefing can deliver a project on its target time, cost and quality with confidence. Although there are many efforts to capture the client's requirements and needs for a project, these are still not collected adequately enough to produce proper design solutions. Thus, this may lead the client to issue change orders during the construction phase. This paper is concerned with the influence of the client's briefing on change orders in the construction works. The research objective is to identify the influence of the client's brief on change orders, and the aim of the research is therefore to reduce change orders in project delivery. This research adopted both qualitative and quantitative data collection methods, namely content analysis and semi-structured interviews. The findings highlight factors contributing to change orders and the essential attributes of clients during the briefing stage that may help minimise them.
Stability and Hamiltonian formulation of higher derivative theories
NASA Astrophysics Data System (ADS)
Schmidt, Hans-Jürgen
1994-06-01
We analyze the presuppositions leading to instabilities in theories of order higher than second. The type of fourth-order gravity which leads to an inflationary (quasi-de Sitter) period of cosmic evolution by inclusion of one curvature-squared term (i.e., the Starobinsky model) is used as an example. The corresponding Hamiltonian formulation (which is necessary for deducing the Wheeler-DeWitt equation) is found both in the Ostrogradski approach and in another form. As an example, a closed form solution of the Wheeler-DeWitt equation for a spatially flat Friedmann model and L=R2 is found. The method proposed by Simon to bring fourth order gravity to second order can be (if suitably generalized) applied to bring sixth-order gravity to second order.
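For reference, the textbook Ostrogradski construction for a Lagrangian containing second derivatives, which underlies the Hamiltonian formulations and the instability discussed above, reads (standard form, assuming nondegeneracy in the second derivative; it is not the paper's specific reduction):

```latex
% Ostrogradski construction for L = L(q, \dot q, \ddot q) with
% \partial^2 L / \partial \ddot q^2 \neq 0.
\begin{aligned}
  Q_1 &= q, \qquad Q_2 = \dot q,\\
  P_1 &= \frac{\partial L}{\partial \dot q}
         - \frac{d}{dt}\,\frac{\partial L}{\partial \ddot q}, \qquad
  P_2  = \frac{\partial L}{\partial \ddot q},\\
  H(Q_1,Q_2,P_1,P_2) &= P_1\,Q_2 + P_2\,a(Q_1,Q_2,P_2)
         - L\bigl(Q_1,\,Q_2,\,a(Q_1,Q_2,P_2)\bigr),
\end{aligned}
```

where a(Q1, Q2, P2) denotes the second derivative of q expressed through the definition of P2. Because H is linear in P1, it is unbounded from below, which is the generic source of the instabilities in theories of order higher than second mentioned above.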
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
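The symmetric rank-one (SR1) update used to avoid repeated Hessian evaluations is standard; a generic sketch (with a conventional safeguard against a vanishing denominator, and without the article's RBDO-specific details) is:

```python
import numpy as np

def sr1_update(B, s, y):
    """Symmetric rank-one update of a Hessian approximation B, given a step
    s = x_new - x_old and the gradient change y = g_new - g_old."""
    r = y - B @ s
    denom = r @ s
    # Standard safeguard: skip the update when the denominator is negligible.
    if abs(denom) < 1e-8 * np.linalg.norm(r) * np.linalg.norm(s) + 1e-30:
        return B
    return B + np.outer(r, r) / denom

# Example: recover the Hessian of f(x) = 0.5 x^T A x from gradient differences.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda x: A @ x
B = np.eye(2)
x = np.array([1.0, -1.0])
for d in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])):
    x_new = x + 0.1 * d
    B = sr1_update(B, x_new - x, grad(x_new) - grad(x))
    x = x_new
print(B)   # approaches A for a quadratic objective
```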
ERIC Educational Resources Information Center
Baldwin, Virginia
The purpose of this document is to help teachers stimulate children and provide successful learning experiences in order to develop positive self-concepts. Part I contains lists of suggestions of activities for unsupervised work at the following centers: (1) language, (2) chalk, (3) math, (4) measuring, (5) music, (6) games, toys, and puzzles, (7)…
Computer-Assisted Classification Patterns in Autoimmune Diagnostics: The AIDA Project
Benammar Elgaaied, Amel; Cascio, Donato; Bruno, Salvatore; Ciaccio, Maria Cristina; Cipolla, Marco; Fauci, Alessandro; Morgante, Rossella; Taormina, Vincenzo; Gorgi, Yousr; Marrakchi Triki, Raja; Ben Ahmed, Melika; Louzir, Hechmi; Yalaoui, Sadok; Imene, Sfar; Issaoui, Yassine; Abidi, Ahmed; Ammar, Myriam; Bedhiafi, Walid; Ben Fraj, Oussama; Bouhaha, Rym; Hamdi, Khouloud; Soumaya, Koudhi; Neili, Bilel; Asma, Gati; Lucchese, Mariano; Catanzaro, Maria; Barbara, Vincenza; Brusca, Ignazio; Fregapane, Maria; Amato, Gaetano; Friscia, Giuseppe; Neila, Trai; Turkia, Souayeh; Youssra, Haouami; Rekik, Raja; Bouokez, Hayet; Vasile Simone, Maria; Fauci, Francesco; Raso, Giuseppe
2016-01-01
Antinuclear antibodies (ANAs) are significant biomarkers in the diagnosis of autoimmune diseases in humans, detected by means of the Indirect ImmunoFluorescence (IIF) method and assessed by analyzing patterns and fluorescence intensity. This paper introduces the AIDA Project (autoimmunity: diagnosis assisted by computer) developed in the framework of an Italy-Tunisia cross-border cooperation and its preliminary results. A database of interpreted IIF images is being collected through the exchange of images and double reporting, and a Gold Standard database, containing around 1000 double-reported images, has been established. The Gold Standard database is used for optimization of a CAD (Computer Aided Detection) solution and for the assessment of its added value, in order to be applied along with an Immunologist as a second Reader in the detection of autoantibodies. This CAD system is able to identify the fluorescence intensity and the fluorescence pattern on IIF images. Preliminary results show that the CAD, used as a second Reader, appeared to perform better than Junior Immunologists and hence may significantly improve their efficacy; compared with two Junior Immunologists, the CAD system showed higher Intensity Accuracy (85.5% versus 66.0% and 66.0%), higher Patterns Accuracy (79.3% versus 48.0% and 66.2%), and higher Mean Class Accuracy (79.4% versus 56.7% and 64.2%). PMID:27042658
Method for suppressing noise in measurements
NASA Technical Reports Server (NTRS)
Carson, Paul L. (Inventor); Madsen, Louis A. (Inventor); Leskowitz, Garett M. (Inventor); Weitekamp, Daniel P. (Inventor)
2000-01-01
Methods for suppressing noise in measurements by correlating functions based on at least two different measurements of a system at two different times. In one embodiment, a measurement operation is performed on at least a portion of a system that has a memory. A property of the system is measured during a first measurement period to produce a first response indicative of a first state of the system. Then the property of the system is measured during a second measurement period to produce a second response indicative of a second state of the system. The second measurement is performed after an evolution duration subsequent to the first measurement period when the system still retains a degree of memory of an aspect of the first state. Next, a first function of the first response is combined with a second function of the second response to form a second-order correlation function. Information of the system is then extracted from the second-order correlation function.
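The noise-suppression idea, forming a second-order correlation from two measurements of a system that retains memory between them, can be illustrated with a toy numerical example (synthetic numbers only, not the patented apparatus or its signal chain):

```python
import numpy as np

# A weak signal is measured twice per repetition with independent noise.
# Averaging the product of the two responses suppresses the noise (which is
# uncorrelated between the measurements) while retaining the signal, unlike
# averaging the square of a single noisy measurement.
rng = np.random.default_rng(0)
n_repeats, signal = 10_000, 0.3
r1 = signal + rng.standard_normal(n_repeats)   # first measurement period
r2 = signal + rng.standard_normal(n_repeats)   # second measurement, same state
second_order = np.mean(r1 * r2)   # ~ signal**2: noise terms average away
single_shot = np.mean(r1 * r1)    # ~ signal**2 + noise variance
print(second_order, single_shot)
```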
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
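For orientation, the standard first- and second-order moment-matching approximations for independent, normally distributed inputs take the following form (textbook expressions; the paper's exact formulas, e.g. the treatment of cross terms, may differ in detail):

```latex
% y = f(x), inputs x_i independent and normal with mean \mu_i, variance \sigma_i^2.
\begin{aligned}
\text{first order:}\quad
  \mu_y &\approx f(\boldsymbol{\mu}), &
  \sigma_y^2 &\approx \sum_i \left(\frac{\partial f}{\partial x_i}\right)^{2}\sigma_i^2,\\
\text{second order:}\quad
  \mu_y &\approx f(\boldsymbol{\mu})
          + \frac12\sum_i \frac{\partial^2 f}{\partial x_i^2}\,\sigma_i^2, &
  \sigma_y^2 &\approx \sum_i \left(\frac{\partial f}{\partial x_i}\right)^{2}\sigma_i^2
          + \frac12\sum_{i,j}\left(\frac{\partial^2 f}{\partial x_i\,\partial x_j}\right)^{2}\sigma_i^2\sigma_j^2.
\end{aligned}
```

The first-order variance expression is what appears in the robust-optimization objective and constraints described above, while the second-order terms require the Hessian information supplied by the second-order sensitivity derivatives.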
Yu, Shuling; Yuan, Xuejie; Yang, Jing; Yuan, Jintao; Shi, Jiahua; Wang, Yali; Chen, Yuewen; Gao, Shufang
2015-01-01
An attractive method of generating second-order data was developed in which a dropping technique generates a pH gradient while a diode-array spectrophotometer scans simultaneously. A homemade apparatus was designed for the pH gradient. The method and the homemade apparatus were used to simultaneously determine malachite green (MG) and crystal violet (CV) in water samples. The absorbance-pH second-order data of MG or CV were obtained from the spectra of MG or CV in a series of pH values of HCl-KCl solution. The second-order data of mixtures containing MG and CV that coexisted with interferents were analyzed using multidimensional partial least-squares with residual bilinearization. The method and homemade apparatus were used to simultaneously determine MG and CV in fish farming water samples and in river ones with satisfactory results. The presented method and the homemade apparatus could serve as an alternative tool to handle some analysis problems. Copyright © 2015 Elsevier B.V. All rights reserved.
Kurashige, Yuki; Yanai, Takeshi
2011-09-07
We present a second-order perturbation theory based on a density matrix renormalization group self-consistent field (DMRG-SCF) reference function. The method reproduces the solution of the complete active space with second-order perturbation theory (CASPT2) when the DMRG reference function is represented by a sufficiently large number of renormalized many-body basis, thereby being named DMRG-CASPT2 method. The DMRG-SCF is able to describe non-dynamical correlation with large active space that is insurmountable to the conventional CASSCF method, while the second-order perturbation theory provides an efficient description of dynamical correlation effects. The capability of our implementation is demonstrated for an application to the potential energy curve of the chromium dimer, which is one of the most demanding multireference systems that require best electronic structure treatment for non-dynamical and dynamical correlation as well as large basis sets. The DMRG-CASPT2/cc-pwCV5Z calculations were performed with a large (3d double-shell) active space consisting of 28 orbitals. Our approach using large-size DMRG reference addressed the problems of why the dissociation energy is largely overestimated by CASPT2 with the small active space consisting of 12 orbitals (3d4s), and also is oversensitive to the choice of the zeroth-order Hamiltonian. © 2011 American Institute of Physics
Highly ordered nanocomposites via a monomer self-assembly in situ condensation approach
Gin, D.L.; Fischer, W.M.; Gray, D.H.; Smith, R.C.
1998-12-15
A method for synthesizing composites with architectural control on the nanometer scale is described. A polymerizable lyotropic liquid-crystalline monomer is used to form an inverse hexagonal phase in the presence of a second polymer precursor solution. The monomer system acts as an organic template, providing the underlying matrix and order of the composite system. Polymerization of the template in the presence of an optional cross-linking agent with retention of the liquid-crystalline order is carried out followed by a second polymerization of the second polymer precursor within the channels of the polymer template to provide an ordered nanocomposite material. 13 figs.
Generalized Gibbs state with modified Redfield solution: Exact agreement up to second order
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thingna, Juzar; Wang, Jian-Sheng; Haenggi, Peter
A novel scheme for the steady state solution of the standard Redfield quantum master equation is developed which yields agreement with the exact result for the corresponding reduced density matrix up to second order in the system-bath coupling strength. We achieve this objective by use of an analytic continuation of the off-diagonal matrix elements of the Redfield solution towards its diagonal limit. Notably, our scheme does not require the provision of yet higher order relaxation tensors. Testing this modified method for a heat bath consisting of a collection of harmonic oscillators we assess that the system relaxes towards its correct coupling-dependent, generalized quantum Gibbs state in second order. We numerically compare our formulation for a damped quantum harmonic system with the nonequilibrium Green's function formalism: we find good agreement at low temperatures for coupling strengths that are even larger than expected from the very regime of validity of the second-order Redfield quantum master equation. Yet another advantage of our method is that it markedly reduces the numerical complexity of the problem; thus, allowing to study efficiently large-sized system Hilbert spaces.
Critical study of higher order numerical methods for solving the boundary-layer equations
NASA Technical Reports Server (NTRS)
Wornom, S. F.
1978-01-01
A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.
NASA Astrophysics Data System (ADS)
Lee, Euntaek; Ahn, Hyung Taek; Luo, Hong
2018-02-01
We apply a hyperbolic cell-centered finite volume method to solve a steady diffusion equation on unstructured meshes. This method, originally proposed by Nishikawa using a node-centered finite volume method, reformulates the elliptic nature of viscous fluxes into a set of augmented equations that makes the entire system hyperbolic. We introduce an efficient and accurate solution strategy for the cell-centered finite volume method. To obtain high-order accuracy for both solution and gradient variables, we use a successive order solution reconstruction: constant, linear, and quadratic (k-exact) reconstruction with an efficient reconstruction stencil, a so-called wrapping stencil. By virtue of the cell-centered scheme, the source term evaluation is greatly simplified regardless of the solution order. For uniform schemes, we obtain the same order of accuracy, i.e., first, second, and third orders, for both the solution and its gradient variables. For hybrid schemes, recycling the gradient-variable information for the solution-variable reconstruction makes one additional order of accuracy, i.e., second, third, and fourth orders, possible for the solution variable with less computational work than needed for uniform schemes. In general, the hyperbolic method can be an effective solution technique for diffusion problems, but instability is also observed for the discontinuous diffusion coefficient cases, which calls for further investigation of monotonicity-preserving hyperbolic diffusion methods.
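The one-dimensional model form of the hyperbolic reformulation of diffusion (Nishikawa-type), which the cell-centered scheme above generalizes, is written here for orientation (the paper's multidimensional, cell-centered formulation differs in detail):

```latex
% Hyperbolic reformulation of u_t = \nu u_{xx}: the gradient p is promoted to
% an unknown that relaxes to u_x over the time scale T_r.
\begin{aligned}
  \frac{\partial u}{\partial t} &= \nu\,\frac{\partial p}{\partial x},\\
  \frac{\partial p}{\partial t} &= \frac{1}{T_r}\left(\frac{\partial u}{\partial x} - p\right),
  \qquad T_r = \frac{L_r^2}{\nu},
\end{aligned}
```

with wave speeds ±ν/L_r for a chosen length scale L_r. At the (pseudo-)steady state, p = ∂u/∂x and the original diffusion equation is recovered, while the hyperbolic form allows the solution and its gradient to be advanced with the same order of accuracy.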
Stability of streamwise vortices
NASA Technical Reports Server (NTRS)
Khorrami, M. K.; Grosch, C. E.; Ash, R. L.
1987-01-01
A brief overview of some theoretical and computational studies of the stability of streamwise vortices is given. The local induction model and classical hydrodynamic vortex stability theories are discussed in some detail. The importance of the three-dimensionality of the mean velocity profile to the results of stability calculations is discussed briefly. The mean velocity profile is provided by employing the similarity solution of Donaldson and Sullivan. The global method of Bridges and Morris was chosen for the spatial stability calculations for the nonlinear eigenvalue problem. In order to test the numerical method, a second order accurate central difference scheme was used to obtain the coefficient matrices. It was shown that a second order finite difference method lacks the required accuracy for global eigenvalue calculations. Finally the problem was formulated using spectral methods and a truncated Chebyshev series.
["Practical clinical competence" - a joint programme to improve training in surgery].
Ruesseler, M; Schill, A; Stibane, T; Damanakis, A; Schleicher, I; Menzler, S; Braunbeck, A; Walcher, F
2013-12-01
Practical clinical competence is, as a result of the complexity of the required skills and the immediate consequences of their insufficient mastery, fundamentally important for undergraduate medical education. However, in the daily clinical routine, undergraduate training competes with patient care and experimental research, mostly to the disadvantage of the training of clinical skills and competencies. All students have to spend long periods in compulsory surgical training courses during their undergraduate studies. Thus, surgical undergraduate training is predestined to exemplarily develop, analyse and implement a training concept comprising defined learning objectives, elaborated teaching materials, analysed teaching methods, as well as objective and reliable assessment methods. The aim of this project is to improve and strengthen undergraduate training in practical clinical skills and competencies. The project is funded by the German Federal Ministry of Education and Research with almost two million Euro as a joint research project of the medical faculties of the universities of Frankfurt/Main, Gießen and Marburg, in collaboration with the German Society of Surgery, the German Society of Medical Education and the German Medical Students' Association. Nine packages in three pillars are combined in order to improve undergraduate medical training on a methodical, didactic and curricular level in a nation-wide network. Each partner of this network provides a systematic contribution to the project based on individual experience and competence. Based on the learning objectives, which were defined by the working group "Education" of the German Society of Surgery, teaching contents will be analysed with respect to their quality and will be available for both teachers and students as mobile learning tool (first pillar). The existing surgical curricula at the cooperating medical faculties will be analysed and teaching methods as well as assessment methods for clinical skills will be evaluated regarding their methodological quality and evidence. The existing surgical curricula will be revised and adapted on the basis of these results (second pillar). Qualification programmes for physicians will be implemented in order to improve both undergraduate education and the attractiveness of educational research, the required teaching quality will be imparted in a nationwide "train-the-teacher" program for surgical clinical skills (third pillar). Georg Thieme Verlag KG Stuttgart · New York.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
Higher order thinking skills: using e-portfolio in project-based learning
NASA Astrophysics Data System (ADS)
Lukitasari, M.; Handhika, J.; Murtafiah, W.
2018-03-01
The purpose of this research is to describe students' higher-order thinking skills through project-based learning using e-portfolio. The method used in this research is the descriptive qualitative method. The research instruments used were a test, unstructured interviews, and documentation. The research subjects were students of the mathematics, physics and biology education departments who take the Basics Physics course. The result shows that through project-based learning using e-portfolio the students' abilities to analyze (medium category, N-Gain 0.67), evaluate (medium category, N-Gain 0.51), and create (medium category, N-Gain 0.44) are improved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-21
... Not To Review an Initial Determination Granting Respondent's Second Amended Motion To Terminate the... administrative law judge's (``ALJ'') initial determination (``ID'') (Order No. 6) granting respondent's second amended motion to terminate the investigation in its entirety based on a consent order stipulation and to...
High-Order/Low-Order methods for ocean modeling
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...
2015-06-01
In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is order n log n, nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to order n^(4/3). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
ERIC Educational Resources Information Center
Bickett, Marianne
2011-01-01
The author noticed that, when painting self-portraits, her students struggled with size relationships between the head, neck, and shoulders. In order to address this without having to deal with facial proportions, she had her second-graders take turns drawing a partner from the back. Students began this project by learning about Mary Cassatt,…
NASA Astrophysics Data System (ADS)
Ribalaygua, Jaime; Gaitán, Emma; Pórtoles, Javier; Monjo, Robert
2018-05-01
A two-step statistical downscaling method has been reviewed and adapted to simulate twenty-first-century climate projections for the Gulf of Fonseca (Central America, Pacific Coast) using Coupled Model Intercomparison Project (CMIP5) climate models. The downscaling methodology is adjusted after looking for good predictor fields for this area (where the geostrophic approximation fails and the real wind fields are the most applicable). The method's performance for daily precipitation and maximum and minimum temperature is analysed and revealed suitable results for all variables. For instance, the method is able to simulate the characteristic cycle of the wet season for this area, which includes a mid-summer drought between two peaks. Future projections show a gradual temperature increase throughout the twenty-first century and a change in the features of the wet season (the first peak and mid-summer rainfall being reduced relative to the second peak, earlier onset of the wet season and a broader second peak).
Predicting Ice Sheet and Climate Evolution at Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heimbach, Patrick
2016-02-06
A main research objective of PISCEES is the development of formal methods for quantifying uncertainties in ice sheet modeling. Uncertainties in simulating and projecting mass loss from the polar ice sheets arise primarily from initial conditions, surface and basal boundary conditions, and model parameters. In general terms, two main chains of uncertainty propagation may be identified: 1. inverse propagation of observation and/or prior onto posterior control variable uncertainties; 2. forward propagation of prior or posterior control variable uncertainties onto those of target output quantities of interest (e.g., climate indices or ice sheet mass loss). A related goal is the development of computationally efficient methods for producing initial conditions for an ice sheet that are close to available present-day observations and essentially free of artificial model drift, which is required in order to be useful for model projections (“initialization problem”). To be of maximum value, such optimal initial states should be accompanied by “useful” uncertainty estimates that account for the different sources of uncertainties, as well as the degree to which the optimum state is constrained by available observations. The PISCEES proposal outlined two approaches for quantifying uncertainties. The first targets the full exploration of the uncertainty in model projections with sampling-based methods and a workflow managed by DAKOTA (the main delivery vehicle for software developed under QUEST). This is feasible for low-dimensional problems, e.g., those with a handful of global parameters to be inferred. This approach can benefit from derivative/adjoint information, but it is not necessary, which is why it is often referred to as “non-intrusive”. The second approach makes heavy use of derivative information from model adjoints to address quantifying uncertainty in high dimensions (e.g., basal boundary conditions in ice sheet models). The use of local gradient, or Hessian information (i.e., second derivatives of the cost function), requires additional code development and implementation, and is thus often referred to as an “intrusive” approach. Within PISCEES, MIT has been tasked to develop methods for derivative-based UQ, the “intrusive” approach discussed above. These methods rely on the availability of first (adjoint) and second (Hessian) derivative code, developed through intrusive methods such as algorithmic differentiation (AD). While representing a significant burden in terms of code development, derivative-based UQ is able to cope with very high-dimensional uncertainty spaces. That is, unlike sampling methods (all variations of Monte Carlo), the computational burden is independent of the dimension of the uncertainty space. This is a significant advantage for spatially distributed uncertainty fields, such as three-dimensional initial conditions, three-dimensional parameter fields, or two-dimensional surface and basal boundary conditions. Importantly, uncertainty fields for ice sheet models generally fall into this category.
Electroless atomic layer deposition
Robinson, David Bruce; Cappillino, Patrick J.; Sheridan, Leah B.; Stickney, John L.; Benson, David M.
2017-10-31
A method of electroless atomic layer deposition is described. The method electrolessly generates a layer of sacrificial material on a surface of a first material. The method adds doses of a solution of a second material to the substrate. The method performs a galvanic exchange reaction to oxidize away the layer of the sacrificial material and deposit a layer of the second material on the surface of the first material. The method can be repeated for a plurality of iterations in order to deposit a desired thickness of the second material on the surface of the first material.
Spanish Literacy Investigation Project.
ERIC Educational Resources Information Center
Cook, Jacqueline; Quinones, Anisia
The Spanish Literacy Investigation Project was implemented to identify adult Spanish literacy programs throughout the country, to explore the availability of relevant Spanish literacy teaching methods, to determine relevant elements between Spanish literacy and English as a Second Language (ESL), and to describe a model for incorporating a Spanish…
CMB spectral distortions as solutions to the Boltzmann equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ota, Atsuhisa, E-mail: a.ota@th.phys.titech.ac.jp
2017-01-01
We propose to re-interpret the cosmic microwave background spectral distortions as solutions to the Boltzmann equation. This approach makes it possible to solve the second order Boltzmann equation explicitly, with the spectral y distortion and the momentum independent second order temperature perturbation, while generation of μ distortion cannot be explained even at second order in this framework. We also extend our method to higher order Boltzmann equations systematically and find new types of spectral distortions, assuming that the collision term is linear in the photon distribution functions, namely, in the Thomson scattering limit. As an example, we concretely construct solutions to the cubic order Boltzmann equation and show that the equations are closed with three additional parameters composed of a cubic order temperature perturbation and two cubic order spectral distortions. The linear Sunyaev-Zel'dovich effect whose momentum dependence is different from the usual y distortion is also discussed in the presence of the next leading order Kompaneets terms, and we show that higher order spectral distortions are also generated as a result of the diffusion process in a framework of higher order Boltzmann equations. The method may be applicable to a wider class of problems and has the potential to give a general prescription to non-equilibrium physics.
Glucose dispersion measurement using white-light LCI
NASA Astrophysics Data System (ADS)
Liu, Juan; Bagherzadeh, Morteza; Hitzenberger, Christoph K.; Pircher, Michael; Zawadzki, Robert; Fercher, Adolf F.
2003-07-01
We measured the second-order dispersion of glucose solutions using a Michelson low-coherence interferometer (LCI). Three different glucose concentrations: 20 mg/dl (hypoglycemia), 100 mg/dl (normal level), and 500 mg/dl (hyperglycemia) are investigated over the wavelength range 0.5 μm to 0.85 μm, and the investigation shows that different concentrations are associated with different second-order dispersions. The second-order dispersions for wavelengths from 0.55 μm to 0.8 μm are determined by Fourier analysis of the interferogram. This approach can be applied to measure the second-order dispersion for distinguishing the different glucose concentrations. It can be considered a potentially noninvasive method to determine glucose concentration in the human eye. A brief discussion is presented in this poster as well.
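As a rough illustration of the Fourier-analysis step (not the authors' processing chain), the following sketch synthesizes a spectral interferogram with a known quadratic spectral phase and recovers the second-order dispersion from the unwrapped phase of its Fourier transform; all numerical values are assumed.

```python
# Hedged sketch, synthetic data only: the FFT of the recorded interferogram
# yields a complex spectrum whose unwrapped phase, fit as a quadratic in
# angular frequency, gives the second-order dispersion.
import numpy as np

c = 3.0e8
w0 = 2 * np.pi * c / 0.65e-6                       # carrier at 0.65 um
w = np.linspace(w0 - 6e14, w0 + 6e14, 8192)        # angular-frequency grid (rad/s)

gdd_true = 50e-30                                  # assumed 50 fs^2 of second-order dispersion
S = np.exp(-((w - w0) ** 2) / (2 * (1.5e14) ** 2)) # source spectrum (Gaussian)
G = S * np.exp(1j * 0.5 * gdd_true * (w - w0) ** 2)

# "Measured" interferogram: here simply the FFT of the complex spectral density;
# with real fringe data one would FFT the recorded signal and keep the
# positive-frequency lobe before returning to the spectral domain.
interferogram = np.fft.fft(G)
spectrum = np.fft.ifft(interferogram)
phi = np.unwrap(np.angle(spectrum))

# Quadratic fit of the spectral phase (scaled abscissa for conditioning);
# twice the quadratic coefficient is the group-delay dispersion.
x = (w - w0) * 1e-14
coef = np.polyfit(x, phi, 2)
gdd_est = 2.0 * coef[0] * 1e-28
print(f"recovered second-order dispersion: {gdd_est*1e30:.1f} fs^2 (true 50.0)")
```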
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
Multiple source associated particle imaging for simultaneous capture of multiple projections
Bingham, Philip R; Hausladen, Paul A; McConchi, Seth M; Mihalczo, John T; Mullens, James A
2013-11-19
Disclosed herein are representative embodiments of methods, apparatus, and systems for performing neutron radiography. For example, in one exemplary method, an object is interrogated with a plurality of neutrons. The plurality of neutrons includes a first portion of neutrons generated from a first neutron source and a second portion of neutrons generated from a second neutron source. Further, at least some of the first portion and the second portion are generated during a same time period. In the exemplary method, one or more neutrons from the first portion and one or more neutrons from the second portion are detected, and an image of the object is generated based at least in part on the detected neutrons from the first portion and the detected neutrons from the second portion.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations when using a digital computer in real time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
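The report's algorithm is not reproduced here, but the idea behind local linearization for the quaternion rate equations can be sketched as follows: over each step the body rate is frozen and the resulting linear system is advanced with its exact closed-form (matrix-exponential) update, which is then compared against a classical second-order Taylor step. Rates and step sizes below are illustrative.

```python
# Sketch (assumptions noted) of integrating the quaternion rate equation
#   qdot = 0.5 * Omega(w) * q
# Local linearization idea: hold the body rate w constant over a step and apply
# the exact closed-form update; compare with a classical second-order step.
import numpy as np

def Omega(w):
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [ wx, 0.0,  wz, -wy],
                     [ wy, -wz, 0.0,  wx],
                     [ wz,  wy, -wx, 0.0]])

def step_exact(q, w, dt):
    """Closed-form update for constant w over dt (Omega(w)^2 = -|w|^2 I)."""
    wn = np.linalg.norm(w)
    if wn < 1e-12:
        return q
    a = 0.5 * wn * dt
    M = np.cos(a) * np.eye(4) + (np.sin(a) / wn) * Omega(w)
    return M @ q

def step_second_order(q, w, dt):
    """Classical second-order Taylor update with the same (frozen) rate."""
    A = 0.5 * Omega(w)
    return (np.eye(4) + dt * A + 0.5 * (dt * A) @ (dt * A)) @ q

q = np.array([1.0, 0.0, 0.0, 0.0])           # scalar-first unit quaternion
w = np.array([0.0, 0.0, 5.0])                # 5 rad/s yaw rate (high-rate case)
dt, n = 0.02, 500                            # 10 s of simulated flight time

qe, q2 = q.copy(), q.copy()
for _ in range(n):
    qe = step_exact(qe, w, dt)
    q2 = step_second_order(q2, w, dt)
    q2 /= np.linalg.norm(q2)                 # renormalize the truncated update

print("exact-update norm error:", abs(np.linalg.norm(qe) - 1.0))
print("angle error of 2nd-order vs exact (rad):",
      2 * np.arccos(np.clip(abs(qe @ q2), -1.0, 1.0)))
```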
Multi-octave analog photonic link with improved second- and third-order SFDRs
NASA Astrophysics Data System (ADS)
Tan, Qinggui; Gao, Yongsheng; Fan, Yangyu; He, You
2018-03-01
The second- and third-order spurious free dynamic ranges (SFDRs) are two key performance indicators for a multi-octave analog photonic link (APL). The linearization methods for either second- or third-order intermodulation distortion (IMD2 or IMD3) have been intensively studied, but the simultaneous suppression of both has rarely been reported. In this paper, we propose an APL with improved second- and third-order SFDRs for multi-octave applications based on two parallel DPMZM-based sub-APLs. The IMD3 in each sub-APL is suppressed by properly biasing the DPMZM, and the IMD2 is suppressed by balanced detection of the two sub-APLs. The experiment demonstrates significant suppression ratios for both the IMD2 and IMD3 after linearization in the proposed link, and the measured second- and third-order SFDRs with the operating frequency from 6 to 40 GHz are above 91 dB·Hz^(1/2) and 116 dB·Hz^(2/3), respectively.
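For orientation, the textbook relations between output intercept points, noise floor and SFDR (not taken from this paper) can be evaluated with a few lines of Python; the intercept-point and noise-floor values below are assumptions.

```python
# Hedged helper using the textbook spur-free dynamic range relations (the
# numbers are illustrative, not from the experiment reported above):
#   SFDR_n = ((n-1)/n) * (OIPn - noise_floor)   [dB.Hz^((n-1)/n)]
def sfdr(oip_dbm, noise_floor_dbm_hz, order):
    return (order - 1) / order * (oip_dbm - noise_floor_dbm_hz)

oip2, oip3 = 40.0, 25.0        # assumed output intercept points, dBm
nf = -160.0                    # assumed output noise floor, dBm/Hz

print(f"second-order SFDR: {sfdr(oip2, nf, 2):.1f} dB.Hz^(1/2)")
print(f"third-order  SFDR: {sfdr(oip3, nf, 3):.1f} dB.Hz^(2/3)")
```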
NASA Astrophysics Data System (ADS)
Hermanns, R. L.; Zentel, K.-O.; Wenzel, F.; Hövel, M.; Hesse, A.
In order to benefit from synergies and to avoid replication in the field of disaster reduction programs and related scientific projects it is important to create an overview of the state of the art, the fields of activity and their key aspects. Therefore, the German Committee for Disaster Reduction intends to document projects and institutions related to natural disaster prevention in three databases. One database is designed to document scientific programs and projects related to natural hazards. In a first step data acquisition concentrated on projects carried out by German institutions. In a second step projects from all other European countries will be archived. The second database focuses on projects on early-warning systems and has no regional limit. Data mining started in November 2001 and will be finished soon. The third database documents operational projects dealing with disaster prevention and concentrates on international projects or internationally funded projects. These databases will be available on the internet at the end of spring 2002 (http://www.dkkv.org) and will be updated continuously. They will allow rapid and concise access to information on various international projects, provide up-to-date descriptions, and facilitate exchange, as all relevant information including contact addresses is available to the public. The aim of this contribution is to present concepts and the work done so far, to invite participation, and to contact other organizations with similar objectives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jan Hesthaven
2012-02-06
Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven Division of Applied Mathematics Brown University, Box F Providence, RI 02912 Jan.Hesthaven@Brown.edu February 6, 2012 Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.
Non-local Second Order Closure Scheme for Boundary Layer Turbulence and Convection
NASA Astrophysics Data System (ADS)
Meyer, Bettina; Schneider, Tapio
2017-04-01
There has been scientific consensus that the uncertainty in the cloud feedback remains the largest source of uncertainty in the prediction of climate parameters like climate sensitivity. To narrow down this uncertainty, not only is a better physical understanding of cloud and boundary layer processes required, but specifically the representation of boundary layer processes in models has to be improved. General climate models use separate parameterisation schemes to model the different boundary layer processes like small-scale turbulence, shallow and deep convection. Small scale turbulence is usually modelled by local diffusive parameterisation schemes, which truncate the hierarchy of moment equations at first order and use second-order equations only to estimate closure parameters. In contrast, the representation of convection requires higher order statistical moments to capture its more complex structure, such as narrow updrafts in a quasi-steady environment. Truncations of moment equations at second order may lead to more accurate parameterizations. At the same time, they offer an opportunity to take spatially correlated structures (e.g., plumes) into account, which are known to be important for convective dynamics. In this project, we study the potential and limits of local and non-local second order closure schemes. A truncation of the moment equations at second order represents the same dynamics as a quasi-linear version of the equations of motion. We study the three-dimensional quasi-linear dynamics in dry and moist convection by implementing it in a LES model (PyCLES) and compare it to a fully non-linear LES. In the quasi-linear LES, interactions among turbulent eddies are suppressed but nonlinear eddy-mean-flow interactions are retained, as they are in the second order closure. In physical terms, suppressing eddy-eddy interactions amounts to suppressing, e.g., interactions among convective plumes, while retaining interactions between plumes and the environment (e.g., entrainment and detrainment). In a second part, we exploit the possibility of including non-local statistical correlations in a second-order closure scheme. Such non-local correlations allow us to directly incorporate the spatially coherent structures that occur in the form of convective updrafts penetrating the boundary layer. This allows us to extend the work that has been done using assumed-PDF schemes for parameterising boundary layer turbulence and shallow convection in a non-local sense.
Increasing High School Student Interest in Science: An Action Research Study
NASA Astrophysics Data System (ADS)
Vartuli, Cindy A.
An action research study was conducted to determine how to increase student interest in learning science and pursuing a STEM career. The study began by exploring 10th-grade student and teacher perceptions of student interest in science in order to design an instructional strategy for stimulating student interest in learning and pursuing science. Data for this study included responses from 270 students to an on-line science survey and interviews with 11 students and eight science teachers. The action research intervention included two iterations of the STEM Career Project. The first iteration introduced four chemistry classes to the intervention. The researcher used student reflections and a post-project survey to determine if the intervention had influence on the students' interest in pursuing science. The second iteration was completed by three science teachers who had implemented the intervention with their chemistry classes, using student reflections and post-project surveys, as a way to make further procedural refinements and improvements to the intervention and measures. Findings from the exploratory phase of the study suggested students generally had interest in learning science but increasing that interest required including personally relevant applications and laboratory experiences. The intervention included a student-directed learning module in which students investigated three STEM careers and presented information on one of their chosen careers. The STEM Career Project enabled students to explore career possibilities in order to increase their awareness of STEM careers. Findings from the first iteration of the intervention suggested a positive influence on student interest in learning and pursuing science. The second iteration included modifications to the intervention resulting in support for the findings of the first iteration. Results of the second iteration provided modifications that would allow the project to be used for different academic levels. Insights from conducting the action research study provided the researcher with effective ways to make positive changes in her own teaching praxis and the tools used to improve student awareness of STEM career options.
Entanglement branching operator
NASA Astrophysics Data System (ADS)
Harada, Kenji
2018-01-01
We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network which is a promising theoretical tool for many-body systems. We can optimize an entanglement branching operator by solving a minimization problem based on squeezing operators. The entanglement branching is a new useful operation to manipulate a tensor network. For example, finding a particular entanglement structure by an entanglement branching operator, we can improve a higher-order tensor renormalization group method to catch a proper renormalization flow in a tensor network space. This new method yields a new type of tensor network states. The second example is a many-body decomposition of a tensor by using an entanglement branching operator. We can use it for a perfect disentangling among tensors. Applying a many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.
Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Youngsoo; Carlberg, Kevin Thomas
Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
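A minimal sketch of the space-time least-squares idea, under simplifying assumptions (a linear full-order model, backward-Euler time discretization, an SVD-based space-time basis rather than a tensor decomposition, and no hyper-reduction), might look as follows; it is illustrative rather than a reproduction of the ST-LSPG implementation.

```python
# Minimal sketch (assumptions noted, not the authors' code): approximate the
# whole space-time state in a low-dimensional space-time basis and minimize
# the stacked time-discrete residual at once.
import numpy as np

rng = np.random.default_rng(1)
N, Nt, dt = 60, 40, 0.02
A = -np.diag(np.linspace(1.0, 5.0, N)) + 0.05 * rng.normal(size=(N, N))  # toy FOM operator

def fom(u0):
    """Full-order model: backward-Euler solve, returns the space-time vector."""
    U, u = [], u0.copy()
    M = np.eye(N) - dt * A
    for _ in range(Nt):
        u = np.linalg.solve(M, u)
        U.append(u.copy())
    return np.concatenate(U)                      # shape (N*Nt,)

# Space-time trial basis from a few training trajectories (SVD of snapshots)
training = np.stack([fom(rng.normal(size=N)) for _ in range(8)], axis=1)
Phi, _, _ = np.linalg.svd(training, full_matrices=False)
Phi = Phi[:, :5]                                  # 5 space-time modes

# Stacked linear space-time residual r(U) = B U - b for a new initial condition
u0 = rng.normal(size=N)
M = np.eye(N) - dt * A
B = np.zeros((N * Nt, N * Nt))
b = np.zeros(N * Nt)
for n in range(Nt):
    B[n*N:(n+1)*N, n*N:(n+1)*N] = M
    if n == 0:
        b[:N] = u0
    else:
        B[n*N:(n+1)*N, (n-1)*N:n*N] = -np.eye(N)

# ST-LSPG-style solve: minimize || B (Phi y) - b ||_2 over the reduced coords y
y, *_ = np.linalg.lstsq(B @ Phi, b, rcond=None)
U_rom, U_fom = Phi @ y, fom(u0)
print("relative space-time error:", np.linalg.norm(U_rom - U_fom) / np.linalg.norm(U_fom))
```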
Kinematic Methods of Designing Free Form Shells
NASA Astrophysics Data System (ADS)
Korotkiy, V. A.; Khmarova, L. I.
2017-11-01
The geometrical shell model is formed in light of the set requirements expressed through surface parameters. The shell is modelled using the kinematic method according to which the shell is formed as a continuous one-parameter set of curves. The authors offer a kinematic method based on the use of second-order curves with a variable eccentricity as a form-making element. Additional guiding ruled surfaces are used to control the designed surface form. The authors developed a software application that enables plotting a second-order curve specified by a random set of five coplanar points and tangents.
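One standard way to determine a second-order curve from five coplanar points (a plausible building block for such an application, though not necessarily the authors' implementation) is to solve for the null space of the general conic equation, as in the following sketch; tangent conditions are omitted.

```python
# Hedged sketch: fit the general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
# through five coplanar points by taking the null space of the 5x6 design matrix.
import numpy as np

pts = np.array([[0.0, 1.0], [2.0, 0.5], [3.0, 2.0], [1.0, 3.0], [-1.0, 2.0]])

M = np.array([[x*x, x*y, y*y, x, y, 1.0] for x, y in pts])
_, _, Vt = np.linalg.svd(M)
a, b, c, d, e, f = Vt[-1]                 # null-space vector = conic coefficients

# Classification (related to eccentricity) via the discriminant b^2 - 4ac
disc = b*b - 4*a*c
kind = "ellipse" if disc < -1e-12 else "parabola" if abs(disc) < 1e-12 else "hyperbola"
print("conic coefficients:", np.round(Vt[-1], 4), "->", kind)

# Sanity check: every input point satisfies the conic equation (up to round-off)
residual = M @ Vt[-1]
print("max residual at the five points:", np.abs(residual).max())
```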
A new weak Galerkin finite element method for elliptic interface problems
Mu, Lin; Wang, Junping; Ye, Xiu; ...
2016-08-26
We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Comparing with the existing WG algorithm for solving the same type problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.
NASA Astrophysics Data System (ADS)
Wang, L. M.
2017-09-01
A novel model-free adaptive sliding mode strategy is proposed for a generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to the external disturbances. To solve the difficulties arising from the limited knowledge about the master-slave system and to overcome the bad effects of the external disturbances on the generalized projective synchronization, the radial basis function neural networks are used to approximate the packaged unknown master system and the packaged unknown slave system (including the external disturbances). Consequently, based on the sliding mode technology and the neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized projective synchronization error. The main contribution of this paper is that a control strategy is provided for the generalized projective synchronization between two entirely unknown fractional-order chaotic systems subject to the unknown external disturbances, and the proposed control strategy only requires that the master system has the same fractional orders as the slave system. Moreover, the proposed method allows us to achieve all kinds of generalized projective chaos synchronizations by tuning the user-defined parameters to the desired values. Simulation results show the effectiveness of the proposed method and the robustness of the controlled system.
Evaluation of a wave-vector-frequency-domain method for nonlinear wave propagation
Jing, Yun; Tao, Molei; Clement, Greg T.
2011-01-01
A wave-vector-frequency-domain method is presented to describe one-directional forward or backward acoustic wave propagation in a nonlinear homogeneous medium. Starting from a frequency-domain representation of the second-order nonlinear acoustic wave equation, an implicit solution for the nonlinear term is proposed by employing the Green’s function. Its approximation, which is more suitable for numerical implementation, is used. An error study is carried out to test the efficiency of the model by comparing the results with the Fubini solution. It is shown that the error grows as the propagation distance and step-size increase. However, for the specific case tested, even at a step size as large as one wavelength, sufficient accuracy for plane-wave propagation is observed. A two-dimensional steered transducer problem is explored to verify the nonlinear acoustic field directional independence of the model. A three-dimensional single-element transducer problem is solved to verify the forward model by comparing it with an existing nonlinear wave propagation code. Finally, backward-projection behavior is examined. The sound field over a plane in an absorptive medium is backward projected to the source and compared with the initial field, where good agreement is observed. PMID:21302985
Construction of higher order accurate vortex and particle methods
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.
1986-01-01
The standard point vortex method has recently been shown to be of high order of accuracy for problems on the whole plane, when using a uniform initial subdivision for assigning the vorticity to the points. If obstacles are present in the flow, this high order deteriorates to first or second order. New vortex methods are introduced which are of arbitrary accuracy (under regularity assumptions) regardless of the presence of bodies and the uniformity of the initial subdivision.
Insights about Psychotherapy Training and Curricular Sequencing: Portal of Discovery
ERIC Educational Resources Information Center
McGowen, K. Ramsey; Miller, Merry Noel; Floyd, Michael; Miller, Barney; Coyle, Brent
2009-01-01
Objective: The authors discuss the curricular implications of a research project originally designed to evaluate the instructional strategy of using standardized patients in a psychotherapy training seminar. Methods: The original project included second-year residents enrolled in an introductory psychotherapy seminar that employed sequential…
Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling
NASA Technical Reports Server (NTRS)
Wilson, William; Atkinson, Gary
2009-01-01
Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model) and two second order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.
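For orientation, the first-order Impulse Response Method for a uniform interdigital transducer is commonly summarized by a sinc-shaped frequency response; the following sketch assumes that textbook form (it is not code from the paper), with an illustrative centre frequency and finger-pair count.

```python
# Hedged sketch of the first-order Impulse Response Method for a uniform IDT:
# the magnitude response is assumed to follow |H(f)| ~ Np*|sin(X)/X| with
# X = Np*pi*(f - f0)/f0 (Np = number of finger pairs, f0 = synchronous
# frequency). Constants and second-order effects (reflections, loading) are
# omitted; the matrix methods above are needed to capture those.
import numpy as np

def idt_response(f, f0, Np):
    X = Np * np.pi * (f - f0) / f0
    return Np * np.abs(np.sinc(X / np.pi))       # np.sinc(x) = sin(pi*x)/(pi*x)

f0, Np = 100e6, 50                               # assumed 100 MHz SAW device, 50 finger pairs
f = np.linspace(90e6, 110e6, 2001)
H = idt_response(f, f0, Np)
bw = f[H >= H.max() / np.sqrt(2)]
print(f"approx. -3 dB bandwidth: {(bw.max() - bw.min())/1e6:.2f} MHz")
```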
NASA Astrophysics Data System (ADS)
Milic, Vladimir; Kasac, Josip; Novakovic, Branko
2015-10-01
This paper is concerned with ?-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike the conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for ?-gain optimisation. The proposed algorithm consists of a recursive chain rule for first- and second-order derivatives, Newton's method, a multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.
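The optimisation core can be sketched generically: a Newton iteration driven by first- and second-order derivative information. In the snippet below the derivatives come from finite differences on a toy cost, standing in for the paper's recursive chain rule and automatic differentiation; the cost function and starting point are assumptions.

```python
# Hedged sketch of the optimisation core only: a Newton step built from first-
# and second-order derivative information on a toy surrogate cost.
import numpy as np

def cost(theta):
    # toy surrogate for a finite-horizon performance index (assumed form)
    return 0.5 * theta @ np.array([[3.0, 0.5], [0.5, 1.0]]) @ theta + np.sin(theta[0])

def grad(theta, h=1e-6):
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta); e[i] = h
        g[i] = (cost(theta + e) - cost(theta - e)) / (2 * h)
    return g

def hess(theta, h=1e-5):
    H = np.zeros((theta.size, theta.size))
    for i in range(theta.size):
        e = np.zeros_like(theta); e[i] = h
        H[:, i] = (grad(theta + e) - grad(theta - e)) / (2 * h)
    return 0.5 * (H + H.T)                   # symmetrize

theta = np.array([2.0, -1.0])
for k in range(8):                           # Newton iterations on the controller parameters
    theta = theta - np.linalg.solve(hess(theta), grad(theta))
    print(k, theta, cost(theta))
```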
Second- and Higher-Order Virial Coefficients Derived from Equations of State for Real Gases
ERIC Educational Resources Information Center
Parkinson, William A.
2009-01-01
Derivation of the second- and higher-order virial coefficients for models of the gaseous state is demonstrated by employing a direct differential method and subsequent term-by-term comparison to power series expansions. This communication demonstrates the application of this technique to van der Waals representations of virial coefficients.…
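As a worked instance of the series-comparison idea, the following sympy sketch (tooling assumed, not part of the article) expands the van der Waals compressibility factor in powers of 1/V and reads off the second and third virial coefficients.

```python
# Worked example for the van der Waals gas: expand Z = PV/(RT) in powers of
# 1/V and identify B2 and B3 by term-by-term comparison with the virial series.
import sympy as sp

V, T, a, b, R = sp.symbols('V T a b R', positive=True)
P = R*T/(V - b) - a/V**2                 # van der Waals equation of state
Z = sp.expand(P*V/(R*T))

x = sp.symbols('x', positive=True)       # x = 1/V
series = sp.series(Z.subs(V, 1/x), x, 0, 4).removeO()
B2 = series.coeff(x, 1)                  # second virial coefficient
B3 = series.coeff(x, 2)                  # third virial coefficient
print("B2 =", sp.simplify(B2))           # expected: b - a/(R*T)
print("B3 =", sp.simplify(B3))           # expected: b**2
```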
Mertz, Pamela; Streu, Craig
2015-01-01
This article describes a synergistic two-semester writing sequence for biochemistry courses. In the first semester, students select a putative protein and are tasked with researching their protein largely through bioinformatics resources. In the second semester, students develop original ideas and present them in the form of a research grant proposal. Both projects involve multiple drafts and peer review. The complementarity of the projects increases student exposure to bioinformatics and literature resources, fosters higher-order thinking skills, and develops teamwork and communication skills. Student feedback and responses on perception surveys demonstrated that the students viewed both projects as favorable learning experiences. © 2015 The International Union of Biochemistry and Molecular Biology.
Configuration-shape-size optimization of space structures by material redistribution
NASA Technical Reports Server (NTRS)
Vandenbelt, D. N.; Crivelli, L. A.; Felippa, C. A.
1993-01-01
This project investigates the configuration-shape-size optimization (CSSO) of orbiting and planetary space structures. The project embodies three phases. In the first one the material-removal CSSO method introduced by Kikuchi and Bendsoe (KB) is further developed to gain understanding of finite element homogenization techniques as well as associated constrained optimization algorithms that must carry along a very large number (thousands) of design variables. In the CSSO-KB method an optimal structure is 'carved out' of a design domain initially filled with finite elements, by allowing perforations (microholes) to develop, grow and merge. The second phase involves 'materialization' of space structures from the void, thus reversing the carving process. The third phase involves analysis of these structures for construction and operational constraints, with emphasis in packaging and deployment. The present paper describes progress in selected areas of the first project phase and the start of the second one.
The architectonic encoding of the minor lunar standstills in the horizon of the Giza pyramids.
NASA Astrophysics Data System (ADS)
Hossam, M. K. Aboulfotouh
The paper is an attempt to show the architectonic method of the ancient Egyptian designers for encoding the horizontal-projections of the moon's declinations during two events of the minor lunar standstills, in the design of the site-plan of the horizon of the Giza pyramids, using the methods of descriptive geometry. It shows that the distance of the eastern side of the second Giza pyramid from the north-south axis of the great pyramid encodes a projection of a lunar declination, when earth's obliquity-angle was ~24.10°. Besides, it shows that the angle of inclination of the causeway of the second Giza pyramid, of ~13.54° south of the cardinal east, encodes the projection of another lunar declination when earth's obliquity-angle reaches ~22.986°. In addition, it shows the encoded coordinate system in the site-plan of the horizon of the Giza pyramids.
A Comparison of Surface Acoustic Wave Modeling Methods
NASA Technical Reports Server (NTRS)
Wilson, W. c.; Atkinson, G. M.
2009-01-01
Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model) and two second order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices.
A second-order unconstrained optimization method for canonical-ensemble density-functional methods
NASA Astrophysics Data System (ADS)
Nygaard, Cecilie R.; Olsen, Jeppe
2013-03-01
A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
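The occupation-angle device can be illustrated with a small stand-alone sketch (simplified, not the SOEO implementation): each occupation number is written as a trigonometric function of an angle so that it stays between zero and two, and its derivative is what a simultaneous Newton-Raphson optimization of orbitals and occupations would use; the angle values are arbitrary.

```python
# Sketch of the occupation-angle idea (simplified, not the SOEO code): the
# parameterization n_i = 2*sin(theta_i)^2 keeps 0 <= n_i <= 2 automatically.
import numpy as np

def occupations(theta):
    return 2.0 * np.sin(theta) ** 2          # assumed parameterization, always in [0, 2]

theta = np.array([1.30, 1.15, 0.60, 0.35])   # four fractionally occupied orbitals (toy values)
n = occupations(theta)
print("occupation numbers:", np.round(n, 3))
print("total electrons   :", n.sum())        # the quantity the built-in restriction controls

# Derivative of n_i with respect to its angle, needed when occupations and
# orbitals are optimized simultaneously with Newton-Raphson steps:
dn_dtheta = 2.0 * np.sin(2.0 * theta)
print("dn/dtheta         :", np.round(dn_dtheta, 3))
```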
Construction risk assessment of deep foundation pit in metro station based on G-COWA method
NASA Astrophysics Data System (ADS)
You, Weibao; Wang, Jianbo; Zhang, Wei; Liu, Fangmeng; Yang, Diying
2018-05-01
In order to get an accurate understanding of the construction safety of deep foundation pits in metro stations and to reduce the probability and loss of risk occurrence, a risk assessment method based on G-COWA is proposed. Firstly, relying on specific engineering examples and the construction characteristics of deep foundation pits, an evaluation index system based on the five factors of "human, management, technology, material and environment" is established. Secondly, the C-OWA operator is introduced to determine the evaluation index weights and weaken the negative influence of experts' subjective preferences. The gray cluster analysis and fuzzy comprehensive evaluation method are combined to construct the construction risk assessment model of the deep foundation pit, which can effectively handle the uncertainties. Finally, the model is applied to the actual deep foundation pit project of Qingdao Metro North Station; its construction risk rating is determined to be "medium", showing that the model is feasible and reasonable. Corresponding control measures are then put forward and a useful reference is provided.
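One common combination-number form of the C-OWA operator (assumed here; the paper's exact variant may differ) can be sketched as follows: expert scores for each index are sorted, combined with normalized binomial-coefficient weights to damp extreme ratings, and then normalized across indices to give the index weights. All ratings below are illustrative.

```python
# Hedged sketch of a combination-number (C-OWA) weighting step; the binomial
# weights w_j = C(n-1, j)/2^(n-1) damp extreme expert ratings after sorting.
from math import comb
import numpy as np

def c_owa(scores):
    s = np.sort(np.asarray(scores, dtype=float))[::-1]      # descending order
    n = s.size
    w = np.array([comb(n - 1, j) for j in range(n)], dtype=float) / 2 ** (n - 1)
    return float(w @ s)

# Expert ratings (1-10) of the five first-level indices: human, management,
# technology, material, environment (hypothetical numbers).
ratings = {
    "human":       [8, 7, 9, 8, 7],
    "management":  [7, 7, 8, 6, 7],
    "technology":  [9, 8, 8, 9, 7],
    "material":    [6, 6, 7, 5, 6],
    "environment": [7, 8, 6, 7, 7],
}
absolute = {k: c_owa(v) for k, v in ratings.items()}
total = sum(absolute.values())
weights = {k: round(v / total, 3) for k, v in absolute.items()}
print(weights)
```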
NASA Astrophysics Data System (ADS)
Haghshenas, R.; Sheng, D. N.
2018-05-01
We develop an improved variant of the U(1)-symmetric infinite projected entangled-pair states (iPEPS) ansatz to investigate the ground-state phase diagram of the spin-1/2 square J1-J2 Heisenberg model. In order to improve the accuracy of the ansatz, we discuss a simple strategy to automatically select relevant symmetric sectors and also introduce an optimization method to treat second-neighbor interactions more efficiently. We show that variational ground-state energies of the model obtained by the U(1)-symmetric iPEPS ansatz (for a fixed bond dimension D) set a better upper bound, improving previous tensor-network-based results. By studying the finite-D scaling of the magnetic order parameter, we find a Néel phase for J2/J1<0.53. For 0.53
Two new methods to increase the contrast of track-etch neutron radiographs
NASA Technical Reports Server (NTRS)
Morley, J.
1973-01-01
In one method, fluorescent dye is deposited into tracks of radiograph and viewed under ultraviolet light. In second method, track-etch radiograph is placed between crossed polaroid filters, exposed to diffused light and resulting image is projected onto photographic film.
NASA Astrophysics Data System (ADS)
Wang, Li; Wu, Hai-Long; Yin, Xiao-Li; Hu, Yong; Gu, Hui-Wen; Yu, Ru-Qin
2017-01-01
A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method is presented for simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps (SL) and traditional Chinese medicine Radix angelicae pubescentis (RAP). Using the strategy of combining EEM fluorescence data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm, the simultaneous quantification of umbelliferone and scopoletin in the two different complex systems was achieved successfully, even in the presence of potential interferents. The pretreatment is simple due to the "second-order advantage" and the use of "mathematical separation" instead of awkward "physical or chemical separation". Satisfactory results have been achieved with the limits of detection (LODs) of umbelliferone and scopoletin being 0.06 ng mL-1 and 0.16 ng mL-1, respectively. The average spike recoveries of umbelliferone and scopoletin are 98.8 ± 4.3% and 102.5 ± 3.3%, respectively. Besides, an HPLC-DAD method was used to further validate the presented strategy, and a t-test indicates that the prediction results of the two methods have no significant differences. Satisfactory experimental results imply that our method is fast, low-cost and sensitive when compared with the HPLC-DAD method.
Junk, J; Ulber, B; Vidal, S; Eickermann, M
2015-11-01
Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of the agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the results derived from the bias-corrected output of the regional climate models showed satisfying results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
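The two bias-correction steps named above can be sketched generically (with synthetic data, not the study's series): an additive linear correction for temperature and an empirical quantile mapping for precipitation.

```python
# Minimal sketch (assumptions noted) of the two bias-correction steps: an
# additive linear correction for daily air temperature and an empirical
# quantile-mapping correction for daily precipitation. Arrays are synthetic
# stand-ins for RCM output and station observations.
import numpy as np

rng = np.random.default_rng(42)
obs_T  = rng.normal(9.0, 6.0, 3650)            # observed daily temperature, deg C
rcm_T  = rng.normal(7.5, 6.5, 3650)            # biased model temperature (control run)
obs_P  = rng.gamma(0.8, 4.0, 3650)             # observed daily precipitation, mm
rcm_P  = rng.gamma(0.6, 5.0, 3650)             # biased model precipitation (control run)
fut_T  = rcm_T + 2.5                           # model temperature, future scenario
fut_P  = rcm_P * 0.9                           # model precipitation, future scenario

# 1) Linear (additive) correction for temperature: remove the mean bias of the
#    control period from the scenario run.
fut_T_corr = fut_T - (rcm_T.mean() - obs_T.mean())

# 2) Empirical quantile mapping for precipitation: map each scenario value to
#    the observed value at the same quantile of the control-run distribution.
def quantile_map(x_future, x_control, x_obs):
    q = np.searchsorted(np.sort(x_control), x_future) / x_control.size
    return np.quantile(x_obs, np.clip(q, 0.0, 1.0))

fut_P_corr = quantile_map(fut_P, rcm_P, obs_P)
print("T bias before/after:", round(rcm_T.mean() - obs_T.mean(), 2),
      round(fut_T_corr.mean() - (obs_T.mean() + 2.5), 2))
print("mean P before/after mapping:", round(fut_P.mean(), 2), round(fut_P_corr.mean(), 2))
```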
Chan, Rachel W; von Deuster, Constantin; Giese, Daniel; Stoeck, Christian T; Harmer, Jack; Aitken, Andrew P; Atkinson, David; Kozerke, Sebastian
2014-07-01
Diffusion tensor imaging (DTI) of moving organs is gaining increasing attention but robust performance requires sequence modifications and dedicated correction methods to account for system imperfections. In this study, eddy currents in the "unipolar" Stejskal-Tanner and the velocity-compensated "bipolar" spin-echo diffusion sequences were investigated and corrected for using a magnetic field monitoring approach in combination with higher-order image reconstruction. From the field-camera measurements, increased levels of second-order eddy currents were quantified in the unipolar sequence relative to the bipolar diffusion sequence while zeroth and linear orders were found to be similar between both sequences. Second-order image reconstruction based on field-monitoring data resulted in reduced spatial misalignment artifacts and residual displacements of less than 0.43 mm and 0.29 mm (in the unipolar and bipolar sequences, respectively) after second-order eddy-current correction. Results demonstrate the need for second-order correction in unipolar encoding schemes but also show that bipolar sequences benefit from second-order reconstruction to correct for incomplete intrinsic cancellation of eddy-currents. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
On Some Methods in Safety Evaluation in Geotechnics
NASA Astrophysics Data System (ADS)
Puła, Wojciech; Zaskórski, Łukasz
2015-06-01
The paper demonstrates how the reliability methods can be utilised in order to evaluate safety in geotechnics. Special attention is paid to the so-called reliability based design that can play a useful and complementary role to Eurocode 7. In the first part, a brief review of first- and second-order reliability methods is given. Next, two examples of reliability-based design are demonstrated. The first one is focussed on bearing capacity calculation and is dedicated to comparison with EC7 requirements. The second one analyses a rigid pile subjected to lateral load and is oriented towards the working stress design method. In the second part, applications of random fields to safety evaluations in geotechnics are addressed. After a short review of the theory, a Random Finite Element algorithm for reliability-based design of a shallow strip foundation is given. Finally, two illustrative examples for cohesive and cohesionless soils are demonstrated.
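As a minimal first-order reliability illustration (not an example from the paper), consider a linear limit state with independent normal resistance and load effect, for which the reliability index and failure probability are available in closed form; the statistics below are assumed.

```python
# Illustrative first-order reliability calculation: for g = R - S with
# independent normal R and S, the reliability index is closed-form and the
# failure probability follows from the standard normal CDF.
from math import sqrt
from statistics import NormalDist

mu_R, sd_R = 450.0, 45.0      # resistance, e.g. bearing capacity (kN) - assumed
mu_S, sd_S = 300.0, 60.0      # load effect (kN) - assumed

beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)
pf = NormalDist().cdf(-beta)
print(f"reliability index beta = {beta:.2f}, failure probability = {pf:.2e}")
```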
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2001-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straight-forward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric-shape, angle-of-attack, and freestream Mach number.
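The "second order from first order" idea can be illustrated generically: once an accurate gradient routine is available (standing in for the reverse-mode/adjoint code), each Hessian column follows from a directional difference of that gradient with no further nonlinear iterations. The functional and design variables below are toy stand-ins.

```python
# Hedged illustration: build Hessian columns from directional differences of an
# existing gradient routine (analytic here, standing in for adjoint code).
import numpy as np

def f(x):                       # toy stand-in for a lift/drag functional
    return np.sin(x[0]) * x[1] ** 2 + 0.5 * x[0] * x[2] ** 2

def grad(x):                    # 'reverse mode' gradient (analytic here)
    return np.array([np.cos(x[0]) * x[1]**2 + 0.5 * x[2]**2,
                     2.0 * np.sin(x[0]) * x[1],
                     x[0] * x[2]])

def hessian_from_grad(x, h=1e-6):
    n = x.size
    H = np.zeros((n, n))
    for j in range(n):          # one 'forward' direction per design variable
        e = np.zeros(n); e[j] = h
        H[:, j] = (grad(x + e) - grad(x - e)) / (2.0 * h)
    return 0.5 * (H + H.T)      # symmetrize

x0 = np.array([0.3, 1.2, -0.7])
print(np.round(hessian_from_grad(x0), 4))
```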
EVALUATING DESIGN AND VERIFYING COMPLIANCE OF CREATED WETLANDS IN THE VICINITY OF TAMPA, FLORIDA
Completed mitigation projects are being studied by the Wetlands Research Program nationwide to identify critical design features, develop methods for evaluating projects, determine the functions they perform, and describe how they change with time. This report is the second in a s...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, Matthew Morgan; DeChant, Lawrence Justin.; Piekos, Edward Stanley
2009-02-01
This report summarizes the work completed during FY2007 and FY2008 for the LDRD project ''Hybrid Plasma Modeling''. The goal of this project was to develop hybrid methods to model plasmas across the non-continuum-to-continuum collisionality spectrum. The primary methodology to span these regimes was to couple a kinetic method (e.g., Particle-In-Cell) in the non-continuum regions to a continuum PDE-based method (e.g., finite differences) in continuum regions. The interface between the two would be adjusted dynamically based on statistical sampling of the kinetic results. Although originally a three-year project, it became clear during the second year (FY2008) that there were not sufficient resources to complete the project and it was terminated mid-year.
The relative importance of regional, watershed, and in-stream environmental factors on stream fish assemblage structure and function was investigated as part of a comparative watershed project in the western Lake Superior basin. We selected 48 second and third order watersheds fr...
Un-collided-flux preconditioning for the first order transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rigley, M.; Koebbe, J.; Drumm, C.
2013-07-01
Two codes were tested for the first order neutron transport equation using finite element methods. The un-collided-flux solution is used as a preconditioner for each of these methods. These codes include a least squares finite element method and a discontinuous finite element method. The performance of each code is shown on problems in one and two dimensions. The un-collided-flux preconditioner shows good speedup on each of the given methods. The un-collided-flux preconditioner has been used on the second-order equation, and here we extend those results to the first order equation. (authors)
NASA Astrophysics Data System (ADS)
Halverson, Peter G.; Loya, Frank M.
2017-11-01
Projects such as the Space Interferometry Mission (SIM) [1] and Terrestrial Planet Finder (TPF) [2] rely heavily on sub-nanometer accuracy metrology systems to define their optical paths and geometries. The James Webb Space Telescope (JWST) is using this metrology in a cryogenic dilatometer for characterizing material properties (thermal expansion, creep) of optical materials. For all these projects, a key issue has been the reliability and stability of the electronics that convert displacement metrology signals into real-time distance determinations. A particular concern is the behavior of the electronics in situations where laser heterodyne signals are weak or noisy and subject to abrupt Doppler shifts due to vibrations or the slewing of motorized optics. A second concern is the long-term (hours to days) stability of the distance measurements under conditions of drifting laser power and ambient temperature. This paper describes heterodyne displacement metrology gauge signal processing methods that achieve satisfactory robustness against low signal strength and spurious signals, and good long-term stability. We have a proven displacement-measuring approach that is useful not only to space-optical projects at JPL, but also to the wider field of distance measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, J; Martin, T; Young, S
Purpose: CT neuro perfusion scans are one of the highest dose exams. Methods to reduce dose include decreasing the number of projections acquired per gantry rotation; however, conventional reconstruction of such scans leads to sampling artifacts. In this study we investigated a projection view-sharing reconstruction algorithm used in dynamic MRI – “K-space Weighted Image Contrast” (KWIC) – applied to simulated perfusion exams and evaluated dose savings and impacts on perfusion metrics. Methods: A FORBILD head phantom containing simulated time-varying objects was developed and a set of parallel-beam CT projection data was created. The simulated scans were 60 seconds long, 1152 projections per turn, with a rotation time of one second. No noise was simulated. 5mm, 10mm, and 50mm objects were modeled in the brain. A baseline, “full dose” simulation used all projections and reduced dose cases were simulated by downsampling the number of projections per turn from 1152 to 576 (50% dose), 288 (25% dose), and 144 (12.5% dose). KWIC was further evaluated at 72 projections per rotation (6.25%). One image per second was reconstructed using filtered backprojection (FBP) and KWIC. KWIC reconstructions utilized view cores of 36, 72, 144, and 288 views and 16, 8, 4, and 2 subapertures respectively. From the reconstructed images, time-to-peak (TTP), cerebral blood flow (CBF) and the FWHM of the perfusion curve were calculated and compared against reference values from the full-dose FBP data. Results: TTP, CBF, and the FWHM were unaffected by dose reduction (to 12.5%) and reconstruction method; however, image quality was improved when using KWIC. Conclusion: This pilot study suggests that KWIC preserves image quality and perfusion metrics when under-sampling projections and that the unique contrast weighting of KWIC could provide substantial dose savings for perfusion CT scans. Evaluation of KWIC in clinical CT data will be performed in the near future. R01 EB014922, NCI Grant U01 CA181156 (Quantitative Imaging Network), and Tobacco Related Disease Research Project grant 22RT-0131.
Numerical optimization methods for controlled systems with parameters
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
Establishing monitoring programs for travel time reliability.
DOT National Transportation Integrated Search
2014-01-01
Within the second Strategic Highway Research Program (SHRP 2), Project L02 focused on creating a suite of methods by which transportation agencies could monitor and evaluate travel time reliability. Creation of the methods also produced an improved u...
NASA Astrophysics Data System (ADS)
Shiozaki, Toru; Győrffy, Werner; Celani, Paolo; Werner, Hans-Joachim
2011-08-01
The extended multireference quasi-degenerate perturbation theory, proposed by Granovsky [J. Chem. Phys. 134, 214113 (2011)], is combined with internally contracted multi-state complete active space second-order perturbation theory (XMS-CASPT2). The first-order wavefunction is expanded in terms of the union of internally contracted basis functions generated from all the reference functions, which guarantees invariance of the theory with respect to unitary rotations of the reference functions. The method yields improved potentials in the vicinity of avoided crossings and conical intersections. The theory for computing nuclear energy gradients for MS-CASPT2 and XMS-CASPT2 is also presented and the first implementation of these gradient methods is reported. A number of illustrative applications of the new methods are presented.
Laser Guide Star Based Astrophysics at Lick Observatory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Max, C; Gavel, D.; Friedman, H.
2000-03-10
The resolution of ground-based telescopes is typically limited to approximately 1 second of arc because of the blurring effects of atmospheric turbulence. Adaptive optics (AO) technology senses and corrects for the optical distortions due to turbulence hundreds of times per second using high-speed sensors, computers, deformable mirror, and laser technology. The goal of this project is to make AO systems widely useful astronomical tools providing resolutions up to an order of magnitude better than current, ground-based telescopes. Astronomers at the University of California Lick Observatory at Mt. Hamilton now routinely use the LLNL developed AO system for high resolution imaging of astrophysical objects. We report here on the instrument development progress and on the science observations made with this system during this 3-year ERI project.
Reduced order modeling of fluid/structure interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph
2009-11-01
This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.
Numerical Methods for 2-Dimensional Modeling
1980-12-01
high-order finite element methods, and a multidimensional version of the method of lines, both utilizing an optimized stiff integrator for the time...integration. The finite element methods have proved disappointing, but the method of lines has provided an unexpectedly large gain in speed. Two...diffusion problems with the same number of unknowns (a 21 x 41 grid), solved by second-order finite element methods, took over seven minutes on the Cray-1
Solving Second-Order Ordinary Differential Equations without Using Complex Numbers
ERIC Educational Resources Information Center
Kougias, Ioannis E.
2009-01-01
Ordinary differential equations (ODEs) are a subject with a wide range of applications, and the need to introduce them to students often arises in the last year of high school, as well as in the early stages of tertiary education. The usual methods of solving second-order ODEs with constant coefficients, among others, rely upon the use of complex…
Aerodynamic Modeling of Oscillating Wing in Hypersonic Flow: a Numerical Study
NASA Astrophysics Data System (ADS)
Zhu, Jian; Hou, Ying-Yu; Ji, Chen; Liu, Zi-Qiang
2016-06-01
Various approximations to unsteady aerodynamics are examined for the unsteady aerodynamic force of a pitching thin double wedge airfoil in hypersonic flow. Results from piston theory, Van Dyke's second-order theory, Newtonian impact theory, and a CFD method are compared for the same motion, and Mach number effects are examined. The results indicate that, for this thin double wedge airfoil, Newtonian impact theory is not suitable for these Mach numbers, while piston theory and Van Dyke's second-order theory are in good agreement with the CFD method for Ma < 7.
Analytical methods for the development of Reynolds stress closures in turbulence
NASA Technical Reports Server (NTRS)
Speziale, Charles G.
1990-01-01
Analytical methods for the development of Reynolds stress models in turbulence are reviewed in detail. Zero, one and two equation models are discussed along with second-order closures. A strong case is made for the superior predictive capabilities of second-order closure models in comparison to the simpler models. The central points are illustrated by examples from both homogeneous and inhomogeneous turbulence. A discussion of the author's views concerning the progress made in Reynolds stress modeling is also provided along with a brief history of the subject.
A fast direct solver for a class of two-dimensional separable elliptic equations on the sphere
NASA Technical Reports Server (NTRS)
Moorthi, Shrinivas; Higgins, R. Wayne
1992-01-01
An efficient, direct, second-order solver for the discrete solution of two-dimensional separable elliptic equations on the sphere is presented. The method involves a Fourier transformation in longitude and a direct solution of the resulting coupled second-order finite difference equations in latitude. The solver is made efficient by vectorizing over longitudinal wavenumber and by using a vectorized fast Fourier transform routine. It is evaluated using a prescribed solution method and compared with a multigrid solver and the standard direct solver from FISHPAK.
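For illustration only, the sketch below mimics the general strategy described above (a transform in the periodic direction followed by a direct banded solve in the other) on a flat periodic channel rather than on the sphere. The model equation u_xx + u_yy = f, the grid and all parameters are invented, and this is not the solver that was compared against FISHPAK.

```python
# A minimal sketch of the "FFT in one direction + tridiagonal solve in the other"
# idea on a flat periodic channel: solve u_xx + u_yy = f with x periodic and
# u = 0 at the y boundaries.
import numpy as np
from scipy.linalg import solve_banded

def fft_tridiagonal_poisson(f, hx, hy):
    """f has shape (Nx, Ny) on interior y points; returns u with the same shape."""
    Nx, Ny = f.shape
    fhat = np.fft.fft(f, axis=0)                    # decouple the periodic-direction modes
    k = np.arange(Nx)
    lam = (2.0 * np.cos(2.0 * np.pi * k / Nx) - 2.0) / hx**2   # eigenvalue of the discrete d2/dx2
    uhat = np.empty_like(fhat)
    for m in range(Nx):                             # one tridiagonal solve per wavenumber
        ab = np.zeros((3, Ny), dtype=complex)       # banded storage for solve_banded
        ab[0, 1:] = 1.0 / hy**2                     # super-diagonal
        ab[1, :] = -2.0 / hy**2 + lam[m]            # main diagonal
        ab[2, :-1] = 1.0 / hy**2                    # sub-diagonal
        uhat[m] = solve_banded((1, 1), ab, fhat[m])
    return np.fft.ifft(uhat, axis=0).real

# Toy check against a manufactured solution (agreement is second order in h)
Nx, Ny = 64, 63
hx, hy = 2 * np.pi / Nx, 1.0 / (Ny + 1)
x = hx * np.arange(Nx)[:, None]
y = hy * np.arange(1, Ny + 1)[None, :]
u_exact = np.sin(x) * np.sin(np.pi * y)
f = -(1.0 + np.pi**2) * u_exact
print(np.max(np.abs(fft_tridiagonal_poisson(f, hx, hy) - u_exact)))
```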
Xu, Enhua; Li, Shuhua
2013-11-07
The block correlated second-order perturbation theory with a generalized valence bond (GVB) reference (GVB-BCPT2) is proposed. In this approach, each geminal in the GVB reference is considered as a "multi-orbital" block (a subset of spin orbitals), and each occupied or virtual spin orbital is also taken as a single block. The zeroth-order Hamiltonian is set to be the summation of the individual Hamiltonians of all blocks (with explicit two-electron operators within each geminal) so that the GVB reference function and all excited configuration functions are its eigenfunctions. The GVB-BCPT2 energy can be directly obtained without iteration, just like the second-order Møller–Plesset perturbation method (MP2); both methods are size consistent. We have applied the GVB-BCPT2 method to investigate the equilibrium distances and spectroscopic constants of 7 diatomic molecules, conformational energy differences of 8 small molecules, and bond-breaking potential energy profiles in 3 systems. GVB-BCPT2 is demonstrated to have noticeably better performance than MP2 for systems with significant multi-reference character, and to provide reasonably accurate results for some systems with large active spaces, which are beyond the capability of all CASSCF-based methods.
Nonlinear dynamic analysis of voices before and after surgical excision of vocal polyps
NASA Astrophysics Data System (ADS)
Zhang, Yu; McGilligan, Clancy; Zhou, Liang; Vig, Mark; Jiang, Jack J.
2004-05-01
Phase space reconstruction, correlation dimension, and second-order entropy, methods from nonlinear dynamics, are used to analyze sustained vowels generated by patients before and after surgical excision of vocal polyps. Two conventional acoustic perturbation parameters, jitter and shimmer, are also employed to analyze voices before and after surgery. Presurgical and postsurgical analyses of jitter, shimmer, correlation dimension, and second-order entropy are statistically compared. Correlation dimension and second-order entropy show a statistically significant decrease after surgery, indicating reduced complexity and higher predictability of postsurgical voice dynamics. There is not a significant postsurgical difference in shimmer, although jitter shows a significant postsurgical decrease. The results suggest that jitter and shimmer should be applied to analyze disordered voices with caution; however, nonlinear dynamic methods may be useful for analyzing abnormal vocal function and quantitatively evaluating the effects of surgical excision of vocal polyps.
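As a point of reference for the two conventional perturbation measures mentioned above, the following sketch computes jitter and shimmer from per-cycle period and peak-amplitude sequences using common relative mean-absolute-difference definitions. The input data are simulated and the exact definitions used in the study may differ.

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference of consecutive cycle periods,
    relative to the mean period (a common definition; the paper's may differ)."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(amplitudes):
    """Local shimmer: the same construction applied to per-cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Hypothetical per-cycle measurements extracted from a sustained vowel
rng = np.random.default_rng(0)
T0 = 1.0 / 150.0                                    # roughly 150 Hz fundamental
periods = T0 * (1.0 + 0.01 * rng.standard_normal(200))
amps = 1.0 + 0.03 * rng.standard_normal(200)
print(jitter_percent(periods), shimmer_percent(amps))
```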
NASA Astrophysics Data System (ADS)
Xu, B. Y.; Ye, Y.; Liao, L. C.
2016-07-01
A new method was developed to determine the methamphetamine and morphine concentrations in urine and saliva based on excitation-emission matrix fluorescence coupled to a second-order calibration algorithm. In the case of single-drug abuse, the results showed that the average recoveries of methamphetamine and morphine were 95.3 and 96.7% in urine samples, respectively, and 98.1 and 106.2% in saliva samples, respectively. The relative errors were all below 5%. The simultaneous determination of methamphetamine and morphine in urine using two second-order algorithms was also investigated. Satisfactory results were obtained with a self-weighted alternating trilinear decomposition algorithm. The root-mean-square errors of the predictions were 0.540 and 0.0382 μg/mL for methamphetamine and morphine, respectively. The limits of detection of the proposed methods were very low and sufficient for studying methamphetamine and morphine in urine.
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which has a form similar to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.
Simulated lumped-parameter system reduced-order adaptive control studies
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.
1981-01-01
Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.
NASA Astrophysics Data System (ADS)
Liu, Chun-Ho; Leung, Dennis Y. C.
2006-02-01
This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of a passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM) employing trilinear brick elements with equal-order interpolating polynomials that solved the momentum and continuity equations, together with conservation of energy and mass equations in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution of the advection and diffusion terms, which were then integrated in time, respectively, by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification, which agreed well with laboratory and field measurements, as well as previous modelling results available in the literature.
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
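The inner reliability loop mentioned above is based on the first order reliability method (FORM). As a hedged illustration of that building block only, and not the single-loop KKT formulation of the article, the sketch below runs a standard Hasofer-Lind/Rackwitz-Fiessler iteration on an invented linear limit state in standard normal space.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad_g, n, tol=1e-8, max_iter=100):
    """Hasofer-Lind / Rackwitz-Fiessler iteration in standard normal space.
    Returns the reliability index beta and the design point u*."""
    u = np.zeros(n)
    for _ in range(max_iter):
        gval, grad = g(u), grad_g(u)
        # Project onto the linearized limit state g(u) ~ gval + grad.(u_new - u) = 0
        u_new = grad * (grad @ u - gval) / (grad @ grad)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# Hypothetical limit state g(u) = 3 - u1 - u2 with u1, u2 ~ N(0, 1)
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
beta, u_star = form_hlrf(g, grad_g, 2)
print(beta, norm.cdf(-beta))      # beta = 3/sqrt(2); Pf is approximated by Phi(-beta)
```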
Application of the moving frame method to deformed Willmore surfaces in space forms
NASA Astrophysics Data System (ADS)
Paragoda, Thanuja
2018-06-01
The main goal of this paper is to use the theory of exterior differential forms in deriving variations of the deformed Willmore energy in space forms and to study the minimizers of the deformed Willmore energy in space forms. We derive both first and second order variations of the deformed Willmore energy in space forms explicitly using the moving frame method. We prove that the second order variation of the deformed Willmore energy depends on the intrinsic Laplace-Beltrami operator, the sectional curvature and some special operators, along with the mean and Gauss curvatures of the surface embedded in space forms, while the first order variation depends on the extrinsic Laplace-Beltrami operator.
Adaptive wavefront sensor based on the Talbot phenomenon.
Podanchuk, Dmytro V; Goloborodko, Andrey A; Kotov, Myhailo M; Kovalenko, Andrey V; Kurashov, Vitalij N; Dan'ko, Volodymyr P
2016-04-20
A new adaptive method of wavefront sensing is proposed and demonstrated. The method is based on the Talbot self-imaging effect, which is observed in an illuminating light beam with strong second-order aberration. Compensation of defocus and astigmatism is achieved with an appropriate choice of the size of the rectangular unit cell of the diffraction grating, which is performed iteratively. A liquid-crystal spatial light modulator is used for this purpose. Self-imaging of a rectangular grating in the astigmatic light beam is demonstrated experimentally. High-order aberrations are detected with respect to the compensated second-order aberration. Comparative results of wavefront sensing with a Shack-Hartmann sensor and the proposed sensor are presented.
NASA Astrophysics Data System (ADS)
Song, Qing; Zhu, Sijia; Yan, Han; Wu, Wenqian
2008-03-01
In the parallel-light projection method for diameter measurement, the workpiece to be measured is projected onto the photosensitive units of a CCD, but the raw CCD output signal cannot be used directly for counting or measurement. The weak signal with high-frequency noise must first be filtered and amplified. This paper introduces an RC low-pass filter and an infinite-gain multiple-feedback second-order low-pass filter. In addition, there is always dispersion in the light band, and the output signal has a transition between the irradiated area and the shadow because of instability of the light source intensity and imperfect adjustment of the optical system. To obtain exactly the shadow size related to the workpiece diameter, binary-value processing is necessary to produce a square wave. Either a comparison method or a differentiation method can be adopted for binary-value processing. There are two ways to set the threshold value when using a voltage comparator: the fixed-level method and the floating-level method; the latter has higher accuracy. The differentiation method first outputs two spike pulses of opposite polarity from the rising and falling edges of the video signal passed through the differentiating circuit; the rising-edge pulse is then extracted by a half-wave rectifying circuit. After passing through a zero-crossing comparator and a maintain-resistance edge trigger, the square wave which indicates the measured size is finally obtained. This square wave is then filled with standard pulses and counted by a counter. Data acquisition and information processing are accomplished by the computer and the control software. This paper describes in detail the design and analysis of the filter circuit, the binary-value processing circuit, and the interface circuit to the computer.
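As a rough digital analogue of the filter-then-binarize chain described above (not the analog RC or multiple-feedback circuits themselves), the sketch below applies a second-order Butterworth low-pass filter to a simulated noisy CCD line-scan signal and then applies fixed-level thresholding. The sampling rate, cutoff frequency and signal shape are all invented.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100_000.0                                   # assumed CCD pixel-clock rate, Hz
t = np.arange(0, 0.01, 1.0 / fs)

# Simulated CCD line-scan signal: bright field with a shadow cast by the
# workpiece, plus high-frequency noise.
shadow = (t > 0.004) & (t < 0.006)
signal = 1.0 - 0.9 * shadow + 0.05 * np.random.default_rng(1).standard_normal(t.size)

# Second-order low-pass: a digital Butterworth stage stands in for the analog
# multiple-feedback filter described in the paper.
b, a = butter(2, 5_000.0, btype="low", fs=fs)
smoothed = filtfilt(b, a, signal)

# Fixed-level binarization: everything below the threshold is "shadow".
threshold = 0.5 * (smoothed.max() + smoothed.min())
square_wave = (smoothed < threshold).astype(int)

# The shadow width in samples maps to the workpiece diameter after calibration.
print("shadow width [samples]:", square_wave.sum())
```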
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, A.; Ravichandran, R.; Park, J. H.
The second-order non-Navier-Fourier constitutive laws, expressed in a compact algebraic mathematical form, were validated for the force-driven Poiseuille gas flow by the deterministic atomic-level microscopic molecular dynamics (MD). Emphasis is placed on how completely different methods (a second-order continuum macroscopic theory based on the kinetic Boltzmann equation, the probabilistic mesoscopic direct simulation Monte Carlo, and, in particular, the deterministic microscopic MD) describe the non-classical physics, and whether the second-order non-Navier-Fourier constitutive laws derived from the continuum theory can be validated using MD solutions for the viscous stress and heat flux calculated directly from the molecular data using the statistical method. Peculiar behaviors (non-uniform tangent pressure profile and exotic instantaneous heat conduction from cold to hot [R. S. Myong, “A full analytical solution for the force-driven compressible Poiseuille gas flow based on a nonlinear coupled constitutive relation,” Phys. Fluids 23(1), 012002 (2011)]) were re-examined using atomic-level MD results. It was shown that all three results were in strong qualitative agreement with each other, implying that the second-order non-Navier-Fourier laws are indeed physically legitimate in the transition regime. Furthermore, it was shown that the non-Navier-Fourier constitutive laws are essential for describing non-zero normal stress and tangential heat flux, while the classical and non-classical laws remain similar for shear stress and normal heat flux.
Optimal least-squares finite element method for elliptic problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1991-01-01
An optimal least squares finite element method is proposed for two dimensional and three dimensional elliptic problems, and its advantages over the mixed Galerkin method and the usual least squares finite element method are discussed. In the usual least squares finite element method, the second order equation −∇·(∇u) + u = f is recast as the first order system −∇·p + u = f, ∇u − p = 0. The error analysis and numerical experiments show that, in this usual least squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇×p = 0 should be included in the first order system.
NASA Astrophysics Data System (ADS)
Abdel Wahab, F. A.; El-Diasty, Fouad; Abdel-Baki, Manal
2009-10-01
A method that correlates Fresnel-based spectrophotometric measurements with Lorentz dispersion theory is presented to study the dispersion of nonlinear optical parameters, particularly in oxide glasses, over a very wide range of angular frequency. The second-order refractive index and third-order optical susceptibility of Cr-doped glasses are determined from the linear refractive index. Furthermore, both real and imaginary components of the complex susceptibility are determined. The study reveals the importance of determining the dispersion of the nonlinear absorption (two-photon absorption coefficient) to find the maximum resonant and nonresonant susceptibilities of the investigated glasses. The present method is applied to Cr-doped lithium aluminum silicate (LAS) glasses due to their semiconductor-like behavior and also to their use in the laser industry.
ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousbie, Thierry, E-mail: tsousbie@gmail.com; Department of Physics, The University of Tokyo, Tokyo 113-0033; Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033
2016-09-15
Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle in cells codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Qiang; Fan, Liang-Shih, E-mail: fan.1@osu.edu
A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method development, including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with further improvement with the implementation of the high-order Runge–Kutta schemes in the coupled fluid–particle interaction. The major challenge to implement high-order Runge–Kutta schemes in the LBM is that the flow information such as density and velocity cannot be directly obtained at a fractional time step from the LBM since the LBM only provides the flow information at an integer time step. This challenge can be, however, overcome as given in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid–particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge–Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with the second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve the second-order accuracy are found to be around 0.30 and −0.47 times of the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotational spheres indicate that the lift force produced by the Magnus effect can be very significant in view of the magnitude of the drag force when the practical rotating speed of the spheres is encountered. This finding may lead to more comprehensive studies of the effect of the particle rotation on fluid–solid drag laws. It is also demonstrated that, when the third-order or the fourth-order Runge–Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of all methods in the literature, including the previous IB-LBMs and also the methods with the combination of the IBM and the traditional incompressible Navier–Stokes solver. - Highlights: • The IBM is embedded in the LBM using Runge–Kutta time schemes. • The effectiveness of the present IB-LBM is validated by benchmark applications. • For the first time, the IB-LBM achieves the second-order accuracy. • The numerical stability of the present IB-LBM is better than previous methods.
Pouillot, Régis; Delignette-Muller, Marie Laure
2010-09-01
Quantitative risk assessment has emerged as a valuable tool to enhance the scientific basis of regulatory decisions in the food safety domain. This article introduces the use of two new computing resources (R packages) specifically developed to help risk assessors in their projects. The first package, "fitdistrplus", gathers tools for choosing and fitting a parametric univariate distribution to a given dataset. The data may be continuous or discrete. Continuous data may be right-, left- or interval-censored as is frequently obtained with analytical methods, with the possibility of various censoring thresholds within the dataset. Bootstrap procedures then allow the assessor to evaluate and model the uncertainty around the parameters and to transfer this information into a quantitative risk assessment model. The second package, "mc2d", helps to build and study two dimensional (or second-order) Monte-Carlo simulations in which the estimation of variability and uncertainty in the risk estimates is separated. This package easily allows the transfer of separated variability and uncertainty along a chain of conditional mathematical and probabilistic models. The usefulness of these packages is illustrated through a risk assessment of hemolytic and uremic syndrome in children linked to the presence of Escherichia coli O157:H7 in ground beef. These R packages are freely available at the Comprehensive R Archive Network (cran.r-project.org). Copyright 2010 Elsevier B.V. All rights reserved.
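The following is a minimal Python sketch of the two-dimensional (second-order) Monte Carlo idea that the mc2d package implements in R, with an outer loop sampling uncertain parameters and an inner loop sampling variability. The dose-response model and all numbers are illustrative and are not taken from the article's E. coli O157:H7 example.

```python
import numpy as np

rng = np.random.default_rng(42)
n_uncertainty, n_variability = 200, 5_000

# Outer (uncertainty) loop: parameters are imperfectly known, so draw them
# from uncertainty distributions (illustrative values only).
mean_log_dose = rng.normal(loc=2.0, scale=0.2, size=n_uncertainty)
sd_log_dose   = rng.uniform(0.4, 0.6, size=n_uncertainty)
r_doseresp    = rng.uniform(1e-4, 5e-4, size=n_uncertainty)   # exponential dose-response parameter

risks = np.empty(n_uncertainty)
for i in range(n_uncertainty):
    # Inner (variability) loop: individual exposures vary across servings.
    dose = 10.0 ** rng.normal(mean_log_dose[i], sd_log_dose[i], size=n_variability)
    p_ill = 1.0 - np.exp(-r_doseresp[i] * dose)      # exponential dose-response model
    risks[i] = p_ill.mean()                          # mean risk given this parameter set

# Variability is averaged inside; what remains is an uncertainty distribution
# for the mean risk, summarized here by a 95% interval and the median.
print(np.percentile(risks, [2.5, 50, 97.5]))
```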
Gomes, Adriano de Araújo; Alcaraz, Mirta Raquel; Goicoechea, Hector C; Araújo, Mario Cesar U
2014-02-06
In this work the Successive Projection Algorithm is presented for interval selection in N-PLS for three-way data modeling. The proposed algorithm combines the noise-reduction properties of PLS with the possibility of discarding uninformative variables in SPA. In addition, the second-order advantage can be achieved by the residual bilinearization (RBL) procedure when an unexpected constituent is present in a test sample. For this purpose, SPA was modified in order to select intervals for use in trilinear PLS. The ability of the proposed algorithm, namely iSPA-N-PLS, was evaluated on one simulated and two experimental data sets, comparing the results to those obtained by N-PLS. In the simulated system, two analytes were quantitated in two test sets, with and without an unexpected constituent. In the first experimental system, the determination of four fluorophores (l-phenylalanine; l-3,4-dihydroxyphenylalanine; 1,4-dihydroxybenzene and l-tryptophan) was conducted with excitation-emission data matrices. In the second experimental system, quantitation of ofloxacin was performed in water samples containing two other uncalibrated quinolones (ciprofloxacin and danofloxacin) by high performance liquid chromatography with a UV-vis diode array detector. For comparison purposes, a GA algorithm coupled with N-PLS/RBL was also used in this work. In most of the studied cases iSPA-N-PLS proved to be a promising tool for selection of variables in second-order calibration, generating models with smaller RMSEP when compared to both the global model using all of the sensors in two dimensions and GA-NPLS/RBL. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Keil, M.; Esch, T.; Feigenspan, S.; Marconcini, M.; Metz, A.; Ottinger, M.; Zeidler, J.
2015-04-01
For the 2012 update of CORINE Land Cover, a new approach was developed in Germany in order to benefit from the higher accuracy of the national topographic database. In agreement between the Federal Environment Agency (UBA) and the Federal Agency for Cartography and Geodesy (BKG), CLC2012 has been derived from an updated digital landscape model DLM-DE, which is based on the Official Topographical Cartographic Information System ATKIS of the land survey authorities. The DLM-DE 2009 created by the BKG served as the base for the 2012 update in the national and EU context, both under the responsibility of the BKG. In addition to the updated CLC2012, a second product, the layer "CLC_Change" (2006-2012), was also requested by the European Environment Agency. The objective of the DLR-DFD part of the project was to contribute the primary change areas from 2006 to 2009 during the phase of the method change, using the refined geometry of the DLM-DE 2009 for a retrospective view back to 2006. A semiautomatic approach was developed for this task, in which AWiFS time-series data from 2005/2006 played an important role in the separation between grassland and arable land. Other valuable datasets for the project were already available GMES land monitoring products of 2006, such as the soil sealing layer 2006. The paper describes the developed method and discusses exemplary results of the CORINE backdating part of the project.
NASA Astrophysics Data System (ADS)
Longhurst, G. R.
1991-04-01
Gas evolution from spherical solids or liquids where no convective processes are active is analyzed. Three problem classes are considered: (1) constant concentration boundary, (2) Henry's law (first order) boundary, and (3) Sieverts' law (second order) boundary. General expressions are derived for dimensionless times and transport parameters appropriate to each of the classes considered. However, in the second order case, the non-linearities of the problem require the presence of explicit dimensional variables in the solution. Sample problems are solved to illustrate the method.
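For the simplest of the three problem classes, the constant-concentration boundary, the classical series solution for fractional release from a sphere can serve as a quick reference; the sketch below evaluates it as a function of the dimensionless time tau = D*t/a^2. This is standard textbook material rather than the paper's general expressions, and the Henry's-law and Sieverts'-law cases are not covered.

```python
import numpy as np

def fractional_release_sphere(tau, n_terms=200):
    """Fraction of an initially uniform gas inventory released from a sphere whose
    surface concentration is held at zero (problem class 1), as a function of the
    dimensionless time tau = D*t/a**2 (classical series solution)."""
    n = np.arange(1, n_terms + 1)
    tau = np.atleast_1d(np.asarray(tau, dtype=float))
    terms = np.exp(-np.outer(tau, (n * np.pi) ** 2)) / n**2
    return 1.0 - (6.0 / np.pi**2) * terms.sum(axis=1)

# Illustrative numbers: a 1 mm sphere, D = 1e-11 m^2/s, release after one hour
a, D, t = 1e-3, 1e-11, 3600.0
print(fractional_release_sphere(D * t / a**2))
```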
Constructing failure in big biology: The socio-technical anatomy of Japan's Protein 3000 Project.
Fukushima, Masato
2016-02-01
This study focuses on the 5-year Protein 3000 Project launched in 2002, the largest biological project in Japan. The project aimed to overcome Japan's alleged failure to contribute fully to the Human Genome Project, by determining 3000 protein structures, 30 percent of the global target. Despite its achievement of this goal, the project was fiercely criticized in various sectors of society and was often branded an awkward failure. This article tries to solve the mystery of why such failure discourse was prevalent. Three explanatory factors are offered: first, because some goals were excluded during project development, there was a dynamic of failed expectations; second, structural genomics, while promoting collaboration with the international community, became an 'anti-boundary object', only the absence of which bound heterogeneous domestic actors; third, there developed an urgent sense of international competition in order to obtain patents on such structural information.
Non-destructive evaluation of containment walls in nuclear power plants
NASA Astrophysics Data System (ADS)
Garnier, V.; Payan, C.; Lott, M.; Ranaivomanana, N.; Balayssac, J. P.; Verdier, J.; Larose, E.; Zhang, Y.; Saliba, J.; Boniface, A.; Sbartai, Z. M.; Piwakowski, B.; Ciccarone, C.; Hafid, H.; Henault, J. M.; Buffet, F. Ouvrier
2017-02-01
Two functions are regularly tested on containment walls in order to anticipate a possible accident. The first is mechanical, to resist a possible internal over-pressure, and the second is to prevent leakage. The AAPR reference accident is the rupture of a pipe in the primary circuit of a nuclear plant. In this case, the pressure and temperature can reach 5 bar and 180°C in 20 seconds. The national project 'Non-destructive testing of the containment structures of nuclear plants' aims at studying the non-destructive techniques capable of evaluating the concrete properties and its damage and cracking. This 4-year project is divided into two parts. The first consists in developing and selecting the most relevant NDE techniques in the laboratory to reach these goals. These evaluations are developed under conditions representing the real stresses generated during the ten-yearly inspections of the plants or those related to an accident. The second part consists in applying the selected techniques to two containment structures under pressure. The first structure is proposed by ONERA and the second is a mockup of a containment wall at 1/3 scale made by EDF within the VeRCoRs project. This communication focuses on the part of the project that concerns the characterization of the damage and cracking process by means of NDT. The tests are done in three- or four-point bending in order to study the generation of cracks, their propagation, as well as their opening and closing. The main ultrasonic techniques developed concern linear or non-linear acoustics: acoustic emission [1], Locadiff [2], energy diffusion, surface wave velocity and attenuation, DAET [3]. The recorded data contribute to providing maps of the investigated parameters, either in the volume, on the surface or globally. Digital image correlation is an important additional asset to validate the coherence of the data. The spatial normalization of the data in the specimen space allows algorithms for combining the experimental data to be proposed. The test results are presented and they show the capacity and the limits of the evaluation of the volume, surface or global data. A data fusion procedure is associated with these results.
Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza
2014-04-05
The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for the analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained; only linear combinations of them are obtained. Moreover, for prediction of the analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations. Therefore, a Gauss-Newton RBL algorithm should be used, which is not as simple as the original protocol. Also, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtraction of the first spectrum from the other spectra in the data set to produce a full-rank array called the variation matrix. The variation matrices were then analyzed uniquely by the original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data. Sunset yellow and Carmosine, as monoprotic acids, were determined in a candy sample in the presence of unknown interference by this method. Copyright © 2013 Elsevier B.V. All rights reserved.
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
NASA Astrophysics Data System (ADS)
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs local variables only and can therefore be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations for the purpose of minimizing discretization errors, so as to mitigate the confusion between numerical errors and transition model errors. The high-order package is compared with a second-order TVD method in simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence properties than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
NASA Astrophysics Data System (ADS)
Evans, Garrett Nolan
In this work, I present two projects that both contribute to the aim of discovering how intelligence manifests in the brain. The first project is a method for analyzing recorded neural signals, which takes the form of a convolution-based metric on neural membrane potential recordings. Relying only on integral and algebraic operations, the metric compares the timing and number of spikes within recordings as well as the recordings' subthreshold features: summarizing differences in these with a single "distance" between the recordings. Like van Rossum's (2001) metric for spike trains, the metric is based on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait. The second project is a combinatorial syntax method for connectionist semantic network encoding. Combinatorial syntax has been a point on which those who support a symbol-processing view of intelligent processing and those who favor a connectionist view have had difficulty seeing eye-to-eye. Symbol-processing theorists have persuasively argued that combinatorial syntax is necessary for certain intelligent mental operations, such as reasoning by analogy. Connectionists have focused on the versatility and adaptability offered by self-organizing networks of simple processing units. With this project, I show that there is a way to reconcile the two perspectives and to ascribe a combinatorial syntax to a connectionist network. The critical principle is to interpret nodes, or units, in the connectionist network as bound integrations of the interpretations for nodes that they share links with. Nodes need not correspond exactly to neurons and may correspond instead to distributed sets, or assemblies, of neurons.
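In the spirit of the convolution-based metric described above, but without the thesis's specially designed kernel or its first-order properties, the sketch below filters two equally sampled membrane-potential traces with a causal exponential kernel and takes the L2 norm of the difference, much as a van Rossum distance would for spike trains. All traces and parameters are invented.

```python
import numpy as np

def convolution_distance(v1, v2, dt, tau=0.01):
    """Van Rossum-flavoured distance between two equally sampled voltage traces:
    filter each with a causal exponential kernel of time constant tau (seconds),
    then take the L2 norm of the difference of the filtered traces."""
    t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-t / tau)
    f1 = np.convolve(v1, kernel, mode="full")[: len(v1)] * dt
    f2 = np.convolve(v2, kernel, mode="full")[: len(v2)] * dt
    return np.sqrt(np.sum((f1 - f2) ** 2) * dt / tau)

# Two hypothetical traces: identical subthreshold drift, spikes offset by 2 ms
dt, T = 1e-4, 0.5
t = np.arange(0, T, dt)
v1 = -65.0 + 2.0 * np.sin(2 * np.pi * 3 * t)
v2 = v1.copy()
for s in (0.10, 0.25, 0.40):
    v1[int(s / dt)] += 100.0                      # crude spike markers
    v2[int((s + 0.002) / dt)] += 100.0
print(convolution_distance(v1, v2, dt))
```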
NASA Astrophysics Data System (ADS)
Pristera, Jessica L.
2004-05-01
An acoustical study was conducted to determine the potential for airborne noise and ground-borne noise and vibration impacts generated by construction and operation of the Second Avenue Subway. The study was performed in support of an environmental impact statement (EIS) that defined the areas along the proposed Second Avenue Subway corridor where any significant impacts would occur as a result of construction activity and operation of the Second Avenue Subway. Using FTA guideline procedures, project-generated noise levels from subway construction and operations were determined. Construction noise levels exceeded operational noise levels. With limited alternative construction methods, practical mitigation methods were determined to reduce impacts.
A Numerical Method for Integrating Orbits
NASA Astrophysics Data System (ADS)
Sahakyan, Karen P.; Melkonyan, Anahit A.; Hayrapetyan, S. R.
2007-08-01
A numerical method based on trigonometric polynomials for integrating ordinary differential equations of first and second order is suggested. This method is a trigonometric analogue of Everhart's method and can be especially useful for periodic trajectories.
Collaborative decision-making on wind power projects based on AHP method
NASA Astrophysics Data System (ADS)
Badea, A.; Proştean, G.; Tămăşilă, M.; Vârtosu, A.
2017-01-01
The complexity of implementing projects in Renewable Energy Sources (RES) requires finding collaborative alliances between suppliers and project developers in RES. The linked activities in the RES supply chain (transportation of heavy components, processing orders to purchase quality raw materials, storage and materials handling, packaging, and other complex activities) require a collaborative logistics system that is permanently dimensioned, properly selected and monitored. The stringent requirements of wind power project implementation inevitably involve constraints in infrastructure, implementation and logistics. Thus, following extensive research on RES projects, alternative forms of collaboration were identified to eliminate these constraints and provide feasible solutions at different levels of performance. The paper presents a critical analysis of different collaboration alternatives in the supply chain for RES projects, selecting those most suitable for particular situations by using the Analytic Hierarchy Process (AHP) decision-making method. The role of the AHP method was to formulate a decision model by which the choice of collaboration alternative can be established through mathematical calculation, reducing the impact created by the constraints encountered. The solution provided through AHP offers a framework for detecting the optimal collaboration alternative between suppliers and project developers in RES and avoids breaks in the chain by resizing safety buffers for leveling orders in RES projects.
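As a hedged illustration of the core AHP calculation such a decision model relies on, the sketch below derives priority weights as the principal eigenvector of a pairwise-comparison matrix and checks Saaty's consistency ratio. The comparison values for the three hypothetical collaboration alternatives are invented, not taken from the paper.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}   # Saaty's random consistency index

def ahp_priorities(A):
    """Priority weights = principal eigenvector of the pairwise comparison
    matrix A; also returns Saaty's consistency ratio CR = CI / RI."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)
    return w, ci / RI[n]

# Invented pairwise comparisons of three collaboration alternatives
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_priorities(A)
print(weights, "CR =", cr)        # CR < 0.1 is conventionally considered acceptable
```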
Finite amplitude instability of second-order fluids in plane Poiseuille flow.
NASA Technical Reports Server (NTRS)
Mcintire, L. V.; Lin, C. H.
1972-01-01
The hydrodynamic stability of plane Poiseuille flow of second-order fluids to finite amplitude disturbances is examined using the method of Stuart and Watson as extended by Reynolds and Potter. For slightly non-Newtonian fluids subcritical instabilities are predicted. No supercritical equilibrium states are expected if the entire spectrum of disturbance wavelengths is present. Possible implications with respect to the Toms phenomenon are discussed.
Research on Bidding Decision-making of International Public-Private Partnership Projects
NASA Astrophysics Data System (ADS)
Hu, Zhen Yu; Zhang, Shui Bo; Liu, Xin Yan
2018-06-01
In order to select the optimal quasi-bidding project for an investment enterprise, a bidding decision-making model for international PPP projects was established in this paper. Firstly, the literature frequency statistics method was adopted to screen the bidding decision-making indexes, and accordingly the bidding decision-making index system for international PPP projects was constructed. Then, the group decision-making characteristic root method, the entropy weight method, and an optimization model based on the least squares method were used to set the decision-making index weights. The optimal quasi-bidding project was thus determined by calculating the consistent effect measure of each decision-making index value and the comprehensive effect measure of each quasi-bidding project. Finally, the bidding decision-making model for international PPP projects was further illustrated by a hypothetical case. This model can effectively serve as a theoretical foundation and technical support for the bidding decision-making of international PPP projects.
Dynamic Projection Mapping onto Deforming Non-Rigid Surface Using Deformable Dot Cluster Marker.
Narita, Gaku; Watanabe, Yoshihiro; Ishikawa, Masatoshi
2017-03-01
Dynamic projection mapping for moving objects has attracted much attention in recent years. However, conventional approaches have faced some issues, such as the target objects being limited to rigid objects, and the limited moving speed of the targets. In this paper, we focus on dynamic projection mapping onto rapidly deforming non-rigid surfaces with a speed sufficiently high that a human does not perceive any misalignment between the target object and the projected images. In order to achieve such projection mapping, we need a high-speed technique for tracking non-rigid surfaces, which is still a challenging problem in the field of computer vision. We propose the Deformable Dot Cluster Marker (DDCM), a novel fiducial marker for high-speed tracking of non-rigid surfaces using a high-frame-rate camera. The DDCM has three performance advantages. First, it can be detected even when it is strongly deformed. Second, it realizes robust tracking even in the presence of external and self occlusions. Third, it allows millisecond-order computational speed. Using DDCM and a high-speed projector, we realized dynamic projection mapping onto a deformed sheet of paper and a T-shirt with a speed sufficiently high that the projected images appeared to be printed on the objects.
NASA Technical Reports Server (NTRS)
Dunn, Michael R.
2014-01-01
Over the course of my internship in the Flight Projects Office of NASA's Launch Services Program (LSP), I worked on two major projects, both of which dealt with updating current systems to make them more accurate and to allow them to operate more efficiently. The first project dealt with the Mission Integration Reporting System (MIRS), a web-accessible database application used to manage and provide mission status reporting for the LSP portfolio of awarded missions. MIRS had not gone through any major updates since its implementation in 2005, and it was my job to formulate a recommendation for the improvement of the system. The second project I worked on dealt with the Mission Plan, a document that contains an overview of the general life cycle that is followed by every LSP mission. My job on this project was to update the information currently in the mission plan and to add certain features in order to increase the accuracy and thoroughness of the document. The outcomes of these projects have implications in the orderly and efficient operation of the Flight Projects Office, and the process of Mission Management in the Launch Services Program as a whole.
Wang, Weiping; Tang, Jianghong; Wang, Shumin; Zhou, Lei; Hu, Zhide
2007-04-27
A capillary zone electrophoresis (CZE) method with indirect laser-induced fluorescence detection (ILIFD) is described for the simultaneous determination of esculin, esculetin, isofraxidin, genistein, naringin and sophoricoside. Baseline separation was achieved within 5 min with a running buffer (pH 9.4) composed of 5 mM borate and 20% methanol (v/v) as organic modifier, with 10^-7 M fluorescein sodium as background fluorophore, an applied voltage of 20 kV and a cartridge temperature of 30 degrees C. Good linear relationships (correlation coefficients >0.9900) between the second-order derivative peak heights (RFU) and the concentrations of the analytes (mol L^-1) were obtained. The detection limits for all analytes in the second-order derivative electropherograms were in the range of 3.8-15 microM. The intra-day RSDs for migration times and second-order derivative peak heights were less than 0.95 and 5.02%, respectively. The developed method was applied to the analysis of the coumarin compounds in herb plants with recoveries in the range of 94.7-102.1%. Although the detection sensitivity is lower than that of direct LIF, the method extends the application range of LIF detection.
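Second-order derivative peak heights like those used in the calibration above are commonly obtained with a Savitzky-Golay derivative filter; the sketch below applies one to a synthetic electropherogram peak on a sloping baseline. This is only one plausible way to compute such derivatives, not necessarily the authors' procedure, and all signal parameters are invented.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic electropherogram: a Gaussian peak on a sloping baseline plus noise
t = np.linspace(0, 5, 2_000)                       # migration time, minutes
peak = 0.8 * np.exp(-0.5 * ((t - 2.5) / 0.05) ** 2)
signal = peak + 0.02 * t + 0.005 * np.random.default_rng(3).standard_normal(t.size)

# Second-order derivative via a Savitzky-Golay filter; a linear baseline
# vanishes in the second derivative, which is one reason derivative peak
# heights are attractive for indirect-detection data.
dt = t[1] - t[0]
d2 = savgol_filter(signal, window_length=51, polyorder=3, deriv=2, delta=dt)

# "Peak height" of the second derivative: largest magnitude near the peak
print(np.abs(d2[(t > 2.3) & (t < 2.7)]).max())
```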
Head Mounted Display with a Roof Mirror Array Fold
NASA Technical Reports Server (NTRS)
Olczak, Eugene (Inventor)
2014-01-01
The present invention includes a head mounted display (HMD) worn by a user. The HMD includes a display projecting an image through an optical lens. The HMD also includes a one-dimensional retro reflective array receiving the image through the optical lens at a first angle with respect to the display and deflecting the image at a second angle different than the first angle with respect to the display. The one-dimensional retro reflective array reflects the image in order to project the image onto an eye of the user.
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo method; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
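As a minimal illustration of the "probability that a function of random variables is less than a constant" formulation (the simple Monte Carlo option above, without importance sampling or the FPI machinery), the sketch below estimates a failure probability for an invented strength-versus-stress limit state; the distributions and numbers are not from the report.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Hypothetical lamina: strength and applied stress are random variables (MPa).
strength = rng.normal(loc=1500.0, scale=120.0, size=n)       # invented distribution
stress   = rng.lognormal(mean=np.log(1000.0), sigma=0.10, size=n)

# Failure surface g = strength - stress = 0; failure occurs when g < 0.
g = strength - stress
pf = np.mean(g < 0.0)
reliability = 1.0 - pf

# Standard error of the simple Monte Carlo estimator of the failure probability
se = np.sqrt(pf * (1.0 - pf) / n)
print(f"Pf = {pf:.2e} +/- {se:.1e}, reliability = {reliability:.6f}")
```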
Solving Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1987-01-01
Initial-value ordinary differential equation solution via variable order Adams method (SIVA/DIVA) package is a collection of subroutines for solution of nonstiff ordinary differential equations. There are versions for single-precision and double-precision arithmetic. Requires fewer evaluations of derivatives than other variable-order Adams predictor/corrector methods. Option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.
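The direct second-order integration option is specific to the SIVA/DIVA FORTRAN package; for comparison, the sketch below shows the standard alternative it improves upon, converting a second-order trajectory-like equation to a first-order system and integrating it with SciPy's LSODA solver (which switches between Adams and BDF methods). The test problem, a harmonic oscillator, is invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Second-order test problem x'' = -x, rewritten as the first-order system y = (x, v)
def rhs(t, y):
    x, v = y
    return [v, -x]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], method="LSODA",
                rtol=1e-9, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 20.0, 5)
print(np.max(np.abs(sol.sol(t)[0] - np.cos(t))))   # error against the exact solution cos(t)
```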
A Method of Character Detection and Segmentation for Highway Guide Signs
NASA Astrophysics Data System (ADS)
Xu, Jiawei; Zhang, Chongyang
2018-01-01
In this paper, a method of character detection and segmentation for highway signs in China is proposed. It consists of four steps. Firstly, the highway sign area is detected by colour and geometric features, and the possible character regions are obtained by a multi-level projection strategy. Secondly, pseudo-target character regions are removed using local binary pattern (LBP) features. Thirdly, a convolutional neural network (CNN) is used to classify target regions. Finally, adaptive projection strategies are used to segment character strings. Experimental results indicate that the proposed method achieves new state-of-the-art results.
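The multi-level and adaptive projection strategies are specific to the paper, but the basic projection-profile idea can be sketched as follows: after binarization, the column sums of a text band drop to zero in the gaps between characters, and those gaps define the segmentation cuts. The binary image below is synthetic.

```python
import numpy as np

def segment_by_projection(binary_img):
    """Split a binarized text line into character boxes using the vertical
    projection profile (column sums); returns (start, end) column pairs."""
    profile = binary_img.sum(axis=0)
    in_char, boxes, start = False, [], 0
    for col, count in enumerate(profile):
        if count > 0 and not in_char:
            in_char, start = True, col
        elif count == 0 and in_char:
            in_char = False
            boxes.append((start, col))
    if in_char:
        boxes.append((start, len(profile)))
    return boxes

# Synthetic "two characters" separated by a blank column gap
img = np.zeros((10, 20), dtype=np.uint8)
img[2:8, 2:7] = 1          # first glyph
img[2:8, 11:18] = 1        # second glyph
print(segment_by_projection(img))    # -> [(2, 7), (11, 18)]
```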
Tensor-GMRES method for large sparse systems of nonlinear equations
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
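The tensor model and its solution are specific to the paper; as a hedged reference point for the Newton-GMRES machinery it builds on, the sketch below solves a small invented nonlinear system with SciPy's Jacobian-free Newton-Krylov solver, in which each Newton step is solved inexactly by a Krylov method and the Jacobian is never formed or factorized.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Small nonlinear reaction-diffusion-style system: second difference of u
# plus a nonlinear source, discretized on 50 interior points with zero
# Dirichlet boundary values.
N, h = 50, 1.0 / 51

def residual(u):
    d2u = np.empty_like(u)
    d2u[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    d2u[0]    = (u[1] - 2 * u[0]) / h**2
    d2u[-1]   = (u[-2] - 2 * u[-1]) / h**2
    return d2u + np.exp(u) - 10.0                 # F(u) = 0 to be solved

# Jacobian-free Newton-Krylov: each Newton step is solved (inexactly) with a
# Krylov method, so the Jacobian is only applied via finite differences.
u0 = np.zeros(N)
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10)
print(np.linalg.norm(residual(u)))
```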
NASA Astrophysics Data System (ADS)
Kuleshov, Alexander S.; Katasonova, Vera A.
2018-05-01
The problem of rolling without slipping of a rotationally symmetric rigid body on a sphere is considered. The rolling body is assumed to be subjected to forces whose resultant is directed from the center of mass G of the body to the center O of the sphere and depends only on the distance between G and O. In this case the solution of the problem is reduced to solving a second order linear differential equation for the projection of the angular velocity of the body onto its axis of symmetry. Using the Kovacic algorithm we search for Liouvillian solutions of the corresponding second order differential equation in the case when the rolling body is a dynamically symmetric ball.
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N^3) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations in order to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature in order to reduce the computational cost of the (back)projection operator to O(N^2 log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N^2 log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of possibly redundant inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
Uğurbil, Kamil; Xu, Junqian; Auerbach, Edward J.; Moeller, Steen; Vu, An; Duarte-Carvajalino, Julio M.; Lenglet, Christophe; Wu, Xiaoping; Schmitter, Sebastian; Van de Moortele, Pierre Francois; Strupp, John; Sapiro, Guillermo; De Martino, Federico; Wang, Dingxin; Harel, Noam; Garwood, Michael; Chen, Liyong; Feinberg, David A.; Smith, Stephen M.; Miller, Karla L.; Sotiropoulos, Stamatios N; Jbabdi, Saad; Andersson, Jesper L; Behrens, Timothy EJ; Glasser, Matthew F.; Van Essen, David; Yacoub, Essa
2013-01-01
The human connectome project (HCP) relies primarily on three complementary magnetic resonance (MR) methods. These are: 1) resting state functional MR imaging (rfMRI), which uses correlations in the temporal fluctuations in an fMRI time series to deduce ‘functional connectivity’; 2) diffusion imaging (dMRI), which provides the input for tractography algorithms used for the reconstruction of the complex axonal fiber architecture; and 3) task-based fMRI (tfMRI), which is employed to identify functional parcellation in the human brain in order to assist analyses of data obtained with the first two methods. We describe technical improvements and optimization of these methods, as well as instrumental choices that impact the speed of acquisition of fMRI and dMRI images at 3 Tesla, leading to whole-brain coverage with 2 mm isotropic resolution in 0.7 seconds for fMRI, and 1.25 mm isotropic resolution dMRI data for tractography analysis with a three-fold reduction in total data acquisition time. Ongoing technical developments and optimization for acquisition of similar data at a 7 Tesla magnetic field are also presented, targeting higher resolution, specificity of functional imaging signals, and mitigation of the inhomogeneous radio frequency (RF) fields and power deposition. Results demonstrate that, overall, these approaches represent a significant advance in MR imaging of the human brain to investigate brain function and structure. PMID:23702417
Parallel Cartesian grid refinement for 3D complex flow simulations
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2013-11-01
A second-order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need to synchronize the solution between different levels of refinement and the use of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized by using a novel second-order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equations are also discretized with second-order accuracy, and the high-performance Newton-Krylov method is used to integrate them in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables accurate multi-resolution simulations of real-life flows. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
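At the heart of any fractional-step solver is the projection that renders the intermediate velocity divergence free. The sketch below performs that projection spectrally on a small periodic grid as a minimal stand-in for the unstructured, Krylov-based pressure-Poisson solve described above; the periodic FFT setting and the function name are simplifying assumptions chosen purely for illustration.

```python
import numpy as np

def project_divergence_free(u_star, v_star, dx, dy):
    """Spectral projection of an intermediate velocity field on a periodic box.

    Solves lap(phi) = div(u*) in Fourier space and subtracts grad(phi), so the
    corrected field is divergence free to machine precision.  This is a
    structured, periodic stand-in for the implicit fractional-step
    pressure-Poisson solve of the abstract (which uses unstructured Cartesian
    grids and a Krylov solver).
    """
    ny, nx = u_star.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)     # spectral d/dx
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dy)     # spectral d/dy
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX ** 2 + KY ** 2                         # spectral Laplacian symbol
    k2[0, 0] = 1.0                                 # avoid 0/0; phi has zero mean

    u_hat, v_hat = np.fft.fft2(u_star), np.fft.fft2(v_star)
    div_hat = KX * u_hat + KY * v_hat
    phi_hat = div_hat / k2
    phi_hat[0, 0] = 0.0

    u = np.real(np.fft.ifft2(u_hat - KX * phi_hat))
    v = np.real(np.fft.ifft2(v_hat - KY * phi_hat))
    return u, v

# quick check: the projected field should be discretely divergence free
n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u_star = np.cos(X) * np.sin(Y) + 0.3 * np.sin(2 * X)
v_star = np.sin(X) * np.cos(Y) + 0.1 * np.cos(Y)
u, v = project_divergence_free(u_star, v_star, L / n, L / n)
kx = 2j * np.pi * np.fft.fftfreq(n, L / n)
div = np.fft.ifft2(kx[None, :] * np.fft.fft2(u) + kx[:, None] * np.fft.fft2(v))
print("max |div| after projection:", float(np.abs(div).max()))
```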
Constrained Kinematics of ICMEs from Multi-point in Situ and Heliospheric Imaging Data
NASA Astrophysics Data System (ADS)
Rollett, T.; Temmer, M.; Moestl, C.; Veronig, A. M.; Lugaz, N.; Vrsnak, B.; Farrugia, C. J.; Amerstorfer, U.
2013-12-01
The constrained harmonic mean (CHM) method is used to calculate the direction of motion of ICMEs and their kinematical profiles. Combining single-spacecraft white-light observations from STEREO/HI with supplementary in situ data, it is possible to derive the propagation speed as it varies with heliocentric distance. This is a major advantage over other single-viewpoint methods, e.g., fitting methods, which assume a constant propagation speed. We show two different applications of the CHM method: first, an analysis of the interaction between the solar wind and ICMEs, and second, the interaction between two ICMEs. For analyzing interaction processes it is crucial to use a method that is able to resolve the corresponding effects on ICME kinematics. Additionally, we show the analysis of an exceptionally fast ICME event of March 2012, which was detected in situ by Venus Express, Messenger and Wind and also observed by STEREO-A/HI. Thanks to these multiple in situ measurements, it was possible to constrain the ICME kinematics by three different boundary values. These studies are fundamental in order to deepen the understanding of ICME evolution and to enhance existing forecasting methods. This work has received funding from the European Commission FP7 Project COMESEP (263252).
NASA Astrophysics Data System (ADS)
Mester, Dávid; Nagy, Péter R.; Kállay, Mihály
2018-03-01
A reduced-cost implementation of the second-order algebraic-diagrammatic construction [ADC(2)] method is presented. We introduce approximations by restricting virtual natural orbitals and natural auxiliary functions, which results, on average, in more than an order of magnitude speedup compared to conventional, density-fitting ADC(2) algorithms. The present scheme is the successor of our previous approach [D. Mester, P. R. Nagy, and M. Kállay, J. Chem. Phys. 146, 194102 (2017)], which has been successfully applied to obtain singlet excitation energies with the linear-response second-order coupled-cluster singles and doubles model. Here we report further methodological improvements and the extension of the method to compute singlet and triplet ADC(2) excitation energies and transition moments. The various approximations are carefully benchmarked, and conservative truncation thresholds are selected which guarantee errors much smaller than the intrinsic error of the ADC(2) method. Using the canonical values as reference, we find that the mean absolute error for both singlet and triplet ADC(2) excitation energies is 0.02 eV, while that for oscillator strengths is 0.001 a.u. The rigorous cutoff parameters together with the significantly reduced operation count and storage requirements allow us to obtain accurate ADC(2) excitation energies and transition properties using triple-ζ basis sets for systems of up to one hundred atoms.
NASA Technical Reports Server (NTRS)
Myhill, Elizabeth A.; Boss, Alan P.
1993-01-01
In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementation of high-order accurate finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.
Karimi, Hamid Reza; Gao, Huijun
2008-07-01
A mixed H2/H∞ output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired controller are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/H∞ performance for the closed-loop system of the second-order neutral linear system, is then developed directly, instead of first converting the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/H∞ output-feedback control design, which uses some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.
On the maximum principle for complete second-order elliptic operators in general domains
NASA Astrophysics Data System (ADS)
Vitolo, Antonio
This paper is concerned with the maximum principle for second-order linear elliptic equations in wide generality. By means of a geometric condition previously stressed by Berestycki-Nirenberg-Varadhan, Cabré was able to improve the classical ABP estimate, obtaining the maximum principle also in unbounded domains, such as infinite strips and open connected cones whose closure differs from the whole space. Here we introduce a new geometric condition that extends the result to a more general class of domains including the complements of hypersurfaces, such as the cut plane. The methods developed here allow us to deal with complete second-order equations, in which the admissible first-order term, forced to be zero in a preceding result with Cafagna, depends on the geometry of the domain.
A Proposed Method for the Computer-aided Discovery and Design of High-strength, Ductile Metals
NASA Astrophysics Data System (ADS)
Winter, Ian Stewart
Gum Metal, a class of Ti-Nb alloys, has generated a great deal of interest in the metallurgical community since its development in 2003. These alloys display numerous novel and anomalous properties, many of which occur only after severe plastic deformation has been imposed on the material. Such properties include super-elasticity, super cold-workability, Invar and Elinvar behavior, high ductility, and high strength. The high strength of gum metal has generated particular enthusiasm, as it is on the order of the predicted ideal strength of the material. Many of the properties of gum metal appear to be a direct result of tuning the composition to be near an elastic instability, resulting in a high degree of elastic anisotropy. This presents an opportunity for the computer-aided discovery and design of structural materials, as the ideal strength and elastic anisotropy can be approximated from the elastic constants. Two approaches for searching for this high anisotropy are described. In the first, the possibility of forming gum metal in Mg is explored by tuning the material to be near the BCC-HCP transition, either by pressure or by alloying with Li. The second makes use of the Materials Project's elastic constants database, which contains thousands of ordered compounds, in order to screen for gum metal candidates. By defining an elastic anisotropy parameter consistent with the behavior of gum metal and calculating it for all cubic materials in the elastic constants database, several gum metal candidates are found. In order to better assess their candidacy, information on the intrinsic ductility of these materials is necessary. A method is proposed for calculating the ideal strength and deformation mode of a solid solution from first principles. In order to validate this method, the intrinsic ductile-to-brittle transition composition of Ti-V systems is calculated. It is further shown that this method can be applied to the calculation of an ideal tensile yield surface.
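The specific anisotropy parameter defined in the thesis is not given in the abstract; as a familiar stand-in, the sketch below screens cubic materials with the classic Zener ratio A = 2*C44/(C11 - C12), which diverges as the tetragonal shear modulus C' vanishes, i.e. near the elastic instability discussed above. The listed elastic constants are approximate literature values used only for illustration.

```python
def zener_anisotropy(c11, c12, c44):
    """Zener ratio A = 2*C44 / (C11 - C12) for a cubic crystal.

    A diverges as the tetragonal shear modulus C' = (C11 - C12)/2 -> 0,
    i.e. near the elastic instability the screening above looks for.  The
    thesis defines its own gum-metal-consistent parameter; the Zener ratio
    is used here only as a familiar stand-in.
    """
    c_prime = 0.5 * (c11 - c12)
    if c_prime <= 0:
        raise ValueError("elastically unstable cubic crystal (C' <= 0)")
    return c44 / c_prime

# approximate literature elastic constants (GPa), for illustration only
candidates = {
    "Nb (bcc)": (246, 134, 28),
    "Cu (fcc)": (168, 121, 75),
    "Al (fcc)": (107, 61, 28),
}
for name, (c11, c12, c44) in candidates.items():
    print(f"{name:9s} A = {zener_anisotropy(c11, c12, c44):5.2f}")
```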
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, resulting in an uncertain ensemble that combines non-parametric uncertainties with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) method and a second-order fuzzy interval perturbation FE/SEA (SFIPFE/SEA) method are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining the second-order terms of the Taylor series, with all mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
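A minimal sketch of the first-order interval perturbation step mentioned above is given below: the response interval is approximated from a first-order Taylor expansion about the interval midpoints, with finite-difference gradients. The fuzzy layer (sweeping membership cut levels) and the SEA coupling are omitted, and the toy response function is an assumption for illustration only.

```python
import numpy as np

def first_order_interval_bounds(f, x_mid, x_rad, h=1e-6):
    """First-order Taylor (interval perturbation) bounds on a response.

    For interval parameters [x_mid - x_rad, x_mid + x_rad], the response
    interval is approximated as f(x_mid) +/- sum_i |df/dx_i| * x_rad_i, with
    gradients taken by central finite differences.  The fuzzy layer of the
    abstract (sweeping alpha-cut levels) is omitted in this sketch.
    """
    x_mid = np.asarray(x_mid, dtype=float)
    x_rad = np.asarray(x_rad, dtype=float)
    y0 = f(x_mid)
    grad = np.zeros_like(x_mid)
    for i in range(x_mid.size):
        step = h * max(1.0, abs(x_mid[i]))
        e = np.zeros_like(x_mid)
        e[i] = step
        grad[i] = (f(x_mid + e) - f(x_mid - e)) / (2 * step)
    half_width = np.abs(grad) @ x_rad
    return y0 - half_width, y0 + half_width

# toy response: natural frequency of a 1-DOF oscillator, f = sqrt(k/m) / (2*pi)
freq = lambda x: np.sqrt(x[0] / x[1]) / (2 * np.pi)   # x = (stiffness, mass)
lo, hi = first_order_interval_bounds(freq, x_mid=[1e4, 2.0], x_rad=[500.0, 0.1])
print(f"frequency interval ~ [{lo:.3f}, {hi:.3f}] Hz")
```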
Apparatus and methods for using achromatic phase matching at high orders of dispersion
Richman, Bruce; Trebino, Rick; Bisson, Scott; Sidick, Erkin
2001-01-01
Achromatic phase-matching (APM) is used for efficiently multiplying the frequency of broad-bandwidth light by using a nonlinear optical medium comprising a second-harmonic generation (SHG) crystal. The apparatus comprises stationary optical elements whose configuration, properties, and arrangement have been optimized to match the dispersion characteristics of the SHG crystal to at least the second order. These elements include a plurality of prismatic elements for directing an input light beam onto the SHG crystal such that each ray is aligned to match the phase-matching angle of the crystal at each wavelength of light to at least the second order, and such that rays of every wavelength overlap within the crystal.
Maxwell's second- and third-order equations of transfer for non-Maxwellian gases
NASA Technical Reports Server (NTRS)
Baganoff, D.
1992-01-01
Condensed algebraic forms for Maxwell's second- and third-order equations of transfer are developed for the case of molecules described by either elastic hard spheres, inverse-power potentials, or Bird's variable hard-sphere model. These condensed, yet exact, equations provide a new point of origin, when using the moment method, in seeking approximate solutions in the kinetic theory of gases for molecular models that are physically more realistic than that provided by the Maxwell model. An important by-product of the analysis when using these second- and third-order relations is that a clear mathematical connection emerges between Bird's variable hard-sphere model and that for the inverse-power potential.
Implicit multiplane 3D camera calibration matrices for stereo image processing
NASA Astrophysics Data System (ADS)
McKee, James W.; Burgett, Sherrie J.
1997-12-01
By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high-speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (fiber projection in second image) is calculated. Each intersection point is mapped back to the 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
NASA Astrophysics Data System (ADS)
Geiger, Tobias
2018-04-01
Gross domestic product (GDP) represents a widely used metric to compare economic development across time and space. GDP estimates have been routinely assembled only since the beginning of the second half of the 20th century, making comparisons with prior periods cumbersome or even impossible. In recent years various efforts have been put forward to re-estimate national GDP for specific years in the past centuries and even millennia, providing new insights into past economic development on a snapshot basis. In order to make this wealth of data utilizable across research disciplines, we here present a first continuous and consistent data set of GDP time series for 195 countries from 1850 to 2009, based mainly on data from the Maddison Project and other population and GDP sources. The GDP data are consistent with Penn World Tables v8.1 and future GDP projections from the Shared Socio-economic Pathways (SSPs), and are freely available at http://doi.org/10.5880/pik.2018.010 (Geiger and Frieler, 2018). To ease usability, we additionally provide GDP per capita data and further supplementary and data description files in the online archive. We utilize various methods to handle missing data and discuss the advantages and limitations of our methodology. Despite known shortcomings this data set provides valuable input, e.g., for climate impact research, in order to consistently analyze economic impacts from pre-industrial times to the future.
Parametric instability analysis of truncated conical shells using the Haar wavelet method
NASA Astrophysics Data System (ADS)
Dai, Qiyi; Cao, Qingjie
2018-05-01
In this paper, the Haar wavelet method is employed to analyze the parametric instability of truncated conical shells under static and time-dependent periodic axial loads. The present work is based on the Love first-approximation theory for classical thin shells. The displacement field is expressed as a Haar wavelet series in the axial direction and trigonometric functions in the circumferential direction. The partial differential equations are then reduced to a system of coupled Mathieu-type ordinary differential equations describing the dynamic instability behavior of the shell. Using Bolotin's method, the first-order and second-order approximations of the principal instability regions are determined. The correctness of the present method is examined by comparing the results with those in the literature, and very good agreement is observed. The difference between the first-order and second-order approximations of the principal instability regions for tensile and compressive loads is also investigated. Finally, numerical results are presented to bring out the influences of various parameters, such as static load factors, boundary conditions and shell geometrical characteristics, on the domains of parametric instability of conical shells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ZHANG, H; Huang, J; Ma, J
2014-06-15
Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data sets at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are about 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. Conclusion: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
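The correlation analysis described above can be illustrated with a short sketch that estimates neighbor correlation coefficients of the projection noise from repeated measurements. The 1-D detector row, the synthetic correlated noise, and the function name are assumptions of this sketch, not the authors' processing chain (which also includes lag correction).

```python
import numpy as np

def neighbor_noise_correlation(repeats, order=1):
    """Average correlation coefficient between detector bins `order` apart.

    `repeats` has shape (n_repeats, n_bins): repeated measurements of the same
    projection view.  The noise is the deviation from the per-bin mean over
    repeats; the correlation is averaged over all bin pairs separated by
    `order` along the detector row (a 1-D simplification of the repeated-
    projection analysis in the abstract above).
    """
    noise = repeats - repeats.mean(axis=0, keepdims=True)
    a, b = noise[:, :-order], noise[:, order:]
    num = (a * b).mean(axis=0)
    den = a.std(axis=0) * b.std(axis=0) + 1e-12
    return float(np.mean(num / den))

# synthetic check: correlated noise built by smoothing white noise
rng = np.random.default_rng(0)
white = rng.normal(size=(500, 256))
kernel = np.array([0.25, 0.5, 0.25])
correlated = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, white)
for k in (1, 2, 3):
    print(f"order-{k} neighbor correlation: {neighbor_noise_correlation(correlated, k):.3f}")
```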
The Stories of Inventions: An Interdisciplinary, Project-Based Unit for U.S. History Students
ERIC Educational Resources Information Center
Nargund-Joshi, Vanashri; Bragg, John
2017-01-01
During the second industrial revolution (1870-1914), scientists moved away from trial-and-error methods to more systematically apply the principles of chemistry, physics, and biology (Mokyr 1998). The authors chose this period as the foundation of a project-based learning (PBL) unit integrated with the ninth-grade U.S. history curriculum (Thomas…
The TeachScheme! Project: Computing and Programming for Every Student
ERIC Educational Resources Information Center
Felleisen, Matthias; Findler, Robert Bruce; Flatt, Matthew; Krishnamurthi, Shriram
2004-01-01
The TeachScheme! Project aims to reform three aspects of introductory programming courses in secondary schools. First, we use a design method that asks students to develop programs in a stepwise fashion such that each step produces a well-specified intermediate product. Second, we use an entire series of sublanguages, not just one. Each element of…
Agile Methods: Selected DoD Management and Acquisition Concerns
2011-10-01
SIDRE: Software Intensive Innovative Development and Reengineering/Evolution; SLIM: Software Lifecycle Management - Estimate; SLOC: source lines of code. ... ISBN #0321502752; Coaching Agile Teams, Lyssa Adkins, ISBN #0321637704; Agile Project Management: Creating Innovative Products, Second Edition, Jim... Accessed July 13, 2011. [Highsmith 2009] Highsmith, J. Agile Project Management: Creating Innovative Products, 2nd ed. Addison-Wesley, 2009.
Segmentation of blurred objects using wavelet transform: application to x-ray images
NASA Astrophysics Data System (ADS)
Barat, Cecile S.; Ducottet, Christophe; Bilgot, Anne; Desbat, Laurent
2004-02-01
First, we present a wavelet-based algorithm for edge detection and characterization, which is an adaptation of Mallat and Hwang's method. This algorithm relies on a modelization of contours as smoothed singularities of three particular types (transitions, peaks and lines). On the one hand, it allows edges to be detected and located at an adapted scale. On the other hand, it is able to identify the type of each detected edge point and to measure its amplitude and smoothing size. The latter parameters represent, respectively, the contrast and the smoothness level of the edge point. Second, we explain how this method has been integrated into a 3D bone surface reconstruction algorithm designed for computer-assisted and minimally invasive orthopaedic surgery. In order to decrease the dose to the patient and to obtain a 3D image rapidly, we propose to identify a bone shape from a few X-ray projections by using statistical shape models registered to segmented X-ray projections. We apply this approach to pedicle screw insertion (scoliosis, fractures...), where ten to forty percent of the screws are known to be misplaced. In this context, the proposed edge detection algorithm makes it possible to overcome the major problem of vertebrae segmentation in the X-ray images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudeen, David Keith; Weber, Paula D.; Lord, David L.
The U.S. Strategic Petroleum Reserve implemented the first stage of a leach plan in 2011-2012 to expand storage volume in the existing Bryan Mound 113 cavern from a starting volume of 7.4 million barrels (MMB) to its design volume of 11.2 MMB. The first stage was terminated several months earlier than expected, in August 2012, as the upper section of the leach zone expanded outward more quickly than designed. The oil-brine interface was then re-positioned with the intent to resume leaching in the second-stage configuration. This report evaluates the as-built configuration of the cavern at the end of the first stage, and recommends changes to the second-stage plan in order to accommodate the variance between the first-stage plan and the as-built cavern. SANSMIC leach code simulations are presented and compared with sonar surveys in order to aid in the analysis and offer projections of likely outcomes from the revised plan for the second-stage leach.
NASA Astrophysics Data System (ADS)
de Laborderie, J.; Duchaine, F.; Gicquel, L.; Vermorel, O.; Wang, G.; Moreau, S.
2018-06-01
Large-Eddy Simulation (LES) is recognized as a promising method for high-fidelity flow predictions in turbomachinery applications. The presented approach consists of the coupling of several instances of the same unstructured LES solver through an overset grid method. A high-order interpolation, implemented within this coupling method, is introduced and evaluated on several test cases. It is shown to be third-order accurate, to preserve the accuracy of various second- and third-order convective schemes, and to ensure the continuity of diffusive fluxes and subgrid-scale tensors even in detrimental interface configurations. In this analysis, three types of spurious waves generated at the interface are identified. They are significantly reduced by the high-order interpolation at the interface. Since the latter has the same cost as the original lower-order method, the high-order overset grid method appears to be a promising alternative for all of these applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, R.S.
The objectives of this project are to develop methods for the evaluation of syntheses of gaseous fuels in terms of their optimum possible performance, particularly when they are required to supply those fuels at nonzero rates. The first objective is entirely in the tradition of classical thermodynamics: evaluating the processes, given the characteristics and constraints that define them. The new element which this project introduces is the capability to set limits more realistic than those from classical thermodynamics, by including the influence of the rate or duration of a process on its performance. The development of these analyses is a natural step in the evolution represented by the evaluative papers of Appendix IV, e.g., by Funk et al., Abraham, Shinnar, Bilgen and Fletcher. A second objective is to determine how any given process should be carried out, within its constraints, in order to yield its optimum performance, and to use this information whenever possible to help guide the design of that process.
Status of the Large Underground Xenon (LUX) Detector
NASA Astrophysics Data System (ADS)
Larsen, Nicole
2012-03-01
The LUX (Large Underground Xenon) experiment is a 350-kg xenon-based direct dark matter detection experiment consisting of a two-phase (liquid/gas) xenon time projection chamber with a 100-kg fiducial mass. This technology has many advantages, including scalability, self-shielding, the absence of any long-lived isotopes, high gamma-ray stopping power, and the ability to precisely measure the charge-to-light ratio of interactions within the detector, which provides an accurate method for discriminating between electron recoils (gamma rays, beta decays) and nuclear recoils (neutrons, WIMPs) within the detector. LUX's projected sensitivity for 300 days of acquisition is a cross-section of 7 × 10^-46 cm^2 for a WIMP mass of 100 GeV, representing an improvement of nearly an order of magnitude over previous WIMP cross-section limits. From November 2011 through February 2012, LUX was deployed in a surface laboratory at the Homestake Mine in South Dakota for its second surface run. This talk will provide an overview of the LUX design and a report on the status of the experiment after the surface run and before underground deployment.
An integrated optical CO2 sensor. Phase 0: Design and fabrication of critical elements
NASA Technical Reports Server (NTRS)
Murphy, Michael C.; Kelly, Kevin W.; Li, B. Q.; Ma, EN; Wang, Wanjun; Vladimirsky, Yuli; Vladimirsky, Olga
1994-01-01
Significant progress has been made toward all of the goals for the first phase of the project, short of the actual fabrication of a light path. Two alternative approaches to fabricating gold mirrors using the basic LIGA process were developed, one using electroplated solid gold mirrors and the second using gold plated over a nickel base. A new method of fabrication, the transfer mask process, was developed and demonstrated. Analysis of the projected surface roughness and beam divergence effects was completed. With a gold surface of low surface roughness, scattering losses are expected to be insignificant. Beam divergence due to diffraction will require a modification of the original design, but should be eliminated by fabricating mirrors 1000 μm in height by 1000 μm in width and using a source with an initial beam radius greater than 300 μm. This may eliminate any need for focusing optics. Since the modified design does not affect the mask layout, ordering of the mask and fabrication of the test structures can begin immediately at the start of Phase 1.
NASA Astrophysics Data System (ADS)
Josey, C.; Forget, B.; Smith, K.
2017-12-01
This paper introduces two families of A-stable algorithms for the integration of y' = F(y, t) y: the extended predictor-corrector (EPC) and the exponential-linear (EL) methods. The structure of the algorithm families is described, and the method of deriving the coefficients is presented. The new algorithms are then tested on a simple deterministic problem and a Monte Carlo isotopic evolution problem. The EPC family is shown to be only second order for systems of ODEs. However, the EPC-RK45 algorithm had the highest accuracy on the Monte Carlo test, requiring at least a factor of 2 fewer function evaluations to achieve a given accuracy than a second-order predictor-corrector method (center extrapolation / center midpoint method) with regard to the Gd-157 concentration. Members of the EL family can be derived to at least fourth order. The EL3 and EL4 algorithms presented are shown to be third and fourth order, respectively, on the systems-of-ODEs test. In the Monte Carlo test, these methods did not overtake the accuracy of the EPC methods before statistical uncertainty dominated the error. The statistical properties of the algorithms were also analyzed during the Monte Carlo problem. The new methods are shown to yield smaller standard deviations on final quantities than the reference predictor-corrector method, by up to a factor of 1.4.
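For context, the sketch below implements a conventional depletion-style predictor-corrector step for y' = F(y, t) y using matrix exponentials, i.e. the kind of second-order reference scheme the new EPC/EL algorithms are compared against; it is not the paper's center-extrapolation/center-midpoint variant nor the EPC/EL methods themselves, and the toy 2x2 rate matrix is an assumption for illustration.

```python
import numpy as np
from scipy.linalg import expm

def predictor_corrector_step(F, y0, t, dt):
    """One conventional predictor-corrector step for y' = F(y, t) y.

    Predictor: propagate with the matrix evaluated at the beginning of the
    step.  Corrector: propagate again with the matrix evaluated at the
    predicted end-of-step state, and average the two end states.  This is a
    generic second-order reference scheme, not the paper's EPC/EL algorithms
    or its exact center-extrapolation variant.
    """
    A0 = F(y0, t)
    y_pred = expm(A0 * dt) @ y0          # predictor
    A1 = F(y_pred, t + dt)
    y_corr = expm(A1 * dt) @ y0          # corrector
    return 0.5 * (y_pred + y_corr)

# toy nonlinear test: a 2x2 conservative rate matrix that depends on the state
def F(y, t):
    return np.array([[-1.0 - 0.1 * y[1],  0.2],
                     [ 1.0 + 0.1 * y[1], -0.2]])

y, t, dt = np.array([1.0, 0.0]), 0.0, 0.05
for _ in range(200):
    y = predictor_corrector_step(F, y, t, dt)
    t += dt
print("state at t =", round(t, 2), ":", y)
```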
Probabilistic margin evaluation on accidental transients for the ASTRID reactor project
NASA Astrophysics Data System (ADS)
Marquès, Michel
2014-06-01
ASTRID is a technological demonstrator of the Sodium-cooled Fast Reactor (SFR) under development. The conceptual design studies are being conducted in accordance with the Generation IV reactor objectives, particularly in terms of improving safety. For the hypothetical events belonging to the accidental category "severe accident prevention situations", which have a very low frequency of occurrence, the safety demonstration is no longer based on a deterministic demonstration with conservative assumptions on models and parameters but on a "Best-Estimate Plus Uncertainty" (BEPU) approach. This BEPU approach is presented in this paper for an Unprotected Loss-of-Flow (ULOF) event. The Best-Estimate (BE) analysis of this ULOF transient is performed with the CATHARE2 code, which is the French reference system code for SFR applications. The objective of the BEPU analysis is twofold: first, to evaluate the safety margin to sodium boiling while taking into account the uncertainties on the input parameters of the CATHARE2 code (twenty-two uncertain input parameters have been identified, which can be classified into five groups: reactor power, accident management, pump characteristics, reactivity coefficients, thermal parameters and head losses); second, to quantify the contribution of each input uncertainty to the overall uncertainty of the safety margins, in order to refocus R&D efforts on the most influential factors. This paper focuses on the methodological aspects of the evaluation of the safety margin. At least for the preliminary phase of the project (conceptual design), a probabilistic criterion has been fixed in the context of this BEPU analysis; this criterion is the value of the margin to sodium boiling that has a 95% probability of being exceeded, obtained with a confidence level of 95% (i.e. the M5,95 percentile of the margin distribution). This paper presents two methods used to assess this percentile: the Wilks method and the Bootstrap method; the effectiveness of the two methods is compared on the basis of 500 simulations performed with the CATHARE2 code. We conclude that, with only 100 simulations performed with the CATHARE2 code, which is a workable number of simulations in the conceptual design phase of the ASTRID project, where the models and the hypotheses are often modified, it is best, in order to evaluate the percentile M5,95 of the margin to sodium boiling, to use the bootstrap method, which will provide a slightly conservative result. On the other hand, in order to obtain an accurate estimation of the percentile M5,95, for the safety report for example, it will be necessary to perform at least 300 simulations with the CATHARE2 code. In this case, both methods (Wilks and Bootstrap) would give equivalent results.
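A minimal sketch of the bootstrap route to the M5,95 value is given below: the 5th percentile of the margin sample is bootstrapped, and the lower 5% end of its bootstrap distribution is taken as the value exceeded with roughly 95% confidence. The synthetic margin sample and the function name are assumptions; the Wilks route would instead read off an order statistic of the sorted sample.

```python
import numpy as np

def percentile_95_95(margins, q=5.0, n_boot=10000, seed=0):
    """Conservative estimate of the q-th percentile at ~95% confidence.

    Bootstrap the q-th percentile of the margin sample and return the lower
    5% end of its bootstrap distribution, i.e. a value that the true q-th
    percentile exceeds with roughly 95% confidence.  This is a generic sketch
    of the bootstrap route mentioned in the abstract, not the exact procedure
    used for ASTRID.
    """
    rng = np.random.default_rng(seed)
    margins = np.asarray(margins, dtype=float)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(margins, size=margins.size, replace=True)
        boot[b] = np.percentile(resample, q)
    return np.percentile(boot, 5.0)

# illustrative margins-to-boiling (arbitrary units) from 100 hypothetical code runs
rng = np.random.default_rng(42)
margins = rng.normal(loc=50.0, scale=8.0, size=100)
print("best-estimate 5th percentile  :", round(np.percentile(margins, 5.0), 2))
print("bootstrap 95/95 estimate M5,95:", round(percentile_95_95(margins), 2))
```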
Tomography of the East African Rift System in Mozambique
NASA Astrophysics Data System (ADS)
Domingues, A.; Silveira, G. M.; Custodio, S.; Chamussa, J.; Lebedev, S.; Chang, S. J.; Ferreira, A. M. G.; Fonseca, J. F. B. D.
2014-12-01
Unlike the majority of the East African Rift, the Mozambique region has not been studied in depth, not only due to political instability but also because of the difficult access to its most interior regions. An earthquake with M7 occurred in Machaze in 2006, which triggered the investigation of this particular region. The MOZART project (funded by FCT, Lisbon) installed a temporary seismic network, with a total of 30 broadband stations from the SEIS-UK pool, from April 2011 to July 2013. Preliminary locations of the seismicity were estimated with the data recorded from April 2011 to July 2012. A total of 307 earthquakes were located, with ML magnitudes ranging from 0.9 to 3.9. We observe a linear northeast-southwest distribution of the seismicity that appears to be associated with the Inhaminga fault. The seismicity extends over ~300 km, reaching the Machaze earthquake area. The northeast sector of the seismicity shows a good correlation with the topography, tracing the Urema rift valley. In order to obtain an initial velocity model of the region, the ambient noise method is used. This method is applied to the entire available data set and to two additional stations of the AfricaARRAY project. Ambient noise surface wave tomography is made possible by computing cross-correlations between all pairs of stations and measuring the group velocities for all interstation paths. With this approach we obtain Rayleigh wave group velocity dispersion curves in the period range from 3 to 50 seconds. Group velocity maps are calculated for several periods, allowing for a geological and tectonic interpretation. In order to extend the investigation to longer wave periods and thus probe both the crust and upper mantle, we apply a recent implementation of the surface-wave two-station method (teleseismic interferometry; Meier et al., 2004) to augment our dataset with Rayleigh wave phase velocity curves over a broad period range. Using this method we expect to be able to explore the lithosphere-asthenosphere depth range beneath Mozambique.
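The core of the ambient-noise step above is the cross-correlation of continuous records between station pairs. The sketch below correlates two synthetic noise traces and picks the lag of maximum correlation; real processing (whitening, one-bit normalization, long-term stacking, dispersion measurement) is omitted, and the sampling rate and delay are illustrative assumptions.

```python
import numpy as np

def noise_cross_correlation(trace_a, trace_b, dt, max_lag_s):
    """Cross-correlate two continuous noise records and return (lags, ccf).

    Positive lag means the signal arrives later at station B.  A single
    correlation of synthetic records stands in for the real workflow, in
    which whitened, one-bit-normalized day-long segments are correlated and
    stacked to extract the interstation Green's function.
    """
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    full = np.correlate(b, a, mode="full") / len(a)
    lags = (np.arange(len(full)) - (len(a) - 1)) * dt
    keep = np.abs(lags) <= max_lag_s
    return lags[keep], full[keep]

# synthetic example: the same noise wavetrain arrives at station B 4 s later
rng = np.random.default_rng(0)
dt, n = 0.05, 20000                      # 20 Hz sampling, 1000 s of data
noise = rng.normal(size=n)
trace_a = noise + 0.1 * rng.normal(size=n)
trace_b = np.roll(noise, int(4.0 / dt)) + 0.1 * rng.normal(size=n)
lags, ccf = noise_cross_correlation(trace_a, trace_b, dt, max_lag_s=10.0)
print("peak correlation at lag =", round(lags[np.argmax(ccf)], 2), "s")
```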
Domain Decomposition Algorithms for First-Order System Least Squares Methods
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.
How They (Should Have) Built the Pyramids
NASA Astrophysics Data System (ADS)
Gallagher, Gregory; West, Joseph; Waters, Kevin
2014-03-01
A novel ``polygon method'' is proposed for moving large stone blocks. The method is implemented by attaching rods of analytically chosen radii to the block by means of rope. The chosen rods are placed on each side of the square-prism block in order to transform the square prism into a prism of a higher-order polygon, i.e. octagon, dodecagon, etc. Experimental results are presented and compared to other methods proposed by the authors, including a dragging method and a rail method based on dragging the block on rails made from arbitrarily chosen rod-shaped ``tracks,'' and to independent work by another group which utilized wooden attachments providing a cylindrical shape. It is found that the polygon method, when used on small-scale stone blocks across level open ground, has an equivalent coefficient of friction on the order of 0.1. For full-scale pyramid blocks, the wooden ``rods'' would need to be on the order of 30 cm in diameter, certainly within reason given the diameter of wooden masts used on ships in that region during the relevant time period in Egypt. This project also inspired a ``spin-off'' project in which the behavior of rolling polygons is investigated and preliminary data are presented.
Fernandes, N M; Pinto, B D L; Almeida, L O B; Slaets, J F W; Köberle, R
2010-10-01
We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices by using a convenient set of basis functions in which to expand our variables. This requires approximating the spike train four-point functions by combinations of two-point functions, similar to relations that would hold for Gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution to stimulus reconstruction of the second-order kernels, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to a 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.
Multigrid methods for numerical simulation of laminar diffusion flames
NASA Technical Reports Server (NTRS)
Liu, C.; Liu, Z.; Mccormick, S.
1993-01-01
This paper documents the results of a computational study of multigrid methods for the numerical simulation of 2D diffusion flames. The focus is on a simplified combustion model, which is assumed to be a single-step, infinitely fast and irreversible chemical reaction with five species (C3H8, O2, N2, CO2 and H2O). A fully implicit second-order hybrid scheme is developed on a staggered grid, which is stretched in the streamwise coordinate direction. A full approximation multigrid scheme (FAS) based on line distributive relaxation is developed as a fast solver for the algebraic equations arising at each time step. Convergence of the process for the simplified model problem is more than two orders of magnitude faster than other iterative methods, and the computational results show good grid convergence, with second-order accuracy, as well as qualitative agreement with the results of other researchers.
A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
In this article, we introduce a least-squares-based weak Galerkin finite element method for the second-order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments is conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. The numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.
An accurate algorithm to match imperfectly matched images for lung tumor detection without markers
Rozario, Timothy; Bereg, Sergey; Yan, Yulong; Chiu, Tsuicheng; Liu, Honghuan; Kearney, Vasant; Jiang, Lan
2015-01-01
In order to locate lung tumors on kV projection images without internal markers, digitally reconstructed radiographs (DRRs) are created and compared with projection images. However, lung tumors always move due to respiration, and their locations change on projection images while they are static on DRRs. In addition, global image intensity discrepancies exist between DRRs and projections due to their different image orientations, scattering, and noise. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported to match imperfectly matched projection images and DRRs. The kV projection images were matched with different DRRs in two steps. Preprocessing was performed in advance to generate two sets of DRRs. The tumors were removed from the planning 3D CT for a single phase of the planning 4D CT images using the planning contours of the tumors. DRRs of the background and DRRs of the tumors were generated separately for every projection angle. The first step was to match projection images with DRRs of the background signals. This method divided global images into a matrix of small tiles, and similarities were evaluated by calculating the normalized cross-correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) was automatically optimized to keep the tumor within a single projection tile, the one with a poor match to its corresponding DRR tile. A pixel-based linear transformation was determined by linear interpolation of the tile transformation results obtained during tile matching. The background DRRs were transformed to the projection image level and subtracted from it. The resulting subtracted image then contained only the tumor. The second step was to register DRRs of tumors to the subtracted image to locate the tumor. This method was successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (BrainLAB) for dynamic tumor tracking in phantom studies. Radiation-opaque markers were implanted and used as ground truth for tumor positions. Although other organs and bony structures introduced strong signals superimposed on tumors at some angles, this method accurately located tumors on every projection over 12 gantry angles. The maximum error was less than 2.2 mm, while the total average error was less than 0.9 mm. This algorithm was capable of detecting tumors without markers, despite strong background signals. PACS numbers: 87.57.cj, 87.57.cp, 87.57.nj, 87.57.np, 87.57.Q-, 87.59.bf, 87.63.lm
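The tile-wise similarity measure at the heart of the first matching step can be sketched compactly: the code below computes the normalized cross-correlation of corresponding tiles and flags the worst-matching tile. The tile size, the synthetic images, and the omission of the tile-grid optimization and per-tile linear transform are simplifying assumptions of this sketch.

```python
import numpy as np

def tile_ncc(projection, drr, tile=32):
    """Normalized cross-correlation (NCC) between corresponding image tiles.

    Both images are divided into a matrix of tile x tile blocks and the NCC of
    each block pair is returned; a low score flags a tile where the projection
    and the DRR disagree (e.g. the tile containing the moving tumor in the
    abstract above).  Optimizing the tile grid and fitting the per-tile linear
    transform are omitted here.
    """
    h, w = projection.shape
    scores = np.full((h // tile, w // tile), np.nan)
    for i in range(h // tile):
        for j in range(w // tile):
            a = projection[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile].astype(float)
            b = drr[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile].astype(float)
            a, b = a - a.mean(), b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            scores[i, j] = (a * b).sum() / denom if denom > 0 else 0.0
    return scores

# synthetic check: identical backgrounds, one tile perturbed by a moving "tumor"
rng = np.random.default_rng(3)
background = rng.normal(size=(256, 256))
projection = background.copy()
projection[96:128, 160:192] += 2.0 * rng.normal(size=(32, 32))
scores = tile_ncc(projection, background)
print("worst-matching tile (row, col):", np.unravel_index(np.argmin(scores), scores.shape))
```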
Using multi-attribute decision-making approaches in the selection of a hospital management system.
Arasteh, Mohammad Ali; Shamshirband, Shahaboddin; Yee, Por Lip
2018-01-01
Selecting the most appropriate organizational software is always a real challenge for managers, especially IT directors. The term "enterprise software selection" refers to purchasing, creating, or ordering software that, first, is best adapted to the needs of the organization and, second, has a suitable price and technical support. Specifying selection criteria and ranking them is the primary prerequisite for this action. This article provides a method to evaluate, rank, and compare the available enterprise software in order to choose the apt one. The method consists of a three-stage process. First, it identifies the organizational requirements and assesses them. Second, it selects the best approach from three possibilities: in-house production, buying software, or ordering special software for native use. Third, it evaluates, compares and ranks the alternative software. The third stage uses different methods of multi-attribute decision making (MADM) and compares the resulting rankings. Based on the different characteristics of the problem, several methods were tested, namely the Analytic Hierarchy Process (AHP), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Elimination and Choice Expressing Reality (ELECTRE), and a simple weighting method. After all, we propose the most practical method for similar problems.
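Of the MADM techniques listed above, TOPSIS is compact enough to sketch in full; the implementation below ranks hypothetical software alternatives by closeness to the ideal solution. The criteria, weights, and vendor scores are invented for illustration and are not from the article.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix:  (alternatives x criteria) decision matrix
    weights: criteria weights (summing to 1)
    benefit: True for criteria to maximize, False for cost criteria
    Returns closeness scores in [0, 1]; higher is better.
    """
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = X / np.sqrt((X ** 2).sum(axis=0))           # vector normalization
    V = norm * w                                        # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)

# hypothetical hospital-software alternatives scored on four criteria:
# functionality, support quality (benefit) and price, implementation time (cost)
scores = topsis(
    matrix=[[8, 7, 120, 6],
            [6, 9,  90, 9],
            [9, 6, 150, 12]],
    weights=[0.35, 0.25, 0.25, 0.15],
    benefit=[True, True, False, False],
)
for name, s in zip(["Vendor A", "Vendor B", "Vendor C"], scores):
    print(f"{name}: closeness = {s:.3f}")
```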
ERIC Educational Resources Information Center
Bjerstedt, Ake
This interview explores the views of Eva Norland, an educational researcher and peace activist. A discussion of peace education examines definitions, school contribution, age levels, teacher training, and instructional approach. Eva Norland offers her opinion on the concept of peace from environmental development, solidarity work, human rights,…
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives be calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Incoherent coincidence imaging of space objects
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Gu, Guohua
2016-10-01
Incoherent Coincidence Imaging (ICI), which is based on the second- or higher-order correlation of a fluctuating light field, offers great potential compared with standard conventional imaging. However, the need for a reference arm limits its practical applications in the detection of space objects. In this article, an optical aperture synthesis with electronically connected single-pixel photo-detectors is proposed to remove the reference arm. The correlation in our proposed method is the second-order correlation between the intensity fluctuations observed by any two detectors. With appropriate locations of the single-pixel detectors, this second-order correlation reduces to the absolute-square Fourier transform of the source and the unknown object. We demonstrate image recovery with Gerchberg-Saxton-like algorithms and investigate the reconstruction quality of our approach. Numerical experiments have been performed to show that both binary and gray-scale objects can be recovered. This proposed method provides an effective approach to promote the detection of space objects and perhaps even of exo-planets.
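The quantity at the core of the scheme above, the second-order correlation of intensity fluctuations between two single-pixel detectors, can be estimated with a few lines; the sketch below does so for simulated pseudo-thermal light. The signal model and normalization are assumptions of this sketch, and the Fourier-magnitude inversion (the Gerchberg-Saxton step) is not included.

```python
import numpy as np

def intensity_fluctuation_correlation(i1, i2):
    """Second-order correlation of the intensity fluctuations of two detectors.

    Returns <dI1 dI2> / (<I1><I2>) over many realizations of the fluctuating
    field, i.e. the kind of quantity that carries the image information in a
    reference-free coincidence-imaging scheme.
    """
    d1 = i1 - i1.mean()
    d2 = i2 - i2.mean()
    return (d1 * d2).mean() / (i1.mean() * i2.mean())

# toy pseudo-thermal light: two detectors see partially shared speckle
rng = np.random.default_rng(0)
n_frames = 20000
shared = rng.exponential(scale=1.0, size=n_frames)        # common speckle intensity
i1 = 0.7 * shared + 0.3 * rng.exponential(size=n_frames)  # detector 1
i2 = 0.7 * shared + 0.3 * rng.exponential(size=n_frames)  # detector 2
print("normalized fluctuation correlation:",
      round(intensity_fluctuation_correlation(i1, i2), 3))
```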
NASA Technical Reports Server (NTRS)
Manning, Robert M.
2005-01-01
Solutions are derived for the generalized mutual coherence function (MCF), i.e., the second-order moment, of a random wave field propagating through a random medium within the context of the extended parabolic equation. Here, "generalized" connotes the consideration of both the transverse and the longitudinal second-order moments (with respect to the direction of propagation). Such solutions afford a comparison between the results of the parabolic equation within the paraxial approximation and those of the wide-angle extended theory. To this end, a statistical operator method is developed which gives a general equation for an arbitrary spatial statistical moment of the wave field. The generality of the operator method allows one to obtain an expression for the second-order field moment in the direction longitudinal to the direction of propagation. Analytical solutions to these equations are derived for the Kolmogorov and Tatarskii spectra of atmospheric permittivity fluctuations within the Markov approximation.
Exploratory factor analysis of the Oral Health Impact Profile.
John, M T; Reissmann, D R; Feuerstahler, L; Waller, N; Baba, K; Larsson, P; Celebić, A; Szabo, G; Rener-Sitar, K
2014-09-01
Although oral health-related quality of life (OHRQoL) as measured by the Oral Health Impact Profile (OHIP) is thought to be multidimensional, the nature of these dimensions is not known. The aim of this report was to explore the dimensionality of the OHIP using the Dimensions of OHRQoL (DOQ) Project, an international study of general population subjects and prosthodontic patients. Using the project's Learning Sample (n = 5173), we conducted an exploratory factor analysis on the 46 OHIP items not specifically referring to dentures for 5146 subjects with sufficiently complete data. The first eigenvalue (27·0) of the polychoric correlation matrix was more than ten times larger than the second eigenvalue (2·6), suggesting the presence of a dominant, higher-order general factor. Follow-up analyses with Horn's parallel analysis revealed a viable second-order, four-factor solution. An oblique rotation of this solution revealed four highly correlated factors that we named Oral Function, Oro-facial Pain, Oro-facial Appearance and Psychosocial Impact. These four dimensions and the strong general factor are two viable hypotheses for the factor structure of the OHIP. © 2014 John Wiley & Sons Ltd.
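Horn's parallel analysis, used above to decide how many factors are viable, compares the eigenvalues of the observed correlation matrix with those of random data of the same size; a hedged sketch is shown below. It uses Pearson correlations on a synthetic two-factor data set, whereas the OHIP analysis worked with polychoric correlations on ordinal items.

```python
import numpy as np

def parallel_analysis(data, n_iter=200, seed=0):
    """Horn's parallel analysis: retain factors whose eigenvalues exceed the
    95th percentile of eigenvalues from random data of the same shape.

    This is a Pearson-correlation sketch of the procedure; the OHIP study
    itself used polychoric correlations appropriate for ordinal items.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    real_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eigs = np.empty((n_iter, p))
    for k in range(n_iter):
        rand = rng.normal(size=(n, p))
        rand_eigs[k] = np.sort(np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    threshold = np.percentile(rand_eigs, 95, axis=0)
    return int(np.sum(real_eigs > threshold)), real_eigs, threshold

# toy data with two correlated blocks of items -> two factors expected
rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=(2, 1000))
items = np.column_stack([f1 + 0.5 * rng.normal(size=1000) for _ in range(5)] +
                        [f2 + 0.5 * rng.normal(size=1000) for _ in range(5)])
n_factors, eigs, thr = parallel_analysis(items)
print("retained factors:", n_factors, "| first eigenvalues:", np.round(eigs[:3], 2))
```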
Vosough, Maryam; Salemi, Amir
2007-08-15
In the present work, two second-order calibration methods, the generalized rank annihilation method (GRAM) and multivariate curve resolution-alternating least squares (MCR-ALS), have been applied to standard addition data matrices obtained by gas chromatography-mass spectrometry (GC-MS) to characterize and quantify four unsaturated fatty acids, cis-9-hexadecenoic acid (C16:1omega7c), cis-9-octadecenoic acid (C18:1omega9c), cis-11-eicosenoic acid (C20:1omega9) and cis-13-docosenoic acid (C22:1omega9), in fish oil, taking matrix interferences into account. With these methods, the peak area does not need to be measured directly and predictions are more accurate. Because the GC-MS data matrices are not trilinear, MCR-ALS and GRAM were first applied to uncorrected data matrices. In comparison to MCR-ALS, biased and imprecise concentrations (%R.S.D.=27.3) were obtained using GRAM without correcting the retention-time shift. As trilinearity is the essential requirement for implementing GRAM, the data need to be corrected. Multivariate rank alignment objectively corrects the run-to-run retention time variations between a sample GC-MS data matrix and a standard addition GC-MS data matrix. The two second-order algorithms were then compared with each other. Both algorithms provided similar mean predictions, pure concentration profiles and spectral profiles. The results were validated using standard mass spectra of the target compounds. In addition, some of the quantification results were compared with the concentration values obtained using selected mass chromatograms. Since the classical univariate method of determining analyte peak areas fails in the case of strong peak overlap and matrix effects, the "second-order advantage" has solved this problem successfully.
Thellamurege, Nandun M; Si, Dejun; Cui, Fengchao; Li, Hui
2014-05-07
A combined quantum mechanical/molecular mechanical/continuum (QM/MM/C) style second order Møller-Plesset perturbation theory (MP2) method that incorporates an induced dipole polarizable force field and an induced surface charge continuum solvation model is established. The Z-vector method is modified to include induced dipoles and induced surface charges to determine the MP2 response density matrix, which can be used to evaluate MP2 properties. In particular, the analytic nuclear gradient is derived and implemented for this method. Using the Assisted Model Building with Energy Refinement induced dipole polarizable protein force field, the QM/MM/C style MP2 method is used to study the hydrogen bonding distances and strengths of the photoactive yellow protein chromophore in the wild type and the Glu46Gln mutant.
Projection pursuit water quality evaluation model based on chicken swarm algorithm
NASA Astrophysics Data System (ADS)
Hu, Zhe
2018-03-01
In view of the uncertainty and ambiguity of each index in water quality evaluation, and in order to resolve the incompatibility among the evaluation results of individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm (CSA) is proposed. A projection index function that reflects the water quality condition is constructed; the CSA is introduced to optimize this function and to find its best projection direction, and the corresponding best projection values are then used to evaluate water quality. A comparison between this method and other methods shows that it is reasonable and feasible and can provide a decision-making basis for water pollution control in the basin.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2000-01-01
This project investigates the development of discontinuous Galerkin finite element methods, for general geometries and triangulations, for solving convection-dominated problems, with applications to aeroacoustics. On the analysis side, we have studied an efficient and stable discontinuous Galerkin framework for small second derivative terms, for example in the Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation where derivatives are considered as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulation, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One and two dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third and fourth order WENO schemes on two dimensional unstructured meshes (triangles) in the finite volume formulation. The third order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth order schemes are based on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high order WENO schemes on unstructured meshes, including the choice of linear and nonlinear weights, what to do with negative weights, etc. Numerical examples are shown to demonstrate the accuracy and robustness of the methods for shock calculations. Jointly with P. Montarnal, we have used a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gases. The main idea is an energy decomposition of the form epsilon = epsilon(sub 1) + epsilon(sub 2), where epsilon(sub 1) is associated with a simpler pressure law (a gamma-law in this paper) and the nonlinear deviation epsilon(sub 2) is convected with the flow. A relaxation process is performed at each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the epsilon(sub 1) gamma-law. The algorithm only calls the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.
Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm
NASA Astrophysics Data System (ADS)
Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting
2010-12-01
We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle bits-order of the message based on chaos whose parameters are selected by the genetic algorithm. Shuffling message's bits-order provides us with a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving characteristics of histogram and providing high capacity.
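As a rough illustration of shuffling a message's bit order with a chaotic sequence, the sketch below uses a logistic map whose sort order defines the permutation; the map parameters here are arbitrary placeholders, whereas the paper selects them with a genetic algorithm, and the LSB embedding step itself is not shown.

```python
# Illustrative chaos-driven bit shuffling (not the authors' exact scheme): a
# logistic map generates a keyed sequence whose sort order permutes the bits.
import numpy as np

def logistic_permutation(n_bits, x0=0.3456, r=3.99):   # (x0, r) are placeholders
    x, xi = np.empty(n_bits), x0
    for i in range(n_bits):
        xi = r * xi * (1.0 - xi)        # logistic map iteration
        x[i] = xi
    return np.argsort(x)                 # permutation derived from the chaotic sequence

def shuffle_bits(bits, perm):
    return bits[perm]

def unshuffle_bits(shuffled, perm):
    out = np.empty_like(shuffled)
    out[perm] = shuffled
    return out

bits = np.random.randint(0, 2, 64)
perm = logistic_permutation(bits.size)
assert np.array_equal(unshuffle_bits(shuffle_bits(bits, perm), perm), bits)
```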
ERIC Educational Resources Information Center
Pereira, Juan A.; Sanz-Santamaría, Silvia; Montero, Raúl; Gutiérrez, Julián
2012-01-01
Attaining a satisfactory level of oral communication in a second language is a laborious process. In this action research paper we describe a new method applied through the use of interactive videos and the Babelium Project Rich Internet Application (RIA), which allows students to practice speaking skills through a variety of exercises. We present…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutheerawatthana, Pitch, E-mail: pitch.venture@gmail.co; Minato, Takayuki, E-mail: minato@k.u-tokyo.ac.j
The response of a social group is a missing element in the formal impact assessment model. Previous discussion of the involvement of social groups in an intervention has mainly focused on the formation of the intervention. This article discusses the involvement of social groups in a different way. A descriptive model is proposed by incorporating a social group's response into the concept of second- and higher-order effects. The model is developed based on a cause-effect relationship through the observation of phenomena in case studies. The model clarifies the process by which social groups interact with a lower-order effect and then generate a higher-order effect in an iterative manner. This study classifies social groups' responses into three forms-opposing, modifying, and advantage-taking action-and places them in six pathways. The model is expected to be used as an analytical tool for investigating and identifying impacts in the planning stage and as a framework for monitoring social groups' responses during the implementation stage of a policy, plan, program, or project (PPPPs).
First integrals of the axisymmetric shape equation of lipid membranes
NASA Astrophysics Data System (ADS)
Zhang, Yi-Heng; McDargh, Zachary; Tu, Zhan-Chun
2018-03-01
The shape equation of lipid membranes is a fourth-order partial differential equation. Under the axisymmetric condition, this equation was transformed into a second-order ordinary differential equation (ODE) by Zheng and Liu (Phys. Rev. E 48 2856 (1993)). Here we try to further reduce this second-order ODE to a first-order ODE. First, we invert the usual process of variational calculus, that is, we construct a Lagrangian for which the ODE is the corresponding Euler–Lagrange equation. Then, we seek symmetries of this Lagrangian according to the Noether theorem. Under a certain restriction on Lie groups of the shape equation, we find that the first integral only exists when the shape equation is identical to the Willmore equation, in which case the symmetry leading to the first integral is scale invariance. We also obtain the mechanical interpretation of the first integral by using the membrane stress tensor. Project supported by the National Natural Science Foundation of China (Grant No. 11274046) and the National Science Foundation of the United States (Grant No. 1515007).
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2015-01-01
In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both the second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence, typically 10-15 iterations, to reduce the residuals by 10 orders of magnitude. We also demonstrate that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present a Fourier analysis of the constructed linear system and show that an under-relaxation parameter is needed for stabilization of the Gauss-Seidel relaxation.
The TMDL Program Results Analysis Project: Matching Results Measures with Program Expectations
The paper provides a detailed description of the aims, methods and outputs of the program evaluation project undertaken by EPA in order to generate the insights needed to make TMDL program improvements.
An improved finite-difference analysis of uncoupled vibrations of tapered cantilever beams
NASA Technical Reports Server (NTRS)
Subrahmanyam, K. B.; Kaza, K. R. V.
1983-01-01
An improved finite difference procedure for determining the natural frequencies and mode shapes of tapered cantilever beams undergoing uncoupled vibrations is presented. Boundary conditions are derived in the form of simple recursive relations involving the second order central differences. Results obtained by using the conventional first order central differences and the present second order central differences are compared, and it is observed that the present second order scheme is more efficient than the conventional approach. An important advantage offered by the present approach is that the results converge to exact values rapidly, and thus extrapolation of the results is not necessary. Consequently, the basic handicap of the classical finite difference method of solution, namely the need for Richardson's extrapolation, is eliminated. Furthermore, for the cases considered herein, the present approach produces consistent lower bound solutions.
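The flavour of the second-order central-difference eigenvalue approach can be conveyed with a much simpler model problem; the sketch below treats a uniform fixed-free rod (u'' + λu = 0, u(0) = 0, u'(1) = 0) rather than the tapered Euler-Bernoulli beam of the paper, imposing the free-end condition through a second-order ghost-node relation and showing the expected O(h²) convergence of the lowest eigenvalue.

```python
# Central-difference eigenvalue sketch for a simplified fixed-free problem.
import numpy as np

def lowest_eigenvalue(N):
    h = 1.0 / N
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 2.0 / h**2
        if i > 0:
            A[i, i - 1] = -1.0 / h**2
        if i < N - 1:
            A[i, i + 1] = -1.0 / h**2
    A[N - 1, N - 2] = -2.0 / h**2        # ghost node u_{N+1} = u_{N-1} at the free end
    return np.sort(np.linalg.eigvals(A).real)[0]

exact = (np.pi / 2.0) ** 2               # lowest exact eigenvalue ~ 2.4674
for N in (10, 20, 40):
    approx = lowest_eigenvalue(N)
    print(N, approx, abs(approx - exact))  # error drops roughly 4x per mesh halving
```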
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space, and the orthogonal complement of whose null space, are chosen among the ranges of generalized controllability and observability matrices. The reduced order models match various combinations (chosen by the designer) of four types of parameters of the full order system associated with (1) low frequency response, (2) high frequency response, (3) low frequency power spectral density, and (4) high frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, has great flexibility to embrace combinations of existing methods, and offers some new features.
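A minimal sketch of the oblique-projection mechanics is given below for a SISO state-space model, using controllability- and observability-type Krylov matrices for the projector; this particular choice matches the leading Markov parameters (high-frequency behaviour) and is only an illustration, not the paper's full menu of low/high-frequency and power-spectral-density matching options.

```python
# Oblique-projection model reduction sketch (Markov-parameter matching variant).
import numpy as np

def oblique_reduction(A, B, C, r):
    # V = [B, AB, ..., A^{r-1}B],  W = [C^T, A^T C^T, ..., (A^T)^{r-1} C^T]
    V = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(r)])
    W = np.hstack([np.linalg.matrix_power(A.T, k) @ C.T for k in range(r)])
    E = W.T @ V                           # assumed nonsingular (projector well defined)
    Ar = np.linalg.solve(E, W.T @ A @ V)
    Br = np.linalg.solve(E, W.T @ B)
    Cr = C @ V
    return Ar, Br, Cr

rng = np.random.default_rng(0)
n, r = 8, 3
A = rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Ar, Br, Cr = oblique_reduction(A, B, C, r)
# The first 2r Markov parameters C A^k B of full and reduced models agree (up to roundoff):
for k in range(2 * r):
    full = (C @ np.linalg.matrix_power(A, k) @ B).item()
    red = (Cr @ np.linalg.matrix_power(Ar, k) @ Br).item()
    print(k, full, red)
```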
Eikonal solutions to optical model coupled-channel equations
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Khandelwal, Govind S.; Maung, Khin M.; Townsend, Lawrence W.; Wilson, John W.
1988-01-01
Methods of solution are presented for the Eikonal form of the nucleus-nucleus coupled-channel scattering amplitudes. Analytic solutions are obtained for the second-order optical potential for elastic scattering. A numerical comparison is made between the first and second order optical model solutions for elastic and inelastic scattering of H-1 and He-4 on C-12. The effects of bound-state excitations on total and reaction cross sections are also estimated.
NASA Astrophysics Data System (ADS)
Kamagara, Abel; Wang, Xiangzhao; Li, Sikun
2018-03-01
We propose a method to compensate for the projector intensity nonlinearity induced by gamma effect in three-dimensional (3-D) fringe projection metrology by extending high-order spectra analysis and bispectral norm minimization to digital sinusoidal fringe pattern analysis. The bispectrum estimate allows extraction of vital signal information features such as spectral component correlation relationships in fringe pattern images. Our approach exploits the fact that gamma introduces high-order harmonic correlations in the affected fringe pattern image. Estimation and compensation of projector nonlinearity is realized by detecting and minimizing the normed bispectral coherence of these correlations. The proposed technique does not require calibration information and technical knowledge or specification of fringe projection unit. This is promising for developing a modular and calibration-invariant model for intensity nonlinear gamma compensation in digital fringe pattern projection profilometry. Experimental and numerical simulation results demonstrate this method to be efficient and effective in improving the phase measuring accuracies with phase-shifting fringe pattern projection profilometry.
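The following sketch illustrates, under simplifying assumptions, how a gamma nonlinearity creates the higher-order harmonic correlations that a bispectrum estimate can detect: a direct segment-averaged bispectrum of a clean versus a gamma-distorted sinusoidal fringe signal is compared at the (f0, f0) bifrequency. The segmentation, windowing and normalization choices are illustrative and do not reproduce the paper's bispectral-norm minimization procedure.

```python
# Crude direct (FFT-based) bispectrum estimate, averaged over segments.
import numpy as np

def bispectrum(x, nfft=256, nseg=64):
    segs = [x[i * nfft:(i + 1) * nfft] for i in range(min(nseg, len(x) // nfft))]
    B = np.zeros((nfft, nfft), dtype=complex)
    idx = np.arange(nfft)
    for s in segs:
        X = np.fft.fft(s * np.hanning(nfft))
        # Accumulate B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2))
        B += X[:, None] * X[None, :] * np.conj(X[(idx[:, None] + idx[None, :]) % nfft])
    return B / len(segs)

fs, f0, n = 1024, 16, 256 * 64
t = np.arange(n) / fs
clean = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * t)     # ideal sinusoidal fringe signal
gamma = 2.2                                        # assumed projector gamma
distorted = clean ** gamma                         # gamma creates phase-coupled harmonics
for name, sig in (("clean", clean), ("gamma", distorted)):
    B = bispectrum(sig - sig.mean())
    k = int(f0 * 256 / fs)                         # frequency bin of the fundamental
    print(name, abs(B[k, k]))                      # coupling peak at (f0, f0) is much larger for gamma
```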
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed of the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
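The accuracy gap between second- and fourth-order finite differences that drives such conclusions can be seen on a one-dimensional smooth test function; the following sketch (uniform grid, not the paper's adaptive nonuniform grid) compares the two central-difference approximations of a second derivative.

```python
# Second-order vs fourth-order central differences for d2f/dx2 of a smooth function.
import numpy as np

def d2_second(f, h):
    return (f[:-2] - 2 * f[1:-1] + f[2:]) / h**2                  # O(h^2)

def d2_fourth(f, h):
    return (-f[:-4] + 16 * f[1:-3] - 30 * f[2:-2] + 16 * f[3:-1] - f[4:]) / (12 * h**2)  # O(h^4)

for n in (32, 64, 128):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    f = np.sin(2 * np.pi * x)
    exact = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x)
    e2 = np.max(np.abs(d2_second(f, h) - exact[1:-1]))
    e4 = np.max(np.abs(d2_fourth(f, h) - exact[2:-2]))
    print(n, e2, e4)   # e2 falls ~4x, e4 ~16x per grid doubling
```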
Modeling of transport phenomena in concrete porous media.
Plecas, Ilija
2014-02-01
Two fundamental concerns must be addressed when attempting to isolate low-level waste in a disposal facility on land. The first concern is isolating the waste from water, or hydrologic isolation. The second is preventing movement of the radionuclides out of the disposal facility, or radionuclide migration. Here we have investigated, in particular, the latter scenario. To assess the safety of disposal of a radioactive waste-concrete composition, the leakage of 60Co from a waste composite into a surrounding fluid has been studied. Leakage tests were carried out by an original method developed at the Vinča Institute. Transport phenomena involved in the leaching of a radioactive material from a cement composite matrix are investigated using three methods based on theoretical equations. These are: the diffusion equation for a plane source; an equation for diffusion coupled to a first-order equation; and an empirical method employing a polynomial equation. The results presented in this paper are from a 25-y mortar and concrete testing project that will influence the design choices for radioactive waste packaging for a future Serbian radioactive waste disposal center.
Dirac equation on a curved surface
NASA Astrophysics Data System (ADS)
Brandt, F. T.; Sánchez-Monroy, J. A.
2016-09-01
The dynamics of Dirac particles confined to a curved surface is examined employing the thin-layer method. We perform a perturbative expansion to first-order and split the Dirac field into normal and tangential components to the surface. In contrast to the known behavior of second order equations like Schrödinger, Maxwell and Klein-Gordon, we find that there is no geometric potential for the Dirac equation on a surface. This implies that the non-relativistic limit does not commute with the thin-layer method. Although this problem can be overcome when second-order terms are retained in the perturbative expansion, this would preclude the decoupling of the normal and tangential degrees of freedom. Therefore, we propose to introduce a first-order term which rescues the non-relativistic limit and also clarifies the effect of the intrinsic and extrinsic curvatures on the dynamics of the Dirac particles.
A strategic plan for the second phase (2013-2015) of the Korea biobank project.
Park, Ok; Cho, Sang Yun; Shin, So Youn; Park, Jae-Sun; Kim, Jun Woo; Han, Bok-Ghee
2013-04-01
The Korea Biobank Project (KBP) was led by the Ministry of Health and Welfare to establish a network between the National Biobank of Korea and biobanks run by university-affiliated general hospitals (regional biobanks). The Ministry of Health and Welfare started the project to enhance medical and health technology by collecting, managing, and providing researchers with high-quality human bioresources. The National Biobank of Korea, under the leadership of the Ministry of Health and Welfare, collects specimens through various cohorts and regional biobanks within university hospitals gather specimens from patients. The project began in 2008, and the first phase ended in 2012, which meant that there needed to be a plan for the second phase that begins in 2013. Consequently, professionals from within and outside the project were gathered to develop a plan for the second phase. Under the leadership of the planning committee, six working groups were formed to formulate a practical plan. By conducting two workshops with experts in the six working groups and the planning committee and three forums in 2011 and 2012, they have developed a strategic plan for the second phase of the KBP. This document presents a brief report of the second phase of the project based on a discussion with them. During the first phase of the project (2008-2012), a network was set up between the National Biobank of Korea and 17 biobanks at university-affiliated hospitals in an effort to unify informatics and governance among the participating biobanks. The biobanks within the network manage data on their biospecimens with a unified Biobank Information Management System. Continuous efforts are being made to develop a common standard operating procedure for resource collection, management, distribution, and personal information security, and currently, management of these data is carried out in a somewhat unified manner. In addition, the KBP has trained and educated professionals to work within the biobanks, and has also carried out various publicity promotions to the public and researchers. During the first phase, biospecimens from more than 300,000 participants through various cohorts and biospecimens from more than 200,000 patients from hospitals were collected, which were distributed to approximately 600 research projects. The planning committee for the second phase evaluated that the first phase of the KBP was successful. However, the first phase of the project was meant to allow autonomy to the individual biobanks. The biobanks were able to choose the kind of specimens they were going to collect and the amount of specimen they would set as a goal, as well as being allowed to choose their own methods to manage their biobanks (autonomy). Therefore, some biobanks collected resources that were easy to collect and the resources needed by researchers were not strategically collected. In addition, there was also a low distribution rate to researchers outside of hospitals, who do not have as much access to specimens and cases as those in hospitals. There were also many cases in which researchers were not aware of the KBP, and the distribution processes were not set up to be convenient to the demands of researchers. Accordingly, the second phase of the KBP will be focused on increasing the integration and cooperation between the biobanks within the network. The KBP plans to set goals for the strategic collection of the needed human bioresources. 
Although the main principle of the first phase was to establish infrastructure and resource collection, the key objective of the second phase is the efficient utilization of gathered resources. In order to fully utilize the gathered resources in an efficient way, distribution systems and policies must be improved. Vitalization of distribution, securing of high-value resource and related clinical and laboratory information, international standardization of resource management systems, and establishment of a virtuous cycle between research and development (R&D) and biobanks are the four main strategies. Based on these strategies, 12 related objectives have been set and are planned to be executed.
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
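The second method, direct identification of a low-order linear model from I/O data by recursive least squares, can be sketched as follows for a synthetic third-order ARX plant; the plant coefficients are hypothetical and stand in for the seventh-order nonlinear turbojet model.

```python
# Recursive least-squares identification of a third-order ARX model from I/O data.
import numpy as np

def rls_identify(u, y, na=3, nb=3, lam=1.0):
    """Estimate theta = [a1..a_na, b1..b_nb] in
       y[k] = -a1 y[k-1] - ... - a_na y[k-na] + b1 u[k-1] + ... + b_nb u[k-nb]."""
    n_par = na + nb
    theta = np.zeros(n_par)
    P = 1e4 * np.eye(n_par)
    for k in range(max(na, nb), len(y)):
        phi = np.array([-y[k - i] for i in range(1, na + 1)]
                       + [u[k - j] for j in range(1, nb + 1)])
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

rng = np.random.default_rng(1)
u = rng.standard_normal(2000)
a_true, b_true = [0.6, -0.25, 0.05], [0.8, 0.3, -0.1]   # hypothetical 3rd-order plant
y = np.zeros_like(u)
for k in range(3, len(u)):
    y[k] = (-a_true[0] * y[k - 1] - a_true[1] * y[k - 2] - a_true[2] * y[k - 3]
            + b_true[0] * u[k - 1] + b_true[1] * u[k - 2] + b_true[2] * u[k - 3])
print(rls_identify(u, y))   # ~ [0.6, -0.25, 0.05, 0.8, 0.3, -0.1] for noise-free data
```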
ERIC Educational Resources Information Center
Cai, Shengrong; Zhu, Wei
2012-01-01
This study investigated the impact of an online learning community project on university students' motivation in learning Chinese as a foreign language. A newly proposed second language (L2) motivation theory--the L2 motivational self system (Dornyei, 2005, 2009)--guided this study. A concurrent transformative mixed-methods design was employed to…
Performance Benchmarking of tsunami-HySEA for NTHMP Inundation Mapping Activities
NASA Astrophysics Data System (ADS)
González Vida, Jose M.; Castro, Manuel J.; Ortega Acosta, Sergio; Macías, Jorge; Millán, Alejandro
2016-04-01
According to the 2006 USA Tsunami Warning and Education Act, the tsunami inundation models used in the National Tsunami Hazard Mitigation Program (NTHMP) projects must be validated against some existing standard problems (see [OAR-PMEL-135], [Proceedings of the 2011 NTHMP Model Benchmarking Workshop]). These Benchmark Problems (BPs) cover different tsunami processes related to the inundation stage that the models must meet to achieve the NTHMP Mapping and Modeling Subcommittee (MMS) approval. Tsunami-HySEA solves the two-dimensional shallow-water system using a high-order path-conservative finite volume method. Values of h, qx and qy in each grid cell represent cell averages of the water depth and momentum components. The numerical scheme is conservative for both mass and momentum in flat bathymetries, and, in general, is mass preserving for arbitrary bathymetries. Tsunami-HySEA implements a PVM-type method that uses the fastest and the slowest wave speeds, similar to HLL method (see [Castro et al, 2012]). A general overview of the derivation of the high order methods is performed in [Castro et al, 2009]. For very big domains, Tsunami-HySEA also implements a two-step scheme similar to leap-frog for the propagation step and a second-order TVD-WAF flux-limiter scheme described in [de la Asunción et al, 2013] for the inundation step. Here, we present the results obtained by the model tsunami-HySEA against the proposed BPs. BP1: Solitary wave on a simple beach (non-breaking - analytic experiment). BP4: Solitary wave on a simple beach (breaking - laboratory experiment). BP6: Solitary wave on a conical island (laboratory experiment). BP7 - Runup on Monai Valley beach (laboratory experiment) and BP9: Okushiri Island tsunami (field experiment). The analysis and results of Tsunami-HySEA model are presented, concluding that the model meets the required objectives for all the BP proposed. References - Castro M.J., E.D. Fernández, A.M. Ferreiro, A. García, C. Parés (2009). High order extension of Roe schemes for two dimensional nonconservative hyperbolic systems. J. Sci. Comput. 39(1), 67-114. - Castro M.J., E.D. Fernández-Nieto (2012). A class of computationally fast first order finite volume solvers: PVM methods. SIAM J. Sci. Comput. 34, A2173-2196. - de la Asunción M., M.J. Castro, E.D. Fernández-Nieto, J.M. Mantas, et al. Efficient GPU implementation of a two waves TVD-WAF method for the two-dimensional one layer shallow water system on structured meshes (2013). Computers & Fluids 80, 441-452. - OAR PMEL-135. Synolakis, C.E., E.N. Bernard, V.V. Titov, U. Kânoǧlu, and F.I. González (2007). Standards, criteria, and procedures for NOAA evaluation of tsunami numerical models. NOAA Tech. Memo. NOAA/Pacific Marine Environmental Laboratory, Seattle, WA, 55 pp. - Proceedings and results of the 2011 NTHMP Model Benchmarking Workshop. NOAA Special Report. July 2012. Acknowledgements This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project DAIFLUID (MTM2012-38383-C02-01) and the Unit of Numerical Methods (UNM) of the Research Support Central Services (SCAI) of the University of Málaga.
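For reference, a PVM/HLL-type two-wave flux of the kind mentioned above can be written down in a few lines for the one-dimensional shallow water equations over a flat bottom; the sketch below is a plain first-order illustration and omits the well-balanced source terms, high-order reconstruction and GPU machinery of Tsunami-HySEA.

```python
# HLL (two-wave PVM) numerical flux for the 1D shallow water equations, flat bottom.
import numpy as np
g = 9.81

def physical_flux(h, q):
    return np.array([q, q**2 / h + 0.5 * g * h**2])

def hll_flux(hl, ql, hr, qr):
    """Interface flux built from the fastest and slowest wave speed estimates."""
    ul, ur = ql / hl, qr / hr
    cl, cr = np.sqrt(g * hl), np.sqrt(g * hr)
    sl = min(ul - cl, ur - cr)            # slowest wave speed
    sr = max(ul + cl, ur + cr)            # fastest wave speed
    Fl, Fr = physical_flux(hl, ql), physical_flux(hr, qr)
    if sl >= 0.0:
        return Fl
    if sr <= 0.0:
        return Fr
    Ul, Ur = np.array([hl, ql]), np.array([hr, qr])
    return (sr * Fl - sl * Fr + sl * sr * (Ur - Ul)) / (sr - sl)

# One interface of a dam-break state: deeper water at rest on the left.
print(hll_flux(hl=2.0, ql=0.0, hr=1.0, qr=0.0))
```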
DOT National Transportation Integrated Search
1979-10-01
The goal of the project is to provide sufficient information to allow a transit system with given track and car conditions and budgetary constraints to determine the mix of available noise control methods which will result in the greatest overall ben...
WE-G-18A-02: Calibration-Free Combined KV/MV Short Scan CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M; Loo, B; Bazalova, M
Purpose: To combine orthogonal kilo-voltage (kV) and mega-voltage (MV) projection data for short scan cone-beam CT to reduce imaging time on current radiation treatment systems, using a calibration-free gain correction method. Methods: Combining two orthogonal projection data sets from the kV and MV imaging hardware can reduce the scan angle to as small as 110° (90° + fan), such that the total scan time is ∼18 seconds, or within a breath hold. To obtain an accurate reconstruction, the MV projection data is first corrected using a linear regression on the redundant data from the start and end of the sinogram, and then the combined data is reconstructed using the FDK method. To correct for the different changes of attenuation coefficients between soft tissue and bone in the kV and MV data, the forward projections of the segmented bone and soft tissue from the first reconstruction in the redundant region are added to the linear regression model. The MV data is corrected again using the additional information from the segmented image, and combined with the kV data for a second FDK reconstruction. We simulated polychromatic 120 kVp (conventional a-Si EPID with CsI) and 2.5 MVp (prototype high-DQE MV detector) projection data with Poisson noise using the XCAT phantom. The gain correction and combined kV/MV short scan reconstructions were tested with head and thorax cases, and simple contrast-to-noise ratio measurements were made in a low-contrast pattern in the head. Results: The FDK reconstruction using the proposed gain correction method can effectively reduce artifacts caused by the differences of attenuation coefficients in the kV/MV data. The CNRs of the short scans for kV, MV, and kV/MV are 5.0, 2.6 and 3.4, respectively. The proposed gain correction method also works with truncated projections. Conclusion: A novel gain correction and reconstruction method was developed to generate short scan CBCT from orthogonal kV/MV projections. This work is supported by NIH Grant 5R01CA138426-05.
Higher-order automatic differentiation of mathematical functions
NASA Astrophysics Data System (ADS)
Charpentier, Isabelle; Dal Cappello, Claude
2015-04-01
Functions of mathematical physics such as the Bessel functions, the Chebyshev polynomials, the Gauss hypergeometric function and so forth, have practical applications in many scientific domains. On the one hand, differentiation formulas provided in reference books apply to real or complex variables. These do not account for the chain rule. On the other hand, based on the chain rule, the automatic differentiation has become a natural tool in numerical modeling. Nevertheless automatic differentiation tools do not deal with the numerous mathematical functions. This paper describes formulas and provides codes for the higher-order automatic differentiation of mathematical functions. The first method is based on Faà di Bruno's formula that generalizes the chain rule. The second one makes use of the second order differential equation they satisfy. Both methods are exemplified with the aforementioned functions.
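The second approach, exploiting the second-order ODE that a special function satisfies, can be illustrated with the simplest possible case: for y'' = -y the Taylor coefficients obey a two-term recurrence, which yields arbitrarily high derivatives of the sine function without symbolic differentiation. The sketch below is this toy case only; Bessel, Chebyshev or hypergeometric functions obey analogous but more elaborate recurrences.

```python
# Higher-order derivatives from the defining second-order ODE (toy case y'' = -y).
import math

def taylor_derivatives_sin(x0, order):
    """Derivatives of sin at x0 up to the given order.

    Taylor coefficients c_k = y^(k)(x0)/k! of a solution of y'' = -y satisfy
    c_{k+2} = -c_k / ((k + 1)(k + 2)).
    """
    c = [math.sin(x0), math.cos(x0)]              # c_0 = y(x0), c_1 = y'(x0)
    for k in range(order - 1):
        c.append(-c[k] / ((k + 1) * (k + 2)))
    return [math.factorial(k) * c[k] for k in range(order + 1)]

# Derivatives of sin at x0 = 0.7 up to 5th order: sin, cos, -sin, -cos, sin, cos.
print(taylor_derivatives_sin(0.7, 5))
```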
Project-Based Learning in Programmable Logic Controller
NASA Astrophysics Data System (ADS)
Seke, F. R.; Sumilat, J. M.; Kembuan, D. R. E.; Kewas, J. C.; Muchtar, H.; Ibrahim, N.
2018-02-01
Project-based learning is a learning method that uses project activities as the core of learning and requires student creativity in completing the project. The aim of this study is to investigate the influence of project-based learning methods on students with a high level of creativity in learning the Programmable Logic Controller (PLC). This study used an experimental design with an experimental class and a control class consisting of 24 students, 12 of high creativity and 12 of low creativity. The application of project-based learning methods to the PLC course, combined with the level of student creativity, enables the students to be directly involved in the work of the PLC project, which gives them experience in utilizing PLCs for the benefit of industry. Therefore, it is concluded that project-based learning is one of the better learning methods to apply to highly creative students in PLC courses. This method can be used to improve student learning outcomes and student creativity, as well as to prepare prospective teachers to become reliable educators in theory and practice who will be tasked with developing the qualified human resources needed to meet future industry demands.
Semi-Analytic Reconstruction of Flux in Finite Volume Formulations
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2006-01-01
Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm, but reconstruction time is approximately a factor of four larger than for Roe. Though both are second-order accurate schemes, Roe's method approaches a grid converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws including effects of thermochemical nonequilibrium in the Navier-Stokes equations is developed.
NASA Technical Reports Server (NTRS)
Lancaster, J. E.
1973-01-01
Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical solution to the problem of N-bodies. The earlier first-order solutions, derived by the method of matched asymptotic expansions, have been extended to second order for the purpose of obtaining increased accuracy. The complete derivation of the second-order solution, including the application of a rigorous matching principle, is given. It is shown that the outer and inner expansions can be matched in a region of order mu^alpha, where 2/5 <= alpha <= 1/2, and mu (the moon/earth or planet/sun mass ratio) is much less than one. The second-order asymptotic solution has been used as a basis for formulating a number of analytical two-point boundary value solutions. These include earth-to-moon, one- and two-impulse moon-to-Earth, and interplanetary solutions. Each is presented as an explicit analytical solution which does not require iterative steps to satisfy the boundary conditions. The complete derivation of each solution is shown, as well as instructions for numerical evaluation. For Vol. 1, see N73-27738.
An accurate method for solving a class of fractional Sturm-Liouville eigenvalue problems
NASA Astrophysics Data System (ADS)
Kashkari, Bothayna S. H.; Syam, Muhammed I.
2018-06-01
This article is devoted to both theoretical and numerical study of the eigenvalues of nonsingular fractional second-order Sturm-Liouville problem. In this paper, we implement a fractional-order Legendre Tau method to approximate the eigenvalues. This method transforms the Sturm-Liouville problem to a sparse nonsingular linear system which is solved using the continuation method. Theoretical results for the considered problem are provided and proved. Numerical results are presented to show the efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Cheng, Qing; Yang, Xiaofeng; Shen, Jie
2017-07-01
In this paper, we consider numerical approximations of a hydro-dynamically coupled phase field diblock copolymer model, in which the free energy contains a kinetic potential, a gradient entropy, a Ginzburg-Landau double well potential, and a long range nonlocal type potential. We develop a set of second order time marching schemes for this system using the "Invariant Energy Quadratization" approach for the double well potential, the projection method for the Navier-Stokes equation, and a subtle implicit-explicit treatment for the stress and convective term. The resulting schemes are linear and lead to symmetric positive definite systems at each time step, thus they can be efficiently solved. We further prove that these schemes are unconditionally energy stable. Various numerical experiments are performed to validate the accuracy and energy stability of the proposed schemes.
Geometric Modeling of Construction Communications with Specified Dynamic Properties
NASA Astrophysics Data System (ADS)
Korotkiy, V. A.; Usmanova, E. A.; Khmarova, L. I.
2017-11-01
Among the many utility systems in construction, pipelines designed for the organized supply or removal of liquid or loose (granular) working media are distinguished by their functional purpose. Such systems should have dynamic properties which make it possible to reduce losses due to friction and vortex formation. From the point of view of geometric modeling, the specified dynamic properties of the designed pipeline correspond to a required degree of smoothness of its center line. To model the axial line (plane or spatial), it is proposed to use composite curves consisting of arcs of second-order curves or of their quadratic images. The advantage of the proposed method is that the designer obtains the model of a given curve not as a set of coordinates of its points but in the form of a matrix of coefficients of the canonical equations for each arc.
Second-Order Slender-Body Theory-Axisymmetric Flow
NASA Technical Reports Server (NTRS)
VanDyke, Milton D.
1959-01-01
Slender-body theory for subsonic and supersonic flow past bodies of revolution is extended to a second approximation. Methods are developed for handling the difficulties that arise at round ends. Comparison is made with experiment and with other theories for several simple shapes.
NASA Technical Reports Server (NTRS)
Moorthi, Shrinivas; Higgins, R. W.
1993-01-01
An efficient, direct, second-order solver for the discrete solution of a class of two-dimensional separable elliptic equations on the sphere (which generally arise in implicit and semi-implicit atmospheric models) is presented. The method involves a Fourier transformation in longitude and a direct solution of the resulting coupled second-order finite-difference equations in latitude. The solver is made efficient by vectorizing over longitudinal wave-number and by using a vectorized fast Fourier transform routine. It is evaluated using a prescribed solution method and compared with a multigrid solver and the standard direct solver from FISHPAK.
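A simplified Cartesian analogue of this solver (FFT in the periodic direction, a direct second-order solve in the other) is sketched below for a Poisson problem on a rectangle, periodic in x and homogeneous Dirichlet in y; it uses a dense solve per Fourier mode instead of a vectorized tridiagonal algorithm and ignores the spherical-geometry and FISHPAK-comparison aspects of the paper.

```python
# Fourier transform in x, direct second-order solve in y, for u_xx + u_yy = f.
import numpy as np

def solve_poisson_fft_direct(f, dx, dy):
    """f has shape (nx, ny): periodic in x, interior y-values only (u = 0 on y-boundaries)."""
    nx, ny = f.shape
    fhat = np.fft.fft(f, axis=0)                          # Fourier transform in x
    k = np.arange(nx)
    lam = -(4.0 / dx**2) * np.sin(np.pi * k / nx) ** 2    # symbol of the x second difference
    # Second-order 1D Laplacian in y with homogeneous Dirichlet boundary conditions
    D2y = (np.diag(-2.0 * np.ones(ny)) + np.diag(np.ones(ny - 1), 1)
           + np.diag(np.ones(ny - 1), -1)) / dy**2
    uhat = np.empty_like(fhat)
    for m in range(nx):   # one tridiagonal system per mode (dense solve here for brevity)
        uhat[m] = np.linalg.solve((D2y + lam[m] * np.eye(ny)).astype(complex), fhat[m])
    return np.real(np.fft.ifft(uhat, axis=0))

nx, ny = 64, 63
dx, dy = 1.0 / nx, 1.0 / (ny + 1)
x = dx * np.arange(nx)
y = dy * np.arange(1, ny + 1)
X, Y = np.meshgrid(x, y, indexing="ij")
u_exact = np.sin(2 * np.pi * X) * np.sin(np.pi * Y)
f = -(4 * np.pi**2 + np.pi**2) * u_exact
u = solve_poisson_fft_direct(f, dx, dy)
print(np.max(np.abs(u - u_exact)))                        # O(dx^2 + dy^2) error
```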
Low Dissipative High Order Shock-Capturing Methods using Characteristic-Based Filters
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sandham, N. D.; Djomehri, M. J.
1998-01-01
An approach which closely maintains the non-dissipative nature of classical fourth or higher- order spatial differencing away from shock waves and steep gradient regions while being capable of accurately capturing discontinuities, steep gradient and fine scale turbulent structures in a stable and efficient manner is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM like term is used to signal the appropriate amount of second or third-order TVD or ENO types of characteristic based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.
Risky Group Decision-Making Method for Distribution Grid Planning
NASA Astrophysics Data System (ADS)
Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang
2015-12-01
With the rapid growth of electricity use and of renewable energy, more and more research attention is being paid to distribution grid planning. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. First, a mixed index system with qualitative and quantitative indices is built. Taking the fuzziness of linguistic evaluation into account, a cloud model is chosen to realize the "quantitative to qualitative" transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a hyper-cuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points uniformly and dispersedly within it. The number of points is determined by the number of trials of the two-level orthogonal arrays, and these points form a distribution point set that represents the decision-making alternative. To eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, so that the dynamic set of solutions is viewed as the reference. Second, because the decision-maker's attitude can affect the results, a prospect value function is defined based on the signal-to-noise ratio (SNR) from the Mahalanobis-Taguchi system, and the comprehensive prospect value of each alternative, together with the resulting ranking, is obtained. Finally, the validity and reliability of the method are illustrated by examples, which show that it compares favourably with other methods.
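The Mahalanobis-distance step, which removes the effect of correlation among evaluation indices when comparing candidate schemes, can be sketched as follows; the decision matrix values are hypothetical and a pseudo-inverse of the sample covariance is used for robustness.

```python
# Mahalanobis distance of each candidate scheme to the mean of the remaining ones.
import numpy as np

def mahalanobis_to_others(X):
    """X: rows = candidate planning schemes, columns = evaluation indices."""
    cov_pinv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse for robustness
    d = np.empty(len(X))
    for i in range(len(X)):
        others_mean = np.delete(X, i, axis=0).mean(axis=0)
        diff = X[i] - others_mean
        d[i] = np.sqrt(diff @ cov_pinv @ diff)
    return d

# Hypothetical decision matrix: 5 candidate grid-planning schemes x 4 indices.
X = np.array([[0.62, 0.40, 0.71, 0.55],
              [0.58, 0.45, 0.69, 0.61],
              [0.75, 0.35, 0.80, 0.48],
              [0.55, 0.50, 0.65, 0.64],
              [0.70, 0.42, 0.74, 0.56]])
print(mahalanobis_to_others(X))
```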
Chang, Yue-Yue; Wu, Hai-Long; Fang, Huan; Wang, Tong; Liu, Zhi; Ouyang, Yang-Zi; Ding, Yu-Jie; Yu, Ru-Qin
2018-06-15
In this study, a smart and green analytical method based on a second-order calibration algorithm coupled with excitation-emission matrix (EEM) fluorescence was developed for the determination of rhodamine dyes illegally added to chilli samples. The proposed method not only has the advantage of higher sensitivity than the traditional fluorescence method but also fully displays the "second-order advantage". Pure analyte signals were successfully extracted from severely interfering EEM profiles using the alternating trilinear decomposition (ATLD) algorithm, even in the presence of common fluorescence problems such as scattering, peak overlap and unknown interferences. It is worth noting that the unknown interferents can represent different kinds of backgrounds, not only a constant background. In addition, the use of an interpolation method avoided loss of information on the analytes of interest. The use of "mathematical separation" instead of a complicated "chemical or physical separation" strategy can be more effective and environmentally friendly. A series of statistical parameters, including figures of merit and intra-day (≤1.9%) and inter-day (≤6.6%) RSDs, were calculated to validate the accuracy of the proposed method. Furthermore, the authoritative method of HPLC-FLD was adopted to verify the qualitative and quantitative results of the proposed method. The comparison of the two methods also showed that the ATLD-EEM method has the advantages of accuracy, speed, simplicity and greenness, and it is expected to develop into an attractive alternative for the simultaneous and interference-free determination of rhodamine dyes illegally added to complex matrices. Copyright © 2018. Published by Elsevier B.V.
Low-cost optical interconnect module for parallel optical data links
NASA Astrophysics Data System (ADS)
Noddings, Chad; Hirsch, Tom J.; Olla, M.; Spooner, C.; Yu, Jason J.
1995-04-01
We have designed, fabricated, and tested a prototype parallel ten-channel unidirectional optical data link. When scaled to production, we project that this technology will satisfy the following market penetration requirements: (1) up to 70 meters transmission distance, (2) at least 1 gigabyte/second data rate, and (3) a volume selling price of 0.35 to 0.50 per MByte/second. These goals can be achieved by means of the assembly innovations described in this paper: a novel alignment method that is integrated with low-cost, few-chip module packaging techniques, yielding high coupling efficiency and a reduced component count. Furthermore, the high coupling efficiency reduces the driver's power requirements, which increases the projected reliability.
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Ştefănescu, R.; Sandu, A.; Navon, I. M.
2015-08-01
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that the reduced order Karush-Kuhn-Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov-Galerkin projections. For the first time POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov-Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
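The POD/Galerkin building block underlying such reduced-order systems can be sketched on a generic linear ODE model: snapshots are compressed by an SVD and the dynamics are projected onto the leading modes. The sketch below is only this building block, with an arbitrary stable test system; it does not reproduce the shallow-water model, the adjoint, or the DEIM/tensorial-POD treatment of nonlinear terms.

```python
# POD basis from snapshots plus Galerkin projection of a linear ODE system.
import numpy as np

def pod_basis(snapshots, r):
    """Leading r POD modes (left singular vectors) of a snapshot matrix (n x m)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

# Full model: linear ODE x' = A x, integrated with forward Euler (illustrative only).
rng = np.random.default_rng(0)
n, r, dt, steps = 200, 10, 1e-3, 2000
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
x = rng.standard_normal(n)
snaps = []
for _ in range(steps):
    x = x + dt * (A @ x)
    snaps.append(x.copy())
Phi = pod_basis(np.array(snaps).T, r)

# Galerkin-projected reduced model: a' = (Phi^T A Phi) a, with x ~ Phi a.
Ar = Phi.T @ A @ Phi
a = Phi.T @ snaps[0]
for _ in range(steps - 1):
    a = a + dt * (Ar @ a)
print(np.linalg.norm(Phi @ a - snaps[-1]) / np.linalg.norm(snaps[-1]))  # relative ROM error
```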
An FP7 "Space" project: Aphorism "Advanced PRocedures for volcanic and Seismic Monitoring"
NASA Astrophysics Data System (ADS)
Di Iorio, A., Sr.; Stramondo, S.; Bignami, C.; Corradini, S.; Merucci, L.
2014-12-01
The APHORISM project proposes the development and testing of two new methods to combine Earth Observation satellite data from different sensors with ground data. The aim is to demonstrate that these two types of data, appropriately managed and integrated, can provide new improved GMES products useful for seismic and volcanic crisis management. The first method, APE - A Priori information for Earthquake damage mapping, concerns the generation of maps to address the detection and estimation of damage caused by an earthquake. The use of satellite data to investigate earthquake damage is not new: there is a wide literature and there are many projects concerning this issue, but the approach is usually based only on change detection techniques and classification algorithms. The novelty of APE lies in the exploitation of a priori information derived from InSAR time series to measure surface movements, shake maps obtained from seismological data, and vulnerability information. This a priori information is then integrated with the change detection map to improve accuracy and to limit false alarms. The second method deals with volcanic crisis management. The method, MACE - Multi-platform volcanic Ash Cloud Estimation, concerns the exploitation of GEO (Geosynchronous Earth Orbit) sensor platforms, LEO (Low Earth Orbit) satellite sensors and ground measurements to improve ash detection and retrieval and to characterize volcanic ash clouds. The basic idea of MACE consists of an improvement of volcanic ash retrievals at the space-time scale by using both the LEO and GEO estimations and in-situ data. Indeed, the standard ash thermal infrared retrieval is integrated with data coming from a wider spectral range, from the visible to the microwave. The ash detection is also extended to the case of a cloudy atmosphere or steam plumes. The APE and MACE methods have been defined in order to provide products oriented toward the next ESA Sentinel satellite missions. The project is funded under the European Union FP7 program, and the Kick-Off meeting was held at the INGV premises in Rome on 18th December 2013.
Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules
1999-01-01
In this paper we combine finite difference approximations (for spatial derivatives) and collocation techniques (for the time component) to numerically solve the two dimensional heat equation. We employ, respectively, a second-order and a fourth-order scheme for the spatial derivatives, and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
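For comparison with the spatial part of such schemes, a minimal sketch of the standard second-order five-point discretization of the 2D heat equation is given below, advanced here with a simple backward-Euler step rather than the collocation-in-time approach of the paper.

```python
# Second-order 5-point Laplacian for the 2D heat equation with backward Euler in time.
import numpy as np

def laplacian_2d(n, h):
    """5-point Laplacian on the unit square, homogeneous Dirichlet BCs, n x n interior points."""
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    I = np.eye(n)
    return np.kron(I, T) + np.kron(T, I)      # Kronecker-sum construction

n = 31
h = 1.0 / (n + 1)
dt, steps = 1e-3, 50
A = np.eye(n * n) - dt * laplacian_2d(n, h)   # backward-Euler system matrix (built once)

x = h * np.arange(1, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)     # exact solution decays like exp(-2*pi^2*t)
for _ in range(steps):
    u = np.linalg.solve(A, u.ravel()).reshape(n, n)
print(np.max(u), np.exp(-2 * np.pi**2 * dt * steps))   # numerical vs exact peak value
```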
NASA Astrophysics Data System (ADS)
Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin
2017-12-01
The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.
Ding, Zhixia; Shen, Yi
2016-04-01
This paper investigates global projective synchronization of nonidentical fractional-order neural networks (FNNs) based on sliding mode control technique. We firstly construct a fractional-order integral sliding surface. Then, according to the sliding mode control theory, we design a sliding mode controller to guarantee the occurrence of the sliding motion. Based on fractional Lyapunov direct methods, system trajectories are driven to the proposed sliding surface and remain on it evermore, and some novel criteria are obtained to realize global projective synchronization of nonidentical FNNs. As the special cases, some sufficient conditions are given to ensure projective synchronization of identical FNNs, complete synchronization of nonidentical FNNs and anti-synchronization of nonidentical FNNs. Finally, one numerical example is given to demonstrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimation of seismically detectable portion of a gas plume: CO2CRC Otway project case study
NASA Astrophysics Data System (ADS)
Pevzner, Roman; Caspari, Eva; Bona, Andrej; Galvin, Robert; Gurevich, Boris
2013-04-01
The CO2CRC Otway project comprises several experiments involving injection of CO2/CH4 or pure CO2 into different geological formations at the Otway test site (Victoria, Australia). During the first stage of the project, which was finished in 2010, more than 64,000 t of gas were injected into a depleted gas reservoir at ~2 km depth. At the moment, preparations are ongoing for the next stage of the project, which aims to examine the capabilities of seismic monitoring of a small-scale injection (up to 15,000 t) into a saline formation. Time-lapse seismic is one of the most typical methods for CO2 geosequestration monitoring. Significant experience was gained during the first stage of the project through acquisition and analysis of the 4D surface seismic and numerous time-lapse VSP surveys. In order to justify the second stage of the project and optimise the parameters of the experiment, several modelling studies were conducted. In order to predict the seismic signal, we populate a realistic geological model with elastic properties, model their changes using a fluid substitution technique applied to the fluid flow simulation results, and compute synthetic seismic baseline and monitor volumes. To assess the detectability of the time-lapse signal caused by the injection, we assume that the time-lapse noise level will be equivalent to the level of difference between the last two Otway 3D surveys acquired in 2009 and 2010 using a conventional surface technique (15,000 lb vibroseis sources and single geophones as receivers). To quantify the uncertainties in plume imaging/visualisation due to the time-lapse noise realisation, we propose to use, for each synthetic signal volume, multiple noise realisations with the same F-Kx-Ky amplitude spectra as the field noise. Having defined a signal detection criterion in terms of the signal to time-lapse noise level on a single trace, we estimate the visible portion of the plume as a function of this criterion. This approach also gives an opportunity to evaluate the probability of signal detection. The authors acknowledge the funding provided by the Australian government through its CRC program to support this CO2CRC research project. We also acknowledge the CO2CRC's corporate sponsors and the financial assistance provided through Australian National Low Emissions Coal Research and Development (ANLEC R&D). ANLEC R&D is supported by Australian Coal Association Low Emissions Technology Limited and the Australian Government through the Clean Energy Initiative.
Planning and leading of the technological processes by mechanical working with Microsoft Project
NASA Astrophysics Data System (ADS)
Nae, I.; Grigore, N.
2016-08-01
Nowadays, fabrication systems and methods are changing: new processing technologies appear, process flow sheets are reduced to a minimum number of phases, the flexibility of the technologies grows, and new methods and tools for monitoring and managing processing operations emerge. The technological route (sequence of operations, setups and execution phases required to turn a blank into the finished part) can be represented as a sequence of activities carried out in a logical order, on a well determined schedule, with a defined budget and resources. A project, likewise, can be defined as a set of specific, methodically structured activities that aim to achieve a specific objective within a fixed schedule and budget. Exploiting this correspondence between a project and a technological route, this research presents the definition of the technological route of a mechanical chip-removal process using Microsoft Project. Under these circumstances, the research highlights the advantages of the method: rapid evaluation of alternative technologies in order to select the optimal process, job scheduling under constraints of any kind, and the standardization of certain machining operations.
NASA Astrophysics Data System (ADS)
Reis, C.; Clain, S.; Figueiredo, J.; Baptista, M. A.; Miranda, J. M. A.
2015-12-01
Numerical tools are very important for scenario evaluation of hazardous phenomena such as tsunamis. Nevertheless, the predictions depend heavily on the quality of the numerical tool, and the design of efficient numerical schemes still receives considerable attention in order to provide robust and accurate solutions. In this study we propose a comparison of the efficiency of two finite volume codes with second-order discretization, implemented with different methods to solve the non-conservative shallow water equations: the MUSCL (Monotonic Upstream-Centered Scheme for Conservation Laws) and the MOOD (Multi-dimensional Optimal Order Detection) methods, which optimize the accuracy of the approximation as a function of the local smoothness of the solution. MUSCL is based on a priori criteria, where the limiting procedure is performed before updating the solution to the next time step, leading to unnecessary accuracy reduction. On the contrary, the newer MOOD technique uses a posteriori detectors to prevent the solution from oscillating in the vicinity of discontinuities. Indeed, a candidate solution is computed and corrections are performed only for the cells where non-physical oscillations are detected. Using a simple one-dimensional analytical benchmark, 'Single wave on a sloping beach', we show that the classical 1D shallow-water system can be accurately solved with the finite volume method equipped with the MOOD technique, which provides a better approximation with sharper shocks and less numerical diffusion. For the code validation, we also use the Tohoku-Oki 2011 tsunami and reproduce two DART records, demonstrating that the quality of the solution may deeply interfere with the scenario one can assess. This work is funded by the Portugal-France research agreement, through the research project GEONUM FCT-ANR/MAT-NAN/0122/2012.
Nuclear Structure Studies with Stable and Radioactive Beams: The SPES radioactive ion beam project
NASA Astrophysics Data System (ADS)
de Angelis, G.; SPES Collaboration; Prete, G.; Andrighetto, A.; Manzolaro, M.; Corradetti, S.; Scarpa, D.; Rossignoli, M.; Monetti, A.; Lollo, M.; Calderolla, M.; Vasquez, J.; Zafiropoulos, D.; Sarchiapone, L.; Benini, D.; Favaron, P.; Rigato, M.; Pegoraro, R.; Maniero, D.; Calabretta, L.; Comunian, M.; Maggiore, M.; Lombardi, A.; Piazza, L.; Porcellato, A. M.; Roncolato, C.; Bisoffi, G.; Pisent, A.; Galatà, A.; Giacchini, M.; Bassato, G.; Canella, S.; Gramegna, F.; Valiente, J.; Bermudez, J.; Mastinu, P. F.; Esposito, J.; Wyss, J.; Russo, A.; Zanella, S.
2015-04-01
A new Radioactive Ion Beam (RIB) facility (SPES) is presently under construction at the Legnaro National Laboratories of INFN. The SPES facility is based on the ISOL method using a UCx direct target able to sustain a power of 10 kW. The primary proton beam is provided by a high-current cyclotron accelerator with an energy of 35-70 MeV and a beam current of 0.2-0.5 mA. Neutron-rich radioactive ions are produced by proton-induced fission on a uranium target at an expected fission rate of the order of 10^13 fissions per second. After ionization and selection, the exotic isotopes are re-accelerated by the ALPI superconducting LINAC to energies of 10A MeV for masses in the region A=130 amu. The expected secondary beam rates are of the order of 10^7 - 10^9 pps. The aim of the SPES facility is to deliver high-intensity radioactive ion beams of neutron-rich nuclei for nuclear physics research, as well as to serve as an interdisciplinary research centre for radioisotope production for medicine and for neutron beams.
Navier-Stokes computation of compressible turbulent flows with a second order closure, part 1
NASA Technical Reports Server (NTRS)
Haminh, Hieu; Kollmann, Wolfgang; Vandromme, Dany
1990-01-01
A second order closure turbulence model for compressible flows is developed and implemented in a 2D Reynolds-averaged Navier-Stokes solver. From the beginning, where a kappa-epsilon turbulence model was implemented in the bidiagonal implicit method of MacCormack (referred to as the MAC3 code), to the final stage of implementing a full second order closure in the efficient line Gauss-Seidel algorithm, a considerable amount of work was done, individually and collectively. Besides the collaboration itself, the final product of this work is a second order closure derived from the Launder, Reece, and Rodi model and extended to account for near-wall effects, called the FRAME model (FRench-AMerican-Effort). During the reporting period, two different problems were worked out. The first was to provide Ames researchers with a reliable compressible boundary layer code including a wide collection of turbulence models for quick testing of new terms, both in two-equation models and in second order closures (LRR and FRAME). The second topic was to complete the implementation of the FRAME model in the MAC5 code. The work related to these two contributions is reported, including the treatment of dilatation in the presence of strong shocks; this part of the work, conducted during a stay at the Center for Turbulence Research with Zeman, also aimed to cross-check earlier assumptions by Rubesin and Vandromme.
NASA Technical Reports Server (NTRS)
Laurenson, R. M.; Baumgarten, J. R.
1975-01-01
An approximation technique has been developed for determining the transient response of a nonlinear dynamic system. The nonlinearities in the system considered appear in its dissipation function, which was expressed as a second order polynomial in the system's velocity. The developed approximation is an extension of the classic Kryloff-Bogoliuboff technique. Two examples of the developed approximation are presented for comparison with other approximation methods.
Fringe-period selection for a multifrequency fringe-projection phase unwrapping method
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Jiang, Kejian
2016-08-01
The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage that phase unwrapping remains correct even in the presence of surface discontinuities. If the fringe frequency ratio of the MFPPUM is too large, fringe order error (FOE) may be triggered, and FOE results in phase unwrapping error. It is preferable for the phase unwrapping to remain correct while the fewest sets of lower-frequency fringe patterns are used. To achieve this goal, in this paper a parameter called fringe order inaccuracy (FOI) is defined, the dominant factors which may induce FOE are analyzed theoretically, a method to optimally select the fringe periods for the MFPPUM is proposed with the aid of FOI, and experiments are conducted to investigate the impact of the dominant factors on phase unwrapping and to demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and detect phase unwrapping error for the MFPPUM.
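The core of multi-frequency temporal phase unwrapping is the fringe-order computation that the FOE analysis above targets. The sketch below (Python, with hypothetical names; a minimal illustration rather than the authors' implementation) shows how a wrapped high-frequency phase map is unwrapped with the help of an already-unwrapped lower-frequency map, and where a fringe order error would enter: any noise that shifts the rounded term by one unit produces a 2π jump.

```python
import numpy as np

def unwrap_with_lower_frequency(phi_high_wrapped, phi_low_unwrapped, freq_ratio):
    """Temporal phase unwrapping: use an already-unwrapped lower-frequency
    phase map to estimate the fringe order of a wrapped higher-frequency map.

    phi_high_wrapped : wrapped phase of the high-frequency fringes, in (-pi, pi]
    phi_low_unwrapped: unwrapped phase of the lower-frequency fringes
    freq_ratio       : ratio of high to low fringe frequency
    """
    # Predicted (continuous) high-frequency phase from the low-frequency map.
    phi_pred = freq_ratio * phi_low_unwrapped
    # Fringe order: nearest integer number of 2*pi jumps separating the wrapped
    # measurement from the prediction.  Noise larger than pi in phi_pred gives
    # a fringe-order error (the FOE discussed in the abstract).
    k = np.round((phi_pred - phi_high_wrapped) / (2.0 * np.pi))
    return phi_high_wrapped + 2.0 * np.pi * k

# Toy usage: a discontinuous "true" phase, wrapped and then recovered.
x = np.linspace(0.0, 1.0, 500)
true_low = 2.0 * np.pi * x + 3.0 * (x > 0.5)        # one fringe period plus a step
ratio = 8.0
true_high = ratio * true_low
wrapped_high = np.angle(np.exp(1j * true_high))
recovered = unwrap_with_lower_frequency(wrapped_high, true_low, ratio)
print(np.max(np.abs(recovered - true_high)))        # ~0 for noise-free data
```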
Liu, Gang; Jayathilake, Pahala Gedara; Khoo, Boo Cheong
2014-02-01
Two nonlinear models are proposed to investigate focused acoustic waves for which nonlinear effects are important in the liquid around the scatterer. Firstly, the one-dimensional solutions for the widely used Westervelt equation in different coordinates are obtained based on the perturbation method with second order nonlinear terms. Then, by introducing a small parameter (the Mach number), a dimensionless formulation and asymptotic perturbation expansion via compressible potential flow theory is applied. This model permits the decoupling between the velocity potential and enthalpy to second order, with the first-order potential solutions satisfying the linear wave equation (Helmholtz equation), whereas the second order solutions are associated with the linear non-homogeneous equation. Based on the model, the local nonlinear effects of focused acoustic waves on a certain volume are studied, and the findings may have important implications for bubble cavitation/initiation via focused ultrasound, known as HIFU (High Intensity Focused Ultrasound). The calculated results show that for the domain encompassing less than ten times the radius away from the center of the scatterer, the nonlinear effect exerts a significant influence on the focused high intensity acoustic wave. Moreover, at comparatively higher frequencies, for the spherical wave model, a lower Mach number may result in stronger nonlinear effects. Copyright © 2013 Elsevier B.V. All rights reserved.
Manufacturing Methods & Technology (MMT) Project Execution Report
1982-10-01
This document is used as a management tool for monitoring the progress of MMT projects. There are separate sections in the report showing the progression of projects within the MMT Program.
ERIC Educational Resources Information Center
Malott, Curry; Ford, Derek R.
2015-01-01
Part two: This article is the second part of a project concerned with developing a Marxist critical pedagogy that moves beyond a critique of capital and toward a communist future. The article performs an educational reading of Marx's Critique of the Gotha Programme in order to delineate what a Marxist critical pedagogy of becoming communist might…
Implementing the Second-Order Fermi Process in a Kinetic Monte-Carlo Simulation
NASA Technical Reports Server (NTRS)
Summerlin, Errol J.
2010-01-01
Radio JOVE is an education and outreach project intended to give students and other interested individuals hands-on experience in learning radio astronomy. They can do this by building a radio telescope from a relatively inexpensive kit that includes the parts for a receiver and an antenna, as well as software for a computer chart recorder emulator (Radio Skypipe) and other reference materials.
Barton D. Clinton; James M. Vose; Dick L. Fowler
2010-01-01
Stream water protection during timber-harvesting activities is of primary interest to forest managers. In this study, we examine the potential impacts of riparian zone tree cutting on water temperature and total suspended solids. We monitored stream water temperature and total suspended solids before and after timber harvesting along a second-order tributary of the...
Translational Control in Bone Marrow Failure
2015-05-01
HCLS1 associated protein X-1 (HAX1), cause hereditary forms of neutropenia. Previously, competing hypotheses have posited that mutant forms of... derived induced pluripotent stem cell (iPSC) model of ELANE-associated neutropenia. During the second year of this project, in order to facilitate... pathology. Keywords: neutropenia, bone marrow failure, neutrophil elastase, ELANE, HAX1, alternate translation, induced pluripotent stem cells (iPSC).
Finite Moment Tensors of Southern California Earthquakes
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Chen, P.; Zhao, L.
2003-12-01
We have developed procedures for inverting broadband waveforms for the finite moment tensors (FMTs) of regional earthquakes. The FMT is defined in terms of second-order polynomial moments of the source space-time function and provides the lowest order representation of a finite fault rupture; it removes the fault-plane ambiguity of the centroid moment tensor (CMT) and yields several additional parameters of seismological interest: the characteristic length L_c, width W_c, and duration T_c of the faulting, as well as the directivity vector v_d of the fault slip. To formulate the inverse problem, we follow and extend the methods of McGuire et al. [2001, 2002], who have successfully recovered the second-order moments of large earthquakes using low-frequency teleseismic data. We express the Fourier spectra of a synthetic point-source waveform in its exponential (Rytov) form and represent the observed waveform relative to the synthetic in terms of two frequency-dependent differential times, a phase delay δτ_p(ω) and an amplitude-reduction time δτ_q(ω), which we measure using Gee and Jordan's [1992] isolation-filter technique. We numerically calculate the FMT partial derivatives in terms of second-order spatiotemporal gradients, which allows us to use 3D finite-difference seismograms as our isolation filters. We have applied our methodology to a set of small to medium-sized earthquakes in Southern California. Errors in the anelastic structure introduced perturbations larger than the signal caused by finite-source effects. We have therefore employed a joint inversion technique that recovers the CMT parameters of the aftershocks, as well as the CMT and FMT parameters of the mainshock, under the assumption that the source finiteness of the aftershocks can be ignored. The joint system of equations relating the δτ_p and δτ_q data to the source parameters of the mainshock-aftershock cluster is denuisanced for path anomalies in both observables; this projection operation effectively corrects the mainshock data for path-related amplitude anomalies in a way similar to, but more flexible than, empirical Green function (EGF) techniques.
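As a concrete illustration of the differential-time observables described above, the sketch below (Python) extracts a phase delay and an amplitude-reduction time from observed and synthetic spectra, assuming a Rytov-style representation u_obs(ω) ≈ u_syn(ω) exp(iω δτ_p − ω δτ_q). The sign conventions and the toy spectra are assumptions for illustration, not the exact definitions of Gee and Jordan [1992] or McGuire et al. [2001, 2002].

```python
import numpy as np

def differential_times(u_obs, u_syn, omega):
    """Frequency-dependent phase delay and amplitude-reduction time.

    Assumes u_obs(w) ~ u_syn(w) * exp(i*w*dtau_p(w) - w*dtau_q(w)),
    so both differential times follow from the complex log-spectral ratio.
    """
    ratio = np.log(u_obs / u_syn)               # complex log-spectral ratio
    dtau_p = np.unwrap(np.imag(ratio)) / omega  # phase delay (s)
    dtau_q = -np.real(ratio) / omega            # amplitude-reduction time (s)
    return dtau_p, dtau_q

# Toy usage with a smooth synthetic spectrum and known perturbations.
omega = 2.0 * np.pi * np.linspace(0.02, 0.2, 50)    # rad/s
u_syn = np.exp(-0.5 * (omega - 0.6) ** 2) + 0.1     # arbitrary smooth spectrum
u_obs = u_syn * np.exp(1j * omega * 2.0 - omega * 0.5)
dtp, dtq = differential_times(u_obs, u_syn, omega)
print(dtp[0], dtq[0])                               # ~2.0 s and ~0.5 s
```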
Zhang, Xiao-Hua; Wu, Hai-Long; Wang, Jian-Yao; Tu, De-Zhu; Kang, Chao; Zhao, Juan; Chen, Yao; Miu, Xiao-Xia; Yu, Ru-Qin
2013-05-01
This paper describes the use of second-order calibration for the development of an HPLC-DAD method to quantify nine polyphenols in five kinds of honey samples. The sample treatment procedure was simplified effectively relative to traditional approaches. Baseline drift was also overcome by regarding the drift as additional factor(s), alongside the analytes of interest, in the mathematical model. The contents of polyphenols obtained by the alternating trilinear decomposition (ATLD) method have been successfully used to distinguish different types of honey. The method shows good linearity (r>0.99), rapidity (t<7.60 min) and accuracy, and may be extremely promising as a routine strategy for identification and quantification of polyphenols in complex matrices. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thellamurege, Nandun M.; Si, Dejun; Cui, Fengchao
A combined quantum mechanical/molecular mechanical/continuum (QM/MM/C) style second order Møller-Plesset perturbation theory (MP2) method that incorporates an induced dipole polarizable force field and an induced surface charge continuum solvation model is established. The Z-vector method is modified to include induced dipoles and induced surface charges to determine the MP2 response density matrix, which can be used to evaluate MP2 properties. In particular, the analytic nuclear gradient is derived and implemented for this method. Using the Assisted Model Building with Energy Refinement induced dipole polarizable protein force field, the QM/MM/C style MP2 method is used to study the hydrogen bonding distances and strengths of the photoactive yellow protein chromophore in the wild type and the Glu46Gln mutant.
Scatter measurement and correction method for cone-beam CT based on single grating scan
NASA Astrophysics Data System (ADS)
Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua
2017-06-01
In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of the reconstructed slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and determined. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid an additional scan, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of the slices can still be controlled within 3%.
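The correction step described above reduces, in essence, to estimating a scatter image for every projection angle from the sparsely measured ones and subtracting it. The sketch below (Python, with hypothetical array names) interpolates the measured scatter images linearly over projection angle and subtracts them; it is a minimal illustration of the angle interpolation idea, not the authors' processing chain.

```python
import numpy as np

def correct_scatter(projections, proj_angles, scatter_images, scatter_angles):
    """Subtract an angle-interpolated scatter estimate from each projection.

    projections    : (N, H, W) object projection images at proj_angles (deg)
    scatter_images : (M, H, W) scatter images measured at the sorted angles
                     scatter_angles (deg), e.g. every 30 deg
    Returns scatter-corrected projections of the same shape.
    """
    corrected = np.empty_like(projections)
    for i, ang in enumerate(proj_angles):
        # Pixel-wise linear interpolation between the two bracketing angles.
        j = np.searchsorted(scatter_angles, ang)
        j0 = np.clip(j - 1, 0, len(scatter_angles) - 1)
        j1 = np.clip(j, 0, len(scatter_angles) - 1)
        if j1 == j0:
            s = scatter_images[j0]
        else:
            w = (ang - scatter_angles[j0]) / (scatter_angles[j1] - scatter_angles[j0])
            s = (1.0 - w) * scatter_images[j0] + w * scatter_images[j1]
        corrected[i] = np.clip(projections[i] - s, 0.0, None)  # avoid negatives
    return corrected
```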
Symbolic Algebra Development for Higher-Order Electron Propagator Formulation and Implementation.
Tamayo-Mendoza, Teresa; Flores-Moreno, Roberto
2014-06-10
Through the use of symbolic algebra, implemented in a program, the algebraic expressions of the elements of the self-energy matrix for the electron propagator at different orders were obtained. In addition, a module for the software package Lowdin was automatically generated. Second- and third-order electron propagator results have been calculated to test the correct operation of the program. It was found that the Fortran 90 modules obtained automatically with our algorithm succeeded in calculating ionization energies with the second- and third-order electron propagator in the diagonal approximation. The strategy for the development of this symbolic algebra program is described in detail. This represents a solid starting point for the automatic derivation and implementation of higher-order electron propagator methods.
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
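Since the abstract spells out the weighting idea (weights derived from the variance between neighbouring data points, followed by a weighted second-order polynomial fit), a minimal Python sketch is given below. The window size and the local-variance statistic are illustrative assumptions; the report's exact weighting function may differ.

```python
import numpy as np

def weighted_residual_fit(freq, residual, window=5):
    """Weighted 2nd-order polynomial fit of a residual (flexibility) function.

    The weight of each data point is taken as the inverse standard deviation
    of the data in a small sliding window around it, so ragged regions of the
    test residual function contribute less to the fit.
    """
    n = len(residual)
    w = np.empty(n)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        seg = residual[lo:hi]
        # Scatter of neighbouring points about a local linear trend.
        trend = np.polyval(np.polyfit(freq[lo:hi], seg, 1), freq[lo:hi])
        sigma = np.std(seg - trend)
        w[i] = 1.0 / max(sigma, 1e-12)
    coeffs = np.polyfit(freq, residual, deg=2, w=w)
    return coeffs, w

# The fitted polynomial, evaluated toward zero frequency, approximates the
# residual flexibility value used to improve the mathematical model.
```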
NASA Astrophysics Data System (ADS)
Canhanga, Betuel; Ni, Ying; Rančić, Milica; Malyarenko, Anatoliy; Silvestrov, Sergei
2017-01-01
After Black-Scholes proposed a model for pricing European options in 1973, Cox, Ross and Rubinstein in 1979, and Heston in 1993, showed that the constant volatility assumption made by Black-Scholes was one of the main reasons for the model being unable to capture some market details. Instead of constant volatilities, they introduced stochastic volatilities into the asset dynamics. In 2009, Christoffersen empirically showed "why multifactor stochastic volatility models work so well". Four years later, Chiarella and Ziveyi solved the model proposed by Christoffersen. They considered an underlying asset whose price is governed by two-factor stochastic volatilities of mean reversion type. Applying Fourier transforms, Laplace transforms and the method of characteristics, they presented a semi-analytical formula to compute an approximate price for American options. The heavy computation involved in the Chiarella and Ziveyi approach motivated the authors of this paper in 2014 to investigate another methodology to compute European option prices for a Christoffersen-type model. Using the first and second order asymptotic expansion method, we presented a closed-form solution for European options and provided experimental and numerical studies investigating the accuracy of the approximation formulae given by the first order asymptotic expansion. In the present paper we perform experimental and numerical studies for the second order asymptotic expansion and compare the obtained results with the results presented by Chiarella and Ziveyi.
M-Adapting Low Order Mimetic Finite Differences for Dielectric Interface Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGregor, Duncan A.; Gyrya, Vitaliy; Manzini, Gianmarco
2016-03-07
We consider the problem of reducing numerical dispersion for an electromagnetic wave in a domain with two materials separated by a flat interface in 2D, with a factor of two difference in wave speed. The computational mesh in the homogeneous parts of the domain away from the interface consists of square elements. Here the method construction is based on m-adaptation in the homogeneous domain, which leads to fourth-order numerical dispersion (vs. second order in the non-optimized method). The size of the elements in the two domains also differs by a factor of two, so as to preserve the same value of the Courant number in each. Near the interface, where the two meshes merge, the mesh with larger elements consists of degenerate pentagons. We demonstrate that prior to m-adaptation the accuracy of the method falls from second to first order due to the breaking of symmetry in the mesh. Next we develop an m-adaptation framework for the interface region and devise an optimization criterion. We prove that for the interface problem m-adaptation cannot produce an increase in method accuracy. This is in contrast to a homogeneous medium, where m-adaptation can increase accuracy by two orders.
Modeling of second order space charge driven coherent sum and difference instabilities
NASA Astrophysics Data System (ADS)
Yuan, Yao-Shuo; Boine-Frankenheim, Oliver; Hofmann, Ingo
2017-10-01
Second order coherent oscillation modes in intense particle beams play an important role for beam stability in linear or circular accelerators. In addition to the well-known second order even envelope modes and their instability, coupled even envelope modes and odd (skew) modes have recently been shown in [Phys. Plasmas 23, 090705 (2016), 10.1063/1.4963851] to lead to parametric instabilities in periodic focusing lattices with sufficiently different tunes. While this work was partly using the usual envelope equations, partly also particle-in-cell (PIC) simulation, we revisit these modes here and show that the complete set of second order even and odd mode phenomena can be obtained in a unifying approach by using a single set of linearized rms moment equations based on "Chernin's equations." This has the advantage that accurate information on growth rates can be obtained and gathered in a "tune diagram." In periodic focusing we retrieve the parametric sum instabilities of coupled even and of odd modes. The stop bands obtained from these equations are compared with results from PIC simulations for waterbag beams and found to show very good agreement. The "tilting instability" obtained in constant focusing confirms the equivalence of this method with the linearized Vlasov-Poisson system evaluated in second order.
AN IMMERSED BOUNDARY METHOD FOR COMPLEX INCOMPRESSIBLE FLOWS
An immersed boundary method for time-dependant, three- dimensional, incompressible flows is presented in this paper. The incompressible Navier-Stokes equations are discretized using a low-diffusion flux splitting method for the inviscid fluxes and a second order central differenc...
NASA Technical Reports Server (NTRS)
Barker, R. E., Jr.
1986-01-01
The work includes an investigation of the applicability of nucleation theory to second and higher order thermodynamic transitions in the Ehrenfest sense, and a number of significant conclusions relevant to first order transitions as well. The underlying theoretical method consisted of expanding the Gibbs free energy in a Maclaurin or Taylor series, expressing the coefficients in terms of fundamental, determinable thermodynamic quantities, and interpreting the results. Work was performed on the existence and interpretation of an interfacial energy between phases in a second order transition, in addition to an investigation of the solid-liquid interfacial energy for various polymers. Extensive consideration was devoted to various aspects of a particular polymer, polyvinylidene fluoride (PVDF or PVF2), including an experimental investigation of the effects of an applied electric field on the morphology of melt crystallization and on the nucleation and growth of polarized domains.
Progress Towards a Cartesian Cut-Cell Method for Viscous Compressible Flow
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael J.
2011-01-01
The proposed paper reports advances in developing a method for high Reynolds number compressible viscous flow simulations using a Cartesian cut-cell method with embedded boundaries. This preliminary work focuses on the accuracy of the discretization near solid wall boundaries. A model problem is used to investigate the accuracy of various difference stencils for second derivatives and to guide development of the discretization of the viscous terms in the Navier-Stokes equations. Near walls, quadratic reconstruction in the wall-normal direction is used to mitigate mesh irregularity and yields smooth skin friction distributions along the body. Multigrid performance is demonstrated using second-order coarse grid operators combined with second-order restriction and prolongation operators. Preliminary verification and validation for the method is demonstrated using flat-plate and airfoil examples at compressible Mach numbers. Simulations of flow on laminar and turbulent flat plates show skin friction and velocity profiles compared with those from boundary-layer theory. Airfoil simulations are performed at laminar and turbulent Reynolds numbers with results compared to both other simulations and experimental data.
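The wall-normal quadratic reconstruction mentioned above can be illustrated with a simple one-sided fit: a quadratic through the no-slip wall value and two near-wall velocity samples yields the wall-normal velocity gradient and hence the skin friction. The sketch below (Python) is a minimal, hypothetical illustration of that idea; the actual cut-cell discretization in the paper handles irregular cells and is considerably more involved.

```python
import numpy as np

def wall_shear_quadratic(y1, u1, y2, u2, mu):
    """Wall shear stress from a one-sided quadratic reconstruction.

    Fits u(y) = a*y + b*y**2 through the no-slip value u(0) = 0 and two
    velocity samples (y1, u1), (y2, u2) along the wall normal, then returns
    tau_w = mu * du/dy at y = 0, i.e. tau_w = mu * a.
    """
    A = np.array([[y1, y1**2],
                  [y2, y2**2]], dtype=float)
    a, b = np.linalg.solve(A, np.array([u1, u2], dtype=float))
    return mu * a

# Toy check with a known profile u(y) = 3*y - 4*y**2, so tau_w = 3*mu.
print(wall_shear_quadratic(0.1, 3*0.1 - 4*0.01, 0.2, 3*0.2 - 4*0.04, mu=1.0))  # 3.0
```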
Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
Grid convergence of several high order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high order methods, referred to as the ACM and wavelet filter schemes, are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered do not possess known analytical or experimental data. Fine grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine scale structures and are stiff in the sense that, even though the unsteadiness of the flows is rapidly developing, extreme grid refinement and time step restrictions are needed to resolve all the flow scales as well as the chemical reaction scales.
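When, as here, no analytical or experimental solution is available, the observed order of accuracy is typically estimated from solutions on successively refined grids, with the finest grid or a reference scheme serving as the benchmark. The sketch below (Python) shows the standard three-grid estimate; it is a generic utility for such grid convergence studies, not code from the report.

```python
import numpy as np

def observed_order(coarse, medium, fine, refinement_ratio=2.0):
    """Estimate the observed order of accuracy from three nested solutions.

    coarse, medium, fine : solutions sampled at common points (e.g. restricted
                           to the coarse grid), with the finest as reference.
    Uses the standard three-grid estimate
        p = log(||u_c - u_m|| / ||u_m - u_f||) / log(r).
    """
    e_cm = np.linalg.norm(np.asarray(coarse) - np.asarray(medium))
    e_mf = np.linalg.norm(np.asarray(medium) - np.asarray(fine))
    return np.log(e_cm / e_mf) / np.log(refinement_ratio)

# Toy check: a scheme with error ~ C*h**2 sampled on h, h/2, h/4 grids.
x = np.linspace(0.0, 1.0, 11)
exact = np.sin(np.pi * x)
u_c, u_m, u_f = exact + 0.04, exact + 0.01, exact + 0.0025
print(observed_order(u_c, u_m, u_f))   # ~2.0
```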
On simulating flow with multiple time scales using a method of averages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, L.G.
1997-12-31
The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
A fourth-order Cartesian grid embedded boundary method for Poisson's equation
Devendran, Dharshi; Graves, Daniel; Johansen, Hans; ...
2017-05-08
In this paper, we present a fourth-order algorithm to solve Poisson's equation in two and three dimensions. We use a Cartesian grid, embedded boundary method to resolve complex boundaries. We use a weighted least squares algorithm to solve for our stencils. We use convergence tests to demonstrate accuracy and we show the eigenvalues of the operator to demonstrate stability. We compare accuracy and performance with an established second-order algorithm. We also discuss in depth strategies for retaining higher-order accuracy in the presence of nonsmooth geometries.
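The weighted least squares step mentioned above can be illustrated generically: fit a polynomial to the neighbouring cell values with a chosen weighting, and read the Laplacian stencil off the weighted pseudo-inverse. The sketch below (Python) is an illustration of that generic idea under assumed names, weights and a quadratic basis; the paper's embedded-boundary stencils additionally handle cut cells and a fourth-order basis.

```python
import numpy as np

def laplacian_stencil_wls(offsets, weights):
    """Laplacian stencil via a weighted least-squares polynomial fit.

    offsets : (N, 2) neighbour positions (dx, dy) relative to the evaluation
              point (include the point itself at (0, 0)).
    weights : (N,) positive weights, e.g. inverse-distance based.
    Returns a length-N stencil s with sum(s * u_neighbours) ~ Laplacian(u).
    """
    x, y = offsets[:, 0], offsets[:, 1]
    # Quadratic polynomial basis: 1, x, y, x^2, x*y, y^2
    V = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    W = np.diag(weights)
    # Weighted pseudo-inverse maps nodal values to polynomial coefficients.
    P = np.linalg.pinv(W @ V) @ W
    # Laplacian of the fitted polynomial at the origin: 2*c[3] + 2*c[5].
    return 2.0 * P[3] + 2.0 * P[5]

# Toy check on a 3x3 patch of a uniform grid with spacing h = 1.
offs = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
s = laplacian_stencil_wls(offs, np.ones(len(offs)))
u = offs[:, 0] ** 2 + offs[:, 1] ** 2        # u = x^2 + y^2, Laplacian = 4
print(np.dot(s, u))                          # ~4.0
```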
The time-resolved photoelectron spectrum of toluene using a perturbation theory approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richings, Gareth W.; Worth, Graham A., E-mail: g.a.worth@bham.ac.uk
A theoretical study of the intra-molecular vibrational-energy redistribution of toluene using time-resolved photo-electron spectra calculated using nuclear quantum dynamics and a simple, two-mode model is presented. Calculations have been carried out using the multi-configuration time-dependent Hartree method, using three levels of approximation for the calculation of the spectra. The first is a full quantum dynamics simulation with a discretisation of the continuum wavefunction of the ejected electron, whilst the second uses first-order perturbation theory to calculate the wavefunction of the ion. Both methods rely on the explicit inclusion of both the pump and probe laser pulses. The third method includes only the pump pulse and generates the photo-electron spectrum by projection of the pumped wavepacket onto the ion potential energy surface, followed by evaluation of the Fourier transform of the autocorrelation function of the subsequently propagated wavepacket. The calculations performed have been used to study the periodic population flow between the 6a and 10b16b modes in the S_1 excited state, and compared to recent experimental data. We obtain results in excellent agreement with the experiment and note the efficiency of the perturbation method.
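The final step of the third (projection) method described above, turning a wavepacket autocorrelation function into a spectrum, is sketched below in Python. The damping window, units and toy autocorrelation are assumptions for illustration; they stand in for the MCTDH output actually used in the study.

```python
import numpy as np

def spectrum_from_autocorrelation(C, dt, damping=100.0):
    """Band envelope from a wavepacket autocorrelation function.

    C  : complex autocorrelation samples C(t_k) = <psi(0)|psi(t_k)>
    dt : time step between samples (same units as damping)
    A Gaussian damping window mimics finite resolution; this is the generic
    Fourier-transform step, not the exact post-processing used in the paper.
    """
    t = np.arange(len(C)) * dt
    window = np.exp(-(t / damping) ** 2)
    sigma = np.fft.fftshift(np.fft.fft(C * window))
    omega = np.fft.fftshift(np.fft.fftfreq(len(C), d=dt)) * 2.0 * np.pi
    return omega, np.abs(sigma)

# Toy usage: two beat frequencies in C(t) give two lines in the spectrum.
dt = 0.5
t = np.arange(4096) * dt
C = 0.7 * np.exp(-1j * 0.30 * t) + 0.3 * np.exp(-1j * 0.45 * t)
w, I = spectrum_from_autocorrelation(C, dt)
print(w[np.argmax(I)])   # ~ -0.30: the dominant line (sign set by FT convention)
```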
PDF methods for combustion in high-speed turbulent flows
NASA Technical Reports Server (NTRS)
Pope, Stephen B.
1995-01-01
This report describes the research performed during the second year of this three-year project. The ultimate objective of the project is to extend the applicability of probability density function (pdf) methods from incompressible to compressible turbulent reactive flows. As described in subsequent sections, progress has been made on: (1) formulation and modelling of pdf equations for compressible turbulence, in both homogeneous and inhomogeneous inert flows; and (2) implementation of the compressible model in various flow configurations, namely decaying isotropic turbulence, homogeneous shear flow and a plane mixing layer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Hyun-Ju; Chung, Chin-Wook, E-mail: joykang@hanyang.ac.kr; Choi, Hyeok
A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) from single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and introduces less noise. This method provides the second derivative at a given point as the weighted average of second order central difference derivatives calculated at different voltage intervals, weighting each by the square of the interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.
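Because the abstract states the weighting rule explicitly (central differences at intervals h, 2h, ..., each weighted by the square of its interval), a minimal Python sketch is given below. The function and variable names, the maximum interval and the treatment of the array edges are illustrative assumptions.

```python
import numpy as np

def d2I_dV2_mcdm(I, h, max_interval=4):
    """Second derivative of probe current by the modified central difference
    method (MCDM): average the central-difference second derivatives taken at
    voltage intervals h, 2h, ..., M*h, weighting each by the square of its
    interval, so that curvature is kept while noise is suppressed."""
    I = np.asarray(I, dtype=float)
    n = len(I)
    d2 = np.full(n, np.nan)              # edges left undefined in this sketch
    for i in range(max_interval, n - max_interval):
        num, den = 0.0, 0.0
        for m in range(1, max_interval + 1):
            dV = m * h
            D = (I[i + m] - 2.0 * I[i] + I[i - m]) / dV**2  # central difference
            num += dV**2 * D                                # weight = interval^2
            den += dV**2
        d2[i] = num / den
    return d2

# The EEDF then follows from the Druyvesteyn relation,
# f(E) proportional to sqrt(V) * d2I/dV2, with E = e*V (constants omitted).
```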
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, M; Haga, A; Hanaoka, S
2016-06-15
Purpose: The purpose of this study is to propose a new concept of four-dimensional (4D) cone-beam CT (CBCT) reconstruction for non-periodic organ motion using the Time-ordered Chain Graph Model (TCGM), and to compare the reconstructed results with the previously proposed methods, total variation-based compressed sensing (TVCS) and prior-image constrained compressed sensing (PICCS). Methods: The CBCT reconstruction method introduced in this study consists of maximum a posteriori (MAP) iterative reconstruction combined with a regularization term derived from the TCGM concept, which includes a constraint coming from the images of neighbouring time-phases. The time-ordered image series were concurrently reconstructed in the MAP iterative reconstruction framework. The angular range of projections for each time-phase was 90 degrees for TCGM and PICCS, and 200 degrees for TVCS. Two kinds of projection data, elliptic-cylindrical digital phantom data and two clinical patients' data sets, were used for reconstruction. The digital phantom contained an air sphere moving 3 cm along the longitudinal axis, and the temporal resolution of each method was evaluated by measuring the penumbral width of the reconstructed moving air sphere. The clinical feasibility of non-periodic time-ordered 4D CBCT reconstruction was also examined using projection data of prostate cancer patients. Results: The results for the reconstructed digital phantom show that the penumbral width of TCGM was the narrowest; PICCS and TCGM were 10.6% and 17.4% narrower than TVCS, respectively. This suggests that TCGM has better temporal resolution than the others. Patients' CBCT projection data were also reconstructed, and all three reconstructed results showed motion of rectal gas and stool. The TCGM result provided visually clearer and less blurred images. Conclusion: The present study demonstrates that the new concept for 4D CBCT reconstruction, TCGM, combined with a MAP iterative reconstruction framework, enables time-ordered image reconstruction with a narrower time-window.
NASA Astrophysics Data System (ADS)
Freitag, Matthew
Polydiacetylenes (PDAs) are 1-dimensional polymers with a carbon-rich ene-yne backbone. Materials scientists are interested in PDAs because they are semiconductors, they have large multiphoton absorptions, and they can be prepared as ordered assemblies in the solid-state. Polydiacetylenes are formed from the topochemical 1,4-polymerization of a monomer unit made up of at least two sequential alkynes. This work describes attempts to form novel polydiacetylenes from several higher order polyyne monomers, as well as efforts to alter the morphology of known polydiacetylenes into thin films. The first project described here examined the formation of cocrystals of diiodohexatriyne with a bis(alkylnitrile) oxalamide host. Diiodohexatriyne undergoes 1,4-topochemical polymerization, with mild heating, to form poly(iodoethynyliododiacetylene), PIEDA. Polymerization was followed by extensive characterization through Raman spectroscopy, solid-state 13C MAS-NMR, and X-ray crystallography. This work represents the first ordered single-crystal to single-crystal 1,4-topochemical polymerization of a triyne, demonstrated through X-ray diffraction. The second project described efforts towards post-polymerization modification on PIEDA. Despite some success in model studies, isolated PIEDA was found to be too unstable to undergo controlled post-polymerization modification. The third project of this work described the demonstration of the formation of thin films of another PDA, polydiiododiacetylene (PIDA). Thin films of PIDA cocrystals could serve as components in solar cells or photovoltaic devices. Using lower concentration and allowing evaporation to occur in a fume hood, nanometer thick films were formed. However, thin films of PIDA cocrystals were too heterogeneous to be used within devices. The fourth project described here examined the preparation of cocrystals of bis(iodobutadiynyl)benzene monomer with several oxalamide hosts. The goal of this project is formation of conjugated ladder polydiacetylenes which have been theorized to have a lower band-gap than analogous linear polydiacetylenes. Cocrystals of monomer bis(iodobutadiynyl)benzene were formed with a variety of oxalamide hosts. Monomer cocrystals were heated at high temperatures and gave Raman signal consistent with polydiacetylene formation. Attempts to analyze heated cocrystals through single crystal X-ray diffraction have failed due to increased mosaicity. Other methods of inducing polymerization have been investigated but no ordered polymerization could be demonstrated. Halogen bonding has been demonstrated to be a reliable interaction for aligning these monomers. However, the polymerization and characterization of resultant polymer remains challenging due to the multiple reaction pathways of these materials.
NASA Astrophysics Data System (ADS)
Franco, J. M.; Rández, L.
The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (it is one of the lowest costs that we know in the literature) and they are optimal among the TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) in order to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.
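For orientation, the classical fourth-order Numerov scheme that serves as a point of comparison in the abstract is sketched below in Python for the linear oscillatory test problem y'' = -k(x) y. This is only the well-known baseline, not one of the new eighth-order explicit TSH methods constructed in the paper.

```python
import numpy as np

def numerov(k, y0, y1, h):
    """Classical Numerov integration of y'' = -k(x) * y on a uniform grid.

    k      : array of k(x_n) values on the grid
    y0, y1 : starting values y(x_0) and y(x_1)
    h      : grid spacing
    """
    n = len(k)
    y = np.empty(n)
    y[0], y[1] = y0, y1
    c = h * h / 12.0
    for i in range(1, n - 1):
        y[i + 1] = (2.0 * (1.0 - 5.0 * c * k[i]) * y[i]
                    - (1.0 + c * k[i - 1]) * y[i - 1]) / (1.0 + c * k[i + 1])
    return y

# Toy check: harmonic oscillator y'' = -y with exact solution sin(x).
h = 0.01
x = np.linspace(0.0, 10.0, 1001)
y = numerov(np.ones_like(x), np.sin(x[0]), np.sin(x[1]), h)
print(np.max(np.abs(y - np.sin(x))))   # small: Numerov is globally fourth order
```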
Communities of Transformation and Their Work Scaling STEM Reform
ERIC Educational Resources Information Center
Kezar, Adrianna; Gehrke, Sean
2015-01-01
This mixed-methods study examined four STEM communities (BioQUEST, Project Kaleidoscope, the POGIL Project, and SENCER) in order to better understand the roles of these communities in advancing the goals of scaling STEM education reform. The project explored three key questions: (1) How do members and leaders of communities of practice (CoPs1)…
Feasibility study of new energy projects on three-level indicator system
NASA Astrophysics Data System (ADS)
Zhan, Zhigang
2018-06-01
With the rapid development of the new energy industry, many new energy development projects are being carried out all over the world. To analyze the feasibility of such projects, we build a feasibility assessment model for new energy projects based on abundant gathered data about progress in new energy projects. Twelve indicators are selected by principal component analysis (PCA). Then we construct a new three-level indicator system, in which the first level has 1 indicator, the second level has 5 indicators and the third level has 12 indicators. Moreover, we use the entropy weight method (EWM) to obtain the weight vector of the indicators in the third level and multivariate statistical analysis (MVA) to obtain the weight vector of the indicators in the second level. We use this evaluation model to evaluate the feasibility of new energy projects and to provide a reference for subsequent new energy investment. This could contribute to the world's low-carbon and green development by encouraging investment in sustainable new energy projects. We will introduce new variables and improve the weighting model in the future. We also conduct a sensitivity analysis of the model and discuss its strengths and weaknesses.
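The entropy weight method mentioned above is a standard, self-contained calculation, so a minimal Python sketch is included here. The toy score matrix and the assumption that all indicators are benefit-type (higher is better) are illustrative; the paper's 12 indicators and their normalisation are not reproduced.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method (EWM) for an (alternatives x indicators) matrix.

    X is assumed to contain non-negative, benefit-type indicator scores;
    cost-type indicators should be inverted beforehand.
    """
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    # Proportion of each alternative under each indicator.
    P = X / X.sum(axis=0, keepdims=True)
    # Shannon entropy per indicator (0*log(0) treated as 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logP).sum(axis=0) / np.log(n)
    d = 1.0 - E                      # degree of divergence
    return d / d.sum()               # entropy weights, summing to 1

# Toy usage: 4 projects scored on 3 third-level indicators.
scores = np.array([[0.9, 0.2, 0.5],
                   [0.7, 0.8, 0.4],
                   [0.6, 0.9, 0.6],
                   [0.8, 0.3, 0.5]])
print(entropy_weights(scores))       # indicators with more spread weigh more
```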
NASA Astrophysics Data System (ADS)
Saripalli, Ravi Kiran; Katturi, Naga Krishnakanth; Soma, Venugopal Rao; Bhat, H. L.; Elizabeth, Suja
2017-12-01
The linear, second order, and third order nonlinear optical properties of glucuronic acid γ-lactone single crystals were investigated. The optic axes and principal dielectric axes were identified through optical conoscopy, and the principal refractive indices were obtained using the Brewster's angle method. Conic sections were observed, which are attributed to spontaneous non-collinear phase matching. The direction of collinear phase matching was determined, and the d_eff evaluated in this direction was 0.71 pm/V. Open and closed aperture Z-scan measurements with femtosecond pulses revealed high third order nonlinearity in the form of self-defocusing, two-photon absorption, as well as saturable absorption.
NASA Astrophysics Data System (ADS)
Stuchi, Teresa; Cardozo Dias, P.
2013-05-01
In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically that Hooke's method is a second order symplectic area-preserving algorithm, and that the method of curvature is a first order algorithm without special features; we then integrate the Hamiltonian equations. Integration by the method of curvature can also be done by exploring geometric properties of curves. We compare three methods: Hooke's method, the method of curvature and a first order method. A fourth order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton's drawing.
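A small numerical illustration of the distinction drawn above: an impulse-and-straight-line (kick-drift-kick) construction is symplectic, so its energy error stays bounded, whereas a generic first-order non-symplectic scheme drifts. The Python sketch below uses a constant-magnitude central attraction, as in Newton's drawing. Identifying the kick-drift-kick composition with Hooke's polygonal construction, and using explicit Euler as the generic first-order contrast (it is not the method of curvature itself), are illustrative assumptions.

```python
import numpy as np

def accel(r, g=1.0):
    """Constant-magnitude attraction toward the origin."""
    return -g * r / np.linalg.norm(r)

def kick_drift_kick(r, v, dt):
    """Impulse toward the centre, straight-line motion, impulse again:
    a second-order, area-preserving (symplectic) step."""
    v = v + 0.5 * dt * accel(r)
    r = r + dt * v
    v = v + 0.5 * dt * accel(r)
    return r, v

def explicit_euler(r, v, dt):
    """A generic first-order, non-symplectic step, for contrast only."""
    return r + dt * v, v + dt * accel(r)

def energy(r, v, g=1.0):
    return 0.5 * np.dot(v, v) + g * np.linalg.norm(r)   # potential U(r) = g*|r|

r_h, v_h = np.array([1.0, 0.0]), np.array([0.0, 0.8])
r_e, v_e = r_h.copy(), v_h.copy()
e0, dt = energy(r_h, v_h), 0.01
for _ in range(20000):
    r_h, v_h = kick_drift_kick(r_h, v_h, dt)
    r_e, v_e = explicit_euler(r_e, v_e, dt)
print(energy(r_h, v_h) - e0)   # stays small and bounded (symplectic)
print(energy(r_e, v_e) - e0)   # secular drift away from zero (non-symplectic)
```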