Science.gov

Sample records for numerical differencing analyzer

  1. SINDA, Systems Improved Numerical Differencing Analyzer

    NASA Technical Reports Server (NTRS)

    Fink, L. C.; Pan, H. M. Y.; Ishimoto, T.

    1972-01-01

    A computer program has been written to analyze groups of 100-node areas and then provide for the summation of any number of 100-node areas to obtain a temperature profile. The SINDA program options offer the user a variety of methods for the solution of thermal analog models presented in network format.

  2. CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers

    NASA Technical Reports Server (NTRS)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1970-01-01

    The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.

  3. SINDA'85/FLUINT - SYSTEMS IMPROVED NUMERICAL DIFFERENCING ANALYZER AND FLUID INTEGRATOR (CONVEX VERSION)

    NASA Technical Reports Server (NTRS)

    Cullimore, B.

    1994-01-01

    SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single-phase vapor, single-phase liquid, or two-phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four-step process. First, the user's model is run through the preprocessor, which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. Third, the user's code is linked with the processor library. Finally, the processor is executed. SINDA'85/FLUINT capacities include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels; SINDA'85/FLUINT can also model two-phase flow.
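
    The lumped-parameter, finite-difference formulation that SINDA automates can be illustrated in a few lines. The sketch below (in Python rather than SINDA's FORTRAN, with a hypothetical three-node network) advances node temperatures by explicit time stepping of dT_i/dt = [sum_j G_ij (T_j - T_i) + Q_i] / C_i:

      import numpy as np

      def explicit_step(T, C, G, Q, dt):
          """Advance node temperatures one explicit time step.

          T: (n,) node temperatures; C: (n,) thermal capacitances (J/K);
          G: (n, n) symmetric conductor matrix (W/K); Q: (n,) heat loads (W).
          """
          # Net heat flow into each node from all conductors plus sources.
          dTdt = (G @ T - G.sum(axis=1) * T + Q) / C
          return T + dt * dTdt

      # Hypothetical three-node chain: node 0 heated, node 2 held cold by a
      # very large capacitance acting as a boundary node.
      T = np.array([300.0, 300.0, 250.0])
      C = np.array([10.0, 10.0, 1e9])
      G = np.zeros((3, 3))
      G[0, 1] = G[1, 0] = 0.5
      G[1, 2] = G[2, 1] = 0.5
      Q = np.array([5.0, 0.0, 0.0])

      for _ in range(10000):
          T = explicit_step(T, C, G, Q, dt=0.1)
      print(T)   # approaches the steady profile [270, 260, 250]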

  4. Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G

    NASA Technical Reports Server (NTRS)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1972-01-01

    A new and versatile method has been developed to supplement or replace use of the original CINDA thermal analyzer program in order to take advantage of the improved systems software and machine speeds of third-generation computers. The CINDA-3G program options offer a variety of methods for the solution of thermal analog models presented in network format.

  5. Numerical methods for analyzing electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Lee, S. W.; Lo, Y. T.; Chuang, S. L.; Lee, C. S.

    1985-01-01

    Numerical methods to analyze electromagnetic scattering are presented. The dispersions and attenuations of the normal modes in a circular waveguide coated with lossy material were completely analyzed. The radar cross section (RCS) from a circular waveguide coated with lossy material was calculated. The following is observed: (1) the interior irradiation contributes to the RCS much more than does the rim diffraction; (2) at low frequency, the RCS from the circular waveguide terminated by a perfect electric conductor (PEC) can be reduced by more than 13 dB with a coating thickness of less than 1% of the radius, using the best lossy material available, in a cylinder six radii long; (3) at high frequency, a modal separation between the highly attenuated and the lowly attenuated modes is evident if the coating material is too lossy; however, a large RCS reduction can be achieved for a small incident angle with a thin layer of coating. It is found that the waveguide coated with a lossy magnetic material can be used as a substitute for a corrugated waveguide to produce circularly polarized radiation.

  6. Numerical Procedures for Analyzing Dynamical Processes.

    DTIC Science & Technology

    1992-02-29

    ...different in nature and can be called dynamic, in that information about the dynamics ... of the third coordinate of the numerically calculated solution. ... recover the matrix A by changing coordinates back to the original basis. ... The points x_i are points on the attractor which are not ... For example, if we rotate the coordinate axes by 45 degrees ... the attractor contained within a small distance of x_ref. ... In this notation, x_i and y_i are consecutive ...

  7. Numerical methods for analyzing electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Lee, S. W.; Lo, Y. T.; Chuang, S. L.; Lee, C. S.

    1985-01-01

    Attenuation properties of the normal modes in an overmoded waveguide coated with a lossy material were analyzed. It is found that the low-order modes can be significantly attenuated even with a thin layer of coating if the coating material is not too lossy. A thinner layer of coating is required for large attenuation of the low-order modes if the coating material is magnetic rather than dielectric. The Radar Cross Section (RCS) from an uncoated circular guide terminated by a perfect electric conductor was calculated and compared with available experimental data. It is confirmed that the interior irradiation contributes to the RCS. The equivalent-current method based on the geometrical theory of diffraction (GTD) was chosen for the calculation of the contribution from the rim diffraction. Planned schemes for experiments on the RCS reduction from a coated circular guide terminated by a PEC are included. The waveguide coated with a lossy magnetic material is suggested as a substitute for the corrugated waveguide.

  8. Progress in multi-dimensional upwind differencing

    NASA Technical Reports Server (NTRS)

    Vanleer, Bram

    1992-01-01

    Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
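
    As a concrete reference point for the schemes reviewed above, here is a minimal sketch of the first-order upwind scheme for the one-dimensional convection equation u_t + a u_x = 0; the grid, wave speed, and initial pulse are illustrative assumptions:

      import numpy as np

      a, dx, dt = 1.0, 0.01, 0.005      # wave speed, grid spacing, time step
      nu = a * dt / dx                  # CFL number; stability needs |nu| <= 1

      x = np.arange(0.0, 1.0, dx)
      u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse

      for _ in range(100):
          # For a > 0 the wind blows from the left: difference backward in x.
          u[1:] -= nu * (u[1:] - u[:-1])

      # After t = 0.5 the pulse has advected 0.5 to the right, smeared by the
      # scheme's numerical diffusion, the same smearing the multi-dimensional
      # generalizations above try not to aggravate across cell faces.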

  9. Non-oscillatory central differencing for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Nessyahu, Haim; Tadmor, Eitan

    1988-01-01

    Many of the recently developed high resolution schemes for hyperbolic conservation laws are based on upwind differencing. The building block for these schemes is the averaging of an appropriate Godunov solver; its time consuming part involves the field-by-field decomposition which is required in order to identify the direction of the wind. Instead, the use of the more robust Lax-Friedrichs (LxF) solver is proposed. The main advantage is simplicity: no Riemann problems are solved and hence field-by-field decompositions are avoided. The main disadvantage is the excessive numerical viscosity typical to the LxF solver. This is compensated for by using high-resolution MUSCL-type interpolants. Numerical experiments show that the quality of results obtained by such convenient central differencing is comparable with those of the upwind schemes.
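
    The Lax-Friedrichs building block can be sketched in a few lines for a scalar conservation law u_t + f(u)_x = 0; Burgers' flux, the grid, and the initial data are assumptions here, and the Nessyahu-Tadmor scheme itself adds staggering and MUSCL-type interpolants on top of this first-order step:

      import numpy as np

      dx, dt = 0.01, 0.004
      x = np.arange(-1.0, 1.0, dx)
      u = np.where(x < 0.0, 1.0, 0.0)   # right-moving shock

      def f(u):
          return 0.5 * u * u            # Burgers' flux

      for _ in range(100):
          up = np.roll(u, -1)           # u[i+1] (periodic wrap for simplicity)
          um = np.roll(u, 1)            # u[i-1]
          # No Riemann problem is solved: just neighbor averaging plus a
          # centered flux difference.
          u = 0.5 * (up + um) - 0.5 * (dt / dx) * (f(up) - f(um))

      # The shock travels at the Rankine-Hugoniot speed 1/2 but is smeared by
      # the excessive numerical viscosity that MUSCL-type interpolants then
      # compensate for.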

  10. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.
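
    The gain from differencing before compression can be sketched in a few lines; the synthetic images and the use of zlib as the "conventional compression algorithm" are assumptions for illustration only:

      import numpy as np
      import zlib

      rng = np.random.default_rng(0)
      reference = rng.integers(0, 256, (512, 512), dtype=np.uint8)
      subject = reference.copy()
      # A small localized change, clipped to stay within 8-bit range.
      region = subject[200:220, 300:340].astype(np.int16) + 40
      subject[200:220, 300:340] = np.clip(region, 0, 255).astype(np.uint8)

      # Differencing a registered subject against its reference concentrates
      # the residual near zero, which an entropy coder compresses far better.
      diff = subject.astype(np.int16) - reference.astype(np.int16)

      raw = zlib.compress(subject.tobytes(), 9)
      differenced = zlib.compress(diff.tobytes(), 9)
      print(len(raw), len(differenced))   # the differenced image is far smaller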

  11. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  12. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
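
    The robust regression ingredient of such a method can be sketched generically; the Huber weights, the iteratively reweighted least-squares (IRLS) loop, and the linear error-growth model below are illustrative assumptions, not the authors' exact formulation:

      import numpy as np

      def irls_huber(A, y, k=1.345, iters=20):
          """Robust solve of y ~ A w using Huber weights via IRLS."""
          w = np.linalg.lstsq(A, y, rcond=None)[0]
          for _ in range(iters):
              r = y - A @ w
              s = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
              u = np.abs(r) / max(s, 1e-12)
              weights = np.where(u <= k, 1.0, k / u)            # Huber weights
              sw = np.sqrt(weights)
              w = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
          return w

      # Linear error-growth fit sigma(t) = w0 + w1*t from differenced orbits,
      # contaminated by gross outliers that plain least squares would chase.
      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 5.0, 100)
      sigma = 0.2 + 0.8 * t + 0.05 * rng.normal(size=t.size)
      sigma[::17] += 10.0                                       # outliers
      A = np.column_stack([np.ones_like(t), t])
      print(irls_huber(A, sigma))       # close to [0.2, 0.8], no pre-filtering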

  13. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three dimensional model, which three dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  14. Ice Sheet Change Detection by Satellite Image Differencing

    NASA Technical Reports Server (NTRS)

    Bindschadler, Robert A.; Scambos, Ted A.; Choi, Hyeungu; Haran, Terry M.

    2010-01-01

    Differencing of digital satellite image pairs highlights subtle changes in near-identical scenes of Earth surfaces. Using the mathematical relationships relevant to photoclinometry, we examine the effectiveness of this method for the study of localized ice sheet surface topography changes using numerical experiments. We then test these results by differencing images of several regions in West Antarctica, including some where changes have previously been identified in altimeter profiles. The technique works well with coregistered images having low noise, high radiometric sensitivity, and near-identical solar illumination geometry. Clouds and frosts detract from resolving surface features. The ETM+ sensor on Landsat-7, the ALI sensor on EO-1, and the MODIS sensor on the Aqua and Terra satellite platforms all have potential for detecting localized topographic changes such as shifting dunes, surface inflation and deflation features associated with sub-glacial lake fill-drain events, or grounding line changes. Availability and frequency of MODIS images favor this sensor for wide application, and using it, we demonstrate both qualitative identification of changes in topography and quantitative mapping of slope and elevation changes.

  15. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e., the observations are taken at time separations from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those of single video frames and are useful for interactive image exploitation due to a larger scene coverage.
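
    The core differencing step described above can be sketched as follows; the weight alpha and the mean-plus-k-sigma threshold rule are assumptions standing in for the authors' adaptive threshold:

      import numpy as np

      def change_mask(img_a, img_b, alpha=0.5, k=3.0):
          """img_a, img_b: registered float grayscale images, equal shape."""
          d_int = np.abs(img_a - img_b)         # intensity difference

          def grad_mag(img):
              gy, gx = np.gradient(img)
              return np.hypot(gx, gy)

          d_grad = np.abs(grad_mag(img_a) - grad_mag(img_b))
          d = alpha * d_int + (1.0 - alpha) * d_grad   # linear combination

          # Adaptive threshold: mean plus k standard deviations of the
          # combined difference image.
          return d > (d.mean() + k * d.std())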

  16. Performance of differenced range data types in Voyager navigation

    NASA Technical Reports Server (NTRS)

    Taylor, T. H.; Campbell, J. K.; Jacobson, R. A.; Moultrie, B.; Nichols, R. A., Jr.; Riedel, J. E.

    1982-01-01

    Voyager radio navigation made use of a differenced range data type for both Saturn encounters because of the low declination singularity of Doppler data. Nearly simultaneous two-way range from two-station baselines was explicitly differenced to produce this data type. Concurrently, a differential VLBI data type (DDOR), utilizing doubly differenced quasar-spacecraft delays, with potentially higher precision was demonstrated. Performance of these data types is investigated on the Jupiter-to-Saturn leg of Voyager 2. The statistics of performance are presented in terms of actual data noise comparisons and sample orbit estimates. Use of DDOR as a primary data type for navigation to Uranus is discussed.

  17. Performance of differenced range data types in Voyager navigation

    NASA Technical Reports Server (NTRS)

    Taylor, T. H.; Campbell, J. K.; Jacobson, R. A.; Moultrie, B.; Nichols, R. A., Jr.; Riedel, J. E.

    1982-01-01

    Voyager radio navigation made use of a differenced range data type for both Saturn encounters because of the low declination singularity of Doppler data. Nearly simultaneous two-way range from two-station baselines was explicitly differenced to produce this data type. Concurrently, a differential VLBI data type (DDOR), utilizing doubly differenced quasar-spacecraft delays, with potentially higher precision was demonstrated. Performance of these data types is investigated on the Jupiter-to-Saturn leg of Voyager 2. The statistics of performance are presented in terms of actual data noise comparisons and sample orbit estimates. Use of DDOR as a primary data type for navigation to Uranus is discussed.

  18. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.

  19. Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.

    PubMed

    Yuan, Lijun; Lu, Ya Yan

    2013-05-20

    Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.

  20. EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES

    PubMed Central

    Börgers, Christoph; Nectow, Alexander R.

    2013-01-01

    Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
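
    For a Hodgkin-Huxley-like gating equation dm/dt = (m_inf(V) - m)/tau_m(V), first-order ETD reduces to the exponential Euler update, which treats the linear part exactly and so tolerates time steps far larger than tau_m. A minimal sketch with illustrative values:

      import math

      def etd1_gate_step(m, m_inf, tau_m, dt):
          """Exact update of dm/dt = (m_inf - m)/tau_m over one step,
          holding m_inf and tau_m frozen at the current voltage."""
          return m_inf + (m - m_inf) * math.exp(-dt / tau_m)

      # Forward Euler with dt = 1 ms goes unstable once tau_m < dt/2;
      # the ETD update merely saturates toward m_inf.
      m, dt = 0.05, 1.0
      for _ in range(10):
          m = etd1_gate_step(m, m_inf=0.98, tau_m=0.1, dt=dt)
      print(m)   # ~0.98, no overshoot despite dt being 10x tau_m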

  1. A consistent spatial differencing scheme for the transonic full-potential equation in three dimensions

    NASA Technical Reports Server (NTRS)

    Thomas, S. D.; Holst, T. L.

    1985-01-01

    A full-potential steady transonic wing flow solver has been modified so that freestream density and residual are captured in regions of constant velocity. This numerically precise freestream consistency is obtained by slightly altering the differencing scheme without affecting the implicit solution algorithm. The changes chiefly affect the fifteen metrics per grid point, which are computed once and stored. With this new method, the outer boundary condition is captured accurately, and the smoothness of the solution is especially improved near regions of grid discontinuity.

  2. Upwind differencing and LU factorization for chemical non-equilibrium Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    By means of either the Roe or the Van Leer flux-splittings for inviscid terms, in conjunction with central differencing for viscous terms in the explicit operator and the Steger-Warming splitting and lower-upper approximate factorization for the implicit operator, the present, robust upwind method for solving the chemical nonequilibrium Navier-Stokes equations yields formulas for finite-volume discretization in general coordinates. Numerical tests in the illustrative cases of a hypersonic blunt body, a ramped duct, divergent nozzle flows, and shock wave/boundary layer interactions, establish the method's efficiency.

  3. Analyzing asteroid reflectance spectra with numerical tools based on scattering simulations

    NASA Astrophysics Data System (ADS)

    Penttilä, Antti; Väisänen, Timo; Markkanen, Johannes; Martikainen, Julia; Gritsevich, Maria; Muinonen, Karri

    2017-04-01

    We are developing a set of numerical tools that can be used in analyzing the reflectance spectra of granular materials such as the regolith surface of atmosphereless Solar system objects. Our goal is to be able to explain, with realistic numerical scattering models, the spectral features arising when materials are intimately mixed together. We include space-weathering-type effects in our simulations, i.e., locally mixing the host mineral with small inclusions of another material in small proportions. Our motivation for this study comes from the present lack of such tools. The current common practice is to apply a semi-physical approximate model such as some variation of the Hapke models [e.g., 1] or the Shkuratov model [2]. These models are expressed in closed form, so they are relatively fast to apply, and they are based on simplifications of radiative transfer theory. The problem is that the validity of the model is not always guaranteed, and the derived physical properties related to particle scattering can be unrealistic [3]. We base our numerical tool on a chain of scattering simulations. Scattering properties of small inclusions inside an absorbing host matrix can be derived using exact methods solving the Maxwell equations of the system. The next step, scattering by a single regolith grain, is solved using a geometrical optics method accounting for surface reflections, internal absorption, and possibly the internal diffuse scattering. The third step involves the radiative transfer simulations of these regolith grains in a macroscopic planar element. The chain can be continued with a shadowing simulation over the target surface elements, and finally by integrating the bidirectional reflectance distribution function over the object's shape. Most of the tools in the proposed chain already exist, and one practical task for us is to tie these together into an easy-to-use toolchain that can be publicly distributed. We plan to open the

  4. ASDA - Advanced Suit Design Analyzer computer program

    NASA Technical Reports Server (NTRS)

    Bue, Grant C.; Conger, Bruce C.; Iovine, John V.; Chang, Chi-Min

    1992-01-01

    An ASDA model developed to evaluate the heat and mass transfer characteristics of advanced pressurized suit design concepts for low pressure or vacuum planetary applications is presented. The model is based on a generalized 3-layer suit that uses the Systems Improved Numerical Differencing Analyzer '85 in conjunction with a 41-node FORTRAN routine. The latter simulates the transient heat transfer and respiratory processes of a human body in a suited environment. The user options for the suit encompass a liquid cooled garment, a removable jacket, a CO2/H2O permeable layer, and a phase change layer.

  5. Towards a category theory approach to analogy: Analyzing re-representation and acquisition of numerical knowledge.

    PubMed

    Navarrete, Jairo A; Dartnell, Pablo

    2017-08-01

    Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called "flexibility" whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena.

  6. Towards a category theory approach to analogy: Analyzing re-representation and acquisition of numerical knowledge

    PubMed Central

    2017-01-01

    Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called “flexibility” whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena. PMID:28841643

  7. Exact method for numerically analyzing a model of local denaturation in superhelically stressed DNA

    NASA Astrophysics Data System (ADS)

    Fye, Richard M.; Benham, Craig J.

    1999-03-01

    Local denaturation, the separation at specific sites of the two strands comprising the DNA double helix, is one of the most fundamental processes in biology, required to allow the base sequence to be read both in DNA transcription and in replication. In living organisms this process can be mediated by enzymes which regulate the amount of superhelical stress imposed on the DNA. We present a numerically exact technique for analyzing a model of denaturation in superhelically stressed DNA. This approach is capable of predicting the locations and extents of transition in circular superhelical DNA molecules of kilobase lengths and specified base pair sequences. It can also be used for closed loops of DNA which are typically found in vivo to be kilobases long. The analytic method consists of an integration over the DNA twist degrees of freedom followed by the introduction of auxiliary variables to decouple the remaining degrees of freedom, which allows the use of the transfer matrix method. The algorithm implementing our technique requires O(N²) operations and O(N) memory to analyze a DNA domain containing N base pairs. However, to analyze kilobase length DNA molecules it must be implemented in high precision floating point arithmetic. An accelerated algorithm is constructed by imposing an upper bound M on the number of base pairs that can simultaneously denature in a state. This accelerated algorithm requires O(MN) operations, and has an analytically bounded error. Sample calculations show that it achieves high accuracy (greater than 15 decimal digits) with relatively small values of M (M<0.05N) for kilobase length molecules under physiologically relevant conditions. Calculations are performed on the superhelical pBR322 DNA sequence to test the accuracy of the method. With no free parameters in the model, the locations and extents of local denaturation predicted by this analysis are in quantitatively precise agreement with in vitro experimental measurements. Calculations

  8. Purely numerical approach for analyzing flow to a well intercepting a vertical fracture

    SciTech Connect

    Narasimhan, T.N.; Palen, W.A.

    1979-03-01

    A numerical method, based on an Integral Finite Difference approach, is presented to investigate wells intercepting fractures in general and vertical fractures in particular. Such features as finite conductivity, wellbore storage, damage, and fracture deformability and its influence on permeability are easily handled. The advantage of the numerical approach is that it is based on fewer assumptions than analytic solutions and hence has greater generality. Illustrative examples are given to validate the method against known solutions. New results are presented to demonstrate the applicability of the method to problems not apparently considered in the literature so far.

  9. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  10. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.; ,

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  11. Method and apparatus for rate integration supplement for attitude referencing with quaternion differencing

    NASA Technical Reports Server (NTRS)

    Rodden, John James (Inventor); Price, Xenophon (Inventor); Carrou, Stephane (Inventor); Stevens, Homer Darling (Inventor)

    2002-01-01

    A control system for providing attitude control in spacecraft. The control system comprises a primary attitude reference system, a secondary attitude reference system, and a hyper-complex number differencing system. The hyper-complex number differencing system is connectable to the primary attitude reference system and the secondary attitude reference system.
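
    The quaternion (hyper-complex number) differencing at the heart of such a system amounts to multiplying one attitude quaternion by the conjugate of the other. A minimal sketch, assuming a scalar-last convention (the record does not specify one):

      import numpy as np

      def quat_conj(q):
          """Conjugate of q = [x, y, z, w] (scalar-last)."""
          return np.array([-q[0], -q[1], -q[2], q[3]])

      def quat_mul(a, b):
          """Hamilton product a*b, scalar-last components."""
          ax, ay, az, aw = a
          bx, by, bz, bw = b
          return np.array([
              aw*bx + bw*ax + ay*bz - az*by,
              aw*by + bw*ay + az*bx - ax*bz,
              aw*bz + bw*az + ax*by - ay*bx,
              aw*bw - ax*bx - ay*by - az*bz,
          ])

      def attitude_difference(q_primary, q_secondary):
          """Small rotation taking the secondary attitude to the primary."""
          return quat_mul(q_primary, quat_conj(q_secondary))

      # Two nearly aligned attitudes: the difference quaternion's vector part
      # is about half the error angle about each axis.
      q1 = np.array([0.0, 0.0, 0.0, 1.0])
      q2 = np.array([0.0, 0.0, np.sin(0.005), np.cos(0.005)])  # 0.01 rad about z
      print(attitude_difference(q1, q2))   # vector part ~ [0, 0, -0.005]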

  12. Efficient and stable exponential time differencing Runge-Kutta methods for phase field elastic bending energy models

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqiang; Ju, Lili; Du, Qiang

    2016-07-01

    The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.

  13. Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems

    PubMed Central

    Li, Zhining; Zhang, Yingtang; Yin, Gang

    2018-01-01

    The measurement error of the differencing (i.e., using two homogenous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system is estimated simultaneously. The calibrated system outputs along the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544

  14. Orbit determination performances using single- and double-differenced methods: SAC-C and KOMPSAT-2

    NASA Astrophysics Data System (ADS)

    Hwang, Yoola; Lee, Byoung-Sun; Kim, Haedong; Kim, Jaehoon

    2011-01-01

    In this paper, Global Positioning System-based (GPS) Orbit Determination (OD) for the KOrea-Multi-Purpose-SATellite (KOMPSAT)-2 using single- and double-differenced methods is studied. The requirement of KOMPSAT-2 orbit accuracy is to allow 1 m positioning error to generate 1-m panchromatic images. KOMPSAT-2 OD is computed using real on-board GPS data. However, the local time of the KOMPSAT-2 GPS receiver is not synchronized with the zero fractional seconds of the GPS time internally, and it continuously drifts according to the pseudorange epochs. In order to resolve this problem, an OD based on single-differenced GPS data from the KOMPSAT-2 uses the tagged time of the GPS receiver, and the accuracy of the OD result is assessed using the overlapping orbit solution between two adjacent days. The clock error of the GPS satellites in the KOMPSAT-2 single-differenced method is corrected using International GNSS Service (IGS) clock information at 5-min intervals. KOMPSAT-2 OD using both double- and single-differenced methods satisfies the requirement of 1-m accuracy in overlapping three dimensional orbit solutions. The results of the SAC-C OD compared with JPL’s POE (Precise Orbit Ephemeris) are also illustrated to demonstrate the implementation of the single- and double-differenced methods using a satellite that has independent orbit information available for validation.

  15. Solving the Sea-Level Equation in an Explicit Time Differencing Scheme

    NASA Astrophysics Data System (ADS)

    Klemann, V.; Hagedoorn, J. M.; Thomas, M.

    2016-12-01

    In preparation of coupling the solid-earth to an ice-sheet compartment in an earth-system model, the dependency of the initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. Lit: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure Appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int. 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x.

  16. Pump-probe differencing technique for cavity-enhanced, noise-canceling saturation laser spectroscopy.

    PubMed

    de Vine, Glenn; McClelland, David E; Gray, Malcolm B; Close, John D

    2005-05-15

    We present an experimental technique that permits mechanical-noise-free, cavity-enhanced frequency measurements of an atomic transition and its hyperfine structure. We employ the 532-nm frequency-doubled output from a Nd:YAG laser and an iodine vapor cell. The cell is placed in a folded ring cavity (FRC) with counterpropagating pump and probe beams. The FRC is locked with the Pound-Drever-Hall technique. Mechanical noise is rejected by differencing the pump and probe signals. In addition, this differenced error signal provides a sensitive measure of differential nonlinearity within the FRC.

  17. Intrinsic imperfection of self-differencing single-photon detectors harms the security of high-speed quantum cryptography systems

    NASA Astrophysics Data System (ADS)

    Jiang, Mu-Sheng; Sun, Shi-Hai; Tang, Guang-Zhao; Ma, Xiang-Chun; Li, Chun-Yan; Liang, Lin-Mei

    2013-12-01

    Thanks to the high-speed self-differencing single-photon detector (SD-SPD), the secret key rate of quantum key distribution (QKD), which can, in principle, offer unconditionally secure private communications between two users (Alice and Bob), can exceed 1 Mbit/s. However, the SD-SPD may contain loopholes, which can be exploited by an eavesdropper (Eve) to hack into the unconditional security of the high-speed QKD systems. In this paper, we analyze the fact that the SD-SPD can be remotely controlled by Eve in order to spy on full information without being discovered, then proof-of-principle experiments are demonstrated. Here, we point out that this loophole is introduced directly by the operating principle of the SD-SPD, thus, it cannot be removed, except for the fact that some active countermeasures are applied by the legitimate parties.

  18. A Simple Compression Scheme Based on ASCII Value Differencing

    NASA Astrophysics Data System (ADS)

    Tommy; Siregar, Rosyidah; Lubis, Imran; Marwan E, Andi; Mahmud H, Amir; Harahap, Mawaddah

    2018-04-01

    Each ASCII character has its own numeric code representation, and the codes of characters that commonly appear together in text messages are often equal or differ only slightly. These small differences can be substituted for the characters themselves, yielding a more compact message. This paper discusses exploiting the differences between the ASCII values in a message through a much simpler substitution, using a dynamically sized window: the differences among the ASCII values contained in the window form the basis for determining the bit substitution in the compressed file.
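
    A minimal sketch of the underlying delta idea, leaving out the paper's dynamically sized window and bit-level substitution scheme (which are not reproduced here):

      def delta_encode(text):
          """Store the first ASCII code in full, then only the differences."""
          codes = [ord(c) for c in text]
          return codes[0], [b - a for a, b in zip(codes, codes[1:])]

      def delta_decode(first, deltas):
          codes = [first]
          for d in deltas:
              codes.append(codes[-1] + d)
          return "".join(map(chr, codes))

      first, deltas = delta_encode("terrarium")
      print(deltas)   # small values like [-15, 13, 0, -17, ...], each needing
                      # fewer bits than a full 8-bit character code
      assert delta_decode(first, deltas) == "terrarium"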

  19. Validation of a Numerical Program for Analyzing Kinetic Energy Potential in the Bangka Strait, North Sulawesi, Indonesia

    NASA Astrophysics Data System (ADS)

    Rompas, P. T. D.; Taunaumang, H.; Sangari, F. J.

    2018-02-01

    The paper presents a validation of the numerical program that computes the distribution of marine current velocities in the Bangka strait and the kinetic energy potential in the form of distributions of available power per unit area. The numerical program uses the RANS model, with the vertical pressure distribution assumed to be hydrostatic. The 2D and 3D numerical results were compared with measurements observed at moments of low and high tide currents, and no significant differences were found between them. The kinetic energy potential in the form of available power per unit area in the Bangka strait is 0.97-2.2 kW/m² at low tide currents and 1.02-2.1 kW/m² at high tide currents. The results show the feasibility of installing marine current turbines for a power plant in the Bangka strait, North Sulawesi, Indonesia.

  20. A numerical wave-optical approach for the simulation of analyzer-based x-ray imaging

    NASA Astrophysics Data System (ADS)

    Bravin, A.; Mocella, V.; Coan, P.; Astolfo, A.; Ferrero, C.

    2007-04-01

    An advanced wave-optical approach for simulating a monochromator-analyzer set-up in Bragg geometry with high accuracy is presented. The polychromaticity of the incident wave on the monochromator is accounted for by using a distribution of incoherent point sources along the surface of the crystal. The resulting diffracted amplitude is modified by the sample and can be well represented by a scalar representation of the optical field where the limitations of the usual ‘weak object’ approximation are removed. The subsequent diffraction mechanism on the analyzer is described by the convolution of the incoming wave with the Green-Riemann function of the analyzer. The free space propagation up to the detector position is well reproduced by a classical Fresnel-Kirchhoff integral. The preliminary results of this innovative approach show an excellent agreement with experimental data.

  1. Numerical Technique for Analyzing Rotating Rake Mode Measurements in a Duct With Passive Treatment and Shear Flow

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sutliff, Daniel L.

    2007-01-01

    A technique is presented for the analysis of measured data obtained from a rotating microphone rake system. The system is designed to measure the interaction modes of ducted fans. A Fourier analysis of the data from the rotating system results in a set of circumferential mode levels at each radial location of a microphone inside the duct. Radial basis functions are then least-squares fit to this data to obtain the radial mode amplitudes. For ducts with soft walls and mean flow, the radial basis functions must be numerically computed. The linear companion matrix method is used to obtain both the eigenvalues of interest, without an initial guess, and the radial basis functions. The governing equations allow for the mean flow to have a boundary layer at the wall. In addition, a nonlinear least-squares method is used to adjust the wall impedance to best fit the data in an attempt to use the rotating system as an in-duct wall impedance measurement tool. Simulated and measured data are used to show the effects of wall impedance and mean flow on the computed results.
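
    The least-squares step described above can be sketched generically; the cosine basis below is a made-up stand-in, since for a soft-wall duct with shear flow the radial basis functions must be computed numerically, as the text explains:

      import numpy as np

      r = np.linspace(0.2, 1.0, 16)      # microphone radial stations
      # Hypothetical radial basis functions evaluated at the stations.
      basis = np.column_stack([np.cos(k * np.pi * r) for k in range(4)])

      # Synthetic circumferential mode levels with measurement noise.
      true_amps = np.array([1.0, 0.5, 0.0, 0.25])
      rng = np.random.default_rng(2)
      measured = basis @ true_amps + 0.01 * rng.normal(size=r.size)

      # Least-squares fit recovers the radial mode amplitudes.
      amps, *_ = np.linalg.lstsq(basis, measured, rcond=None)
      print(amps)                        # ~ [1.0, 0.5, 0.0, 0.25]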

  2. Steganography algorithm multi pixel value differencing (MPVD) to increase message capacity and data security

    NASA Astrophysics Data System (ADS)

    Rojali; Siahaan, Ida Sri Rejeki; Soewito, Benfano

    2017-08-01

    Steganography is the art and science of hiding secret messages so that their existence cannot be detected by human senses. Data concealment uses the Multi Pixel Value Differencing (MPVD) algorithm, exploiting the difference between pixel values. The development was carried out using six interval tables. The objective of the algorithm is to increase message capacity while maintaining data security.
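
    Classic single-pair pixel value differencing (PVD) embedding, which MPVD extends, can be sketched as follows; the interval table is the common choice {[0,7], [8,15], [16,31], [32,63], [64,127], [128,255]} and stands in for the six tables developed in the paper:

      RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

      def embed_pair(p1, p2, bits):
          """Embed the leading bits of `bits` ('0'/'1' string) in one pair."""
          d = abs(p2 - p1)
          lo, hi = next((l, h) for l, h in RANGES if l <= d <= h)
          n = (hi - lo + 1).bit_length() - 1     # capacity of this pair in bits
          b = int(bits[:n].ljust(n, "0"), 2)
          m = (lo + b) - d                       # required change in difference
          # Spread the change across the pair, preserving their order.
          if p2 >= p1:
              return p1 - m // 2, p2 + (m + 1) // 2, n
          return p1 + (m + 1) // 2, p2 - m // 2, n

      p1, p2, n = embed_pair(100, 120, "10110")
      print(p1, p2, n, abs(p2 - p1))   # new difference 27 = 16 + 0b1011, so it
                                       # encodes the first 4 secret bits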

  3. Assessment of erosion and deposition in steep mountain basins by differencing sequential digital terrain models

    NASA Astrophysics Data System (ADS)

    Cavalli, Marco; Goldin, Beatrice; Comiti, Francesco; Brardinoni, Francesco; Marchi, Lorenzo

    2017-08-01

    Digital elevation models (DEMs) built from repeated topographic surveys permit producing a DEM of Difference (DoD) that enables assessment of elevation variations and estimation of volumetric changes through time. In the framework of sediment transport studies, DEM differencing enables quantitative and spatially-distributed representation of erosion and deposition within the analyzed time window, at both the channel reach and the catchment scale. In this study, two high-resolution Digital Terrain Models (DTMs) derived from airborne LiDAR data (2 m resolution) acquired in 2005 and 2011 were used to characterize the topographic variations caused by sediment erosion, transport and deposition in two adjacent mountain basins (Gadria and Strimm, Vinschgau - Venosta valley, Eastern Alps, Italy). These catchments were chosen for their contrasting morphology and because they feature different types and intensity of sediment transfer processes. A method based on fuzzy logic, which takes into account spatially variable DTM uncertainty, was used to derive the DoD of the study area. Volumes of erosion and deposition calculated from the DoD were then compared with post-event field surveys to test the consistency of the two independent estimates. Results show an overall agreement between the estimates, with differences due to the intrinsic approximations of the two approaches. The consistency of the DoD with post-event estimates encourages the integration of these two methods, whose combined application may permit overcoming the intrinsic limitations of the two estimations. The comparison between the 2005 and 2011 DTMs allowed us to investigate the relationships between topographic changes and geomorphometric parameters expressing the role of topography on sediment erosion and deposition (i.e., slope and contributing area) and describing the morphology influenced by debris flows and fluvial processes (i.e., curvature). Erosion and deposition relations in the slope-area space display substantial
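
    The DoD computation itself reduces to a few lines; the synthetic grids and the single minimum level of detection below are crude stand-ins for the fuzzy, spatially variable DTM uncertainty treatment used in the study:

      import numpy as np

      rng = np.random.default_rng(3)
      dem_2005 = rng.normal(1500.0, 50.0, (100, 100))       # synthetic terrain
      dem_2011 = dem_2005 + rng.normal(0.0, 0.05, (100, 100))
      dem_2011[40:60, 40:60] -= 2.0                         # an eroded patch

      dod = dem_2011 - dem_2005      # negative = erosion, positive = deposition
      lod = 0.2                      # minimum level of detection (m)
      significant = np.abs(dod) > lod

      cell_area = 2.0 * 2.0          # 2 m DTM resolution
      erosion = dod[significant & (dod < 0)].sum() * cell_area
      deposition = dod[significant & (dod > 0)].sum() * cell_area
      print(erosion, deposition)     # signed volumes in cubic metres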

  4. RESULTS FROM KINEROS STREAM CHANNEL ELEMENTS MODEL OUTPUT THROUGH AGWA DIFFERENCING 1973 AND 1997 NALC LANDCOVER DATA

    EPA Science Inventory

    Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.

  5. Using classification and NDVI differencing methods for monitoring sparse vegetation coverage: a case study of saltcedar in Nevada, USA.

    USDA-ARS?s Scientific Manuscript database

    A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested. In the classification strategy, a p...

  6. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytic models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 × 10⁻¹² rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wide band and narrow band ΔVLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 × 10⁻¹² rad/sec.

  7. On the geodetic applications of simultaneous range-differencing to LAGEOS

    NASA Technical Reports Server (NTRS)

    Pavlis, E. C.

    1982-01-01

    The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network satellite pass configurations. Least-squares approximation methods using monomials and Chebyshev polynomials are compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observations show that for baseline length estimation the most useful data are those collected in a direction parallel to the baseline and at low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but further reduces the effects of model biases on the results as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.

  8. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes on a short time scale, using observations separated by a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples of non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This yields two "directed" change masks and differs from image differencing, where a single "undirected" change mask combines both label types into the one label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
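
    A minimal Python sketch of the directed-mask idea described above: a corner-strength map is computed for each co-registered frame, and labels are assigned where strong features appear or disappear. The Harris parameters and the threshold are illustrative assumptions, not the authors' settings.

        import cv2
        import numpy as np

        def directed_change_masks(prev_gray, curr_gray, thresh=1e-4):
            """Return boolean masks for 'new object' and 'vanished object' labels."""
            r_prev = cv2.cornerHarris(np.float32(prev_gray), 2, 3, 0.04)
            r_curr = cv2.cornerHarris(np.float32(curr_gray), 2, 3, 0.04)
            new_obj = (r_curr > thresh) & (r_prev <= thresh)   # feature only in current
            vanished = (r_prev > thresh) & (r_curr <= thresh)  # feature only in previous
            return new_obj, vanished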

  9. DIFFERENTIAL ANALYZER

    DOEpatents

    Sorensen, E.G.; Gordon, C.M.

    1959-02-10

    Improvements in analog computing machines of the class capable of evaluating differential equations, commonly termed differential analyzers, are described. In general form, the analyzer embodies a plurality of basic computer mechanisms for performing integration, multiplication, and addition, and means for directing the result of any one operation to another computer mechanism performing a further operation. In the device, numerical quantities are represented by the rotation of shafts, or the electrical equivalent of shafts.

  10. Continuous non-invasive blood glucose monitoring by spectral image differencing method

    NASA Astrophysics Data System (ADS)

    Huang, Hao; Liao, Ningfang; Cheng, Haobo; Liang, Jing

    2018-01-01

    Currently, implantable enzyme electrode sensors are the main method for continuous blood glucose monitoring. However, electrochemical reactions and the significant drift caused by bioelectricity in the body reduce the accuracy of the glucose measurements, so enzyme-based glucose sensors must be calibrated several times each day against finger-prick blood samples, which increases the patient's pain. In this paper, we propose a method for continuous non-invasive blood glucose monitoring by spectral image differencing in the near-infrared band. The method uses a high-precision CCD detector that switches filters within a very short period of time to obtain the spectral images. Morphological processing is then used to obtain the spectral image differences, and the dynamic change of blood glucose is reflected in the image difference data. Experiments showed that this method can be used to monitor blood glucose dynamically to a certain extent.

  11. Improved Spatial Differencing Scheme for 2-D DOA Estimation of Coherent Signals with Uniform Rectangular Arrays.

    PubMed

    Shi, Junpeng; Hu, Guoping; Sun, Fenggang; Zong, Binfeng; Wang, Xin

    2017-08-24

    This paper proposes an improved spatial differencing (ISD) scheme for two-dimensional direction of arrival (2-D DOA) estimation of coherent signals with uniform rectangular arrays (URAs). We first divide the URA into a number of row rectangular subarrays. Then, by extracting all the data information of each subarray, we perform the difference operation only on the auto-correlations, while the cross-correlations are kept unchanged. Using the reconstructed submatrices, both the forward only ISD (FO-ISD) and forward backward ISD (FB-ISD) methods are developed under the proposed scheme. Compared with existing spatial smoothing techniques, the proposed scheme can use more data information of the sample covariance matrix and also suppress the effect of additive noise more effectively. Simulation results show that both FO-ISD and FB-ISD improve the estimation performance significantly compared with the other methods, in both white and colored noise conditions.
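
    For background, below is a minimal Python sketch of the classical spatial-differencing operation that ISD refines: for a uniform linear subarray, differencing the covariance with its back-transformed conjugate cancels the contributions of uncorrelated signals and white noise, leaving the coherent components for subsequent smoothing. This is the textbook operation, not the authors' ISD construction.

        import numpy as np

        def spatial_difference(R):
            """Classical spatial differencing of a subarray covariance matrix R."""
            M = R.shape[0]
            J = np.fliplr(np.eye(M))      # exchange (anti-identity) matrix
            # Uncorrelated-source and white-noise terms satisfy J R* J = R and cancel;
            # terms involving coherent sources survive the difference.
            return R - J @ R.conj() @ J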

  12. Improved Spatial Differencing Scheme for 2-D DOA Estimation of Coherent Signals with Uniform Rectangular Arrays

    PubMed Central

    Hu, Guoping; Zong, Binfeng; Wang, Xin

    2017-01-01

    This paper proposes an improved spatial differencing (ISD) scheme for two-dimensional direction of arrival (2-D DOA) estimation of coherent signals with uniform rectangular arrays (URAs). We first divide the URA into a number of row rectangular subarrays. Then, by extracting all the data information of each subarray, we perform the difference operation only on the auto-correlations, while the cross-correlations are kept unchanged. Using the reconstructed submatrices, both the forward only ISD (FO-ISD) and forward backward ISD (FB-ISD) methods are developed under the proposed scheme. Compared with existing spatial smoothing techniques, the proposed scheme can use more data information of the sample covariance matrix and also suppress the effect of additive noise more effectively. Simulation results show that both FO-ISD and FB-ISD improve the estimation performance significantly compared with the other methods, in both white and colored noise conditions. PMID:28837115

  13. The study and realization of BDS un-differenced network-RTK based on raw observations

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Pengfei; Zhang, Rui; Lu, Cuixian; Liu, Jinhai; Lu, Xiaochun

    2017-06-01

    A BeiDou Navigation Satellite System (BDS) Un-Differenced (UD) Network Real Time Kinematic (URTK) positioning algorithm, based on raw observations, is developed in this study. Given an integer ambiguity datum, the UD integer ambiguities can be recovered from Double-Differenced (DD) integer ambiguities; the UD observation corrections can then be calculated and interpolated for the rover station to achieve fast positioning. As this URTK model uses raw observations instead of ionospheric-free combinations, it is applicable to both dual- and single-frequency users of the URTK service. The algorithm was validated with experimental BDS data collected at four regional stations from day of year 080 to 083 in 2016. The achieved results confirmed the high efficiency of the proposed URTK in providing rover users a rapid and precise positioning service compared to the standard NRTK. In our test, the BDS URTK can provide a positioning service with cm-level accuracy, i.e., 1 cm in the horizontal components and 2-3 cm in the vertical component. Within the regional network, the mean convergence time for the users to fix the UD ambiguities is 2.7 s for the dual-frequency observations and 6.3 s for the single-frequency observations after the DD ambiguity resolution. Furthermore, because URTK is realized under the UD processing mode, it is possible to integrate the global Precise Point Positioning (PPP) and the local NRTK into a seamless positioning service.

  14. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  15. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers. Volume 1, Equations and numerics

    SciTech Connect

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side, with proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.

  16. Analyzing the 2010-2011 La Niña signature in the tropical Pacific sea surface salinity using in situ data, SMOS observations, and a numerical simulation

    NASA Astrophysics Data System (ADS)

    Hasson, Audrey; Delcroix, Thierry; Boutin, Jacqueline; Dussin, Raphael; Ballabrera-Poy, Joaquim

    2014-06-01

    The tropical Pacific Ocean remained in a La Niña phase from mid-2010 to mid-2012. In this study, the 2010-2011 near-surface salinity signature of ENSO (El Niño-Southern Oscillation) is described and analyzed using a combination of numerical model output, in situ data, and SMOS satellite salinity products. Comparisons of all salinity products show good agreement between them, with an RMS error of 0.2-0.3 between the thermosalinograph (TSG) and SMOS data and between the TSG and model data. The last 6 months of 2010 are characterized by an unusually strong tripolar anomaly captured by the three salinity products in the western half of the tropical Pacific. A positive SSS anomaly sits north of 10°S (>0.5), a negative tilted anomaly lies between 10°S and 20°S, and a positive one south of 20°S. In 2011, the anomalies shift south and amplify up to 0.8, except for the one south of 20°S. Equatorial SSS changes are mainly the result of anomalous zonal advection, resulting in negative anomalies during El Niño (early 2010) and positive ones thereafter during La Niña. The mean seasonal and interannual poleward drift exports those anomalies toward the south in the southern hemisphere, resulting in the aforementioned tripolar anomaly. The vertical salinity flux at the bottom of the mixed layer tends to resist the surface salinity changes. The observed basin-scale La Niña SSS signal is then compared with the historical 1998-1999 La Niña event using both observations and modeling.

  17. An RGB colour image steganography scheme using overlapping block-based pixel-value differencing

    PubMed Central

    Pal, Arup Kumar

    2017-01-01

    This paper presents a steganographic scheme based on an RGB colour cover image. The secret message bits are embedded into each colour pixel sequentially by the pixel-value differencing (PVD) technique. PVD basically works on two consecutive non-overlapping components; as a result, the straightforward conventional PVD technique is not applicable for embedding the secret message bits into a colour pixel, since a colour pixel consists of three colour components, i.e. red, green and blue. Hence, in the proposed scheme, the three colour components are first arranged into two overlapping blocks, one combining the red and green components and the other combining the green and blue components. The PVD technique is then employed on each block independently to embed the secret data, and the two overlapping blocks are readjusted to obtain the modified three colour components. The notion of overlapping blocks improves the embedding capacity of the cover image. The scheme has been tested on a set of colour images and satisfactory results have been achieved in terms of embedding capacity and upholding acceptable visual quality of the stego-image. PMID:28484623
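
    A minimal Python sketch of the underlying PVD embedding step on a single pixel pair, using the common Wu-Tsai range table. The paper's contribution, the overlapping RG/GB colour blocks, is not reproduced here, and the boundary (0-255 overflow) check is omitted for brevity.

        import math

        RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

        def embed_pair(p1, p2, bits):
            """Embed the leading bits of 'bits' into the difference |p2 - p1|."""
            d = abs(p2 - p1)
            lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
            t = int(math.log2(hi - lo + 1))        # capacity of this pair in bits
            b = int(bits[:t], 2) if bits[:t] else 0
            m = (lo + b) - d                       # required change in the difference
            if p1 >= p2:                           # spread the change over both pixels
                p1, p2 = p1 + math.ceil(m / 2), p2 - math.floor(m / 2)
            else:
                p1, p2 = p1 - math.floor(m / 2), p2 + math.ceil(m / 2)
            return p1, p2, bits[t:]                # new pair and remaining payload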

  18. Effective image differencing with convolutional neural networks for real-time transient hunting

    NASA Astrophysics Data System (ADS)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

    Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
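
    A minimal PyTorch sketch of the single-network idea described above: the reference and science images enter as two channels, and the network emits a per-pixel transient score map. Layer sizes and names are illustrative assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class DiffNet(nn.Module):
            """Maps a (reference, science) image pair to a transient score map."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1),
                )

            def forward(self, ref, sci):
                # Stack the pair along the channel axis: (N, 2, H, W) -> (N, 1, H, W).
                return self.net(torch.cat([ref, sci], dim=1))

        # usage: scores = DiffNet()(ref, sci) with inputs of shape (N, 1, H, W)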

  19. Automated Topographic Change Detection via Dem Differencing at Large Scales Using The Arcticdem Database

    NASA Astrophysics Data System (ADS)

    Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.

    2016-12-01

    In the last decade, high resolution satellite imagery has become an increasingly accessible tool for geoscientists to quantify changes in the Arctic land surface due to geophysical, ecological and anthropogenic processes. However, the trade-off between spatial coverage and spatial-temporal resolution has limited detailed, process-level change detection over large (i.e. continental) scales. The ArcticDEM project utilized over 300,000 Worldview image pairs to produce a nearly 100% coverage elevation model (above 60°N), offering the first polar, high spatial resolution (2-8 m by region) dataset, often with multiple repeats in areas of particular interest to geoscientists. A dataset of this size (nearly 250 TB) offers endless new avenues of scientific inquiry, but quickly becomes unmanageable computationally and logistically for the computing resources available to the average scientist. Here we present TopoDiff, a framework for a generalized, automated workflow that requires minimal input from the end user about a study site, and utilizes cloud computing resources to provide a temporally sorted and differenced dataset, ready for geostatistical analysis. This hands-off approach allows the end user to focus on the science, without having to manage thousands of files, or petabytes of data. At the same time, TopoDiff provides a consistent and accurate workflow for image sorting, selection, and co-registration, enabling cross-comparisons between research projects.

  20. Kinematic behaviour of a large earthflow defined by surface displacement monitoring, DEM differencing, and ERT imaging

    NASA Astrophysics Data System (ADS)

    Prokešová, Roberta; Kardoš, Miroslav; Tábořík, Petr; Medveďová, Alžbeta; Stacke, Václav; Chudý, František

    2014-11-01

    Large earthflow-type landslides are destructive mass movement phenomena with highly unpredictable behaviour. Knowledge of earthflow kinematics is essential for understanding the mechanisms that control its movements. The present paper characterises the kinematic behaviour of a large earthflow near the village of Ľubietová in Central Slovakia over a period of 35 years following its most recent reactivation in 1977. For this purpose, multi-temporal spatial data acquired by point-based in-situ monitoring and optical remote sensing methods have been used. Quantitative data analyses including strain modelling and DEM differencing techniques have enabled us to: (i) calculate the annual landslide movement rates; (ii) detect the trend of surface displacements; (iii) characterise spatial variability of movement rates; (iv) measure changes in the surface topography on a decadal scale; and (v) define areas with distinct kinematic behaviour. The results also integrate the qualitative characteristics of surface topography, in particular the distribution of surface structures as defined by a high-resolution DEM, and the landslide subsurface structure, as revealed by 2D resistivity imaging. Then, the ground surface kinematics of the landslide is evaluated with respect to the specific conditions encountered in the study area including slope morphology, landslide subsurface structure, and local geological and hydrometeorological conditions. Finally, the broader implications of the presented research are discussed with particular focus on the role that strain-related structures play in landslide kinematic behaviour.

  1. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
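
    A toy Python illustration of the error mechanism described above: the truncation error of the same centered stencil scales with the square of the grid spacing, so it jumps by roughly a factor of four across a 2:1 refinement interface, and in time-dependent evolutions this mismatch acts as a source of artificial reflections. The example differentiates sin(x); it is not the authors' 3+1 system.

        import numpy as np

        f, df = np.sin, np.cos
        x = 1.0
        errs = []
        for h in (0.02, 0.01):                        # coarse / fine spacing, 2:1 ratio
            approx = (f(x + h) - f(x - h)) / (2 * h)  # centered first derivative
            errs.append(abs(approx - df(x)))
        print(errs[0] / errs[1])                      # ~4: error jump at the interface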

  2. Path length differencing and energy conservation of the S[sub N] Boltzmann/Spencer-Lewis equation

    SciTech Connect

    Filippone, W.L.; Monahan, S.P.

    It is shown that the S[sub N] Boltzmann/Spencer-Lewis equations conserve energy locally if and only if they satisfy particle balance and diamond differencing is used in path length. In contrast, the spatial differencing schemes have no bearing on the energy balance. Energy is conserved globally if it is conserved locally and the multigroup cross sections are energy conserving. Although the coupled electron-photon cross sections generated by CEPXS conserve particles and charge, they do not precisely conserve energy. It is demonstrated that these cross sections can be adjusted such that particles, charge, and energy are conserved. Finally, since a conventional negative flux fixup destroys energy balance when applied to path length, a modified fixup scheme that does not is presented.
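
    For reference, the diamond-difference closure in path length s relates the cell-average angular flux to the incoming and outgoing edge fluxes; this is the standard textbook relation, written here in LaTeX notation, not a result specific to the paper:

        \bar{\psi} = \tfrac{1}{2}\left(\psi_{s+1/2} + \psi_{s-1/2}\right)
        \qquad\Longrightarrow\qquad
        \psi_{s+1/2} = 2\bar{\psi} - \psi_{s-1/2}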

  3. Surface Deformation Associated with the 1983 Borah Peak Earthquake Measured from Digital Surface Model Differencing

    NASA Astrophysics Data System (ADS)

    Reitman, N. G.; Briggs, R.; Gold, R. D.; DuRoss, C. B.

    2015-12-01

    Post-earthquake, field-based assessments of surface displacement commonly underestimate offsets observed with remote sensing techniques (e.g., InSAR, image cross-correlation) because they fail to capture the total deformation field. Modern earthquakes are readily characterized by comparing pre- and post-event remote sensing data, but historical earthquakes often lack pre-event data. To overcome this challenge, we use historical aerial photographs to derive pre-event digital surface models (DSMs), which we compare to modern, post-event DSMs. Our case study focuses on resolving on- and off-fault deformation along the Lost River fault that accompanied the 1983 M6.9 Borah Peak, Idaho, normal-faulting earthquake. We use 343 aerial images from 1952-1966 and vertical control points selected from National Geodetic Survey benchmarks measured prior to 1983 to construct a pre-event point cloud (average ~ 0.25 pts/m2) and corresponding DSM. The post-event point cloud (average ~ 1 pt/m2) and corresponding DSM are derived from WorldView 1 and 2 scenes processed with NASA's Ames Stereo Pipeline. The point clouds and DSMs are coregistered using vertical control points, an iterative closest point algorithm, and a DSM coregistration algorithm. Preliminary results of differencing the coregistered DSMs reveal a signal spanning the surface rupture that is consistent with tectonic displacement. Ongoing work is focused on quantifying the significance of this signal and error analysis. We expect this technique to yield a more complete understanding of on- and off-fault deformation patterns associated with the Borah Peak earthquake along the Lost River fault and to help improve assessments of surface deformation for other historical ruptures.

  4. Advantages of 3D FEM numerical modeling over 2D, analyzed in a case study of transient thermal-hydraulic groundwater utilization

    NASA Astrophysics Data System (ADS)

    Fuchsluger, Martin; Götzl, Gregor

    2014-05-01

    flow has been realized. In addition, the effects of the basement of the building on the groundwater flow have been analyzed. The results of the 2D model show an underestimation of more than 10% of the performance of the groundwater utilization facility and a considerably smaller groundwater table drawdown compared to the 3D simulations. This is because 3D modeling can take into account (i) the heat distribution and storage in the adjacent layers, (ii) the climatic surface effect and (iii) vertical groundwater flow.

  5. A numerical study of the steady scalar convective diffusion equation for small viscosity

    NASA Technical Reports Server (NTRS)

    Giles, M. B.; Rose, M. E.

    1983-01-01

    A time-independent convection diffusion equation is studied by means of a compact finite difference scheme and numerical solutions are compared to the analytic inviscid solutions. The correct internal and external boundary layer behavior is observed, due to an inherent feature of the scheme which automatically produces upwind differencing in inviscid regions and the correct viscous behavior in viscous regions.

  6. A numerical study of the axisymmetric Couette-Taylor problem using a fast high-resolution second-order central scheme

    SciTech Connect

    Kupferman, R.

    The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.

  7. Best-Practice Criteria for Practical Security of Self-Differencing Avalanche Photodiode Detectors in Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Koehler-Sidki, A.; Dynes, J. F.; Lucamarini, M.; Roberts, G. L.; Sharpe, A. W.; Yuan, Z. L.; Shields, A. J.

    2018-04-01

    Fast-gated avalanche photodiodes (APDs) are the most commonly used single photon detectors for high-bit-rate quantum key distribution (QKD). Their robustness against external attacks is crucial to the overall security of a QKD system, or even an entire QKD network. We investigate the behavior of a gigahertz-gated, self-differencing (In,Ga)As APD under strong illumination, a tactic Eve often uses to bring detectors under her control. Our experiment and modeling reveal that the negative feedback by the photocurrent safeguards the detector from being blinded through reducing its avalanche probability and/or strengthening the capacitive response. Based on this finding, we propose a set of best-practice criteria for designing and operating fast-gated APD detectors to ensure their practical security in QKD.

  8. Nonnegative methods for bilinear discontinuous differencing of the S N equations on quadrilaterals

    SciTech Connect

    Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.

    Historically, matrix lumping and ad hoc flux fixups have been the only methods used to eliminate or suppress negative angular flux solutions associated with the unlumped bilinear discontinuous (UBLD) finite element spatial discretization of the two-dimensional S N equations. Though matrix lumping inhibits negative angular flux solutions of the S N equations, it does not guarantee strictly positive solutions. In this paper, we develop and define a strictly nonnegative, nonlinear, Petrov-Galerkin finite element method that fully preserves the bilinear discontinuous spatial moments of the transport equation. Additionally, we define two ad hoc fixups that maintain particle balance and explicitly set negative nodes of the UBLD finite element solution to zero but use different auxiliary equations to fully define their respective solutions. We assess the ability to inhibit negative angular flux solutions and the accuracy of every spatial discretization that we consider using a glancing void test problem with a discontinuous solution known to stress numerical methods. Though significantly more computationally intense, the nonlinear Petrov-Galerkin scheme results in a strictly nonnegative solution and is more accurate than all the other methods considered. One fixup, based on shape preserving, results in a strictly nonnegative final solution but has increased numerical diffusion relative to the Petrov-Galerkin scheme and is less accurate than the UBLD solution. The second fixup, which preserves as many spatial moments as possible while setting negative values of the unlumped solution to zero, is less accurate than the Petrov-Galerkin scheme but is more accurate than the other fixup. However, it fails to guarantee a strictly nonnegative final solution. As a result, the fully lumped bilinear discontinuous finite element solution is the least accurate method, with significantly more numerical diffusion than the Petrov-Galerkin scheme and both fixups.

  9. Nonnegative methods for bilinear discontinuous differencing of the S N equations on quadrilaterals

    DOE PAGES

    Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.

    2016-12-22

    Historically, matrix lumping and ad hoc flux fixups have been the only methods used to eliminate or suppress negative angular flux solutions associated with the unlumped bilinear discontinuous (UBLD) finite element spatial discretization of the two-dimensional S N equations. Though matrix lumping inhibits negative angular flux solutions of the S N equations, it does not guarantee strictly positive solutions. In this paper, we develop and define a strictly nonnegative, nonlinear, Petrov-Galerkin finite element method that fully preserves the bilinear discontinuous spatial moments of the transport equation. Additionally, we define two ad hoc fixups that maintain particle balance and explicitly set negative nodes of the UBLD finite element solution to zero but use different auxiliary equations to fully define their respective solutions. We assess the ability to inhibit negative angular flux solutions and the accuracy of every spatial discretization that we consider using a glancing void test problem with a discontinuous solution known to stress numerical methods. Though significantly more computationally intense, the nonlinear Petrov-Galerkin scheme results in a strictly nonnegative solution and is more accurate than all the other methods considered. One fixup, based on shape preserving, results in a strictly nonnegative final solution but has increased numerical diffusion relative to the Petrov-Galerkin scheme and is less accurate than the UBLD solution. The second fixup, which preserves as many spatial moments as possible while setting negative values of the unlumped solution to zero, is less accurate than the Petrov-Galerkin scheme but is more accurate than the other fixup. However, it fails to guarantee a strictly nonnegative final solution. As a result, the fully lumped bilinear discontinuous finite element solution is the least accurate method, with significantly more numerical diffusion than the Petrov-Galerkin scheme and both fixups.

  10. Viscous flow computations using a second-order upwind differencing scheme

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.

    1988-01-01

    In the present computations of a wide range of fluid flow problems by means of the Navier-Stokes equations in primitive variables, a mixed second-order upwinding scheme approximates the convective terms of the transport equations, and the scheme's accuracy is verified for convection-dominated high-Re-number flow problems. An adaptive dissipation scheme is used as a monotonic supersonic shock flow capture mechanism. Many benchmark fluid flow problems, compressible and incompressible, laminar and turbulent, over a wide range of M and Re numbers, are studied to verify the accuracy and robustness of this numerical method.
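
    A minimal Python sketch of a second-order upwind approximation of the convective term on a uniform 1D grid, for flow in the positive direction; it illustrates the class of scheme named above and is not the paper's mixed scheme or its adaptive dissipation.

        import numpy as np

        def convective_term(phi, u, dx):
            """u * d(phi)/dx with a three-point second-order upwind stencil (u > 0)."""
            dphi = np.zeros_like(phi)
            # Interior points: biased stencil through cells i, i-1, i-2.
            dphi[2:] = (3 * phi[2:] - 4 * phi[1:-1] + phi[:-2]) / (2 * dx)
            return u * dphi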

  11. Proportionality between Doppler noise and integrated signal path electron density validated by differenced S-X range

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of observed data sample interval (Tau) where the time-scale of the observations is 15 Tau. This parameter is interpreted to yield the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with Doppler noise observations, it is seen necessary for L to be proportional to closest approach distance a, and a strong function of the observed data sample interval, and hence the time-scale of the observations.

  12. Field programmable analog array based on current differencing transconductance amplifiers and its application to high-order filter

    NASA Astrophysics Data System (ADS)

    He, Haizhen; Luo, Rongming; Hu, Zhenhua; Wen, Lei

    2017-07-01

    A current-mode field programmable analog array (FPAA) is presented in this paper. The proposed FPAA consists of 9 configurable analog blocks (CABs) based on current differencing transconductance amplifiers (CDTA) and trans-impedance amplifiers (TIA). The proposed CABs interconnect through global lines. These global lines contain bridge switches, which are used to reduce the parasitic capacitance effectively. High-order current-mode low-pass and band-pass filters with transmission zeros, based on the simulation of general passive RLC ladder prototypes, are proposed and mapped into the FPAA structure in order to demonstrate the versatility of the FPAA. These filters exhibit good bandwidth performance: the cutoff frequency can be tuned from 1.2 MHz to 40 MHz. The proposed FPAA is simulated in a standard Chartered 0.18 μm CMOS process with +/-1.2 V power supply to confirm the presented theory, and the results are in good agreement with the theoretical analysis.

  13. Application of an Upwind High Resolution Finite-Differencing Scheme and Multigrid Method in Steady-State Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Yang, Cheng I.; Guo, Yan-Hu; Liu, C.- H.

    1996-01-01

    The analysis and design of a submarine propulsor requires the ability to predict the characteristics of both laminar and turbulent flows to a higher degree of accuracy. This report presents results of certain benchmark computations based on an upwind, high-resolution, finite-differencing Navier-Stokes solver. The purpose of the computations is to evaluate the ability, the accuracy and the performance of the solver in the simulation of detailed features of viscous flows. Features of interest include flow separation and reattachment, surface pressure and skin friction distributions. Those features are particularly relevant to the propulsor analysis. Test cases with a wide range of Reynolds numbers are selected; therefore, the effects of the convective and the diffusive terms of the solver can be evaluated separately. Test cases include flows over bluff bodies, such as circular cylinders and spheres, at various low Reynolds numbers, flows over a flat plate with and without turbulence effects, and turbulent flows over axisymmetric bodies with and without propulsor effects. Finally, to enhance the iterative solution procedure, a full approximation scheme V-cycle multigrid method is implemented. Preliminary results indicate that the method significantly reduces the computational effort.

  14. Differenced Range Versus Integrated Doppler (DRVID) ionospheric analysis of metric tracking in the Tracking and Data Relay Satellite System (TDRSS)

    NASA Technical Reports Server (NTRS)

    Radomski, M. S.; Doll, C. E.

    1995-01-01

    The Differenced Range (DR) Versus Integrated Doppler (ID) (DRVID) method exploits the opposition of group versus phase retardation of high-frequency signals by plasma media to obtain information about the plasma's corruption of simultaneous range and Doppler spacecraft tracking measurements. Thus, DR plus ID (DRPID) is an observable independent of plasma refraction, while actual DRVID (DR minus ID) measures the time variation of the path electron content independently of spacecraft motion. The DRVID principle has been known since 1961. It has been used to observe interplanetary plasmas, is implemented in Deep Space Network tracking hardware, and has recently been applied to single-frequency Global Positioning System user navigation. This paper discusses exploration at the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) of DRVID synthesized from simultaneous two-way range and Doppler tracking for low Earth-orbiting missions supported by the Tracking and Data Relay Satellite System (TDRSS). The paper presents comparisons of actual DR and ID residuals and relates those comparisons to predictions of the Bent model. The complications due to the pilot tone influence on relayed Doppler measurements are considered. Further use of DRVID to evaluate ionospheric models is discussed, as is use of DRPID in reducing dependence on ionospheric modeling in orbit determination.
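
    For reference, the sign opposition between plasma group delay (range) and phase advance (Doppler) is what makes the differenced combination plasma-sensitive. Schematically, in LaTeX notation with N_e the columnar electron content along the signal path (a standard relation, not specific to this paper):

        \mathrm{DRVID}(t) = \Delta\rho_{\mathrm{DR}}(t) - \Delta\rho_{\mathrm{ID}}(t) \propto \Delta N_e(t)

    The sum DR plus ID (DRPID) cancels the plasma term instead, which is why it is independent of plasma refraction.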

  15. Gas Analyzer

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The M200 originated in the 1970's under an Ames Research Center/Stanford University contract to develop a small, lightweight gas analyzer for the Viking Landers. Although the unit was not used on the spacecraft, it was further developed by the National Institute for Occupational Safety and Health (NIOSH). Three researchers from the project later formed Microsensor Technology, Inc. (MTI) to commercialize the analyzer. The original version (Micromonitor 500) was introduced in 1982, and the M200 in 1988. The M200, a more advanced version, features a dual gas chromatograph which separates a gaseous mixture into components and measures the concentration of each gas. It is useful for monitoring gas leaks, chemical spills, etc. Many analyses are completed in less than 30 seconds, and a wide range of mixtures can be analyzed.

  16. Process Analyzer

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The ChemScan UV-6100 is a spectrometry system originally developed by Biotronics Technologies, Inc. under a Small Business Innovation Research (SBIR) contract. It is marketed to the water and wastewater treatment industries, replacing "grab sampling" with on-line data collection. It analyzes the light absorbance characteristics of a water sample, simultaneously detecting hundreds of individual wavelengths absorbed by chemical substances in a process solution, and quantifies the information. The spectral data are then processed by the ChemScan analyzer and compared with calibration files in the system's memory in order to calculate the concentrations of chemical substances that cause UV light absorbance in specific patterns. Monitored substances can be analyzed for quality and quantity. Applications include detection of a variety of substances, and the information provided enables an operator to control a process more efficiently.

  17. Higher-order differencing method with a multigrid approach for the solution of the incompressible flow equations at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Tzanos, Constantine P.

    1992-10-01

    A higher-order differencing scheme (Tzanos, 1990) is used in conjunction with a multigrid approach to obtain accurate solutions of the Navier-Stokes convection-diffusion equations at high Re numbers. Flow in a square cavity with a moving lid is used as a test problem. A multigrid approach based on the additive correction method (Settari and Aziz) and an iterative incomplete lower and upper solver demonstrated good performance for the whole range of Re numbers under consideration (from 1000 to 10,000) and for both uniform and nonuniform grids. It is concluded that the combination of the higher-order differencing scheme with a multigrid approach is an effective technique for obtaining accurate solutions of the Navier-Stokes equations at high Re numbers.

  18. Process Analyzer

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Under a NASA Small Business Innovation Research (SBIR) contract, Axiomatics Corporation developed a shunting dielectric sensor to determine the nutrient level and analyze plant nutrient solutions in CELSS, NASA's space life support program. (CELSS is an experimental facility investigating closed-cycle plant growth and food processing for long-duration manned missions.) The DiComp system incorporates a shunt electrode and is especially sensitive to changes in the dielectric properties of materials, at levels much lower than conventional sensors can measure. The analyzer has exceptional capabilities for predicting the composition of liquid streams or reactions. It measures concentrations and solids content up to 100 percent in applications such as agricultural products, petrochemicals, food and beverages. The sensor is easily installed; maintenance is low, and it can be calibrated on line. The software automates data collection and analysis.

  19. Oxygen analyzer

    DOEpatents

    Benner, W.H.

    1984-05-08

    An oxygen analyzer which identifies and classifies microgram quantities of oxygen in ambient particulate matter and quantitates organic oxygen in solvent extracts of ambient particulate matter. A sample is pyrolyzed in oxygen-free nitrogen gas (N2), and the resulting oxygen is quantitatively converted to carbon monoxide (CO) by contact with hot granular carbon (C). Two analysis modes are made possible: (1) rapid determination of total pyrolyzable oxygen, obtained by decomposing the sample at 1135 °C, or (2) temperature-programmed oxygen thermal analysis, obtained by heating the sample from room temperature to 1135 °C as a function of time. The analyzer basically comprises a pyrolysis tube containing a bed of granular carbon under N2, ovens used to heat the carbon and/or decompose the sample, and a non-dispersive infrared CO detector coupled to a mini-computer to quantitate oxygen in the decomposition products and control oven heating.

  20. Oxygen analyzer

    DOEpatents

    Benner, William H.

    1986-01-01

    An oxygen analyzer which identifies and classifies microgram quantities of oxygen in ambient particulate matter and for quantitating organic oxygen in solvent extracts of ambient particulate matter. A sample is pyrolyzed in oxygen-free nitrogen gas (N2), and the resulting oxygen quantitatively converted to carbon monoxide (CO) by contact with hot granular carbon (C). Two analysis modes are made possible: (1) rapid determination of total pyrolyzable oxygen obtained by decomposing the sample at 1135 °C, or (2) temperature-programmed oxygen thermal analysis obtained by heating the sample from room temperature to 1135 °C as a function of time. The analyzer basically comprises a pyrolysis tube containing a bed of granular carbon under N2, ovens used to heat the carbon and/or decompose the sample, and a non-dispersive infrared CO detector coupled to a mini-computer to quantitate oxygen in the decomposition products and control oven heating.

  1. MULTICHANNEL ANALYZER

    DOEpatents

    Kelley, G.G.

    1959-11-10

    A multichannel pulse analyzer having several window amplifiers, each amplifier serving one group of channels, with a single fast pulse-lengthener and a single novel interrogation circuit serving all channels is described. A pulse followed too closely timewise by another pulse is disregarded by the interrogation circuit to prevent errors due to pulse pileup. The window amplifiers are connected to the pulse lengthener output, rather than the linear amplifier output, so need not have the fast response characteristic formerly required.

  2. Gas Analyzer

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A miniature gas chromatograph, a system which separates a gaseous mixture into its components and measures the concentration of the individual gases, was designed for the Viking Lander. The technology was further developed under National Institute for Occupational Safety and Health (NIOSH) and funded by Ames Research Center/Stanford as a toxic gas leak detection device. Three researchers on the project later formed Microsensor Technology, Inc. to commercialize the product. It is a battery-powered system consisting of a sensing wand connected to a computerized analyzer. Marketed as the Michromonitor 500, it has a wide range of applications.

  3. Contamination Analyzer

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Measurement of the total organic carbon content in water is important in assessing contamination levels in high purity water for power generation, pharmaceutical production and electronics manufacture. Even trace levels of organic compounds can cause defects in manufactured products. The Sievers Model 800 Total Organic Carbon (TOC) Analyzer, based on technology developed for the Space Station, uses a strong chemical oxidizing agent and ultraviolet light to convert organic compounds in water to carbon dioxide. After ionizing the carbon dioxide, the amount of ions is determined by measuring the conductivity of the deionized water. The new technique is highly sensitive, does not require compressed gas, and maintenance is minimal.

  4. CCFP Analyzer

    NASA Image and Video Library

    2016-05-06

    ISS047e106715 (05/06/2016) --- ESA (European Space Agency) astronaut Tim Peake unpacks a cerebral and cochlear fluid pressure (CCFP) analyzer. The device is being tested to measure the pressure of the fluid in the skull, also known as intracranial pressure, which may increase due to fluid shifts in the body while in microgravity. It is hypothesized that the headward fluid shift that occurs during space flight leads to increased pressure in the brain, which may push on the back of the eye, causing it to change shape.

  5. Stress Analyzer

    NASA Technical Reports Server (NTRS)

    1990-01-01

    SPATE 9000 Dynamic Stress Analyzer takes its name from Stress Pattern Analysis by Thermal Emission. It detects stress-induced temperature changes in a structure and indicates the degree of stress. Ometron, Inc.'s SPATE 9000 consists of a scan unit and a data display. The scan unit contains an infrared channel focused on the test structure to collect thermal radiation, and a visual channel used to set up the scan area and interrogate the stress display. Stress data is produced by detecting minute temperature changes, down to one-thousandth of a degree Centigrade, resulting from the application of dynamic loading to the structure. The electronic data processing system correlates the temperature changes with a reference signal to determine stress level.

  6. Optical analyzer

    DOEpatents

    Hansen, A.D.

    1987-09-28

    An optical analyzer wherein a sample of particulate matter, and particularly of organic matter, which has been collected on a quartz fiber filter is placed in a combustion tube, and light from a light source is passed through the sample. The temperature of the sample is raised at a controlled rate and in a controlled atmosphere. The magnitude of the transmission of light through the sample is detected as the temperature is raised. A data processor, differentiator and a two pen recorder provide a chart of the optical transmission versus temperature and the rate of change of optical transmission versus temperature signatures (T and D) of the sample. These signatures provide information as to physical and chemical processes and a variety of quantitative and qualitative information about the sample. Additional information is obtained by repeating the run in different atmospheres and/or different rates or heating with other samples of the same particulate material collected on other filters. 7 figs.

  7. Speech analyzer

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C. (Inventor)

    1977-01-01

    A speech signal is analyzed by applying the signal to formant filters which derive first, second and third signals respectively representing the frequency of the speech waveform in the first, second and third formants. A first pulse train having approximately a pulse rate representing the average frequency of the first formant is derived; second and third pulse trains having pulse rates respectively representing zero crossings of the second and third formants are derived. The first formant pulse train is derived by establishing N signal level bands, where N is an integer at least equal to two. Adjacent ones of the signal bands have common boundaries, each of which is a predetermined percentage of the peak level of a complete cycle of the speech waveform.

  8. Optical analyzer

    DOEpatents

    Hansen, Anthony D.

    1989-02-07

    An optical analyzer (10) wherein a sample (19) of particulate matter, and particularly of organic matter, which has been collected on a quartz fiber filter (20) is placed in a combustion tube (11), and light from a light source (14) is passed through the sample (19). The temperature of the sample (19) is raised at a controlled rate and in a controlled atmosphere. The magnitude of the transmission of light through the sample (19) is detected (18) as the temperature is raised. A data processor (23), differentiator (28) and a two pen recorder (24) provide a chart of the optical transmission versus temperature and the rate of change of optical transmission versus temperature signatures (T and D) of the sample (19). These signatures provide information as to physical and chemical processes and a variety of quantitative and qualitative information about the sample (19). Additional information is obtained by repeating the run in different atmospheres and/or different rates of heating with other samples of the same particulate material collected on other filters.

  9. Optical analyzer

    DOEpatents

    Hansen, Anthony D.

    1989-01-01

    An optical analyzer (10) wherein a sample (19) of particulate matter, and particularly of organic matter, which has been collected on a quartz fiber filter (20) is placed in a combustion tube (11), and light from a light source (14) is passed through the sample (19). The temperature of the sample (19) is raised at a controlled rate and in a controlled atmosphere. The magnitude of the transmission of light through the sample (19) is detected (18) as the temperature is raised. A data processor (23), differentiator (28) and a two pen recorder (24) provide a chart of the optical transmission versus temperature and the rate of change of optical transmission versus temperature signatures (T and D) of the sample (19). These signatures provide information as to physical and chemical processes and a variety of quantitative and qualitative information about the sample (19). Additional information is obtained by repeating the run in different atmospheres and/or different rates of heating with other samples of the same particulate material collected on other filters.

  10. ABSORPTION ANALYZER

    DOEpatents

    Brooksbank, W.A. Jr.; Leddicotte, G.W.; Strain, J.E.; Hendon, H.H. Jr.

    1961-11-14

    A means was developed for continuously computing and indicating the isotopic assay of a process solution and for automatically controlling the process output of isotope separation equipment to provide a continuous output of the desired isotopic ratio. A counter tube is surrounded with a sample to be analyzed so that the tube is exactly in the center of the sample. A source of fast neutrons is provided and is spaced from the sample. The neutrons from the source are thermalized by causing them to pass through a neutron moderator, and the neutrons are allowed to diffuse radially through the sample to actuate the counter. A reference counter in a known sample of pure solvent is also actuated by the thermal neutrons from the neutron source. The number of neutrons which actuate the detectors is a function of the concentration of the elements in solution and their neutron absorption cross sections. The pulses produced by the detectors in response to each neutron passing therethrough are amplified and counted. The respective times required to accumulate a selected number of counts are measured by associated timing devices. The concentration of a particular element in solution may be determined by utilizing the following relation: T2/T1 = BCR, where B is a constant proportional to the absorption cross sections, T2 is the time of count collection for the unknown solution, T1 is the time of count collection for the pure solvent, R is the isotopic ratio, and C is the molar concentration of the element to be determined. Knowing the slope constant B for any element, when the chemical concentration is known, the isotopic concentration may be readily determined; conversely, when the isotopic ratio is known, the chemical concentration may be determined. (AEC)

  11. Interactive numerals

    PubMed Central

    2017-01-01

    Although Arabic numerals (like ‘2016’ and ‘3.14’) are ubiquitous, we show that in interactive computer applications they are often misleading and surprisingly unreliable. We introduce interactive numerals as a new concept and show, like Roman numerals and Arabic numerals, interactive numerals introduce another way of using and thinking about numbers. Properly understanding interactive numerals is essential for all computer applications that involve numerical data entered by users, including finance, medicine, aviation and science. PMID:28484609

  12. Reducing numerical diffusion for incompressible flow calculations

    NASA Technical Reports Server (NTRS)

    Claus, R. W.; Neely, G. M.; Syed, S. A.

    1984-01-01

    A number of approaches for improving the accuracy of incompressible, steady-state flow calculations are examined. Two improved differencing schemes, Quadratic Upstream Interpolation for Convective Kinematics (QUICK) and Skew-Upwind Differencing (SUD), are applied to the convective terms in the Navier-Stokes equations and compared with results obtained using hybrid differencing. In a number of test calculations, it is illustrated that no single scheme exhibits superior performance for all flow situations. However, both SUD and QUICK are shown to be generally more accurate than hybrid differencing.
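
    A minimal Python sketch of the QUICK face interpolation on a uniform grid: the convected face value is the classic quadratic-upstream weighting of 6/8 on the upstream cell, 3/8 on the downstream cell, and -1/8 on the far-upstream cell. The function name and array layout are assumptions for illustration, not the paper's implementation.

        def quick_face_value(phi, i, u_face):
            """Convected value at the face between cells i and i+1 (uniform grid)."""
            if u_face >= 0.0:
                # Upstream cell is i: quadratic through cells i-1, i, i+1.
                return 0.75 * phi[i] + 0.375 * phi[i + 1] - 0.125 * phi[i - 1]
            # Upstream cell is i+1: mirrored stencil through cells i, i+1, i+2.
            return 0.75 * phi[i + 1] + 0.375 * phi[i] - 0.125 * phi[i + 2]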

  13. Numerical solution of the incompressible Navier-Stokes equations. Ph.D. Thesis - Stanford Univ., Mar. 1989

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1990-01-01

    The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps, leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
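
    For reference, the method of artificial compressibility mentioned above augments the continuity equation with a pseudo-time pressure derivative (the standard Chorin form, in LaTeX notation; beta is the artificial compressibility parameter):

        \frac{1}{\beta}\,\frac{\partial p}{\partial \tau} + \nabla \cdot \mathbf{u} = 0

    Marching in pseudo-time drives the velocity divergence toward zero, so the incompressible constraint is recovered at convergence of each physical time step.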

  14. Application of Tracking and Data Relay Satellite (TDRS) Differenced One-Way Doppler (DOWD) Tracking Data for Orbit Determination and Station Acquisition Support of User Spacecraft Without TDRS Compatible Transponders

    NASA Technical Reports Server (NTRS)

    Olszewski, A. D., Jr.; Wilcox, T. P.; Beckman, Mark

    1996-01-01

    Many spacecraft are launched today with only an omni-directional (omni) antenna and do not have an onboard Tracking and Data Relay Satellite (TDRS) transponder that is capable of coherently returning a carrier signal through TDRS. Therefore, other means of tracking need to be explored and used to adequately acquire the spacecraft. Differenced One-Way Doppler (DOWD) tracking data are very useful in eliminating the problems associated with the instability of the onboard oscillators when using strictly one-way Doppler data. This paper investigates the TDRS DOWD tracking data received by the Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) during the launch and early orbit phases for the Interplanetary Physics Laboratory (WIND) and the National Oceanographic and Atmospheric Administration (NOAA)-J missions. In particular, FDF personnel performed an investigation of the data residuals and made an assessment of the acquisition capabilities of DOWD-based solutions. Comparisons of DOWD solutions with existing data types were performed and analyzed in this study. The evaluation also includes atmospheric editing of the DOWD data and a study of the feasibility of solving for Doppler biases in an attempt to minimize error. Furthermore, by comparing the results from WIND and NOAA-J, an attempt is made to show the limitations involved in using DOWD data for the two different mission profiles. The techniques discussed in this paper benefit the launches of spacecraft that do not have TDRS transponders on board, particularly those launched into a low Earth orbit. The use of DOWD data is a valuable asset to missions which do not have a stable local oscillator to enable high-quality solutions from the one-way/return-link Doppler tracking data.
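
    A minimal simulation of why differencing helps (synthetic numbers, not WIND or NOAA-J data): the two one-way Doppler streams share the same oscillator drift, so their difference retains only the geometric signal:

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 600.0, 601)              # seconds

        # Simulated one-way Doppler (Hz) through two TDRS paths. Both share
        # the same unstable onboard oscillator; the geometry differs.
        drift = 5.0 + 0.02 * t                        # oscillator error (assumed)
        geom1 = 100.0 * np.sin(2 * np.pi * t / 5400)  # path-1 geometric Doppler
        geom2 = 80.0 * np.cos(2 * np.pi * t / 5400)   # path-2 geometric Doppler
        noise = rng.normal(0.0, 0.05, (2, t.size))

        d1 = geom1 + drift + noise[0]
        d2 = geom2 + drift + noise[1]

        dowd = d1 - d2   # differencing cancels the common oscillator term
        # What remains is purely geometric (plus thermal noise), which is why
        # DOWD is usable without a coherent TDRS transponder.
        print("residual drift:", np.polyfit(t, dowd - (geom1 - geom2), 1)[0])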

  15. Lorentz force particle analyzer

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Thess, André; Moreau, René; Tan, Yanqing; Dai, Shangjun; Tao, Zhen; Yang, Wenzhi; Wang, Bo

    2016-07-01

    A new contactless technique is presented for the detection of micron-sized insulating particles in the flow of an electrically conducting fluid. A transverse magnetic field brakes this flow and tends to become entrained in the flow direction by a Lorentz force, whose reaction force on the magnetic-field-generating system can be measured. The presence of insulating particles suspended in the fluid produces changes in this Lorentz force, generating pulses in it; these pulses enable the particles to be counted and sized. A two-dimensional numerical model that employs a moving mesh method demonstrates the measurement principle when such a particle is present. Two prototypes and a three-dimensional numerical model are used to demonstrate the feasibility of a Lorentz force particle analyzer (LFPA). This study concludes that such an LFPA, which offers contactless and on-line quantitative measurements, can be applied to an extensive range of applications. These applications include measurements of the cleanliness of high-temperature and aggressive molten metals, such as aluminum and steel alloys, and the clean manufacturing of semiconductors.

  16. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme

    PubMed Central

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093
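
    A hedged sketch of one ingredient, the keyed selection of candidate AC coefficients within an 8 × 8 sub-block; the function names and the LSB-style embedding rule here are illustrative stand-ins, not the EBBD algorithm itself:

        import random

        KEY = 0xC0FFEE                     # shared pseudo-random key (assumed)

        def candidate_positions(block_index, n=4):
            """Pick n AC positions, skipping the DC term at (0, 0)."""
            rng = random.Random(f"{KEY}:{block_index}")   # keyed, repeatable
            positions = [(r, c) for r in range(8) for c in range(8)
                         if (r, c) != (0, 0)]
            return rng.sample(positions, n)

        def embed_bits(block, block_index, bits):
            """Hide bits in the LSBs of the selected AC coefficients
            (a stand-in for the paper's actual embedding rule)."""
            for (r, c), bit in zip(candidate_positions(block_index), bits):
                block[r][c] = (block[r][c] & ~1) | bit
            return block

        block = [[0] * 8 for _ in range(8)]         # toy quantized block
        block = embed_bits(block, block_index=17, bits=[1, 0, 1, 1])
        print(candidate_positions(17))              # receiver derives the same set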

  17. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    PubMed

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.

  18. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  19. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

    We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary: Program title: NDL (Numerical Differentiation Library). Catalogue identifier: AEDG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 73 030. No. of bytes in distributed program, including test data, etc.: 630 876. Distribution format: tar.gz. Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Solaris. Has the code been vectorised or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. Classification: 4.9, 4.14, 6.5. Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such
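
    Representative central-difference formulas at the accuracy orders the library offers, written out for reference (this is not the library's own FORTRAN/C code):

        import math

        def d1_central_2(f, x, h=1e-5):
            """First derivative, central difference, O(h**2)."""
            return (f(x + h) - f(x - h)) / (2.0 * h)

        def d1_central_4(f, x, h=1e-3):
            """First derivative, five-point stencil, O(h**4)."""
            return (-f(x + 2 * h) + 8 * f(x + h)
                    - 8 * f(x - h) + f(x - 2 * h)) / (12.0 * h)

        # Exact derivative of sin at 0.5 is cos(0.5); compare truncation error.
        print(abs(d1_central_2(math.sin, 0.5) - math.cos(0.5)))
        print(abs(d1_central_4(math.sin, 0.5) - math.cos(0.5)))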

  20. Landsat-Based Detection and Severity Analysis of Burned Sugarcane Plots in Tarlac, Philippines Using Differenced Normalized Burn Ratio (dNBR)

    NASA Astrophysics Data System (ADS)

    Baloloy, A. B.; Blanco, A. C.; Gana, B. S.; Sta. Ana, R. C.; Olalia, L. C.

    2016-09-01

    The Philippines has a booming sugarcane industry contributing about PHP 70 billion annually to the local economy through raw sugar, molasses and bioethanol production (SRA, 2012). Sugarcane planters adopt different farm practices in cultivating sugarcane, one of which is cane burning to eliminate unwanted plant material and facilitate easier harvest. Information on burned sugarcane extent is significant in yield estimation models to calculate total sugar lost during harvest. Pre-harvest burning can lessen sucrose by 2.7%-5% of the potential yield (Gomez et al., 2006; Hiranyavasit, 2016). This study employs a method for detecting burned sugarcane areas and determining burn severity through the Differenced Normalized Burn Ratio (dNBR), using Landsat 8 images acquired during the late milling season in Tarlac, Philippines. Total burned area was computed per burn severity class based on pre-fire and post-fire images. Results show that 75.38% of the total sugarcane fields in Tarlac were burned with post-fire regrowth; 16.61% were recently burned; and only 8.01% were unburned. The monthly dNBR for February to March generated the largest area with low severity burn (1,436 ha) and high severity burn (31.14 ha) due to pre-harvest burning. Post-fire regrowth is highest in April to May, when previously burned areas were already replanted with sugarcane. The maximum dNBR of the entire late milling season (February to May) recorded a larger extent of areas with high and low post-fire regrowth compared to areas with low, moderate and high burn severity. The Normalized Difference Vegetation Index (NDVI) was used to analyze vegetation dynamics between the burn severity classes. A significant positive correlation, rho = 0.99, was observed between dNBR and dNDVI at the 5% level (p = 0.004). An accuracy of 89.03% was calculated for the Landsat-derived NBR, validated using actual mill data for crop year 2015-2016.
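
    A minimal sketch of the NBR/dNBR computation (band choices follow the usual Landsat 8 convention; the reflectance values are invented, not the Tarlac data):

        import numpy as np

        def nbr(nir, swir2):
            """Normalized Burn Ratio from Landsat 8 band 5 (NIR) and
            band 7 (SWIR2) reflectance arrays."""
            return (nir - swir2) / (nir + swir2)

        pre = nbr(np.array([0.45, 0.40]), np.array([0.15, 0.18]))
        post = nbr(np.array([0.20, 0.38]), np.array([0.30, 0.20]))

        dnbr = pre - post   # positive values indicate burning; negative
        print(dnbr)         # values indicate post-fire regrowth; thresholds
                            # then assign burn-severity classes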

  1. Property Differencing for Incremental Checking

    NASA Technical Reports Server (NTRS)

    Yang, Guowei; Khurshid, Sarfraz; Person, Suzette; Rungta, Neha

    2014-01-01

    This paper introduces iProperty, a novel approach that facilitates incremental checking of programs based on a property differencing technique. Specifically, iProperty aims to reduce the cost of checking properties as they are initially developed and as they co-evolve with the program. The key novelty of iProperty is to compute the differences between the new and old versions of expected properties to reduce the number and size of the properties that need to be checked during the initial development of the properties. Furthermore, property differencing is used in synergy with program behavior differencing techniques to optimize common regression scenarios, such as detecting regression errors or checking feature additions for conformance to new expected properties. Experimental results in the context of symbolic execution of Java programs annotated with properties written as assertions show the effectiveness of iProperty in utilizing change information to enable more efficient checking.

  2. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…

  3. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  4. SELECTION OF BURST-LIKE TRANSIENTS AND STOCHASTIC VARIABLES USING MULTI-BAND IMAGE DIFFERENCING IN THE PAN-STARRS1 MEDIUM-DEEP SURVEY

    SciTech Connect

    Kumar, S.; Gezari, S.; Heinis, S.

    2015-03-20

    We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g {sub P1}, r {sub P1}, i {sub P1}, and z {sub P1}. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off
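
    A sketch of the stochastic model referenced above: exact conditional sampling of an Ornstein-Uhlenbeck process at irregular epochs (the parameterization by timescale and stationary amplitude, and all values, are illustrative rather than fitted PS1 quantities):

        import numpy as np

        rng = np.random.default_rng(3)
        tau, sig_inf, mean = 30.0, 0.1, 0.0    # timescale (d), stationary std dev
        t = np.sort(rng.uniform(0, 900, 120))  # irregular survey epochs

        flux = np.empty_like(t)
        flux[0] = mean
        for i in range(1, t.size):
            dt = t[i] - t[i - 1]
            rho = np.exp(-dt / tau)
            # conditional mean and std dev of the OU transition density
            m = mean + rho * (flux[i - 1] - mean)
            s = sig_inf * np.sqrt(1.0 - rho**2)
            flux[i] = rng.normal(m, s)

        print(flux[:5])   # an AGN-like stochastic difference-flux light curve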

  5. Calibration and validation of the relative differenced Normalized Burn Ratio (RdNBR) to three measures of fire severity in the Sierra Nevada and Klamath Mountains, California, USA

    USGS Publications Warehouse

    Miller, J.D.; Knapp, E.E.; Key, C.H.; Skinner, C.N.; Isbell, C.J.; Creasy, R.M.; Sherlock, J.W.

    2009-01-01

    Multispectral satellite data have become a common tool used in the mapping of wildland fire effects. Fire severity, defined as the degree to which a site has been altered, is often the variable mapped. The Normalized Burn Ratio (NBR) used in an absolute difference change detection protocol (dNBR), has become the remote sensing method of choice for US Federal land management agencies to map fire severity due to wildland fire. However, absolute differenced vegetation indices are correlated to the pre-fire chlorophyll content of the vegetation occurring within the fire perimeter. Normalizing dNBR to produce a relativized dNBR (RdNBR) removes the biasing effect of the pre-fire condition. Employing RdNBR hypothetically allows creating categorical classifications using the same thresholds for fires occurring in similar vegetation types without acquiring additional calibration field data on each fire. In this paper we tested this hypothesis by developing thresholds on random training datasets, and then comparing accuracies for (1) fires that occurred within the same geographic region as the training dataset and in similar vegetation, and (2) fires from a different geographic region that is climatically and floristically similar to the training dataset region but supports more complex vegetation structure. We additionally compared map accuracies for three measures of fire severity: the composite burn index (CBI), percent change in tree canopy cover, and percent change in tree basal area. User's and producer's accuracies were highest for the most severe categories, ranging from 70.7% to 89.1%. Accuracies of the moderate fire severity category for measures describing effects only to trees (percent change in canopy cover and basal area) indicated that the classifications were generally not much better than random. Accuracies of the moderate category for the CBI classifications were somewhat better, averaging in the 50%-60% range. These results underscore the difficulty in
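
    A compact sketch of the relativization step (the published RdNBR form, with NBR in the usual x1000 scaling; the input values below are illustrative):

        import numpy as np

        def rdnbr(dnbr, nbr_prefire):
            """Relativized dNBR: removes the pre-fire vegetation bias by
            scaling dNBR by the square root of the pre-fire NBR magnitude."""
            return dnbr / np.sqrt(np.abs(nbr_prefire / 1000.0))

        pre = np.array([600.0, 150.0])     # dense vs sparse pre-fire cover
        dnbr = np.array([500.0, 500.0])    # the same absolute change
        print(rdnbr(dnbr, pre))            # sparse pre-fire cover scores higher,
                                           # so one threshold can serve both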

  6. Can Mapping Algorithms Based on Raw Scores Overestimate QALYs Gained by Treatment? A Comparison of Mappings Between the Roland-Morris Disability Questionnaire and the EQ-5D-3L Based on Raw and Differenced Score Data.

    PubMed

    Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E

    2017-05-01

    Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predictions of EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response-mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response-mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions, and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent of improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health-utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
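
    A simulation sketch of the paper's central point, using ordinary least squares on synthetic data (the coefficients and the person-level effect are invented for illustration): when an unmeasured person-level effect correlates with raw scores, the raw-score slope overstates the gain that within-person (differenced) data support:

        import numpy as np

        rng = np.random.default_rng(1)
        base = rng.uniform(0, 20, 100)               # person-level RMQ severity
        change = rng.normal(-3, 3, 100)              # within-person change
        rmq = np.column_stack([base, base + change]) # two visits per person
        frailty = -0.015 * base                      # unmeasured person effect
        utility = (0.95 - 0.02 * rmq + frailty[:, None]
                   + rng.normal(0, 0.03, rmq.shape))

        def slope(x, y):
            X = np.column_stack([np.ones_like(x), x])
            return np.linalg.lstsq(X, y, rcond=None)[0][1]

        # Raw-score mapping pools between- and within-person variation;
        # differenced mapping uses within-person change only.
        raw = slope(rmq.ravel(), utility.ravel())
        diff = slope(rmq[:, 1] - rmq[:, 0], utility[:, 1] - utility[:, 0])
        print(f"raw-score slope:   {raw:.4f}")   # inflated by the person effect
        print(f"differenced slope: {diff:.4f}")  # recovers the true -0.02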

  7. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
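
    A minimal illustration of the underlying numerical issue on a linear reservoir (a toy water-balance ODE, not the authors' model): fixed-step, first-order explicit Euler can be badly wrong at coarse steps:

        import numpy as np

        # dS/dt = P - k*S, with an exact solution for reference.
        k, P, S0, T = 2.0, 1.0, 0.0, 5.0

        def exact(t):
            return P / k + (S0 - P / k) * np.exp(-k * t)

        for dt in (1.0, 0.1, 0.01):                 # coarse vs fine steps
            S = S0
            for _ in range(int(T / dt)):
                S += dt * (P - k * S)               # explicit Euler update
            print(f"dt={dt:5.2f}  S(T)={S:.6f}  error={abs(S - exact(T)):.2e}")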

  8. Quantifying uncertainty in morphologically-derived bedload transport rates for large braided rivers: insights from high-resolution, high-frequency digital elevation model differencing

    NASA Astrophysics Data System (ADS)

    Brasington, J.; Hicks, M.; Wheaton, J. M.; Williams, R. D.; Vericat, D.

    2013-12-01

    Repeat surveys of channel morphology provide a means to quantify fluvial sediment storage and enable inferences about changes in long-term sediment supply, watershed delivery and bed level adjustment; information vital to support effective river and land management. Over shorter time-scales, direct differencing of fluvial terrain models may also offer a route to predict reach-averaged sediment transport rates and quantify the patterns of channel morphodynamics and the processes that force them. Recent and rapid advances in geomatics have facilitated these goals by enabling the acquisition of topographic data at spatial resolutions and precisions suitable for characterising river morphology at the scale of individual grains over multi-kilometre reaches. Despite improvements in topographic surveying, inverting the terms of the sediment budget to derive estimates of sediment transport and link these to morphodynamic processes is, nonetheless, often confounded by limited knowledge of either the sediment supply or efflux across a boundary of the control volume, or unobserved cut-and-fill taking place between surveys. This latter problem is particularly poorly constrained, as field logistics frequently preclude surveys at a temporal frequency sufficient to capture changes in sediment storage associated with each competent event, let alone changes during individual floods. In this paper, we attempt to quantify the principal sources of uncertainty in morphologically-derived bedload transport rates for the large, labile, gravel-bed braided Rees River, which drains the Southern Alps of NZ. During the austral summer of 2009-10, a unique time series of 10 high-quality DEMs was derived for a 3 x 0.7 km reach of the Rees, using a combination of mobile terrestrial laser scanning, aDcp soundings and aerial image analysis. Complementary measurements of the forcing flood discharges and estimates of event-based particle step lengths were also acquired during the field campaign
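
    A sketch of one standard uncertainty treatment for DEM differencing, a minimum level of detection (minLoD) threshold propagated from per-survey errors; the grids and error values below are synthetic, not the Rees data:

        import numpy as np

        rng = np.random.default_rng(2)
        dem_t0 = rng.normal(100.0, 1.0, (50, 50))            # survey 1 (m)
        dem_t1 = dem_t0 + rng.normal(0.0, 0.05, (50, 50))    # survey 2 (m)

        sigma_z = 0.03                            # per-DEM vertical error (assumed)
        minlod = 1.96 * np.sqrt(2) * sigma_z      # 95% threshold on the difference

        dod = dem_t1 - dem_t0                     # DEM of difference
        significant = np.abs(dod) >= minlod       # discard sub-threshold change
        erosion = dod[significant & (dod < 0)].sum()
        deposition = dod[significant & (dod > 0)].sum()
        cell_area = 0.5 * 0.5                     # m^2 per cell (assumed)
        print(f"minLoD = {minlod:.3f} m")
        print(f"net volumetric change = {(erosion + deposition) * cell_area:.2f} m^3")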

  9. Numerical modeling method on the movement of water flow and suspended solids in two-dimensional sedimentation tanks in the wastewater treatment plant.

    PubMed

    Zeng, Guang-Ming; Jiang, Yi-Min; Qin, Xiao-Sheng; Huang, Guo-He; Li, Jian-Bing

    2003-01-01

    Taking the distribution calculation of velocity and concentration as an example, the paper established a series of governing equations by the vorticity-stream function method and discretized the equations by the finite differencing method. After figuring out the distribution field of velocity, the paper also calculated the concentration distribution in the sedimentation tank by using the two-dimensional concentration transport equation. The validity and feasibility of the numerical method were verified through comparison with experimental data. Furthermore, the paper carried out a tentative exploration into the application of numerical simulation of sedimentation tanks.
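
    One building block of the method, sketched in Python rather than the paper's own scheme: a Jacobi iteration for the stream-function Poisson equation (the full model couples this with vorticity and suspended-solids transport equations):

        import numpy as np

        # Solve  laplacian(psi) = -omega  on a uniform grid by Jacobi iteration.
        n, h = 41, 1.0 / 40
        omega = np.ones((n, n))                    # prescribed vorticity (toy)
        psi = np.zeros((n, n))                     # stream function, psi = 0 on walls

        for _ in range(2000):
            psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                      psi[1:-1, 2:] + psi[1:-1, :-2] +
                                      h * h * omega[1:-1, 1:-1])

        # Velocities then follow from psi:  u = d(psi)/dy,  v = -d(psi)/dx
        u = (psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * h)
        v = -(psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * h)
        print(psi.max())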

  10. Analyzing Peace Pedagogies

    ERIC Educational Resources Information Center

    Haavelsrud, Magnus; Stenberg, Oddbjorn

    2012-01-01

    Eleven articles on peace education published in the first volume of the Journal of Peace Education are analyzed. This selection comprises peace education programs that have been planned or carried out in different contexts. In analyzing peace pedagogies as proposed in the 11 contributions, we have chosen network analysis as our method--enabling…

  11. Portable automatic blood analyzer

    NASA Technical Reports Server (NTRS)

    Coleman, R. L.

    1975-01-01

    Analyzer employs chemical-sensing electrodes for determination of blood, gas, and ion concentrations. It is rugged, easily serviced, and comparatively simple to operate. System can analyze up to eight parameters and can be modified to measure other blood constituents including nonionic species, such as urea, glucose, and oxygen.

  12. Gearbox vibration diagnostic analyzer

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This report describes the Gearbox Vibration Diagnostic Analyzer installed in the NASA Lewis Research Center's 500 HP Helicopter Transmission Test Stand to monitor gearbox testing. The vibration of the gearbox is analyzed using diagnostic algorithms to calculate a parameter indicating damaged components.

  13. Eulerian-Lagrangian numerical scheme for simulating advection, dispersion, and transient storage in streams and a comparison of numerical methods

    USGS Publications Warehouse

    Cox, T.J.; Runkel, R.L.

    2008-01-01

    Past applications of one-dimensional advection, dispersion, and transient storage zone models have almost exclusively relied on a central differencing, Eulerian numerical approximation to the nonconservative form of the fundamental equation. However, there are scenarios where this approach generates unacceptable error. A new numerical scheme for this type of modeling is presented here that is based on tracking Lagrangian control volumes across a fixed (Eulerian) grid. Numerical tests are used to provide a direct comparison of the new scheme versus nonconservative Eulerian numerical methods, in terms of both accuracy and mass conservation. Key characteristics of systems for which the Lagrangian scheme performs better than the Eulerian scheme include: nonuniform flow fields, steep gradient plume fronts, and pulse and steady point source loadings in advection-dominated systems. A new analytical derivation is presented that provides insight into the loss of mass conservation in the nonconservative Eulerian scheme. This derivation shows that loss of mass conservation in the vicinity of spatial flow changes is directly proportional to the lateral inflow rate and the change in stream concentration due to the inflow. While the nonconservative Eulerian scheme has clearly worked well for past published applications, it is important for users to be aware of the scheme's limitations. © 2008 ASCE.

  14. Automatic amino acid analyzer

    NASA Technical Reports Server (NTRS)

    Berdahl, B. J.; Carle, G. C.; Oyama, V. I.

    1971-01-01

    Analyzer operates unattended for up to 15 hours. It has an automatic sample injection system and can be programmed. All fluid-flow valve switching is accomplished pneumatically from miniature three-way solenoid pilot valves.

  15. Soil Rock Analyzer

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A redesigned version of a soil/rock analyzer developed by Martin Marietta under a Langley Research Center contract is being marketed by Aurora Tech, Inc. Known as the Aurora ATX-100, it has self-contained power, an oscilloscope, a liquid crystal readout, and a multichannel spectrum analyzer. It measures energy emissions to determine what elements in what percentages a sample contains. It is lightweight and may be used for mineral exploration, pollution monitoring, etc.

  16. Universal MOSFET parameter analyzer

    NASA Astrophysics Data System (ADS)

    Klekachev, A. V.; Kuznetsov, S. N.; Pikulev, V. B.; Gurtov, V. A.

    2006-05-01

    A MOSFET analyzer is developed to extract the most important parameters of transistors. In addition to routine DC transfer and output characteristics, the analyzer provides an evaluation of interface state density by applying the charge pumping technique. Two features distinguish the analyzer from similar products of other vendors: it is a compact (100 × 80 × 50 mm³), lightweight (<200 g) instrument with ultra-low power consumption (<2.5 W). The analyzer operates under control of an IBM PC by means of a USB interface that simultaneously provides the power supply. Owing to a USB-compatible microcontroller as the basic element, the analyzer offers a cost-effective solution for diverse applications. The enclosed software runs under the Windows 98/2000/XP operating systems and has a convenient graphical interface simplifying measurements for untrained users. Operational characteristics of the analyzer are as follows: gate and drain output voltage within limits of ±10 V; measuring current range of 1 pA to 10 mA; lowest limit of interface state density characterization of ~10⁹ cm⁻²·eV⁻¹. The instrument was designed using component parts from CYPRESS and ANALOG DEVICES (USA).

  17. Total organic carbon analyzer

    NASA Technical Reports Server (NTRS)

    Godec, Richard G.; Kosenka, Paul P.; Smith, Brian D.; Hutte, Richard S.; Webb, Johanna V.; Sauer, Richard L.

    1991-01-01

    The development and testing of a breadboard version of a highly sensitive total-organic-carbon (TOC) analyzer are reported. Attention is given to the system components including the CO2 sensor, oxidation reactor, acidification module, and the sample-inlet system. Research is reported for an experimental reagentless oxidation reactor, and good results are reported for linearity, sensitivity, and selectivity in the CO2 sensor. The TOC analyzer is developed with gravity-independent components and is designed for minimal additions of chemical reagents. The reagentless oxidation reactor is based on electrolysis and UV photolysis and is shown to be potentially useful. The stability of the breadboard instrument is shown to be good on a day-to-day basis, and the analyzer is capable of 5 sample analyses per day for a period of about 80 days. The instrument can provide accurate TOC and TIC measurements over a concentration range of 20 ppb to 50 ppm C.

  18. Micro acoustic spectrum analyzer

    DOEpatents

    Schubert, W. Kent; Butler, Michael A.; Adkins, Douglas R.; Anderson, Larry F.

    2004-11-23

    A micro acoustic spectrum analyzer for determining the frequency components of a fluctuating sound signal comprises a microphone to pick up the fluctuating sound signal and produce an alternating current electrical signal; at least one microfabricated resonator, each resonator having a different resonant frequency, that vibrate in response to the alternating current electrical signal; and at least one detector to detect the vibration of the microfabricated resonators. The micro acoustic spectrum analyzer can further comprise a mixer to mix a reference signal with the alternating current electrical signal from the microphone to shift the frequency spectrum to a frequency range that is better matched to the resonant frequencies of the microfabricated resonators. The micro acoustic spectrum analyzer can be designed specifically for portability, size, cost, accuracy, speed, power requirements, and use in a harsh environment. The micro acoustic spectrum analyzer is particularly suited for applications where size, accessibility, and power requirements are limited, such as the monitoring of industrial equipment and processes, detection of security intrusions, or evaluation of military threats.

  19. Analyzing Political Television Advertisements.

    ERIC Educational Resources Information Center

    Burson, George

    1992-01-01

    Presents a lesson plan to help students understand that political advertisements often mislead, lie, or appeal to emotion. Suggests that the lesson will enable students to examine political advertisements analytically. Includes a worksheet to be used by students to analyze individual political advertisements. (DK)

  20. Electronic sleep analyzer

    NASA Technical Reports Server (NTRS)

    Frost, J. D., Jr.

    1970-01-01

    Electronic instrument automatically monitors the stages of sleep of a human subject. The analyzer provides a series of discrete voltage steps with each step corresponding to a clinical assessment of level of consciousness. It is based on the operation of an EEG and requires very little telemetry bandwidth or time.

  1. Proton Electrostatic Analyzer.

    DTIC Science & Technology

    1983-02-01

    [Front-matter contents listing: Detector Assembly; Analyzer (Energy Selector) Assembly; Collimator; Spectrometer Assembly; Base Plate; Detector.] Space objects undergo differential charging due to variations in physical properties among their surface regions. The rate and

  2. List mode multichannel analyzer

    DOEpatents

    Archer, Daniel E [Livermore, CA; Luke, S John [Pleasanton, CA; Mauger, G Joseph [Livermore, CA; Riot, Vincent J [Berkeley, CA; Knapp, David A [Livermore, CA

    2007-08-07

    A digital list mode multichannel analyzer (MCA) built around a programmable FPGA device for onboard data analysis and on-the-fly modification of system detection/operating parameters, and capable of collecting and processing data in very small time bins (<1 millisecond) when used in histogramming mode, or in list mode as a list mode MCA.

  3. Analyzing Workforce Education. Monograph.

    ERIC Educational Resources Information Center

    Texas Community & Technical Coll. Workforce Education Consortium.

    This monograph examines the issue of task analysis as used in workplace literacy programs, debating the need for it and how to perform it in a rapidly changing environment. Based on experiences of community colleges in Texas, the report analyzes ways that task analysis can be done and how to implement work force education programs more quickly.…

  4. The ACS statistical analyzer

    DOT National Transportation Integrated Search

    2010-03-01

    This document provides guidance for using the ACS Statistical Analyzer. It is an Excel-based template for users of estimates from the American Community Survey (ACS) to assess the precision of individual estimates and to compare pairs of estimates fo...

  5. ANALYZING COHORT MORTALITY DATA

    EPA Science Inventory

    Several methods for analyzing data from mortality studies of occupationally or environmentally exposed cohorts are shown to be special cases of a single procedure. The procedure assumes a proportional hazards model for exposure effects and represents the log-likelihood kernel for...

  6. Numerical simulation of axisymmetric turbulent flow in combustors and diffusors. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Yung, Chain Nan

    1988-01-01

    A method for predicting turbulent flow in combustors and diffusers is developed. The Navier-Stokes equations, incorporating a turbulence kappa-epsilon model equation, were solved in a nonorthogonal curvilinear coordinate system. The solution applied the finite volume method to discretize the differential equations and utilized the SIMPLE algorithm iteratively to solve the differenced equations. A zonal grid method, wherein the flow field was divided into several subsections, was developed. This approach permitted different computational schemes to be used in the various zones. In addition, grid generation was made a more simple task. However, treatment of the zonal boundaries required special handling. Boundary overlap and interpolating techniques were used and an adjustment of the flow variables was required to assure conservation of mass, momentum and energy fluxes. The numerical accuracy was assessed using different finite differencing methods, i.e., hybrid, quadratic upwind and skew upwind, to represent the convection terms. Flows in different geometries of combustors and diffusers were simulated and results compared with experimental data and good agreement was obtained.

  7. Soft Decision Analyzer

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Steele, Glen; Zucha, Joan; Schlesinger, Adam

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.

  8. PULSE AMPLITUDE ANALYZER

    DOEpatents

    Greenblatt, M.H.

    1958-03-25

    This patent pertains to pulse amplitude analyzers for sorting and counting a series of pulses, and specifically discloses an analyzer which is simple in construction and presents the pulse height distribution visually on an oscilloscope screen. According to the invention, the pulses are applied to the vertical deflection plates of an oscilloscope and trigger the horizontal sweep. Each pulse starts at the same point on the screen and has a maximum amplitude substantially along the same vertical line. A mask is placed over the screen except for a slot running along the line where the maximum amplitudes of the pulses appear. After the slot has been scanned by a photocell in combination with a slotted rotating disk, the photocell signal is displayed on an auxiliary oscilloscope as vertical deflection along a horizontal time base to portray the pulse amplitude distribution.

  9. PULSE AMPLITUDE ANALYZER

    DOEpatents

    Gray, G.W.; Jensen, A.S.

    1957-10-22

    A pulse-height analyzer system of improved design for sorting and counting a series of pulses, such as provided by a scintillation detector in nuclear radiation measurements, is described. The analyzer comprises a main transmission line, a cathode-ray tube for each section of the line with its deflection plates acting as the line capacitance; means to bias the respective cathode ray tubes so that the beam strikes a target only when a prearranged pulse amplitude is applied, with each tube progressively biased to respond to smaller amplitudes; pulse generating and counting means associated with each tube to respond when the beam is deflected; a control transmission line having the same time constant as the first line per section with pulse generating means for each tube for initiating a pulse on the second transmission line when a pulse triggers the tube of corresponding amplitude response, the former pulse acting to prevent successive tubes from responding to the pulse under test. This arrangement permits greater deflection sensitivity in the cathode ray tube and overcomes many of the disadvantages of prior art pulse-height analyzer circuits.

  10. Inductive dielectric analyzer

    NASA Astrophysics Data System (ADS)

    Agranovich, Daniel; Polygalov, Eugene; Popov, Ivan; Ben Ishai, Paul; Feldman, Yuri

    2017-03-01

    One of the approaches to bypass the problem of electrode polarization in dielectric measurements is the free electrode method. The advantage of this technique is that the probing electric field in the material is not supplied by contact electrodes, but rather by electromagnetic induction. We have designed an inductive dielectric analyzer based on a sensor comprising two concentric toroidal coils. In this work, we present an analytic derivation of the relationship between the impedance measured by the sensor and the complex dielectric permittivity of the sample. The obtained relationship was successfully employed to measure the dielectric permittivity and conductivity of various alcohols and aqueous salt solutions.

  11. Fluorescence analyzer for lignin

    DOEpatents

    Berthold, John W.; Malito, Michael L.; Jeffers, Larry

    1993-01-01

    A method and apparatus for measuring lignin concentration in a sample of wood pulp or black liquor comprises a light emitting arrangement for emitting an excitation light through optical fiber bundles into a probe which has an undiluted sensing end facing the sample. The excitation light causes the lignin to produce fluorescent emission light, which is then conveyed through the probe to analyzing equipment which measures the intensity of the emission light. This invention was made with Government support under Contract Number DE-FC05-90CE40905 awarded by the Department of Energy (DOE). The Government has certain rights in this invention.

  12. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, L.W.; Anderson, G.A.

    1994-08-23

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board. 9 figs.

  13. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, Larry W.; Anderson, Gordon A.

    1994-01-01

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels, thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board.

  14. Mineral/Water Analyzer

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An x-ray fluorescence spectrometer developed for the Viking Landers by Martin Marietta was modified for geological exploration, water quality monitoring, and aircraft engine maintenance. The aerospace system was highly miniaturized and used very little power. It irradiates the sample causing it to emit x-rays at various energies, then measures the energy levels for sample composition analysis. It was used in oceanographic applications and modified to identify element concentrations in ore samples, on site. The instrument can also analyze the chemical content of water, and detect the sudden development of excessive engine wear.

  15. Ring Image Analyzer

    NASA Technical Reports Server (NTRS)

    Strekalov, Dmitry V.

    2012-01-01

    Ring Image Analyzer software analyzes images to recognize elliptical patterns. It determines the ellipse parameters (axes ratio, centroid coordinate, tilt angle). The program attempts to recognize elliptical fringes (e.g., Newton Rings) on a photograph and determine their centroid position, the short-to-long-axis ratio, and the angle of rotation of the long axis relative to the horizontal direction on the photograph. These capabilities are important in interferometric imaging and control of surfaces. In particular, this program has been developed and applied for determining the rim shape of precision-machined optical whispering gallery mode resonators. The program relies on a unique image recognition algorithm aimed at recognizing elliptical shapes, but can be easily adapted to other geometric shapes. It is robust against non-elliptical details of the image and against noise. Interferometric analysis of precision-machined surfaces remains an important technological instrument in hardware development and quality analysis. This software automates and increases the accuracy of this technique. The software has been developed for the needs of an R&TD-funded project and has become an important asset for the future research proposal to NASA as well as other agencies.
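
    A sketch of the kind of ellipse-parameter recovery described, here using OpenCV's cv2.fitEllipse on synthetic edge points (this is not the Ring Image Analyzer's own recognition algorithm; in practice the points would come from edge detection on the photograph):

        import cv2
        import numpy as np

        # Synthetic fringe: an ellipse with known axes and tilt.
        theta = np.linspace(0, 2 * np.pi, 200)
        a, b, tilt = 120.0, 80.0, np.deg2rad(25.0)
        x, y = a * np.cos(theta), b * np.sin(theta)
        pts = np.column_stack([
            250 + x * np.cos(tilt) - y * np.sin(tilt),
            250 + x * np.sin(tilt) + y * np.cos(tilt),
        ]).astype(np.float32)

        (cx, cy), (d1, d2), angle = cv2.fitEllipse(pts)
        print(f"centroid = ({cx:.1f}, {cy:.1f})")
        print(f"axis ratio = {min(d1, d2) / max(d1, d2):.3f}")  # short/long
        print(f"tilt angle = {angle:.1f} deg")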

  16. Analyzing Aeroelasticity in Turbomachines

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Srivastava, R.

    2003-01-01

    ASTROP2-LE is a computer program that predicts flutter and forced responses of blades, vanes, and other components of such turbomachines as fans, compressors, and turbines. ASTROP2-LE is based on the ASTROP2 program, developed previously for analysis of stability of turbomachinery components. In developing ASTROP2- LE, ASTROP2 was modified to include a capability for modeling forced responses. The program was also modified to add a capability for analysis of aeroelasticity with mistuning and unsteady aerodynamic solutions from another program, LINFLX2D, that solves the linearized Euler equations of unsteady two-dimensional flow. Using LINFLX2D to calculate unsteady aerodynamic loads, it is possible to analyze effects of transonic flow on flutter and forced response. ASTROP2-LE can be used to analyze subsonic, transonic, and supersonic aerodynamics and structural mistuning for rotors with blades of differing structural properties. It calculates the aerodynamic damping of a blade system operating in airflow so that stability can be assessed. The code also predicts the magnitudes and frequencies of the unsteady aerodynamic forces on the airfoils of a blade row from incoming wakes. This information can be used in high-cycle fatigue analysis to predict the fatigue lives of the blades.

  17. Multiple capillary biochemical analyzer

    DOEpatents

    Dovichi, N.J.; Zhang, J.Z.

    1995-08-08

    A multiple capillary analyzer allows detection of light from multiple capillaries with a reduced number of interfaces through which light must pass in detecting light emitted from a sample being analyzed, using a modified sheath flow cuvette. A linear or rectangular array of capillaries is introduced into a rectangular flow chamber. Sheath fluid draws individual sample streams through the cuvette. The capillaries are closely and evenly spaced and held by a transparent retainer in a fixed position in relation to an optical detection system. Collimated sample excitation radiation is applied simultaneously across the ends of the capillaries in the retainer. Light emitted from the excited sample is detected by the optical detection system. The retainer is provided by a transparent chamber having inward slanting end walls. The capillaries are wedged into the chamber. One sideways dimension of the chamber is equal to the diameter of the capillaries and one end to end dimension varies from, at the top of the chamber, slightly greater than the sum of the diameters of the capillaries to, at the bottom of the chamber, slightly smaller than the sum of the diameters of the capillaries. The optical system utilizes optic fibers to deliver light to individual photodetectors, one for each capillary tube. A filter or wavelength division demultiplexer may be used for isolating fluorescence at particular bands. 21 figs.

  18. Multiple capillary biochemical analyzer

    DOEpatents

    Dovichi, Norman J.; Zhang, Jian Z.

    1995-01-01

    A multiple capillary analyzer allows detection of light from multiple capillaries with a reduced number of interfaces through which light must pass in detecting light emitted from a sample being analyzed, using a modified sheath flow cuvette. A linear or rectangular array of capillaries is introduced into a rectangular flow chamber. Sheath fluid draws individual sample streams through the cuvette. The capillaries are closely and evenly spaced and held by a transparent retainer in a fixed position in relation to an optical detection system. Collimated sample excitation radiation is applied simultaneously across the ends of the capillaries in the retainer. Light emitted from the excited sample is detected by the optical detection system. The retainer is provided by a transparent chamber having inward slanting end walls. The capillaries are wedged into the chamber. One sideways dimension of the chamber is equal to the diameter of the capillaries and one end to end dimension varies from, at the top of the chamber, slightly greater than the sum of the diameters of the capillaries to, at the bottom of the chamber, slightly smaller than the sum of the diameters of the capillaries. The optical system utilizes optic fibres to deliver light to individual photodetectors, one for each capillary tube. A filter or wavelength division demultiplexer may be used for isolating fluorescence at particular bands.

  19. Motion detector and analyzer

    DOEpatents

    Unruh, W.P.

    1987-03-23

    Method and apparatus are provided for deriving positive and negative Doppler spectra to enable analysis of objects in motion, and particularly, objects having rotary motion. First and second returned radar signals are mixed with internal signals to obtain an in-phase process signal and a quadrature process signal. A broad-band phase shifter shifts the quadrature signal through 90° relative to the in-phase signal over a predetermined frequency range. A pair of signals is output from the broad-band phase shifter which are then combined to provide a first side band signal which is functionally related to a negative Doppler shift spectrum. The distinct positive and negative Doppler spectra may then be analyzed for the motion characteristics of the object being examined.
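
    A digital analogue of the quadrature trick (synthetic signals; the patent implements this in analog hardware): with the in-phase and quadrature returns combined as a complex signal, positive and negative Doppler shifts separate directly in the spectrum:

        import numpy as np

        fs = 1000.0                                   # sample rate, Hz (assumed)
        t = np.arange(0, 1.0, 1 / fs)
        approach = np.exp(2j * np.pi * 80 * t)        # +80 Hz: approaching part
        recede = 0.5 * np.exp(-2j * np.pi * 40 * t)   # -40 Hz: receding part

        z = approach + recede                         # I = z.real, Q = z.imag
        spectrum = np.fft.fftshift(np.fft.fft(z))
        freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))

        for f0 in (80.0, -40.0):                      # read off each sideband
            k = np.argmin(np.abs(freqs - f0))
            print(f"{f0:+6.1f} Hz power: {np.abs(spectrum[k]):.1f}")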

  20. Analyzing Water's Optical Absorption

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A cooperative agreement between World Precision Instruments (WPI), Inc., and Stennis Space Center has led the UltraPath(TM) device, which provides a more efficient method for analyzing the optical absorption of water samples at sea. UltraPath is a unique, high-performance absorbance spectrophotometer with user-selectable light path lengths. It is an ideal tool for any study requiring precise and highly sensitive spectroscopic determination of analytes, either in the laboratory or the field. As a low-cost, rugged, and portable system capable of high- sensitivity measurements in widely divergent waters, UltraPath will help scientists examine the role that coastal ocean environments play in the global carbon cycle. UltraPath(TM) is a trademark of World Precision Instruments, Inc. LWCC(TM) is a trademark of World Precision Instruments, Inc.

  1. Numerical Hydrodynamics in Special Relativity.

    PubMed

    Martí, J M; Müller, E

    1999-01-01

    This review is concerned with a discussion of numerical methods for the solution of the equations of special relativistic hydrodynamics (SRHD). Particular emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods in SRHD. Results obtained with different numerical SRHD methods are compared, and two astrophysical applications of SRHD flows are discussed. An evaluation of the various numerical methods is given and future developments are analyzed. Supplementary material is available for this article at 10.12942/lrr-1999-3.

  2. ICAN: Integrated composites analyzer

    NASA Technical Reports Server (NTRS)

    Murthy, P. L. N.; Chamis, C. C.

    1984-01-01

    The ICAN computer program performs all the essential aspects of mechanics/analysis/design of multilayered fiber composites. Modular, open-ended and user friendly, the program can handle a variety of composite systems having one type of fiber and one matrix as constituents as well as intraply and interply hybrid composite systems. It can also simulate isotropic layers by considering a primary composite system with negligible fiber volume content. This feature is specifically useful in modeling thin interply matrix layers. Hygrothermal conditions and various combinations of in-plane and bending loads can also be considered. Usage of this code is illustrated with a sample input and the generated output. Some key features of output are stress concentration factors around a circular hole, locations of probable delamination, a summary of the laminate failure stress analysis, free edge stresses, microstresses and ply stress/strain influence coefficients. These features make ICAN a powerful, cost-effective tool to analyze/design fiber composite structures and components.

  3. Analyzing Visibility Configurations.

    PubMed

    Dachsbacher, C

    2011-04-01

    Many algorithms, such as level of detail rendering and occlusion culling methods, make decisions based on the degree of visibility of an object, but do not analyze the distribution, or structure, of the visible and occluded regions across surfaces. We present an efficient method to classify different visibility configurations and show how this can be used on top of existing methods based on visibility determination. We adapt co-occurrence matrices for visibility analysis and generalize them to operate on clusters of triangular surfaces instead of pixels. We employ machine learning techniques to reliably classify the thus extracted feature vectors. Our method allows perceptually motivated level of detail methods for real-time rendering applications by detecting configurations with expected visual masking. We exemplify the versatility of our method with an analysis of area light visibility configurations in ray tracing and an area-to-area visibility analysis suitable for hierarchical radiosity refinement. Initial results demonstrate the robustness, simplicity, and performance of our method in synthetic scenes, as well as real applications.

  4. PULSE HEIGHT ANALYZER

    DOEpatents

    Johnstone, C.W.

    1958-01-21

    An anticoincidence device is described for a pair of adjacent channels of a multi-channel pulse height analyzer for preventing the lower channel from generating a count pulse in response to an input pulse when the input pulse has sufficient magnitude to reach the upper level channel. The anticoincidence circuit comprises a window amplifier, upper and lower level discriminators, and a biased-off amplifier. The output of the window amplifier is coupled to the inputs of the discriminators, the output of the upper level discriminator is connected to the resistance end of a series R-C network, the output of the lower level discriminator is coupled to the capacitance end of the R-C network, and the grid of the biased-off amplifier is coupled to the junction of the R-C network. In operation each discriminator produces a negative pulse output when the input pulse traverses its voltage setting. As a result of the connections to the R-C network, a trigger pulse will be sent to the biased-off amplifier when the incoming pulse level is sufficient to trigger only the lower level discriminator.
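
    The anticoincidence behavior amounts to simple window logic: a count registers only when the lower-level discriminator fires and the upper-level discriminator does not. A minimal sketch (illustrative thresholds and pulse heights, not the patented circuit):

    ```python
    def window_count(pulse_heights, lower, upper):
        """Register a count only for pulses that trigger the lower-level
        discriminator but not the upper-level one (anticoincidence)."""
        counts = 0
        for v in pulse_heights:
            if lower <= v < upper:
                counts += 1
        return counts

    # Pulses of 0.4, 1.2, 2.7, and 1.9 V against a 1.0-2.0 V window -> 2 counts
    print(window_count([0.4, 1.2, 2.7, 1.9], lower=1.0, upper=2.0))
    ```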

  5. Climate Model Diagnostic Analyzer

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei

    2015-01-01

    The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. Given the exploratory nature of climate data analyses and the explosive growth of datasets and service tools, scientists are struggling to keep track of their datasets, tools, and execution/study history, let alone share them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share it with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.

  6. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view a model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing the resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  7. Analyzing Spacecraft Telecommunication Systems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric

    2004-01-01

    Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
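
    MMTAT's internal models are not spelled out in this summary, but the core of any telecommunication link analysis is a link budget. A minimal sketch of that kind of calculation (the transmit power, antenna gains, distance, and frequency below are invented example values):

    ```python
    import math

    def free_space_path_loss_db(distance_m, freq_hz):
        """FSPL(dB) = 20*log10(4*pi*d*f/c)."""
        c = 299792458.0
        return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

    def received_power_dbm(pt_dbm, gt_dbi, gr_dbi, distance_m, freq_hz,
                           misc_losses_db=0.0):
        """Received power = EIRP + receive gain - path loss - other losses."""
        return (pt_dbm + gt_dbi + gr_dbi
                - free_space_path_loss_db(distance_m, freq_hz) - misc_losses_db)

    # Hypothetical X-band deep-space downlink: 20 W (43 dBm), 48 dBi and
    # 68 dBi antennas, 5.5e10 m range, 8.4 GHz.
    print(received_power_dbm(43.0, 48.0, 68.0, 5.5e10, 8.4e9))
    ```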

  8. Numeric invariants from multidimensional persistence

    SciTech Connect

    Skryzalin, Jacek; Carlsson, Gunnar

    2017-05-19

    In this paper, we analyze the space of multidimensional persistence modules from the perspective of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.

  9. Digital Microfluidics Sample Analyzer

    NASA Technical Reports Server (NTRS)

    Pollack, Michael G.; Srinivasan, Vijay; Eckhardt, Allen; Paik, Philip Y.; Sudarsan, Arjun; Shenderov, Alex; Hua, Zhishan; Pamula, Vamsee K.

    2010-01-01

    Three innovations address the needs of the medical world with regard to microfluidic manipulation and testing of physiological samples in ways that can benefit point-of-care needs for patients such as premature infants, for whom drawing blood for continuous tests can be life-threatening in its own right, and for expedited results. A chip with sample injection elements, reservoirs (and waste), droplet formation structures, fluidic pathways, mixing areas, and optical detection sites was fabricated to test the various components of the microfluidic platform, both individually and in integrated fashion. The droplet control system permits a user to control droplet microactuator system functions, such as droplet operations and detector operations. Also, the programming system allows a user to develop software routines for controlling droplet microactuator system functions, such as droplet operations and detector operations. A chip is incorporated into the system with a controller, a detector, input and output devices, and software. A novel filler fluid formulation is used for the transport of droplets with high protein concentrations. Novel assemblies for detection of photons from an on-chip droplet are present, as well as novel systems for conducting various assays, such as immunoassays and PCR (polymerase chain reaction). The lab-on-a-chip (a.k.a., lab-on-a-printed-circuit-board) processes physiological samples and comprises a system for automated, multi-analyte measurements using sub-microliter samples of human serum. The invention also relates to a diagnostic chip and system including the chip that performs many of the routine operations of a central lab-based chemistry analyzer, integrating, for example, colorimetric assays (e.g., for proteins), chemiluminescence/fluorescence assays (e.g., for enzymes, electrolytes, and gases), and/or conductometric assays (e.g., for hematocrit on plasma and whole blood) on a single chip platform.

  10. Crew Activity Analyzer

    NASA Technical Reports Server (NTRS)

    Murray, James; Kirillov, Alexander

    2008-01-01

    The crew activity analyzer (CAA) is a system of electronic hardware and software for automatically identifying patterns of group activity among crew members working together in an office, cockpit, workshop, laboratory, or other enclosed space. The CAA synchronously records multiple streams of data from digital video cameras, wireless microphones, and position sensors, then plays back and processes the data to identify activity patterns specified by human analysts. The processing greatly reduces the amount of time that the analysts must spend in examining large amounts of data, enabling the analysts to concentrate on subsets of data that represent activities of interest. The CAA has potential for use in a variety of governmental and commercial applications, including planning for crews for future long space flights, designing facilities wherein humans must work in proximity for long times, improving crew training and measuring crew performance in military settings, human-factors and safety assessment, development of team procedures, and behavioral and ethnographic research. The data-acquisition hardware of the CAA (see figure) includes two video cameras: an overhead one aimed upward at a paraboloidal mirror on the ceiling and one mounted on a wall aimed in a downward slant toward the crew area. As many as four wireless microphones can be worn by crew members. The audio signals received from the microphones are digitized, then compressed in preparation for storage. Approximate locations of as many as four crew members are measured by use of a Cricket indoor location system. [The Cricket indoor location system includes ultrasonic/radio beacon and listener units. A Cricket beacon (in this case, worn by a crew member) simultaneously transmits a pulse of ultrasound and a radio signal that contains identifying information. Each Cricket listener unit measures the difference between the times of reception of the ultrasound and radio signals from an identified beacon
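
    Because the radio signal arrives essentially instantaneously compared with the ultrasound pulse, the arrival-time difference is, to good approximation, the acoustic travel time, which converts directly to range. A minimal sketch (assuming a nominal indoor sound speed):

    ```python
    SPEED_OF_SOUND_M_S = 343.0  # assumed nominal value at room temperature

    def beacon_range_m(arrival_time_difference_s):
        """Range to a Cricket-style beacon from the lag between the
        ultrasound and radio arrivals (radio transit time neglected)."""
        return SPEED_OF_SOUND_M_S * arrival_time_difference_s

    print(beacon_range_m(0.0125))  # a 12.5 ms lag -> roughly 4.3 m
    ```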

  11. Whole blood coagulation analyzers.

    PubMed

    1997-08-01

    Whole blood coagulation analyzers (WBCAs) are widely used point-of-care (POC) testing devices found primarily in cardiothoracic surgical suites and cardiac catheterization laboratories. Most of these devices can perform a number of coagulation tests that provide information about a patient's blood clotting status. Clinicians use the results of the WBCA tests, which are available minutes after applying a blood sample, primarily to monitor the effectiveness of heparin therapy--an anticoagulation therapy used during cardiopulmonary bypass (CPB) surgery, angioplasty, hemodialysis, and other clinical procedures. In this study we evaluated five WBCAs from four suppliers. Our testing focused on the applications for which WBCAs are primarily used: Monitoring moderate to high heparin levels, as would be required, for example, during CPB or angioplasty. For this function, WBCAs are typically used to perform an activated clotting time (ACT) test or, as one supplier refers to its test, a heparin management test (HMT). All models included in this study offered an ACT test or an HMT. Monitoring low heparin levels, as would be required, for example, during hemodialysis. For this function, WBCAs would normally be used to perform either a low-range ACT (LACT) test or a whole blood activated partial thromboplastin time (WBAPTT) test. Most of the evaluated units could perform at least one of these tests; one unit did not offer either test and was therefore not rated for this application. We rated and ranked each evaluated model separately for each of these two applications. In addition, we provided a combined rating and ranking that considers the units' appropriateness for performing both applications. We based our conclusions on a unit's performance and human factors design, as determined by our testing, and on its five-year life-cycle cost, as determined by our net present value (NPV) analysis. While we rated all evaluated units acceptable for each appropriate category, we did

  12. Regolith Evolved Gas Analyzer

    NASA Technical Reports Server (NTRS)

    Hoffman, John H.; Hedgecock, Jud; Nienaber, Terry; Cooper, Bonnie; Allen, Carlton; Ming, Doug

    2000-01-01

    The Regolith Evolved Gas Analyzer (REGA) is a high-temperature furnace and mass spectrometer instrument for determining the mineralogical composition and reactivity of soil samples. REGA provides key mineralogical and reactivity data that is needed to understand the soil chemistry of an asteroid, which then aids in determining in-situ which materials should be selected for return to earth. REGA is capable of conducting a number of direct soil measurements that are unique to this instrument. These experimental measurements include: (1) Mass spectrum analysis of evolved gases from soil samples as they are heated from ambient temperature to 900 C; and (2) Identification of liberated chemicals, e.g., water, oxygen, sulfur, chlorine, and fluorine. REGA would be placed on the surface of a near earth asteroid. It is an autonomous instrument that is controlled from earth but does the analysis of regolith materials automatically. The REGA instrument consists of four primary components: (1) a flight-proven mass spectrometer, (2) a high-temperature furnace, (3) a soil handling system, and (4) a microcontroller. An external arm containing a scoop or drill gathers regolith samples. A sample is placed in the inlet orifice where the finest-grained particles are sifted into a metering volume and subsequently moved into a crucible. A movable arm then places the crucible in the furnace. The furnace is closed, thereby sealing the inner volume to collect the evolved gases for analysis. Owing to the very low g forces on an asteroid compared to Mars or the moon, the sample must be moved from inlet to crucible by mechanical means rather than by gravity. As the soil sample is heated through a programmed pattern, the gases evolved at each temperature are passed through a transfer tube to the mass spectrometer for analysis and identification. Return data from the instrument will lead to new insights and discoveries including: (1) Identification of the molecular masses of all of the gases

  13. Soft Decision Analyzer

    NASA Technical Reports Server (NTRS)

    Steele, Glen; Lansdowne, Chatwin; Zucha, Joan; Schlensinger, Adam

    2013-01-01

    The Soft Decision Analyzer (SDA) is an instrument that combines hardware, firmware, and software to perform realtime closed-loop end-to-end statistical analysis of single- or dual-channel serial digital RF communications systems operating in very low signal-to-noise conditions. As an innovation, the unique SDA capabilities allow it to perform analysis of situations where the receiving communication system slips bits due to low signal-to-noise conditions or experiences constellation rotations resulting in channel polarity inversions or channel assignment swaps. The SDA's closed-loop detection allows it to instrument a live system and correlate observations with frame, codeword, and packet losses, as well as Quality of Service (QoS) and Quality of Experience (QoE) events. The SDA's abilities are not confined to performing analysis in low signal-to-noise conditions. Its analysis provides in-depth insight into a communication system's receiver performance in a variety of operating conditions. The SDA incorporates two techniques for identifying slips. The first is an examination of the content of the received data stream in relation to the transmitted data content and the second is a direct examination of the receiver's recovered clock signals relative to a reference. Both techniques provide benefits in different ways and allow the communication engineer evaluating test results increased confidence in and understanding of receiver performance. Direct examination of data contents is performed by two different techniques, power correlation or a modified Massey correlation, and can be applied to soft decision data widths of 1 to 12 bits over a correlation depth ranging from 16 to 512 samples. The SDA detects receiver bit slips within a 4-bit window and can handle systems with up to four quadrants (QPSK, SQPSK, and BPSK systems). The SDA continuously monitors correlation results to characterize slips and quadrant changes and is capable of performing analysis even when the
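
    As a rough illustration of the data-content approach, the sketch below slides a received soft-decision stream against a reference to find the offset of best agreement; it is a stand-in for the SDA's power and modified Massey correlators, whose exact definitions are not given here:

    ```python
    import numpy as np

    def find_slip(reference, received, max_slip=4):
        """Return the bit offset (within +/- max_slip) that best aligns
        the received soft decisions with the reference stream."""
        best_offset, best_score = 0, -np.inf
        for k in range(-max_slip, max_slip + 1):
            score = np.dot(np.sign(np.roll(received, -k)), np.sign(reference))
            if score > best_score:
                best_offset, best_score = k, score
        return best_offset

    ref = np.random.choice([-1.0, 1.0], size=512)
    rx = np.roll(ref, 3) + 0.5 * np.random.randn(512)  # 3-bit slip plus noise
    print(find_slip(ref, rx))  # -> 3
    ```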

  14. Quantile Regression for Analyzing Heterogeneity in Ultra-high Dimension

    PubMed Central

    Wang, Lan; Wu, Yichao

    2012-01-01

    Ultra-high dimensional data often display heterogeneity due to either heteroscedastic variance or other forms of non-location-scale covariate effects. To accommodate heterogeneity, we advocate a more general interpretation of sparsity which assumes that only a small number of covariates influence the conditional distribution of the response variable given all candidate covariates; however, the sets of relevant covariates may differ when we consider different segments of the conditional distribution. In this framework, we investigate the methodology and theory of nonconvex penalized quantile regression in ultra-high dimension. The proposed approach has two distinctive features: (1) it enables us to explore the entire conditional distribution of the response variable given the ultra-high dimensional covariates and provides a more realistic picture of the sparsity pattern; (2) it requires substantially weaker conditions compared with alternative methods in the literature; thus, it greatly alleviates the difficulty of model checking in the ultra-high dimension. In the theoretical development, it is challenging to deal with both the nonsmooth loss function and the nonconvex penalty function in ultra-high dimensional parameter space. We introduce a novel sufficient optimality condition which relies on a convex differencing representation of the penalized loss function and the subdifferential calculus. Exploring this optimality condition enables us to establish the oracle property for sparse quantile regression in the ultra-high dimension under relaxed conditions. The proposed method greatly enhances existing tools for ultra-high dimensional data analysis. Monte Carlo simulations demonstrate the usefulness of the proposed procedure. The real data example we analyzed demonstrates that the new approach reveals substantially more information compared with alternative methods. PMID:23082036
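
    For reference, the check loss and the penalized objective under study take the standard form below (p_lambda denotes a nonconvex penalty such as SCAD or MCP; this is the generic formulation, scaling conventions vary by paper):

    ```latex
    \[
    \rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right), \qquad
    \min_{\beta}\;\sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right)
    + n \sum_{j=1}^{p} p_\lambda\!\left(|\beta_j|\right).
    \]
    ```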

  15. Numerical simulations of three-dimensional laminar flow over a backward facing step; flow near side walls

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Liou, Meng-Sing; Povinelli, Louis A.; Arnone, Andrea

    1993-01-01

    This paper reports the results of numerical simulations of steady, laminar flow over a backward-facing step. The governing equations used in the simulations are the full 'compressible' Navier-Stokes equations, solutions to which were computed by using a cell-centered, finite volume discretization. The convection terms of the governing equations were discretized by using the Advection Upwind Splitting Method (AUSM), whereas the diffusion terms were discretized using central differencing formulas. The validity and accuracy of the numerical solutions were verified by comparing the results to existing experimental data for flow at identical Reynolds numbers in the same back step geometry. The paper focuses attention on the details of the flow field near the side wall of the geometry.
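
    The division of labor described above, upwinding for convection and central differencing for diffusion, can be shown on a 1-D model problem; the sketch below uses first-order upwinding where AUSM (a more elaborate upwind flux) would be used in the actual solver:

    ```python
    import numpy as np

    def advect_diffuse_step(u, a, nu, dx, dt):
        """One explicit step of 1-D advection-diffusion with periodic BCs:
        first-order upwind convection, second-order central diffusion."""
        u_m = np.roll(u, 1)    # u[i-1]
        u_p = np.roll(u, -1)   # u[i+1]
        conv = a * (u - u_m) / dx if a > 0 else a * (u_p - u) / dx
        diff = nu * (u_p - 2.0 * u + u_m) / dx**2
        return u + dt * (diff - conv)

    x = np.linspace(0.0, 1.0, 101)[:-1]
    u = np.exp(-100.0 * (x - 0.3) ** 2)   # initial Gaussian pulse
    for _ in range(200):                  # CFL = 0.2, diffusion number = 0.02
        u = advect_diffuse_step(u, a=1.0, nu=1e-3, dx=x[1] - x[0], dt=2e-3)
    ```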

  16. Rip Current Velocity Structure in Drifter Trajectories and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Schmidt, W. E.; Slinn, D. N.

    2008-12-01

    Estimates of rip current velocity and cross-shore structure were made using surfzone drifters, bathymetric surveys, and rectified video images. Over 60 rip current trajectories were observed during a three-year period at a Southern California beach in July 2000, 2001, and 2002. Incident wave heights (Hs) immediately offshore (~7 m depth) were obtained by initializing a refraction model with data from nearby directional wave buoys, and varied from 0.3 to 1.0 m. Tide levels varied over approximately 1 m and winds were light. Numerical simulations using the non-linear shallow water equations over measured bathymetry produced similar flows and statistics. Time series of drifter position, sampled at 1 Hz, were first-differenced to produce velocity time series. Maximum observed velocities varied between 25 and 80 cm s-1, whereas model maximum velocities were lower by a factor of 2 to 3. When velocity maxima were non-dimensionalized by the respective trajectory mean velocity, both observed and modeled values varied between 1.5 and 3.5. Cross-shore locations of rip current velocity maxima for both shore-normal and shore-oblique rip currents were strongly coincident with the surfzone edge (Xb), as determined by rectified video (observations) or breakpoint (model). Once outside of the surfzone, observed and modeled rip current velocities decreased to 10% of their peak values within 2 surfzone widths of the shoreline, a useful definition of rip current cross-shore extent.
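
    First-differencing position into velocity is a one-liner; a minimal sketch for 1 Hz drifter fixes already projected to meters:

    ```python
    import numpy as np

    def drifter_velocity(x_m, y_m, dt_s=1.0):
        """First-difference drifter positions into velocity components
        and speed (m/s)."""
        u = np.diff(x_m) / dt_s
        v = np.diff(y_m) / dt_s
        return u, v, np.hypot(u, v)
    ```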

  17. Droplet actuator analyzer with cartridge

    NASA Technical Reports Server (NTRS)

    Sturmer, Ryan A. (Inventor); Paik, Philip Y. (Inventor); Srinivasan, Vijay (Inventor); Brafford, Keith R. (Inventor); West, Richard M. (Inventor); Smith, Gregory F. (Inventor); Pollack, Michael G. (Inventor); Pamula, Vamsee K. (Inventor)

    2011-01-01

    A droplet actuator with cartridge is provided. According to one embodiment, a sample analyzer is provided and includes an analyzer unit comprising electronic or optical receiving means, and a cartridge comprising self-contained droplet handling capabilities, wherein the cartridge is coupled to the analyzer unit by a means which aligns electronic and/or optical outputs from the cartridge with electronic or optical receiving means on the analyzer unit. According to another embodiment, a sample analyzer is provided and includes a sample analyzer comprising a cartridge coupled thereto and a means of electrical interface and/or optical interface between the cartridge and the analyzer, whereby electrical signals and/or optical signals may be transmitted from the cartridge to the analyzer.

  18. Soft Decision Analyzer and Method

    NASA Technical Reports Server (NTRS)

    Zucha, Joan P. (Inventor); Schlesinger, Adam M. (Inventor); Lansdowne, Chatwin (Inventor); Steele, Glen F. (Inventor)

    2015-01-01

    A soft decision analyzer system is operable to interconnect soft decision communication equipment and analyze the operation thereof to detect symbol wise alignment between a test data stream and a reference data stream in a variety of operating conditions.

  19. Soft Decision Analyzer and Method

    NASA Technical Reports Server (NTRS)

    Zucha, Joan P. (Inventor); Schlesinger, Adam M. (Inventor); Lansdowne, Chatwin (Inventor); Steele, Glen F. (Inventor)

    2016-01-01

    A soft decision analyzer system is operable to interconnect soft decision communication equipment and analyze the operation thereof to detect symbol wise alignment between a test data stream and a reference data stream in a variety of operating conditions.

  20. Analyzing a stochastic time series obeying a second-order differential equation.

    PubMed

    Lehle, B; Peinke, J

    2015-06-01

    The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Also processes that obey a stochastically forced second-order differential equation can be analyzed this way by employing a particular embedding approach: To obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we will analyze these errors and we will propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed to a given time series.
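
    The embedding step itself is simple to state in code; the sketch below builds the extended (x, xdot) phase space with a central-difference derivative, which is exactly where the differencing error the paper corrects for enters:

    ```python
    import numpy as np

    def embed_with_derivative(x, dt):
        """Extend a scalar time series into (x, xdot) phase space using a
        central-difference estimate of the temporal derivative."""
        xdot = (x[2:] - x[:-2]) / (2.0 * dt)
        return np.column_stack([x[1:-1], xdot])
    ```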

  1. Pseudotachometer for mobile metabolic analyzer

    NASA Technical Reports Server (NTRS)

    Currie, J. R.

    1974-01-01

    Metabolic analyzer determines a patient's walking or ambulation speed and simultaneously measures his metabolic parameters. Analyzer is designed to move at some preselected human ambulation speed. During test, patient is connected to system and follows analyzer closely while his metabolic data is being monitored.

  2. Advanced numerical methods for three dimensional two-phase flow calculations

    SciTech Connect

    Toumi, I.; Caruge, D.

    1997-07-01

    This paper is devoted to new numerical methods developed for both one and three dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of Approximate Riemann Solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. As long as the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three dimensional case. The authors also discuss some improvements made to obtain a fully implicit solution method that provides fast-running steady state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.
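
    The approximate-Riemann-solver idea is easiest to see on a scalar prototype; below is a minimal Roe-type interface flux for Burgers' equation (the two-fluid construction in the paper is far richer, this shows only the upwinding mechanism):

    ```python
    def roe_flux_burgers(u_left, u_right):
        """First-order Roe-type flux for f(u) = u**2/2: central average of
        the left/right fluxes plus dissipation scaled by the Roe-averaged
        wave speed a = (f(uR) - f(uL))/(uR - uL) = (uL + uR)/2."""
        f_left, f_right = 0.5 * u_left**2, 0.5 * u_right**2
        a = 0.5 * (u_left + u_right)   # exact Roe average for Burgers
        return 0.5 * (f_left + f_right) - 0.5 * abs(a) * (u_right - u_left)
    ```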

  3. Numerical Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An overview of historical and current numerical aerodynamic simulation (NAS) is given. The capabilities and goals of the Numerical Aerodynamic Simulation Facility are outlined. Emphasis is given to numerical flow visualization and its applications to structural analysis of aircraft and spacecraft bodies. The uses of NAS in computational chemistry, engine design, and galactic evolution are mentioned.

  4. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  5. Distributed numerical controllers

    NASA Astrophysics Data System (ADS)

    Orban, Peter E.

    2001-12-01

    While the basic principles of Numerical Controllers (NC) have not changed much over the years, the implementation of NCs has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers. Distributed implementation promises to offer robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along the hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First the functional requirements of individual NC modules are analyzed. Module functionality, cycle times, and data requirements are examined. Next the infrastructure for distributed node implementation is reviewed. Various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.

  6. Analyzing Population Genetics Data: A Comparison of the Software

    USDA-ARS?s Scientific Manuscript database

    Choosing a software program for analyzing population genetic data can be a challenge without prior knowledge of the methods used by each program. There are numerous web sites listing programs by type of data analyzed, type of analyses performed, or other criteria. Even with programs categorized in ...

  7. Computational Divided Differencing and Divided Difference Arithmetics

    DTIC Science & Technology

    2001-01-01


  8. Progress in Multi-Dimensional Upwind Differencing

    DTIC Science & Technology

    1992-09-01

    In Figure 4a a shockless transonic solution is reached from initial values containing shocks and sonic points.

  9. Numerical flux formulas for the Euler and Navier-Stokes equations. 2: Progress in flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Vanleer, Bram

    1991-01-01

    The accuracy of various numerical flux functions for the inviscid fluxes when used for Navier-Stokes computations is studied. The flux functions are benchmarked for solutions of the viscous, hypersonic flow past a 10 degree cone at zero angle of attack using first order, upwind spatial differencing. The Harten-Lax/Roe flux is found to give a good boundary layer representation, although its robustness is an issue. Some hybrid flux formulas, where the concepts of flux-vector and flux-difference splitting are combined, are shown to give unsatisfactory pressure distributions; there is still room for improvement. Investigations of low diffusion, pure flux-vector splittings indicate that a pure flux-vector splitting can be developed that eliminates spurious diffusion across the boundary layer. The resulting first-order scheme is marginally stable and not monotone.
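
    The flux-vector-splitting concept the paper refines can be stated compactly for a scalar flux: split f into parts with nonnegative and nonpositive wave speeds and difference each in its own upwind direction. A minimal sketch (van Leer's actual Euler splitting is Mach-number based and more involved):

    ```python
    def split_fluxes(u, a):
        """Split f(u) = a*u into f+ (speeds >= 0) and f- (speeds <= 0)."""
        f_plus = 0.5 * (a + abs(a)) * u
        f_minus = 0.5 * (a - abs(a)) * u
        return f_plus, f_minus

    def interface_flux(u_left, u_right, a):
        """Upwind interface flux assembled from the split parts."""
        f_plus_left, _ = split_fluxes(u_left, a)
        _, f_minus_right = split_fluxes(u_right, a)
        return f_plus_left + f_minus_right
    ```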

  10. Comparison of fiber length analyzers

    Treesearch

    Don Guay; Nancy Ross Sutherland; Walter Rantanen; Nicole Malandri; Aimee Stephens; Kathleen Mattingly; Matt Schneider

    2005-01-01

    In recent years, several new fiber length analyzers have been developed and brought to market. The new instruments provide faster measurements and the capability of both laboratory and on-line analysis. Do the various fiber analyzers provide the same length, coarseness, width, and fines measurements for a given fiber sample? This paper provides a comparison of...

  11. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model that might include perturbing forces, such as the gravitational effect of multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
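
    A minimal sketch of propagating an STM numerically alongside the state via the variational equations dPhi/dt = A(t)*Phi, here with two-body dynamics only (the paper's point is extending this to hardware and ephemeris models; the orbit and step size below are illustrative):

    ```python
    import numpy as np

    MU = 398600.4418  # km^3/s^2, Earth gravitational parameter

    def deriv(t, z):
        """Derivative of [r, v, Phi.ravel()]: two-body dynamics plus the
        variational equations, with A the Jacobian of the dynamics."""
        r, v, Phi = z[:3], z[3:6], z[6:].reshape(6, 6)
        rn = np.linalg.norm(r)
        acc = -MU * r / rn**3
        G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)  # d(acc)/dr
        A = np.zeros((6, 6))
        A[:3, 3:] = np.eye(3)
        A[3:, :3] = G
        return np.concatenate([v, acc, (A @ Phi).ravel()])

    def rk4_step(f, t, z, h):
        k1 = f(t, z)
        k2 = f(t + h / 2, z + h / 2 * k1)
        k3 = f(t + h / 2, z + h / 2 * k2)
        k4 = f(t + h, z + h * k3)
        return z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    z = np.concatenate([[7000.0, 0, 0], [0, 7.546, 0], np.eye(6).ravel()])
    t, h = 0.0, 10.0
    for _ in range(360):                 # propagate one hour
        z = rk4_step(deriv, t, z, h)
        t += h
    STM = z[6:].reshape(6, 6)            # d(state at t)/d(state at t0)
    ```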

  12. Analyzing millet price regimes and market performance in Niger with remote sensing data

    NASA Astrophysics Data System (ADS)

    Essam, Timothy Michael

    This dissertation concerns the analysis of staple food prices and market performance in Niger using remotely sensed vegetation indices in the form of the normalized difference vegetation index (NDVI). By exploiting the link between weather-related vegetation production conditions, which serve as a proxy for spatially explicit millet yields and thus millet availability, this study analyzes the potential causal links between NDVI outcomes and millet market performance and presents an empirical approach for predicting changes in market performance based on NDVI outcomes. Overall, the thesis finds that inter-market price spreads and levels of market integration can be reasonably explained by deviations in vegetation index outcomes from the growing season. Negative (positive) NDVI shocks are associated with better (worse) than expected market performance as measured by converging inter-market price spreads. As the number of markets affected by negatively abnormal vegetation production conditions in the same month of the growing season increases, inter-market price dispersion declines. Positive NDVI shocks, however, do not mirror this pattern in terms of the magnitude of inter-market price divergence. Market integration is also found to be linked to vegetation index outcomes, as below (above) average NDVI outcomes result in more integrated (segmented) markets. Climate change and food security policies and interventions should be guided by these findings and account for dynamic relationships among market structures and vegetation production outcomes.
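
    NDVI itself is a simple band ratio; a minimal sketch (the reflectance values are illustrative):

    ```python
    def ndvi(nir, red):
        """Normalized difference vegetation index from near-infrared and
        red reflectances; ranges from -1 to 1, higher for green vegetation."""
        return (nir - red) / (nir + red)

    print(ndvi(0.45, 0.08))  # dense vegetation -> about 0.70
    ```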

  13. [Automated analyzer of enzyme immunoassay].

    PubMed

    Osawa, S

    1995-09-01

    Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection methods, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points, such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I describe recent advances in automated analyzers, reviewing their labeled antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.

  14. Nuclear fuel microsphere gamma analyzer

    DOEpatents

    Valentine, Kenneth H.; Long, Jr., Ernest L.; Willey, Melvin G.

    1977-01-01

    A gamma analyzer system is provided for the analysis of nuclear fuel microspheres and other radioactive particles. The system consists of an analysis turntable with means for loading, in sequence, a plurality of stations within the turntable; a gamma ray detector for determining the spectrum of a sample in one station; means for analyzing the spectrum; and a receiver turntable to collect the analyzed material in stations according to the spectrum analysis. Accordingly, particles may be sorted according to their quality; e.g., fuel particles with fractured coatings may be separated from those that are not fractured, or according to other properties.

  15. Molecular wake shield gas analyzer

    NASA Technical Reports Server (NTRS)

    Hoffman, J. H.

    1980-01-01

    Techniques for measuring and characterizing the ultrahigh vacuum in the wake of an orbiting spacecraft are studied. A high sensitivity mass spectrometer that contains a double mass analyzer consisting of an open source miniature magnetic sector field neutral gas analyzer and an identical ion analyzer is proposed. These are configured to detect and identify gas and ion species of hydrogen, helium, nitrogen, oxygen, nitric oxide, and carbon dioxide and any other gas or ion species in the 1 to 46 amu mass range. This range covers the normal atmospheric constituents. The sensitivity of the instrument is sufficient to measure ambient gases and ions with a particle density of the order of one per cc. A chemical pump, or getter, is mounted near the entrance aperture of the neutral gas analyzer which integrates the absorption of ambient gases for a selectable period of time for subsequent release and analysis. The sensitivity is realizable for all but rare gases using this technique.

  16. Market study: Whole blood analyzer

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market survey was conducted to develop findings relative to the commercialization potential and key market factors of the whole blood analyzer which is being developed in conjunction with NASA's Space Shuttle Medical System.

  17. CSTT Update: Fuel Quality Analyzer

    SciTech Connect

    Brosha, Eric L.; Lujan, Roger W.; Mukundan, Rangachary

    These are slides from a presentation. The following topics are covered: project background (scope and approach), developing the prototype (timeline), update on intellectual property, analyzer comparisons (improving humidification, stabilizing the baseline, applying clean-up strategy, impact of ionomer content and improving clean-up), proposed operating mode, considerations for testing in real-world conditions (Gen 1 analyzer electronics development, testing partner identified, field trial planning), summary, and future work.

  18. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model, and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input

  19. Surveying the Numeric Databanks.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1987-01-01

    Describes six leading numeric databank services and compares them with bibliographic databases in terms of customers' needs, search software, pricing arrangements, and the role of the search specialist. A listing of the locations of the numeric databanks discussed is provided. (CLB)

  20. Numerical Linear Algebra.

    DTIC Science & Technology

    1980-09-08

    Research period: February 1979 through 31 March 1980. Title of research: Numerical Linear Algebra. Principal investigators: Gene H. Golub and James H. Wilkinson.

  1. A numerical study of thermal stratification due to transient natural convection in densified liquid propellant tanks

    NASA Astrophysics Data System (ADS)

    Manalo, Lawrence B.

    A comprehensive, non-equilibrium, two-domain (liquid and vapor), physics-based, mathematical model is developed to investigate the onset and growth of the natural circulation and thermal stratification inside cryogenic propellant storage tanks due to heat transfer from the surroundings. A two-dimensional (planar) model is incorporated for the liquid domain while a lumped, thermodynamic model is utilized for the vapor domain. The mathematical model in the liquid domain consists of the conservation of mass, momentum, and energy equations and incorporates the Boussinesq approximation (constant fluid density except in the buoyancy term of the momentum equation). In addition, the vapor is assumed to behave like an ideal gas with uniform thermodynamic properties. Furthermore, the time-dependent nature of the heat leaks from the surroundings to the propellant (due to imperfect tank insulation) is considered. Also, heterogeneous nucleation, although not significant in the temperature range of study, has been included. The transport of mass and energy between the liquid and vapor domains leads to transient ullage vapor temperatures and pressures. (The latter affects the saturation temperature of the liquid at the liquid-vapor interface.) This coupling between the two domains is accomplished through an energy balance (based on a micro-layer concept) at the interface. The resulting governing, non-linear, partial differential equations (which include a Poisson's equation for determining the pressure distribution) in the liquid domain are solved by an implicit, finite-differencing technique utilizing a non-uniform (stretched) mesh (in both directions) for predicting the velocity and temperature fields. (The accuracy of the numerical scheme is validated by comparing the model's results to a benchmark numerical case as well as to available experimental data.) The mass, temperature, and pressure of the vapor are determined by using a simple explicit finite-differencing
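
    The differencing idea at the core of such a model can be shown on a 1-D analogue; the sketch below is an explicit (FTCS) conduction step with a wall heat-leak boundary, whereas the paper's scheme is two-dimensional, implicit, and stretched-mesh:

    ```python
    import numpy as np

    def heat_step(T, alpha, dx, dt, dTdx_wall):
        """One explicit FTCS step of 1-D conduction; stable for
        dt <= dx**2 / (2*alpha). dTdx_wall = q''/k is the imposed wall
        temperature gradient representing the heat leak."""
        lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
        T_new = T + alpha * dt * lap
        T_new[0] = T_new[1]                     # adiabatic inner boundary
        T_new[-1] = T_new[-2] + dTdx_wall * dx  # heated wall boundary
        return T_new
    ```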

  2. Event and Apparent Horizon Finders for 3 + 1 Numerical Relativity.

    PubMed

    Thornburg, Jonathan

    2007-01-01

    Event and apparent horizons are key diagnostics for the presence and properties of black holes. In this article I review numerical algorithms and codes for finding event and apparent horizons in numerically-computed spacetimes, focusing on calculations done using the 3 + 1 ADM formalism. The event horizon of an asymptotically-flat spacetime is the boundary between those events from which a future-pointing null geodesic can reach future null infinity and those events from which no such geodesic exists. The event horizon is a (continuous) null surface in spacetime. The event horizon is defined nonlocally in time: it is a global property of the entire spacetime and must be found in a separate post-processing phase after all (or at least the nonstationary part) of spacetime has been numerically computed. There are three basic algorithms for finding event horizons, based on integrating null geodesics forwards in time, integrating null geodesics backwards in time, and integrating null surfaces backwards in time. The last of these is generally the most efficient and accurate. In contrast to an event horizon, an apparent horizon is defined locally in time in a spacelike slice and depends only on data in that slice, so it can be (and usually is) found during the numerical computation of a spacetime. A marginally outer trapped surface (MOTS) in a slice is a smooth closed 2-surface whose future-pointing outgoing null geodesics have zero expansion Θ. An apparent horizon is then defined as a MOTS not contained in any other MOTS. The MOTS condition is a nonlinear elliptic partial differential equation (PDE) for the surface shape, containing the ADM 3-metric, its spatial derivatives, and the extrinsic curvature as coefficients. Most "apparent horizon" finders actually find MOTSs. There are a large number of apparent horizon finding algorithms, with differing trade-offs between speed, robustness, accuracy, and ease of programming. In axisymmetry, shooting algorithms work well
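
    The MOTS condition referred to above is commonly written as the vanishing of the outgoing null expansion (standard notation, with s^i the outward unit normal in the slice, D_i the covariant derivative of the 3-metric, and K_ij the extrinsic curvature with trace K):

    ```latex
    \[
    \Theta \;=\; D_i s^i + K_{ij} s^i s^j - K \;=\; 0 .
    \]
    ```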

  3. Numerical and theoretical analyses of underground explosion cavity decoupling

    NASA Astrophysics Data System (ADS)

    Jensen, R.; Aldridge, D. F.; Chael, E. P.

    2013-12-01

    It has long been established that the amplitudes of seismic waves radiated from an underground explosion can be reduced by detonating the explosive within a fluid-filled cavity of adequate size. Significant amplitude reduction occurs because the reflection coefficient at the fluid/rock interface (i.e., the cavity wall) is large. In fact, the DC frequency limit of the reflection coefficient for a spherically-diverging seismic wave incident upon a concentric spherical interface is -1.0, independent of radius of curvature and all material properties. In order to quantify the degree of amplitude reduction expected in various realistic scenarios, we are conducting mathematical and numerical investigations into the so-called 'cavity decoupling problem' for a buried explosion. Our working tool is a numerical algorithm for simulating fully-coupled seismic and acoustic wave propagation in mixed solid/fluid media. Solution methodology involves explicit, time-domain, finite differencing of the elastodynamic velocity-stress partial differential system on a three-dimensional staggered spatial grid. Conditional logic is used to avoid shear stress updating within fluid zones; this approach leads to computational efficiency gains for models containing a significant proportion of ideal fluid. Numerical stability and accuracy are maintained at air/rock interfaces (where the contrast in mass density is on the order of 1 to 2000) via an FD operator 'order switching' formalism. The fourth-order spatial FD operator used throughout the bulk of the earth model is reduced to second-order in the immediate vicinity of a high-contrast interface. Point explosions detonated at the center of an air-filled or water-filled spherical cavity lead to strong resonant oscillations in radiated seismic energy, with period controlled by cavity radius and sound speed of the fill fluid. If the explosion is off-center, or the cavity is non-spherical, shear waves are generated in the surrounding elastic
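
    The staggered-grid velocity-stress idea reduces, in one dimension and for an acoustic medium, to a pair of interleaved updates; a minimal second-order sketch (the code described above is 3-D, elastodynamic, and fourth-order in space):

    ```python
    import numpy as np

    def staggered_step(v, p, rho, kappa, dx, dt):
        """One leapfrog update of 1-D acoustics on a staggered grid
        (p at integer nodes, v at half nodes):
        rho*dv/dt = -dp/dx,  dp/dt = -kappa*dv/dx."""
        v[:-1] -= (dt / (rho * dx)) * (p[1:] - p[:-1])
        p[1:-1] -= (dt * kappa / dx) * (v[1:-1] - v[:-2])
        return v, p
    ```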

  4. Rotor for centrifugal fast analyzers

    DOEpatents

    Lee, Norman E.

    1985-01-01

    The invention is an improved photometric analyzer of the rotary cuvette type, the analyzer incorporating a multicuvette rotor of novel design. The rotor (a) is leaktight, (b) permits operation in the 90° and 180° excitation modes, (c) is compatible with extensively used Centrifugal Fast Analyzers, and (d) can be used thousands of times. The rotor includes an assembly comprising a top plate, a bottom plate, and a central plate, the rim of the central plate being formed with circumferentially spaced indentations. A UV-transmitting ring is sealably affixed to the indented rim to define with the indentations an array of cuvettes. The ring serves both as a sealing means and an end window for the cuvettes.

  5. On-Demand Urine Analyzer

    NASA Technical Reports Server (NTRS)

    Farquharson, Stuart; Inscore, Frank; Shende, Chetan

    2010-01-01

    A lab-on-a-chip was developed that is capable of extracting biochemical indicators from urine samples and generating their surface-enhanced Raman spectra (SERS) so that the indicators can be quantified and identified. The development was motivated by the need to monitor and assess the effects of extended weightlessness, which include space motion sickness and loss of bone and muscle mass. The results may lead to developments of effective exercise programs and drug regimes that would maintain astronaut health. The analyzer containing the lab-on-a-chip includes materials to extract 3-methylhistidine (a muscle-loss indicator) and Risedronate (a bone-loss indicator) from the urine sample and detect them at the required concentrations using a Raman analyzer. The lab-on-a-chip has both an extractive material and a SERS-active material. The analyzer could be used to monitor the onset of diseases, such as osteoporosis.

  6. Rotor for centrifugal fast analyzers

    DOEpatents

    Lee, N.E.

    1984-01-01

    The invention is an improved photometric analyzer of the rotary cuvette type, the analyzer incorporating a multicuvette rotor of novel design. The rotor (a) is leaktight, (b) permits operation in the 90° and 180° excitation modes, (c) is compatible with extensively used Centrifugal Fast Analyzers, and (d) can be used thousands of times. The rotor includes an assembly comprising a top plate, a bottom plate, and a central plate, the rim of the central plate being formed with circumferentially spaced indentations. A UV-transmitting ring is sealably affixed to the indented rim to define with the indentations an array of cuvettes. The ring serves both as a sealing means and an end window for the cuvettes.

  7. Analyzing Dynamics of Cooperating Spacecraft

    NASA Technical Reports Server (NTRS)

    Hughes, Stephen P.; Folta, David C.; Conway, Darrel J.

    2004-01-01

    A software library has been developed to enable high-fidelity computational simulation of the dynamics of multiple spacecraft distributed over a region of outer space and acting with a common purpose. All of the modeling capabilities afforded by this software are available independently in other, separate software systems, but have not previously been brought together in a single system. A user can choose among several dynamical models, many high-fidelity environment models, and several numerical-integration schemes. The user can select whether to use models that assume weak coupling between spacecraft, or strong coupling in the case of feedback control or tethering of spacecraft to each other. For weak coupling, spacecraft orbits are propagated independently and are synchronized in time by controlling the step size of the integration. For strong coupling, the orbits are integrated simultaneously. Among the integration schemes that the user can choose are Runge-Kutta Verner, Prince-Dormand, Adams-Bashforth-Moulton, and Bulirsch-Stoer. Comparisons of performance are included for both the weak- and strong-coupling dynamical models for all of the numerical integrators.

  8. Personal Computer (PC) Thermal Analyzer

    DTIC Science & Technology

    1990-03-01

    To demonstrate the power of the PC Thermal Analyzer, it was compared with an existing thermal analysis method. The tool is organized as an expert system, with a knowledge base, inference mechanisms, and a user interface that prompts the user for the inputs the analysis requires (e.g., (1) What is the temperature in degrees centigrade? (2) What is the total heat output (power dissipation) in watts?).

  9. Real time infrared aerosol analyzer

    DOEpatents

    Johnson, Stanley A.; Reedy, Gerald T.; Kumar, Romesh

    1990-01-01

    Apparatus for analyzing aerosols in essentially real time includes a virtual impactor which separates coarse particles from fine and ultrafine particles in an aerosol sample. The coarse and ultrafine particles are captured in PTFE filters, and the fine particles impact onto an internal light reflection element. The composition and quantity of the particles on the PTFE filter and on the internal reflection element are measured by alternately passing infrared light through the filter and the internal light reflection element, and analyzing the light through infrared spectrophotometry to identify the particles in the sample.

  10. Strategies for Analyzing Tone Languages

    ERIC Educational Resources Information Center

    Coupe, Alexander R.

    2014-01-01

    This paper outlines a method of auditory and acoustic analysis for determining the tonemes of a language starting from scratch, drawing on the author's experience of recording and analyzing tone languages of north-east India. The methodology is applied to a preliminary analysis of tone in the Thang dialect of Khiamniungan, a virtually undocumented…

  11. Therapy Talk: Analyzing Therapeutic Discourse

    ERIC Educational Resources Information Center

    Leahy, Margaret M.

    2004-01-01

    Therapeutic discourse is the talk-in-interaction that represents the social practice between clinician and client. This article invites speech-language pathologists to apply their knowledge of language to analyzing therapy talk and to learn how talking practices shape clinical roles and identities. A range of qualitative research approaches,…

  12. Helping Students Analyze Business Documents.

    ERIC Educational Resources Information Center

    Devet, Bonnie

    2001-01-01

    Notes that student writers gain greater insight into the importance of audience by analyzing business documents. Discusses how business writing teachers can help students understand the rhetorical refinements of writing to an audience. Presents an assignment designed to lead writers systematically through an analysis of two advertisements. (SG)

  13. Pollution Analyzing and Monitoring Instruments.

    ERIC Educational Resources Information Center

    1972

    Compiled in this book is basic, technical information useful in a systems approach to pollution control. Descriptions and specifications are given of what is available in ready made, on-the-line commercial equipment for sampling, monitoring, measuring and continuously analyzing the multitudinous types of pollutants found in the air, water, soil,…

  14. Methods of analyzing crude oil

    SciTech Connect

    Cooks, Robert Graham; Jjunju, Fred Paul Mark; Li, Anyin

    The invention generally relates to methods of analyzing crude oil. In certain embodiments, methods of the invention involve obtaining a crude oil sample, and subjecting the crude oil sample to mass spectrometry analysis. In certain embodiments, the method is performed without any sample pre-purification steps.

  15. Incorporating the gas analyzer response time in gas exchange computations.

    PubMed

    Mitchell, R R

    1979-11-01

    A simple method for including the gas analyzer response time in the breath-by-breath computation of gas exchange rates is described. The method uses a difference equation form of a model for the gas analyzer in the computation of oxygen uptake and carbon dioxide production and avoids a numerical differentiation required to correct the gas fraction wave forms. The effect of not accounting for analyzer response time is shown to be a 20% underestimation in gas exchange rate. The present method accurately measures gas exchange rate, is relatively insensitive to measurement errors in the analyzer time constant, and does not significantly increase the computation time.
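
    The abstract does not reproduce the difference equation itself, so the following is only a minimal sketch, assuming the common first-order analyzer model with time constant tau; the function name and arguments are illustrative. The point is that the analyzer model is applied in the forward direction inside the uptake computation, so the noisy inverse correction (differentiating the measured gas fractions) is never needed.

      import numpy as np

      def analyzer_response(x, dt, tau):
          # Difference-equation form of an assumed first-order analyzer:
          #   y[n] = y[n-1] + (dt / tau) * (x[n] - y[n-1])
          y = np.empty_like(x)
          y[0] = x[0]
          for n in range(1, len(x)):
              y[n] = y[n - 1] + (dt / tau) * (x[n] - y[n - 1])
          return y

      # Passing the reference waveforms through the same model keeps measured
      # and modeled gas fractions consistently lagged, instead of numerically
      # differentiating the measured signal to undo the lag.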

  16. Rocket engine numerical simulation

    NASA Technical Reports Server (NTRS)

    Davidian, Ken

    1993-01-01

    The topics are presented in view graph form and include the following: a definition of the rocket engine numerical simulator (RENS); objectives; justification; approach; potential applications; potential users; RENS work flowchart; RENS prototype; and conclusions.

  17. Rocket engine numerical simulator

    NASA Technical Reports Server (NTRS)

    Davidian, Ken

    1993-01-01

    The topics are presented in viewgraph form and include the following: a rocket engine numerical simulator (RENS) definition; objectives; justification; approach; potential applications; potential users; RENS work flowchart; RENS prototype; and conclusion.

  18. Rocket engine numerical simulation

    NASA Astrophysics Data System (ADS)

    Davidian, Ken

    1993-12-01

    The topics are presented in view graph form and include the following: a definition of the rocket engine numerical simulator (RENS); objectives; justification; approach; potential applications; potential users; RENS work flowchart; RENS prototype; and conclusions.

  19. A Numerical Method of Calculating Propeller Noise Including Acoustic Nonlinear Effects

    NASA Technical Reports Server (NTRS)

    Korkan, K. D.

    1985-01-01

    Using the transonic flow field(s) generated by the NASPROP-E computer code for an eight-blade SR3-series propeller, a theoretical method is investigated to calculate the total noise values and frequency content in the acoustic near and far field without using the Ffowcs Williams-Hawkings equation. The flow field is numerically generated using an implicit three-dimensional Euler equation solver in weak conservation law form. Numerical damping is required by the differencing method for stability in three dimensions, and the influence of the damping on the calculated acoustic values is investigated. The acoustic near field is solved by integrating with respect to time the pressure oscillations induced at a stationary observer location. The acoustic far field is calculated from the near-field primitive variables generated by the NASPROP-E computer code, using a method involving a perturbation velocity potential, as suggested by Hawkings, in the calculation of the acoustic pressure time history at a specified far-field observer location. The methodologies described are valid for calculating total noise levels and are applicable to any propeller geometry for which a flow field solution is available.
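
    The abstract does not give the post-processing details; purely as a generic illustration, the frequency content of a pressure time history sampled at a fixed observer can be extracted with a discrete Fourier transform (all names below are hypothetical):

      import numpy as np

      def tone_levels(pressure_pa, dt, p_ref=20e-6):
          # Single-sided amplitude spectrum of the pressure history,
          # converted to sound pressure level in dB re 20 micropascal.
          n = len(pressure_pa)
          amps = 2.0 * np.abs(np.fft.rfft(pressure_pa - np.mean(pressure_pa))) / n
          freqs = np.fft.rfftfreq(n, dt)
          spl = 20.0 * np.log10(np.maximum(amps / np.sqrt(2.0), 1e-30) / p_ref)
          return freqs, spl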

  20. The Statistical Loop Analyzer (SLA)

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.

    1985-01-01

    The statistical loop analyzer (SLA) is designed to automatically measure the acquisition, tracking and frequency stability performance characteristics of symbol synchronizers, code synchronizers, carrier tracking loops, and coherent transponders. Automated phase lock and system level tests can also be made using the SLA. Standard baseband, carrier and spread spectrum modulation techniques can be accommodated. Through the SLA's phase error jitter and cycle slip measurements, the acquisition and tracking thresholds of the unit under test are determined; any false phase and frequency lock events are statistically analyzed and reported in the SLA output in probabilistic terms. Automated signal drop-out tests can be performed in order to troubleshoot algorithms and evaluate the reacquisition statistics of the unit under test. Cycle slip rates and cycle slip probabilities can be measured using the SLA. These measurements, combined with bit error probability measurements, are all that are needed to fully characterize the acquisition and tracking performance of a digital communication system.
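
    The report does not describe the SLA's internal algorithms; purely as a sketch of the kind of statistic involved, cycle slips can be counted from a sampled phase-error record by flagging jumps larger than an assumed threshold (names and threshold are illustrative, not from the report):

      import numpy as np

      def cycle_slip_stats(phase_error_rad, dt, threshold=np.pi):
          # Flag a slip wherever the tracking loop's phase error jumps by
          # more than `threshold` between consecutive samples.
          slips = int(np.sum(np.abs(np.diff(phase_error_rad)) > threshold))
          duration_s = len(phase_error_rad) * dt
          return slips, slips / duration_s   # count and slips per second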

  1. DEEP WATER ISOTOPIC CURRENT ANALYZER

    DOEpatents

    Johnston, W.H.

    1964-04-21

    A deepwater isotopic current analyzer, which employs radioactive isotopes for measurement of ocean currents at various levels beneath the sea, is described. The apparatus, which can determine the direction and velocity of liquid currents, comprises a shaft having a plurality of radiation detectors extending equidistant radially therefrom, means for releasing radioactive isotopes from the shaft, and means for determining the time required for the isotope to reach a particular detector. (AEC)
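
    From the quantities named in the patent (detector geometry and tracer transit time), a current estimate follows from elementary kinematics; a minimal sketch, assuming detectors at a known radius and azimuth from the release point:

      import math

      def current_estimate(detector_azimuth_deg, radius_m, transit_time_s):
          # Direction: azimuth of the detector the tracer reached first.
          # Speed: radial distance divided by the measured transit time.
          speed = radius_m / transit_time_s
          az = math.radians(detector_azimuth_deg)
          return speed, (speed * math.sin(az), speed * math.cos(az))  # m/s, (E, N)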

  2. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    PubMed Central

    Cheung, Mike W.-L.; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists—and probably the most crucial one—is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study. PMID:27242639

  3. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    PubMed

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
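
    The paper demonstrates the procedure in R; the sketch below renders the same idea in Python under a fixed-effect (inverse-variance) pooling assumption, with all names illustrative: split the data, analyze each split with any routine that returns an estimate and its standard error, then meta-analyze the per-split results.

      import numpy as np

      def split_analyze_meta(data, n_splits, analyze):
          # 1. Split: partition the rows into manageable chunks.
          # 2. Analyze: run the chosen analysis on each chunk.
          # 3. Meta-analyze: pool estimates with inverse-variance weights.
          results = [analyze(chunk) for chunk in np.array_split(data, n_splits)]
          est = np.array([r[0] for r in results])
          se = np.array([r[1] for r in results])
          w = 1.0 / se**2
          return np.sum(w * est) / np.sum(w), np.sqrt(1.0 / np.sum(w))

      # Example: pool the mean of the first column across 100 splits.
      # pooled, pooled_se = split_analyze_meta(
      #     big_matrix, 100,
      #     lambda d: (d[:, 0].mean(), d[:, 0].std(ddof=1) / np.sqrt(len(d))))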

  4. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

  5. Finite-difference numerical simulations of underground explosion cavity decoupling

    NASA Astrophysics Data System (ADS)

    Aldridge, D. F.; Preston, L. A.; Jensen, R. P.

    2012-12-01

    Earth models containing a significant portion of ideal fluid (e.g., air and/or water) are of increasing interest in seismic wave propagation simulations. Examples include a marine model with a thick water layer, and a land model with air overlying a rugged topographic surface. The atmospheric infrasound community is currently interested in coupled seismic-acoustic propagation of low-frequency signals over long ranges (~tens to ~hundreds of kilometers). Also, accurate and efficient numerical treatment of models containing underground air-filled voids (caves, caverns, tunnels, subterranean man-made facilities) is essential. In support of the Source Physics Experiment (SPE) conducted at the Nevada National Security Site (NNSS), we are developing a numerical algorithm for simulating coupled seismic and acoustic wave propagation in mixed solid/fluid media. Solution methodology involves explicit, time-domain, finite-differencing of the elastodynamic velocity-stress partial differential system on a three-dimensional staggered spatial grid. Conditional logic is used to avoid shear stress updating within the fluid zones; this approach leads to computational efficiency gains for models containing a significant proportion of ideal fluid. Numerical stability and accuracy are maintained at air/rock interfaces (where the contrast in mass density is on the order of 1 to 2000) via a finite-difference operator "order switching" formalism. The fourth-order spatial FD operator used throughout the bulk of the earth model is reduced to second-order in the immediate vicinity of a high-contrast interface. Current modeling efforts are oriented toward quantifying the amount of atmospheric infrasound energy generated by various underground seismic sources (explosions and earthquakes). Source depth and orientation, and surface topography play obvious roles. The cavity decoupling problem, where an explosion is detonated within an air-filled void, is of special interest. A point explosion
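
    As a much-reduced illustration of the update structure described above (not the SPE code itself), a second-order, one-dimensional acoustic analog of the staggered-grid velocity-stress scheme looks like this, with pressure p at cell centers and velocity v at cell faces:

      import numpy as np

      def step(v, p, rho, kappa, dx, dt):
          # Explicit leapfrog updates of dv/dt = -(1/rho) dp/dx and
          # dp/dt = -kappa dv/dx on a staggered grid; rho lives on the
          # faces (len(p)+1 values) and kappa on the cells (len(p) values).
          # Stability requires dt <= dx / max(sqrt(kappa / rho)).
          v[1:-1] -= dt / (rho[1:-1] * dx) * (p[1:] - p[:-1])
          p -= dt * kappa / dx * (v[1:] - v[:-1])
          return v, p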

  6. Numeric invariants from multidimensional persistence

    DOE PAGES

    Skryzalin, Jacek; Carlsson, Gunnar

    2017-05-19

    Topological data analysis is the study of data using techniques from algebraic topology. Often, one begins with a finite set of points representing data and a “filter” function which assigns a real number to each datum. Using both the data and the filter function, one can construct a filtered complex for further analysis. For example, applying the homology functor to the filtered complex produces an algebraic object known as a “one-dimensional persistence module”, which can often be interpreted as a finite set of intervals representing various geometric features in the data. If one runs the above process incorporating multiple filter functions simultaneously, one instead obtains a multidimensional persistence module. Unfortunately, these are much more difficult to interpret. In this article, we analyze the space of multidimensional persistence modules from the perspective of algebraic geometry. First we build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence instead of one-dimensional persistence. Furthermore, we argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Finally, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data. This paper extends the results of Adcock et al. (Homol Homotopy Appl 18(1), 381–402, 2016) by constructing numeric invariants from the computation of a multidimensional persistence module as given by Carlsson et al. (J Comput Geom 1(1), 72–100, 2010).

  7. The Aqueduct Global Flood Analyzer

    NASA Astrophysics Data System (ADS)

    Iceland, Charles

    2015-04-01

    As population growth and economic growth take place, and as climate change accelerates, many regions across the globe are finding themselves increasingly vulnerable to flooding. A recent OECD study of the exposure of the world's large port cities to coastal flooding found that 40 million people were exposed to a 1 in 100 year coastal flood event in 2005, and the total value of exposed assets was about US$3,000 billion, or 5% of global GDP. By the 2070s, those numbers were estimated to increase to 150 million people and US$35,000 billion, or roughly 9% of projected global GDP. Impoverished people in developing countries are particularly at risk because they often live in flood-prone areas and lack the resources to respond. WRI and its Dutch partners - Deltares, IVM-VU University Amsterdam, Utrecht University, and PBL Netherlands Environmental Assessment Agency - are in the initial stages of developing a robust set of river flood and coastal storm surge risk measures that show the extent of flooding under a variety of scenarios (both current and future), together with the projected human and economic impacts of these flood scenarios. These flood risk data and information will be accessible via an online, easy-to-use Aqueduct Global Flood Analyzer. We will also investigate the viability, benefits, and costs of a wide array of flood risk reduction measures that could be implemented in a variety of geographic and socio-economic settings. Together, the activities we propose have the potential for saving hundreds of thousands of lives and strengthening the resiliency and security of many millions more, especially those who are most vulnerable. Mr. Iceland will present Version 1.0 of the Aqueduct Global Flood Analyzer and provide a preview of additional elements of the Analyzer to be released in the coming years.

  8. Truck acoustic data analyzer system

    DOEpatents

    Haynes, Howard D.; Akerman, Alfred; Ayers, Curtis W.

    2006-07-04

    A passive vehicle acoustic data analyzer system having at least one microphone disposed in the acoustic field of a moving vehicle and a computer in electronic communication with the microphone(s). The computer detects and measures the frequency shift in the acoustic signature emitted by the vehicle as it approaches and passes the microphone(s). The acoustic signature of a truck driving by a microphone can provide enough information to estimate the truck speed in miles-per-hour (mph), engine speed in rotations-per-minute (RPM), turbocharger speed in RPM, and vehicle weight.
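
    The patent text does not give formulas, but the speed estimate it describes follows from the classical Doppler relation for a moving source heard by a stationary microphone; a minimal sketch (tone tracking is assumed to have already produced the approach and recede frequencies):

      def vehicle_speed(f_approach_hz, f_recede_hz, c=343.0):
          # For a source moving at speed v past a fixed microphone:
          #   f_a = f0 * c / (c - v),   f_r = f0 * c / (c + v)
          # Eliminating the unknown source tone f0 gives:
          #   v = c * (f_a - f_r) / (f_a + f_r)
          v = c * (f_approach_hz - f_recede_hz) / (f_approach_hz + f_recede_hz)
          return v * 2.23694   # convert m/s to mph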

  9. Method for analyzing microbial communities

    DOEpatents

    Zhou, Jizhong [Oak Ridge, TN; Wu, Liyou [Oak Ridge, TN

    2010-07-20

    The present invention provides a method for quantitatively analyzing microbial genes, species, or strains in a sample that contains at least two species or strains of microorganisms. The method involves using an isothermal DNA polymerase to randomly and representatively amplify genomic DNA of the microorganisms in the sample, hybridizing the resultant polynucleotide amplification product to a polynucleotide microarray that can differentiate different genes, species, or strains of microorganisms of interest, and measuring hybridization signals on the microarray to quantify the genes, species, or strains of interest.

  10. Metabolic analyzer. [for Skylab mission

    NASA Technical Reports Server (NTRS)

    Perry, C. L.

    1973-01-01

    An apparatus is described for the measurement of metabolic rate and breathing dynamics in which inhaled and exhaled breath are sensed by sealed, piston-displacement type spirometers. These spirometers electrically measure the volume of inhaled and exhaled breath. A mass spectrometer analyzes simultaneously for oxygen, carbon dioxide, nitrogen, and water vapor. Circuits responsive to the outputs of the spirometers, mass spectrometer, temperature, pressure, and timing signals compute oxygen consumption, carbon dioxide production, minute volume, and respiratory exchange ratio. A selective indicator provides for readout of these data at predetermined cyclic intervals.
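
    The report describes the computation only at block-diagram level; below is a simplified per-breath sketch of the standard open-circuit relations (ignoring corrections, such as STPD conversion, that a flight instrument would apply; all names are illustrative):

      def breath_gas_exchange(v_insp_l, v_exp_l, fi_o2, fe_o2, fi_co2, fe_co2,
                              breath_period_s):
          # Volumes from the spirometers, gas fractions from the mass
          # spectrometer (inspired and mixed-expired values per breath).
          vo2 = v_insp_l * fi_o2 - v_exp_l * fe_o2       # O2 consumed (L)
          vco2 = v_exp_l * fe_co2 - v_insp_l * fi_co2    # CO2 produced (L)
          minute_volume = v_exp_l * 60.0 / breath_period_s
          rer = vco2 / vo2                               # respiratory exchange ratio
          return vo2, vco2, minute_volume, rer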

  11. Trace Gas Analyzer (TGA) program

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The design, fabrication, and test of a breadboard trace gas analyzer (TGA) is documented. The TGA is a gas chromatograph/mass spectrometer system. The gas chromatograph subsystem employs a recirculating hydrogen carrier gas. The recirculation feature minimizes the requirement for transport and storage of large volumes of carrier gas during a mission. The silver-palladium hydrogen separator which permits the removal of the carrier gas and its reuse also decreases vacuum requirements for the mass spectrometer since the mass spectrometer vacuum system need handle only the very low sample pressure, not sample plus carrier. System performance was evaluated with a representative group of compounds.

  12. Charged particle mobility refrigerant analyzer

    DOEpatents

    Allman, S.L.; Chunghsuan Chen; Chen, F.C.

    1993-02-02

    A method for analyzing a gaseous electronegative species comprises the steps of providing an analysis chamber; providing an electric field of known potential within the analysis chamber; admitting into the analysis chamber a gaseous sample containing the gaseous electronegative species; providing a pulse of free electrons within the electric field so that the pulse of free electrons interacts with the gaseous electronegative species so that a swarm of electrically charged particles is produced within the electric field; and, measuring the mobility of the electrically charged particles within the electric field.

  13. Charged particle mobility refrigerant analyzer

    DOEpatents

    Allman, Steve L.; Chen, Chung-Hsuan; Chen, Fang C.

    1993-01-01

    A method for analyzing a gaseous electronegative species comprises the steps of providing an analysis chamber; providing an electric field of known potential within the analysis chamber; admitting into the analysis chamber a gaseous sample containing the gaseous electronegative species; providing a pulse of free electrons within the electric field so that the pulse of free electrons interacts with the gaseous electronegative species so that a swarm of electrically charged particles is produced within the electric field; and, measuring the mobility of the electrically charged particles within the electric field.
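
    The measured quantity reduces to a simple relation; as an illustrative sketch only (assuming a uniform field and a known drift length, neither of which the patent text specifies):

      def ion_mobility(drift_length_m, drift_time_s, field_v_per_m):
          # Mobility mu = v_d / E, with drift velocity v_d = L / t.
          return (drift_length_m / drift_time_s) / field_v_per_m

      def identify_species(mu, reference_mobilities):
          # Match against a table of known mobilities (m^2 V^-1 s^-1).
          return min(reference_mobilities,
                     key=lambda name: abs(reference_mobilities[name] - mu))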

  14. MULTICHANNEL PULSE-HEIGHT ANALYZER

    DOEpatents

    Russell, J.T.; Lefevre, H.W.

    1958-01-21

    This patent deals with electronic computing circuits, and more particularly with pulse-height analyzers used for classifying variable amplitude pulses into groups of different amplitudes. The device accomplishes this pulse allocation by converting the pulses into frequencies corresponding to the amplitudes of the pulses, which frequencies are filtered in channels individually pretuned to a particular frequency and then detected and recorded in the responsive channel. This circuit substantially overcomes the disadvantages of prior analyzers incorporating discriminators pre-set to respond to certain voltage levels, since small variation in component values is not as critical to satisfactory circuit operation.
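
    A modern software analog of the channel allocation the patent implements in hardware is a simple histogram of pulse amplitudes (parameters below are illustrative):

      import numpy as np

      def pulse_height_spectrum(amplitudes_v, n_channels=256, full_scale_v=10.0):
          # Each pulse increments the channel whose bin spans its amplitude,
          # the software counterpart of the pretuned filter channels.
          counts, edges = np.histogram(amplitudes_v, bins=n_channels,
                                       range=(0.0, full_scale_v))
          return counts, edges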

  15. Managing healthcare information: analyzing trust.

    PubMed

    Söderström, Eva; Eriksson, Nomie; Åhlfeldt, Rose-Mharie

    2016-08-08

    Purpose - The purpose of this paper is to analyze two case studies with a trust matrix tool, to identify trust issues related to electronic health records. Design/methodology/approach - A qualitative research approach is applied using two case studies. The data analysis of these studies generated a problem list, which was mapped to a trust matrix. Findings - Results demonstrate flaws in current practices and point to achieving balance between organizational, person and technology trust perspectives. The analysis revealed three challenge areas, to: achieve higher trust in patient-focussed healthcare; improve communication between patients and healthcare professionals; and establish clear terminology. By taking trust into account, a more holistic perspective on healthcare can be achieved, where trust can be obtained and optimized. Research limitations/implications - A trust matrix is tested and shown to identify trust problems on different levels and relating to trusting beliefs. Future research should elaborate and more fully address issues within three identified challenge areas. Practical implications - The trust matrix's usefulness as a tool for organizations to analyze trust problems and issues is demonstrated. Originality/value - Healthcare trust issues are captured to a greater extent and from previously unchartered perspectives.

  16. IRISpy: Analyzing IRIS Data in Python

    NASA Astrophysics Data System (ADS)

    Ryan, Daniel; Christe, Steven; Mumford, Stuart; Baruah, Ankit; Timothy, Shelbe; Pereira, Tiago; De Pontieu, Bart

    2017-08-01

    IRISpy is a new community-developed open-source software library for analysing IRIS level 2 data. It is written in Python, a free, cross-platform, general-purpose, high-level programming language. A wide array of scientific computing software packages have already been developed in Python, from numerical computation (NumPy, SciPy, etc.), to visualization and plotting (matplotlib), to solar-physics-specific data analysis (SunPy). IRISpy is currently under development as a SunPy-affiliated package, which means it depends on the SunPy library, follows similar standards and conventions, and is developed with the support of the SunPy development team. IRISpy has two primary data objects, one for analyzing slit-jaw imager data and another for analyzing spectrograph data. Both objects contain basic slicing, indexing, plotting, and animating functionality to allow users to easily inspect, reduce and analyze the data. As part of this functionality the objects can output SunPy Maps, TimeSeries, Spectra, etc. of relevant data slices for easier inspection and analysis. Work is also ongoing to provide additional data analysis functionality including derivation of systematic measurement errors (e.g. readout noise), exposure time correction, residual wavelength calibration, radiometric calibration, and fine scale pointing corrections. IRISpy's code base is publicly available through github.com and can be contributed to by anyone. In this poster we demonstrate IRISpy's functionality and future goals of the project. We also encourage interested users to become involved in further developing IRISpy.

  17. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  18. Introduction to Numerical Methods

    SciTech Connect

    Schoonover, Joseph A.

    2016-06-14

    These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. This gives an introduction to numerical methods. Repetitive algorithms are used to obtain approximate solutions to mathematical problems, using sorting, searching, root finding, optimization, interpolation, extrapolation, least squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm and introduce errors that can lead to numerical instabilities if we are not careful.
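
    As an example of the "repetitive algorithm" theme in these slides (the example itself is not taken from them), bisection root finding halves a bracketing interval until the answer is as accurate as desired:

      def bisect(f, a, b, tol=1e-12, max_iter=200):
          # Requires f(a) and f(b) to have opposite signs.
          fa = f(a)
          if fa * f(b) > 0:
              raise ValueError("root is not bracketed by [a, b]")
          for _ in range(max_iter):
              m = 0.5 * (a + b)
              fm = f(m)
              if fm == 0.0 or b - a < tol:
                  return m
              if fa * fm < 0:
                  b = m            # root lies in the left half
              else:
                  a, fa = m, fm    # root lies in the right half
          return 0.5 * (a + b)

      root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)   # ~1.414213562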

  19. Coaxial charged particle energy analyzer

    NASA Technical Reports Server (NTRS)

    Kelly, Michael A. (Inventor); Bryson, III, Charles E. (Inventor); Wu, Warren (Inventor)

    2011-01-01

    A non-dispersive electrostatic energy analyzer for electrons and other charged particles having a generally coaxial structure of a sequentially arranged sections of an electrostatic lens to focus the beam through an iris and preferably including an ellipsoidally shaped input grid for collimating a wide acceptance beam from a charged-particle source, an electrostatic high-pass filter including a planar exit grid, and an electrostatic low-pass filter. The low-pass filter is configured to reflect low-energy particles back towards a charged particle detector located within the low-pass filter. Each section comprises multiple tubular or conical electrodes arranged about the central axis. The voltages on the lens are scanned to place a selected energy band of the accepted beam at a selected energy at the iris. Voltages on the high-pass and low-pass filters remain substantially fixed during the scan.

  20. Compact Microwave Fourier Spectrum Analyzer

    NASA Technical Reports Server (NTRS)

    Savchenkov, Anatoliy; Matsko, Andrey; Strekalov, Dmitry

    2009-01-01

    A compact photonic microwave Fourier spectrum analyzer [a Fourier-transform microwave spectrometer (FTMWS)] with no moving parts has been proposed for use in remote sensing of weak, natural microwave emissions from the surfaces and atmospheres of planets to enable remote analysis and determination of chemical composition and abundances of critical molecular constituents in space. The instrument is based on Bessel-beam (light modes with non-zero angular momenta) fiber-optic elements. It features low power consumption, low mass, and high resolution, without a need for any cryogenics, beyond what is achievable by the current state of the art in space instruments. The instrument can also be used in a wide-band scatterometer mode in active radar systems.

  1. Non-linear three dimensional spectral model of the Venusian thermosphere with super-rotation. I - Formulation and numerical technique. II - Temperature, composition and winds

    NASA Technical Reports Server (NTRS)

    Stevens-Rayburn, D. R.; Mengel, J. G.; Harris, I.; Mayr, H. G.

    1989-01-01

    A three-dimensional spectral model for the Venusian thermosphere is presented which uses spherical harmonics to represent the horizontal variations in longitude and latitude and which uses Fourier harmonics to represent the LT variations due to atmospheric rotation. A differencing scheme with tridiagonal block elimination is used to perform the height integration. Quadratic nonlinearities are taken into account. In the second part, numerical results obtained with the model are shown to reproduce the observed broad daytime maxima in CO2 and CO and the significantly larger values at dawn than at dusk. It is found that the diurnal variations in He are most sensitive to thermospheric superrotation, and that, given a globally uniform atmosphere as input, larger heating rates yield a larger temperature contrast between day and night.

  2. High Order Numerical Methods for the Investigation of the Two Dimensional Richtmyer-Meshkov Instability

    SciTech Connect

    Don, W-S; Gotllieb, D; Shu, C-W

    2001-11-26

    For flows that contain significant structure, high order schemes offer large advantages over low order schemes. Fundamentally, the reason comes from the truncation error of the differencing operators. If one examines carefully the expression for the truncation error, one will see that for a fixed computational cost the error can be made much smaller by increasing the numerical order than by increasing the number of grid points. One can readily derive the following expression, which holds for systems dominated by hyperbolic effects and advanced explicitly in time: flops = const * p^2 * k^((d+1)(p+1)/p) / E^((d+1)/p), where flops denotes floating point operations, p denotes numerical order, d denotes spatial dimension, E denotes the truncation error of the difference operator, and k denotes the Fourier wavenumber. For flows that contain structure, such as turbulent flows or any calculation where, say, vortices are present, there will be significant energy in the high values of k. Thus, one can see that the rate of growth of the flops is very different for different values of p. Further, the constant in front of the expression is also very different. With a low order scheme, one quickly reaches the limit of the computer. With the high order scheme, one can obtain far more modes before the limit of the computer is reached. Here we examine the application of spectral methods and the Weighted Essentially Non-Oscillatory (WENO) scheme to the Richtmyer-Meshkov Instability. We show the intricate structure that these high order schemes can calculate, and we show that the two methods, though very different, converge to the same numerical solution, indicating that the numerical solution is very likely physically correct.
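
    Evaluating the abstract's cost expression (with the constant dropped, so only ratios are meaningful) makes the claimed advantage concrete:

      def flops(p, k, E, d=3):
          # flops ~ p**2 * k**((d+1)*(p+1)/p) / E**((d+1)/p)
          return p**2 * k**((d + 1) * (p + 1) / p) / E**((d + 1) / p)

      # For k = 32 Fourier modes and truncation error E = 1e-4 in 3-D,
      # the 2nd-order scheme costs roughly 1e7 times more than 8th-order:
      ratio = flops(2, 32, 1e-4) / flops(8, 32, 1e-4)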

  3. Varieties of numerical abilities.

    PubMed

    Dehaene, S

    1992-08-01

    This paper provides a tutorial introduction to numerical cognition, with a review of essential findings and current points of debate. A tacit hypothesis in cognitive arithmetic is that numerical abilities derive from human linguistic competence. One aim of this special issue is to confront this hypothesis with current knowledge of number representations in animals, infants, normal and gifted adults, and brain-lesioned patients. First, the historical evolution of number notations is presented, together with the mental processes for calculating and transcoding from one notation to another. While these domains are well described by formal symbol-processing models, this paper argues that such is not the case for two other domains of numerical competence: quantification and approximation. The evidence for counting, subitizing and numerosity estimation in infants, children, adults and animals is critically examined. Data are also presented which suggest a specialization for processing approximate numerical quantities in animals and humans. A synthesis of these findings is proposed in the form of a triple-code model, which assumes that numbers are mentally manipulated in an arabic, verbal or analogical magnitude code depending on the requested mental operation. Only the analogical magnitude representation seems available to animals and preverbal infants.

  4. Approaches to Numerical Relativity

    NASA Astrophysics Data System (ADS)

    d'Inverno, Ray

    2005-07-01

    Introduction Ray d'Inverno; Preface C. J. S. Clarke; Part I. Theoretical Approaches: 1. Numerical relativity on a transputer array Ray d'Inverno; 2. Some aspects of the characteristic initial value problem in numerical relativity Nigel Bishop; 3. The characteristic initial value problem in general relativity J. M. Stewart; 4. Algebraic approaches to the characteristic initial value problem in general relativity Jörg Frauendiener; 5. On hyperboloidal hypersurfaces Helmut Friedrich; 6. The initial value problem on null cones J. A. Vickers; 7. Introduction to dual-null dynamics S. A. Hayward; 8. On colliding plane wave space-times J. B. Griffiths; 9. Boundary conditions for the momentum constraint Niall O Murchadha; 10. On the choice of matter model in general relativity A. D. Rendall; 11. A mathematical approach to numerical relativity J. W. Barrett; 12. Making sense of the effects of rotation in general relativity J. C. Miller; 13. Stability of charged boson stars and catastrophe theory Franz E. Schunck, Fjodor V. Kusmartsev and Eckehard W. Mielke; Part II. Practical Approaches: 14. Numerical asymptotics R. Gómez and J. Winicour; 15. Instabilities in rapidly rotating polytropes Scott C. Smith and Joan M. Centrella; 16. Gravitational radiation from coalescing binary neutron stars Ken-Ichi Oohara and Takashi Nakamura; 17. 'Critical' behaviour in massless scalar field collapse M. W. Choptuik; 18. Godunov-type methods applied to general relativistic gravitational collapse José Ma. Ibánez, José Ma. Martí, Juan A. Miralles and J. V. Romero; 19. Astrophysical sources of gravitational waves and neutrinos Silvano Bonazzola, Eric Gourgoulhon, Pawel Haensel and Jean-Alain Marck; 20. Gravitational radiation from triaxial core collapse Jean-Alain Marck and Silvano Bonazzola; 21. A vacuum fully relativistic 3D numerical code C. Bona and J. Massó; 22. Solution of elliptic equations in numerical relativity using multiquadrics M. R. Dubal, S. R. Oliveira and R. A. Matzner; 23

  5. Charge Analyzer Responsive Local Oscillations

    NASA Technical Reports Server (NTRS)

    Krause, Linda Habash; Thornton, Gary

    2015-01-01

    The first transatlantic radio transmission, demonstrated by Marconi in December of 1901, revealed the essential role of the ionosphere for radio communications. This ionized layer of the upper atmosphere controls the amount of radio power transmitted through, reflected off of, and absorbed by the atmospheric medium. Low-frequency radio signals can propagate long distances around the globe via repeated reflections off of the ionosphere and the Earth's surface. Higher frequency radio signals can punch through the ionosphere to be received at orbiting satellites. However, any turbulence in the ionosphere can distort these signals, compromising the performance or even availability of space-based communication and navigation systems. The physics associated with this distortion effect is analogous to the situation when underwater images are distorted by convecting air bubbles. In fact, these ionospheric features are often called 'plasma bubbles' since they exhibit behavior similar to that of underwater air bubbles. These events, instigated by solar and geomagnetic storms, can cause communication and navigation outages that last for hours. To help understand and predict these outages, a world-wide community of space scientists and technologists are devoted to researching this topic. One aspect of this research is to develop instruments capable of measuring the ionospheric plasma bubbles. Figure 1 shows a photo of the Charge Analyzer Responsive to Local Oscillations (CARLO), a new instrument under development at NASA Marshall Space Flight Center (MSFC). It is a frequency-domain ion spectrum analyzer designed to measure the distributions of ionospheric turbulence from 1 Hz to 10 kHz (i.e., spatial scales from a few kilometers down to a few centimeters). This frequency range is important since it focuses on turbulence scales that affect VHF/UHF satellite communications, GPS systems, and over-the-horizon radar systems. CARLO is based on the flight-proven Plasma Local

  6. Preparing and Analyzing Iced Airfoils

    NASA Technical Reports Server (NTRS)

    Vickerman, Mary B.; Baez, Marivell; Braun, Donald C.; Cotton, Barbara J.; Choo, Yung K.; Coroneos, Rula M.; Pennline, James A.; Hackenberg, Anthony W.; Schilling, Herbert W.; Slater, John W.

    2004-01-01

    SmaggIce version 1.2 is a computer program for preparing and analyzing iced airfoils. It includes interactive tools for (1) measuring ice-shape characteristics, (2) controlled smoothing of ice shapes, (3) curve discretization, (4) generation of artificial ice shapes, and (5) detection and correction of input errors. Measurements of ice shapes are essential for establishing relationships between characteristics of ice and effects of ice on airfoil performance. The shape-smoothing tool helps prepare ice shapes for use with already available grid-generation and computational-fluid-dynamics software for studying the aerodynamic effects of smoothed ice on airfoils. The artificial ice-shape generation tool supports parametric studies, since ice-shape parameters can easily be controlled with the artificial ice. In such studies, artificial shapes generated by this program can supplement simulated ice obtained from icing research tunnels and real ice obtained from flight tests under icing weather conditions. SmaggIce also automatically detects geometry errors, such as tangles or duplicate points in the boundary, which may be introduced by digitization, and provides tools to correct these. By use of the interactive tools included in SmaggIce version 1.2, one can easily characterize ice shapes and prepare iced airfoils for grid generation and flow simulations.

  7. Numerical Hydrodynamics in Special Relativity.

    PubMed

    Martí, José Maria; Müller, Ewald

    2003-01-01

    This review is concerned with a discussion of numerical methods for the solution of the equations of special relativistic hydrodynamics (SRHD). Particular emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods in SRHD. Results of a set of demanding test bench simulations obtained with different numerical SRHD methods are compared. Three applications (astrophysical jets, gamma-ray bursts and heavy ion collisions) of relativistic flows are discussed. An evaluation of various SRHD methods is presented, and future developments in SRHD are analyzed involving extension to general relativistic hydrodynamics and relativistic magneto-hydrodynamics. The review further provides FORTRAN programs to compute the exact solution of a 1D relativistic Riemann problem with zero and nonzero tangential velocities, and to simulate 1D relativistic flows in Cartesian Eulerian coordinates using the exact SRHD Riemann solver and PPM reconstruction. Supplementary material is available for this article at 10.12942/lrr-2003-7 and is accessible for authorized users.

  8. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson-extrapolation-based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling the MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g. the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. Also, we analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
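
    The abstract leaves the extrapolation details to the full text; as a generic sketch of the underlying idea, one Richardson step for a one-step time integrator combines a full step with two half steps to cancel the leading error term (names are illustrative):

      def richardson_step(advance, u, t, dt, order=2):
          # `advance(u, t, dt)` is any one-step integrator of the given
          # global order; the combination below gains one order:
          #   (2**order * fine - coarse) / (2**order - 1)
          coarse = advance(u, t, dt)
          half = advance(u, t, 0.5 * dt)
          fine = advance(half, t + 0.5 * dt, 0.5 * dt)
          w = 2.0 ** order
          return (w * fine - coarse) / (w - 1.0)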

  9. Numerical Assessment of Rockbursting.

    DTIC Science & Technology

    1987-05-27

    static equilibrium, nonlinear elasticity, strain-softening material, unstable propagation of pre-existing cracks, and finally surface... structure of LINOS, which is common to most of the large finite element codes; the library of element and material subroutines can be easily expanded... material model subroutines are tested by comparing finite element results with analytical or numerical results derived for hypo-elastic and

  10. Analyzing risks of adverse pregnancy outcomes.

    PubMed

    Kramer, Michael S; Zhang, Xun; Platt, Robert W

    2014-02-01

    Approaches for analyzing the risks of adverse pregnancy outcomes have been the source of much debate and many publications. Much of the problem, in our view, is the conflation of time at risk with gestational age at birth (or birth weight, a proxy for gestational age). We consider the causal questions underlying such analyses with the help of a generic directed acyclic graph. We discuss competing risks and populations at risk in the context of appropriate numerators and denominators, respectively. We summarize 3 different approaches to quantifying risks with respect to gestational age, each of which addresses a distinct etiological or prognostic question (i.e., cumulative risk, prospective risk, or instantaneous risk (hazard)) and suggest the appropriate denominators for each. We show how the gestational age-specific risk of perinatal death (PND) can be decomposed as the product of the gestational age-specific risk of birth and the risk of PND conditional on birth at a given gestational age. Finally, we demonstrate how failure to consider the first of these 2 risks leads to selection bias. This selection bias creates the well-known crossover paradox, thus obviating the need to posit common causes of early birth and PND other than the study exposure.
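
    Written out (this is only a transcription of the decomposition stated above, with g denoting gestational age):

      \Pr(\text{PND at } g) \;=\; \Pr(\text{birth at } g) \times \Pr(\text{PND} \mid \text{birth at } g),

    which makes explicit that dropping the first factor conditions on birth at g and invites the selection bias the authors describe.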

  11. On Numerical Heating

    NASA Astrophysics Data System (ADS)

    Liou, Meng-Sing

    2013-11-01

    The development of computational fluid dynamics over the last few decades has yielded enormous successes and capabilities that are being routinely employed today; however there remain some open problems to be properly resolved. One example is the so-called overheating problem, which can arise in two very different scenarios, from either colliding or receding streams. Common in both is a localized, numerically over-predicted temperature. Von Neumann reported the former, a compressive overheating, nearly 70 years ago and numerically smeared the temperature peak by introducing artificial diffusion. However, the latter is unphysical in an expansive (rarefying) situation; it still dogs every method known to the author. We will present a study aiming at resolving this overheating problem and we find that: (1) the entropy increase is one-to-one linked to the increase in the temperature rise and (2) the overheating is inevitable in the current computational fluid dynamics framework in practice. Finally we will show a simple hybrid method that fundamentally cures the overheating problem in a rarefying flow, but also retains the property of accurate shock capturing. Moreover, this remedy (enhancement of current numerical methods) can be included easily in the present Eulerian codes. This work is performed under NASA's Fundamental Aeronautics Program.

  12. Numerical Analysis Objects

    NASA Astrophysics Data System (ADS)

    Henderson, Michael

    1997-08-01

    The Numerical Analysis Objects project (NAO) is a project in the Mathematics Department of IBM's TJ Watson Research Center. While there are plenty of numerical tools available today, it is not an easy task to combine them into a custom application. NAO is directed at the dual problems of building applications from a set of tools, and creating those tools. There are several "reuse" projects, which focus on the problems of identifying and cataloging tools. NAO is directed at the specific context of scientific computing. Because the type of tools is restricted, problems such as tools with incompatible data structures for input and output, and dissimilar interfaces to tools which solve similar problems can be addressed. The approach we've taken is to define interfaces to those objects used in numerical analysis, such as geometries, functions and operators, and to start collecting (and building) a set of tools which use these interfaces. We have written a class library (a set of abstract classes and implementations) in C++ which demonstrates the approach. Besides the classes, the class library includes "stub" routines which allow the library to be used from C or Fortran, and an interface to a Visual Programming Language. The library has been used to build a simulator for petroleum reservoirs, using a set of tools for discretizing nonlinear differential equations that we have written, and includes "wrapped" versions of packages from the Netlib repository. Documentation can be found on the Web at "http://www.research.ibm.com/nao". I will describe the objects and their interfaces, and give examples ranging from mesh generation to solving differential equations.

  13. Numerical Aerodynamic Simulation (NAS)

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.; Ballhaus, W. F., Jr.; Bailey, F. R.

    1983-01-01

    The history of the Numerical Aerodynamic Simulation Program, which is designed to provide a leading-edge capability to computational aerodynamicists, is traced back to its origin in 1975. Factors motivating its development and examples of solutions to successively refined forms of the governing equations are presented. The NAS Processing System Network and each of its eight subsystems are described in terms of function and initial performance goals. A proposed usage allocation policy is discussed and some initial problems being readied for solution on the NAS system are identified.

  14. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  15. Application of Finite Element Method to Analyze Inflatable Waveguide Structures

    NASA Technical Reports Server (NTRS)

    Deshpande, M. D.

    1998-01-01

    A Finite Element Method (FEM) is presented to determine the propagation characteristics of a deformed inflatable rectangular waveguide. Various deformations that might be present in an inflatable waveguide are analyzed using the FEM. The FEM procedure and the code developed here are so general that they can be used for any other deformations that are not considered in this report. The code is validated by applying the present code to a rectangular waveguide without any deformations and comparing the numerical results with earlier published results.

  16. Numerical stability in problems of linear algebra.

    NASA Technical Reports Server (NTRS)

    Babuska, I.

    1972-01-01

    Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. Then a numerical process is defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how such a priori information can be utilized as, for instance, a knowledge of the row sums of the matrix. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
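
    For concreteness (the specific old and new processes analyzed in the paper are not reproduced here), the standard elimination process for a tridiagonal system, the kind of recurrence whose roundoff-to-initial-error ratio lambda-stability measures, looks like this:

      import numpy as np

      def thomas(a, b, c, d):
          # Solve T x = d for tridiagonal T with sub-diagonal a (a[0] unused),
          # main diagonal b, and super-diagonal c (c[-1] unused): O(n) forward
          # elimination followed by back substitution.
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x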

  17. The Spatial-Numerical Congruity Effect in Preschoolers

    ERIC Educational Resources Information Center

    Patro, Katarzyna; Haman, Maciej

    2012-01-01

    Number-to-space mapping and its directionality are compelling topics in the study of numerical cognition. Usually, literacy and math education are thought to shape a left-to-right number line. We challenged this claim by analyzing performance of preliterate precounting preschoolers in a spatial-numerical task. In our experiment, children exhibited…

  18. Pavement profile viewer and analyzer : product brief.

    DOT National Transportation Integrated Search

    2003-06-01

    Pavement Profile Viewer and Analyzer, or ProVAL, is a software package that imports, displays, and analyzes the characteristics of pavement profiles from many different sources. ProVAL can analyze pavement profiles using several methods, including In...

  19. Numerical Modeling of Airblast.

    DTIC Science & Technology

    1987-06-01

    [OCR residue omitted; the only recoverable fragments identify the record as "Numerical Modeling of Airblast, 1st Year Final Report, SAIC, June 1987".]

  20. Perspectives in numerical astrophysics:

    NASA Astrophysics Data System (ADS)

    Reverdy, V.

    2016-12-01

    In this discussion paper, we investigate the current and future status of numerical astrophysics and highlight key questions concerning the transition to the exascale era. We first discuss the fact that one of the main motivations behind high performance simulations should not be the reproduction of observational or experimental data, but the understanding of the emergence of complexity from fundamental laws. This motivation is put into perspective regarding the quest for more computational power, and we argue that extra computational resources can be used to gain in abstraction. Then, the readiness level of present-day simulation codes with regard to upcoming exascale architectures is examined, and two major challenges are raised concerning both the central role of data movement for performance and the growing complexity of codes. Software architecture is finally presented as a key component to make the most of upcoming architectures while solving original physics problems.

  1. Numerical relativity beyond astrophysics.

    PubMed

    Garfinkle, David

    2017-01-01

    Though the main applications of computer simulations in relativity are to astrophysical systems such as black holes and neutron stars, nonetheless there are important applications of numerical methods to the investigation of general relativity as a fundamental theory of the nature of space and time. This paper gives an overview of some of these applications. In particular we cover (i) investigations of the properties of spacetime singularities such as those that occur in the interior of black holes and in big bang cosmology. (ii) investigations of critical behavior at the threshold of black hole formation in gravitational collapse. (iii) investigations inspired by string theory, in particular analogs of black holes in more than 4 spacetime dimensions and gravitational collapse in spacetimes with a negative cosmological constant.

  2. Numerical relativity beyond astrophysics

    NASA Astrophysics Data System (ADS)

    Garfinkle, David

    2017-01-01

    Though the main applications of computer simulations in relativity are to astrophysical systems such as black holes and neutron stars, nonetheless there are important applications of numerical methods to the investigation of general relativity as a fundamental theory of the nature of space and time. This paper gives an overview of some of these applications. In particular we cover (i) investigations of the properties of spacetime singularities such as those that occur in the interior of black holes and in big bang cosmology. (ii) investigations of critical behavior at the threshold of black hole formation in gravitational collapse. (iii) investigations inspired by string theory, in particular analogs of black holes in more than 4 spacetime dimensions and gravitational collapse in spacetimes with a negative cosmological constant.

  3. A Numerical Study on Microwave Coagulation Therapy

    DTIC Science & Technology

    2013-01-01

    hepatocellular carcinoma (small size liver tumor). Through extensive numerical simulations, we reveal the mathematical relationships between some critical parameters in the therapy, including input power, frequency, temperature, and regions of impact. It is shown that these relationships can be approximated using simple polynomial functions. Compared to solutions of partial differential equations, these functions are significantly easier to compute and simpler to analyze for engineering design and clinical
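
    The abstract reports that the parameter relationships are well approximated by simple polynomial functions; below is a generic sketch of building such a surrogate from simulation outputs (the arrays are placeholders, not data from the paper):

      import numpy as np

      # Placeholder simulation outputs: peak temperature vs. input power.
      powers_w = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
      peak_temps_c = np.array([45.0, 58.0, 69.0, 78.0, 85.0])

      # Quadratic surrogate: cheap to evaluate compared with re-solving
      # the underlying partial differential equations.
      surrogate = np.poly1d(np.polyfit(powers_w, peak_temps_c, deg=2))
      t_at_35_w = surrogate(35.0)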

  4. Numerical methods in heat transfer

    SciTech Connect

    Lewis, R.W.

    1985-01-01

    This third volume in the series in Numerical Methods in Engineering presents expanded versions of selected papers given at the Conference on Numerical Methods in Thermal Problems held in Venice in July 1981. In this reference work, contributors offer the current state of knowledge on the numerical solution of convective heat transfer problems and conduction heat transfer problems.

  5. Expert system for analyzing eddy current measurements

    DOEpatents

    Levy, Arthur J.; Oppenlander, Jane E.; Brudnoy, David M.; Englund, James M.; Loomis, Kent C.

    1994-01-01

    A method and apparatus (called DODGER) analyzes eddy current data for heat exchanger tubes or any other metallic object. DODGER uses an expert system to analyze eddy current data by reasoning with uncertainty and pattern recognition. The expert system permits DODGER to analyze eddy current data intelligently, and obviate operator uncertainty by analyzing the data in a uniform and consistent manner.

  6. Numerical model SMODERP

    NASA Astrophysics Data System (ADS)

    Kavka, P.; Jeřábek, J.; Strouhal, L.

    2016-12-01

    The contribution presents the numerical model SMODERP, which is used for calculation and prediction of surface runoff and soil erosion from agricultural land. The physically based model includes the processes of infiltration (Philip equation), surface runoff routing (kinematic wave based equation), surface retention, surface roughness, and vegetation impact on runoff. The model is being developed at the Department of Irrigation, Drainage and Landscape Engineering, Civil Engineering Faculty, CTU in Prague. A 2D version of the model was introduced in recent years. The script uses ArcGIS system tools for data preparation. The physical relations are implemented through Python scripts. The main computing part is standalone and operates on numpy arrays. Flow direction is calculated by the steepest descent algorithm and by a multiple flow algorithm. Sheet flow is described by a modified kinematic wave equation. Parameters for five different soil textures were calibrated on a set of a hundred measurements performed with laboratory and field rainfall simulators. Spatially distributed models make it possible to estimate not only surface runoff but also flow in the rills. Development of the rills is based on critical shear stress and critical velocity. A specific submodel was created for modelling the rills; it uses the Manning formula for flow estimation. Flow in ditches and streams is also computed. Numerical stability of the model is controlled by the Courant criterion. The spatial scale is fixed; the time step is dynamic and depends on the actual discharge. The model is used in the framework of the project "Variability of Short-term Precipitation and Runoff in Small Czech Drainage Basins and its Influence on Water Resources Management". The main goal of the project is to elaborate a methodology and an online utility for deriving short-term design precipitation series, which could be utilized by a broad community of scientists, state administration as well as design planners. The methodology will account for
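
    As a rough illustration of the dynamic, Courant-limited time stepping described above, the following Python sketch advances a one-dimensional Manning-type kinematic wave for sheet flow. It is a minimal stand-in, not SMODERP itself; the grid, the parameter values, and the simple upwind flux are assumptions made for the example.

        import numpy as np

        # Toy 1D sheet-flow update in the spirit of a kinematic wave model:
        # h_t + q_x = rainfall excess, with a Manning-type flux
        # q = sqrt(S)/n * h^(5/3). All parameter values are illustrative.
        def kinematic_wave_step(h, dx, slope, n_manning, rain_excess, courant=0.5):
            q = np.sqrt(slope) / n_manning * h ** (5.0 / 3.0)   # Manning flux
            c = (5.0 / 3.0) * np.sqrt(slope) / n_manning * h ** (2.0 / 3.0)  # celerity dq/dh
            cmax = c.max()
            dt = min(1.0, courant * dx / cmax) if cmax > 0 else 1.0  # dynamic time step
            dq = np.diff(np.concatenate(([0.0], q)))            # upwind: inflow from upslope
            return h + dt * (rain_excess - dq / dx), dt

        h = np.zeros(100)                    # water depth along a 100-cell hillslope [m]
        for _ in range(500):
            h, dt = kinematic_wave_step(h, dx=1.0, slope=0.05,
                                        n_manning=0.03, rain_excess=1e-5)
        print(h.max(), dt)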

  7. Numerical propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Lytle, John K.; Remaklus, David A.; Nichols, Lester D.

    1990-01-01

    Implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large-scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase, before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept which is described will provide that framework.

  8. Numerical simulations in stochastic mechanics

    NASA Astrophysics Data System (ADS)

    McClendon, Marvin; Rabitz, Herschel

    1988-05-01

    The stochastic differential equation of Nelson's stochastic mechanics is integrated numerically for several simple quantum systems. The calculations are performed with use of Helfand and Greenside's method and pseudorandom numbers. The resulting trajectories are analyzed both individually and collectively to yield insight into momentum, uncertainty principles, interference, tunneling, quantum chaos, and common models of diatomic molecules from the stochastic quantization point of view. In addition to confirming Shucker's momentum theorem, these simulations illustrate, within the context of stochastic mechanics, the position-momentum and time-energy uncertainty relations, the two-slit diffraction pattern, exponential decay of an unstable system, and the greater degree of anticorrelation in a valence-bond model as compared with a molecular-orbital model of H2. The attempt to find exponential divergence of initially nearby trajectories, potentially useful as a criterion for quantum chaos, in a periodically forced oscillator is inconclusive. A way of computing excited energies from the ground-state motion is presented. In all of these studies the use of particle trajectories allows a more insightful interpretation of physical phenomena than is possible within traditional wave mechanics.
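
    For readers unfamiliar with the procedure, integrating a Nelson trajectory amounts to solving a stochastic differential equation. The sketch below uses a plain Euler-Maruyama step (not the Helfand-Greenside scheme of the paper) for the harmonic-oscillator ground state, where the forward drift is b(x) = -omega*x and the diffusion coefficient is hbar/2m; units with hbar = m = omega = 1 are assumed.

        import numpy as np

        # Euler-Maruyama integration of a Nelson-type trajectory for the
        # harmonic-oscillator ground state: dx = -x dt + dw, E[dw^2] = dt
        # in units hbar = m = omega = 1. The stationary density is |psi_0|^2.
        rng = np.random.default_rng(0)
        dt, n_steps, n_traj = 1e-3, 20_000, 500
        x = rng.normal(0.0, np.sqrt(0.5), n_traj)   # sample |psi_0|^2 (variance 1/2)
        for _ in range(n_steps):
            x += -x * dt + np.sqrt(dt) * rng.normal(size=n_traj)
        print(x.var())    # stays near 0.5, the quantum-mechanical variance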

  9. Ferrofluids: Modeling, numerical analysis, and scientific computation

    NASA Astrophysics Data System (ADS)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving onto the much more complex Rosensweig's model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we setup the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

  10. Numerical and Experimental Studies of the Natural Convection Flow Within a Horizontal Cylinder Subjected to a Uniformly Cold Wall Boundary Condition. Ph.D. Thesis - Va. Poly. Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Stewart, R. B.

    1972-01-01

    Numerical solutions are obtained for the quasi-compressible Navier-Stokes equations governing the time dependent natural convection flow within a horizontal cylinder. The early time flow development and wall heat transfer are obtained after imposing a uniformly cold wall boundary condition on the cylinder. Solutions are also obtained for the case of a time varying cold wall boundary condition. Windward explicit differencing is used for the numerical solutions. The viscous truncation error associated with this scheme is controlled so that first order accuracy is maintained in time and space. The results encompass a range of Grashof numbers from 8.34 x 10(exp 4) to 7 x 10(exp 7), which is within the laminar flow regime for gravitationally driven fluid flows. Experiments within a small-scale instrumented horizontal cylinder revealed the time development of the temperature distribution across the boundary layer and also the decay of wall heat transfer with time.
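
    The "windward" (upwind) differencing named above biases each spatial difference toward the direction the flow comes from. A minimal sketch on the model advection equation u_t + a u_x = 0 is shown below; the thesis applies the same idea to the full quasi-compressible Navier-Stokes equations, which is considerably more involved.

        import numpy as np

        # First-order upwind (windward) explicit update for u_t + a u_x = 0.
        a, dx, courant = 1.0, 0.01, 0.8
        dt = courant * dx / abs(a)
        x = np.arange(0.0, 1.0, dx)
        u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial Gaussian pulse
        for _ in range(50):
            u[1:] -= a * dt / dx * (u[1:] - u[:-1])  # backward difference since a > 0
        print(x[np.argmax(u)])                       # pulse has advected downstream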

  11. Numerical studies in geophysics

    NASA Astrophysics Data System (ADS)

    Hier Majumder, Catherine Anne

    2003-10-01

    This thesis focuses on the use of modern numerical techniques in the geo- and environmental sciences. Four topics are discussed in this thesis: finite Prandtl number convection, wavelet analysis, inverse methods and data assimilation, and nuclear waste tank mixing. The finite Prandtl number convection studies examine how convection behavior changes as Prandtl numbers are increased to as high as 2 x 10(exp 4), on the order of Prandtl numbers expected in very hot magmas or mushy ice diapirs. I found that there are significant differences in the convection style between finite Prandtl number convection and the infinite Prandtl number approximation even for Prandtl numbers on the order of 10(exp 4). This indicates that the infinite Prandtl convection approximation might not accurately model behavior in fluids with large, but finite Prandtl numbers. The section on inverse methods and data assimilation used the technique of four-dimensional variational data assimilation (4D-VAR) developed by meteorologists to integrate observations into forecasts. It was useful in studying the predictability and dependence on initial conditions of finite Prandtl simulations. This technique promises to be useful in a wide range of geological and geophysical fields, including mantle convection, hydrogeology, and sedimentology. Wavelet analysis was used to help image and scrutinize at small scales both temperature and vorticity fields from convection simulations and the geoid. It was found to be extremely helpful in both cases. It allowed us to separate the information in the data into various spatial scales without losing the locations of the signals in space. This proved to be essential in understanding the processes producing the total signal in the datasets. The nuclear waste study showed that techniques developed in geology and geophysics can be used to solve scientific problems in other fields. I applied state-of-the-art techniques currently employed in geochemistry, sedimentology, and mantle

  12. 40 CFR 86.1422 - Analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Trucks; Certification Short Test Procedures § 86.1422 Analyzer calibration. (a) Determine that the... check. Prior to its introduction into service and at specified periods thereafter, the analyzer must...

  13. A network of automatic atmospherics analyzer

    NASA Technical Reports Server (NTRS)

    Schaefer, J.; Volland, H.; Ingmann, P.; Eriksson, A. J.; Heydt, G.

    1980-01-01

    The design and function of an atmospheric analyzer which uses a computer are discussed. Mathematical models which show the method of measurement are presented. The data analysis and recording procedures of the analyzer are discussed.

  14. Development of an Infrared Fluorescent Gas Analyzer.

    ERIC Educational Resources Information Center

    McClatchie, E. A.

    A prototype model low-level carbon monoxide analyzer was developed using fluorescent cell and negative chopping techniques to achieve a device superior to state-of-the-art NDIR (nondispersive infrared) analyzers in stability and cross-sensitivity to other gaseous species. It is clear that this type of analyzer has that capacity. The prototype…

  15. 46 CFR 154.1360 - Oxygen analyzer.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Oxygen analyzer. 154.1360 Section 154.1360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS... Instrumentation § 154.1360 Oxygen analyzer. The vessel must have a portable analyzer that measures oxygen levels...

  16. 46 CFR 154.1360 - Oxygen analyzer.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Oxygen analyzer. 154.1360 Section 154.1360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS... Instrumentation § 154.1360 Oxygen analyzer. The vessel must have a portable analyzer that measures oxygen levels...

  17. 46 CFR 154.1360 - Oxygen analyzer.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Oxygen analyzer. 154.1360 Section 154.1360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS... Instrumentation § 154.1360 Oxygen analyzer. The vessel must have a portable analyzer that measures oxygen levels...

  18. 46 CFR 154.1360 - Oxygen analyzer.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Oxygen analyzer. 154.1360 Section 154.1360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS... Instrumentation § 154.1360 Oxygen analyzer. The vessel must have a portable analyzer that measures oxygen levels...

  19. 46 CFR 154.1360 - Oxygen analyzer.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Oxygen analyzer. 154.1360 Section 154.1360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS... Instrumentation § 154.1360 Oxygen analyzer. The vessel must have a portable analyzer that measures oxygen levels...

  20. Uncertainty in Analyzed Water and Energy Budgets at Continental Scales

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, F. R.; Mocko, D.; Chen, J.

    2011-01-01

    Operational analyses and retrospective analyses provide all the physical terms of the water and energy budgets, guided by the assimilation of atmospheric observations. However, there is significant reliance on the numerical models, and so uncertainty in the budget terms is always present. Here, we use a recently developed data set consisting of a mix of 10 analyses (both operational and retrospective) to quantify the uncertainty of analyzed water and energy budget terms for GEWEX continental-scale regions, following the evaluations of Dr. John Roads using individual reanalysis data sets.

  1. Numerical and experimental investigations on cavitation erosion

    NASA Astrophysics Data System (ADS)

    Fortes Patella, R.; Archer, A.; Flageul, C.

    2012-11-01

    A method is proposed to predict cavitation damage from cavitating flow simulations. For this purpose, a numerical process coupling cavitating flow simulations and erosion models was developed and applied to a two-dimensional (2D) hydrofoil tested at TUD (Darmstadt University of Technology, Germany) [1] and to a NACA 65012 hydrofoil tested at LMH-EPFL (Lausanne Polytechnic School) [2]. Cavitation erosion tests (pitting tests) were carried out, and 3D laser profilometry was used to analyze the surfaces damaged by cavitation [3]. The method makes it possible to evaluate pit characteristics and, in particular, volume damage rates. The paper describes the developed erosion model and the technique of cavitation damage measurement, and presents some comparisons between experimental results and numerical damage predictions. The extent of cavitation erosion was correctly estimated for both hydrofoil geometries. The simulated qualitative influence of flow velocity, sigma value and gas content on cavitation damage agreed well with experimental observations.

  2. Consistency and convergence for numerical radiation conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1990-01-01

    The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.

  3. Numerical approach of the quantum circuit theory

    SciTech Connect

    Silva, J.J.B., E-mail: jaedsonfisica@hotmail.com; Duarte-Filho, G.C.; Almeida, F.A.G.

    2017-03-15

    In this paper we develop a numerical method based on quantum circuit theory to approach coherent electronic transport in a network of quantum dots connected with arbitrary topology. The algorithm was employed in a circuit formed by quantum dots connected to each other in the shape of a linear chain (associations in series), and of a ring (associations in series and in parallel). For both systems we compute two current observables: conductance and shot noise power. We find an excellent agreement between our numerical results and the ones found in the literature. Moreover, we analyze the algorithm efficiency for a chain of quantum dots, where the mean processing time exhibits a linear dependence on the number of quantum dots in the array.

  4. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.

  5. Numerical systems on a minicomputer

    SciTech Connect

    Brown, Jr., Roy Leonard

    1973-02-01

    This thesis defines the concept of a numerical system for a minicomputer and provides a description of the software and computer system configuration necessary to implement such a system. A procedure for creating a numerical system from a FORTRAN program is developed and an example is presented.

  6. Aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Murman, E. M.; Chapman, G. T.

    1983-01-01

    The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.

  7. Computerized Numerical Control Curriculum Guide.

    ERIC Educational Resources Information Center

    Reneau, Fred; And Others

    This guide is intended for use in a course in programming and operating a computerized numerical control system. Addressed in the course are various aspects of programming and planning, setting up, and operating machines with computerized numerical control, including selecting manual or computer-assigned programs and matching them with…

  8. Numerical study of heat transfer and fluid flow for steady crystal growth in a vertical Bridgman device

    NASA Astrophysics Data System (ADS)

    Pohlman, Matthew Michael

    The study of heat transfer and fluid flow in a vertical Bridgman device is motivated by current industrial difficulties in growing crystals with as few defects as possible. For example, gallium arsenide (GaAs) is of great interest to the semiconductor industry but remains an uneconomical alternative to silicon because of manufacturing problems. This dissertation is a two-dimensional study of the fluid in an idealized Bridgman device. The model nonlinear PDEs are discretized using second-order finite differencing. Newton's method solves the resulting nonlinear discrete equations. The large sparse linear systems involving the Jacobian are solved iteratively using the Generalized Minimum Residual method (GMRES). By adapting fast direct solvers for elliptic equations with simple boundary conditions, a good preconditioner is developed, which is essential for GMRES to converge quickly. Trends of the fluid flow and heat transfer for typical ranges of the physical parameters are determined. Also, the sizes of the terms in the mathematical model are found by numerical investigation, in order to determine which terms are in balance as the physical parameters vary. The results suggest the plausibility of simpler asymptotic solutions.
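
    The Newton/GMRES/preconditioner loop described above has a structure worth seeing once. The Python sketch below reproduces it on a toy 1D nonlinear Poisson problem u'' = e^u rather than the Bridgman equations; the fast elliptic preconditioner of the thesis is stood in for by a direct factorization of the Laplacian.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, gmres, splu

        # Newton's method with GMRES inner solves, preconditioned by a
        # factorization of the (linear) Laplacian -- a toy analogue of the
        # fast-elliptic-solver preconditioner described in the abstract.
        n = 200
        h = 1.0 / (n + 1)
        lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2

        def residual(u):
            return lap @ u - np.exp(u)       # discretized u'' - e^u = 0

        u = np.zeros(n)
        M = splu(lap.tocsc())                # preconditioner: exact Laplacian solve
        for it in range(20):
            r = residual(u)
            if np.linalg.norm(r) < 1e-10:
                break
            J = (lap - diags(np.exp(u))).tocsc()   # Jacobian of the toy problem
            du, info = gmres(J, -r, M=LinearOperator((n, n), M.solve))
            u += du
        print(it, np.linalg.norm(residual(u)))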

  9. Web-based multi-channel analyzer

    DOEpatents

    Gritzo, Russ E.

    2003-12-23

    The present invention provides an improved multi-channel analyzer designed to conveniently gather, process, and distribute spectrographic pulse data. The multi-channel analyzer may operate on a computer system having memory, a processor, and the capability to connect to a network and to receive digitized spectrographic pulses. The multi-channel analyzer may have a software module integrated with a general-purpose operating system that may receive digitized spectrographic pulses at rates of at least 10,000 pulses per second. The multi-channel analyzer may further have a user-level software module that may receive user-specified controls dictating the operation of the multi-channel analyzer, making the multi-channel analyzer customizable by the end user. The user-level software may further categorize and conveniently distribute spectrographic pulse data employing non-proprietary, standard communication protocols and formats.
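
    The core bookkeeping of any multi-channel analyzer, namely sorting digitized pulse heights into channels and accumulating counts, is easy to illustrate. The sketch below uses synthetic pulse amplitudes, not a real detector stream, and the channel count and energy range are arbitrary assumptions.

        import numpy as np

        # Bin each digitized pulse height into one of N channels and
        # accumulate the counts -- the essence of a multi-channel analyzer.
        rng = np.random.default_rng(1)
        pulses = rng.normal(662.0, 30.0, 100_000)    # synthetic peak near 662 keV
        n_channels = 4096
        spectrum, edges = np.histogram(pulses, bins=n_channels, range=(0.0, 2048.0))
        peak = spectrum.argmax()
        print(peak, edges[peak])                     # channel nearest the peak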

  10. Altitude characteristics of selected air quality analyzers

    NASA Technical Reports Server (NTRS)

    White, J. H.; Strong, R.; Tommerdahl, J. B.

    1979-01-01

    The effects of altitude (pressure) on the operation and sensitivity of various air quality analyzers frequently flown on aircraft were analyzed. Two ozone analyzers were studied at altitudes from 600 to 7500 m and a nitrogen oxides chemiluminescence detector and a sulfur dioxide flame photometric detector were studied at altitudes from 600 to 3000 m. Calibration curves for altitude corrections to the sensitivity of the instruments are presented along with discussion of observed instrument behavior.

  11. Technique for analyzing human respiratory process

    NASA Technical Reports Server (NTRS)

    Liu, F. F.

    1970-01-01

    Electronic system /MIRACLE 2/ places the frequency and gas flow rate of the respiratory process within a common frame of reference to render them comparable and compatible with ''real clock time.'' Numerous measurements are accomplished accurately on a strict one-minute, half-minute, breath-by-breath, or other period basis.

  12. A Method for Analyzing Commonalities in Clinical Trial Target Populations

    PubMed Central

    He, Zhe; Carini, Simona; Hao, Tianyong; Sim, Ida; Weng, Chunhua

    2014-01-01

    ClinicalTrials.gov presents great opportunities for analyzing commonalities in clinical trial target populations to facilitate knowledge reuse when designing eligibility criteria of future trials or to reveal potential systematic biases in selecting population subgroups for clinical research. Towards this goal, this paper presents a novel data resource for enabling such analyses. Our method includes two parts: (1) parsing and indexing eligibility criteria text; and (2) mining common eligibility features and attributes of common numeric features (e.g., A1c). We designed and built a database called “Commonalities in Target Populations of Clinical Trials” (COMPACT), which stores structured eligibility criteria and trial metadata in a readily computable format. We illustrate its use in an example analytic module called CONECT using COMPACT as the backend. Type 2 diabetes is used as an example to analyze commonalities in the target populations of 4,493 clinical trials on this disease. PMID:25954450

  13. A wideband, high-resolution spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Quirk, M. P.; Wilck, H. C.; Garyantes, M. F.; Grimm, M. J.

    1988-01-01

    A two-million-channel, 40 MHz bandwidth, digital spectrum analyzer under development at the Jet Propulsion Laboratory is described. The analyzer system will serve as a prototype processor for the sky survey portion of NASA's Search for Extraterrestrial Intelligence program and for other applications in the Deep Space Network. The analyzer digitizes an analog input, performs a 2(exp 21)-point Discrete Fourier Transform, accumulates the output power, normalizes the output to remove frequency-dependent gain, and automates simple signal detection algorithms. Due to its built-in frequency-domain processing functions and configuration flexibility, the analyzer is a very powerful tool for real-time signal analysis.
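
    The processing chain named above (large DFT, power accumulation, gain normalization, peak detection) can be mimicked offline in a few lines. Everything in this sketch is synthetic and assumed for illustration, including the sample rate and the injected test tone; the real analyzer does this in dedicated real-time hardware.

        import numpy as np

        # Offline mimic of the analyzer chain: window, 2(exp 21)-point FFT,
        # power accumulation over frames, crude flat-gain normalization.
        n_fft, n_frames, fs = 2**21, 4, 80e6      # 80 MS/s real sampling assumed (40 MHz band)
        rng = np.random.default_rng(2)
        t = np.arange(n_fft) / fs
        w = np.hanning(n_fft)
        acc = np.zeros(n_fft)
        for _ in range(n_frames):
            x = rng.normal(size=n_fft) + 1e-2 * np.sin(2 * np.pi * 12.345e6 * t)
            acc += np.abs(np.fft.fft(x * w)) ** 2  # accumulate output power
        acc /= acc.mean()                          # normalize out flat gain
        print(np.fft.fftfreq(n_fft, 1 / fs)[acc.argmax()])  # tone near 12.345 MHz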

  14. Direct numerical simulations and modeling of a spatially-evolving turbulent wake

    NASA Technical Reports Server (NTRS)

    Cimbala, John M.

    1994-01-01

    Understanding of turbulent free shear flows (wakes, jets, and mixing layers) is important, not only for scientific interest, but also because of their appearance in numerous practical applications. Turbulent wakes, in particular, have recently received increased attention by researchers at NASA Langley. The turbulent wake generated by a two-dimensional airfoil has been selected as the test case for detailed high-resolution particle image velocimetry (PIV) experiments. This same wake has also been chosen to enhance NASA's turbulence modeling efforts. Over the past year, the author has completed several wake computations while visiting NASA through the 1993 and 1994 ASEE summer programs, and also while on sabbatical leave during the 1993-94 academic year. These calculations have included two-equation (K-omega and K-epsilon) models, algebraic stress models (ASM), full Reynolds stress closure models, and direct numerical simulations (DNS). Recently, there has been mutually beneficial collaboration of the experimental and computational efforts. In fact, these projects have been chosen for joint presentation at the NASA Turbulence Peer Review, scheduled for September 1994. DNS calculations are presently underway for a turbulent wake at Re(sub theta) = 1000 and at a Mach number of 0.20. (Theta is the momentum thickness, which remains constant in the wake of a two-dimensional body.) These calculations utilize a compressible DNS code written by M. M. Rai of NASA Ames, and modified for the wake by J. Cimbala. The code employs fifth-order accurate upwind-biased finite differencing for the convective terms, fourth-order accurate central differencing for the viscous terms, and an iterative-implicit time-integration scheme. The computational domain for these calculations starts at x/theta = 10, and extends to x/theta = 610. Fully developed turbulent wake profiles, obtained from experimental data from several wake generators, are supplied at the computational inlet, along with

  15. Numerical study of combustion processes in afterburners

    NASA Technical Reports Server (NTRS)

    Zhou, Xiaoqing; Zhang, Xiaochun

    1986-01-01

    Mathematical models and numerical methods are presented for computer modeling of aeroengine afterburners. A computer code GEMCHIP is described briefly. The algorithms SIMPLER, for gas flow predictions, and DROPLET, for droplet flow calculations, are incorporated in this code. The block correction technique is adopted to facilitate convergence. The method of handling irregular shapes of combustors and flameholders is described. The predicted results for a low-bypass-ratio turbofan afterburner in the cases of gaseous combustion and multiphase spray combustion are provided and analyzed, and engineering guides for afterburner optimization are presented.

  16. Numerical Simulation in a Supercritical CFB Boiler

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Gaol, Xiang; Luo, Zhongyang; Jiang, Xiaoguo

    The dimensions of the hot circulation loop of a supercritical CFB boiler are large, and there are many unknowns and challenges that must be identified and resolved during development. In order to realize a reasonable and reliable design of the hot circulation loop, a numerical simulation of gas-solid flow in a supercritical CFB boiler was conducted using the FLUENT software. The flow field in the hot circulation loop, the gas-solid flow as affected by three asymmetric cyclones, the air distribution, and the pressure drop in the furnace were analyzed. The simulation results showed that the general arrangement of the 600 MWe supercritical CFB boiler is reasonable.

  17. Numerical simulation of swept-wing flows

    NASA Technical Reports Server (NTRS)

    Reed, Helen L.

    1991-01-01

    Efforts of the last six months to computationally model the transition process characteristics of flow over swept wings are described. Specifically, the crossflow instability and crossflow/Tollmien-Schlichting wave interactions are analyzed through the numerical solution of the full 3D Navier-Stokes equations including unsteadiness, curvature, and sweep. This approach is chosen because of the complexity of the problem and because it appears that linear stability theory is insufficient to explain the discrepancies between different experiments and between theory and experiment. The leading edge region of a swept wing is considered in a 3D spatial simulation with random disturbances as the initial conditions.

  18. 21 CFR 882.1020 - Rigidity analyzer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Rigidity analyzer. 882.1020 Section 882.1020 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES NEUROLOGICAL DEVICES Neurological Diagnostic Devices § 882.1020 Rigidity analyzer. (a...

  19. 21 CFR 882.1020 - Rigidity analyzer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Rigidity analyzer. 882.1020 Section 882.1020 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES NEUROLOGICAL DEVICES Neurological Diagnostic Devices § 882.1020 Rigidity analyzer. (a...

  20. Numerical Solution of Incompressible Navier-Stokes Equations Using a Fractional-Step Approach

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    1999-01-01

    A fractional step method for the solution of steady and unsteady incompressible Navier-Stokes equations is outlined. The method is based on a finite volume formulation and uses the pressure in the cell center and the mass fluxes across the faces of each cell as dependent variables. Implicit treatment of convective and viscous terms in the momentum equations enables the numerical stability restrictions to be relaxed. The linearization error in the implicit solution of the momentum equations is reduced by using three subiterations in order to achieve second-order temporal accuracy for time-accurate calculations. In the spatial discretization of the momentum equations, a high-order (3rd and 5th) flux-difference splitting for the convective terms and a second-order central difference for the viscous terms are used. The resulting algebraic equations are solved with a line-relaxation scheme which allows the use of large time steps. A four-color ZEBRA scheme is employed after the line-relaxation procedure in the solution of the Poisson equation for pressure. This procedure is applied to a Couette flow problem using a distorted computational grid to show that the method minimizes grid effects. Additional benchmark cases include the unsteady laminar flow over a circular cylinder at a Reynolds number of 200, and a 3-D, steady, turbulent wingtip vortex wake propagation study. The solution algorithm does a very good job in resolving the vortex core when 5th-order upwind differencing and a modified production term in the Baldwin-Barth one-equation turbulence model are used with adequate grid resolution.
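
    The fractional-step structure (advance momentum to an intermediate velocity, then project it onto the divergence-free space through a pressure Poisson solve) is sketched below on a doubly periodic 2D grid. The FFT Poisson solve and the diffusion-only momentum step are simplifications assumed for the example; the paper's line-relaxation and four-color ZEBRA machinery is not reproduced.

        import numpy as np

        # Fractional-step (projection) skeleton: u* from a momentum step,
        # then u = u* - grad(phi) with lap(phi) = div(u*), solved by FFT.
        n, dt, nu = 64, 1e-3, 1e-2
        k = 2 * np.pi * np.fft.fftfreq(n)
        KX, KY = np.meshgrid(k, k, indexing="ij")
        K2 = KX**2 + KY**2
        K2[0, 0] = 1.0                            # mean mode: avoid divide by zero

        def project(u, v):
            uh, vh = np.fft.fft2(u), np.fft.fft2(v)
            div_h = 1j * KX * uh + 1j * KY * vh   # divergence of u*
            phi_h = -div_h / K2                   # Poisson solve lap(phi) = div(u*)
            return (np.fft.ifft2(uh - 1j * KX * phi_h).real,
                    np.fft.ifft2(vh - 1j * KY * phi_h).real)

        lap = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                         np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)  # dx = 1

        rng = np.random.default_rng(3)
        u, v = rng.normal(size=(2, n, n))
        u, v = project(u + dt * nu * lap(u), v + dt * nu * lap(v))
        uh, vh = np.fft.fft2(u), np.fft.fft2(v)
        print(np.abs(1j * KX * uh + 1j * KY * vh).max())  # ~ machine zero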

  1. A NUMERICAL ALGORITHM FOR MODELING MULTIGROUP NEUTRINO-RADIATION HYDRODYNAMICS IN TWO SPATIAL DIMENSIONS

    SciTech Connect

    Swesty, F. Douglas; Myra, Eric S.

    It is now generally agreed that multidimensional, multigroup, neutrino-radiation hydrodynamics (RHD) is an indispensable element of any realistic model of stellar-core collapse, core-collapse supernovae, and proto-neutron star instabilities. We have developed a new, two-dimensional, multigroup algorithm that can model neutrino-RHD flows in core-collapse supernovae. Our algorithm uses an approach similar to the ZEUS family of algorithms, originally developed by Stone and Norman. However, this completely new implementation extends that previous work in three significant ways: first, we incorporate multispecies, multigroup RHD in a flux-limited-diffusion approximation. Our approach is capable of modeling pair-coupled neutrino-RHD, and includes effects of Pauli blocking in the collision integrals. Blocking gives rise to nonlinearities in the discretized radiation-transport equations, which we evolve implicitly in time. We employ parallelized Newton-Krylov methods to obtain a solution of these nonlinear, implicit equations. Our second major extension to the ZEUS algorithm is the inclusion of an electron conservation equation that describes the evolution of electron-number density in the hydrodynamic flow. This permits calculating deleptonization of a stellar core. Our third extension modifies the hydrodynamics algorithm to accommodate realistic, complex equations of state, including those having nonconvex behavior. In this paper, we present a description of our complete algorithm, giving sufficient details to allow others to implement, reproduce, and extend our work. Finite-differencing details are presented in appendices. We also discuss implementation of this algorithm on state-of-the-art, parallel-computing architectures. Finally, we present results of verification tests that demonstrate the numerical accuracy of this algorithm on diverse hydrodynamic, gravitational, radiation-transport, and RHD sample problems. We believe our methods to be of general use in

  2. Numerical Hydrodynamics in General Relativity.

    PubMed

    Font, José A

    2000-01-01

    The current status of numerical solutions for the equations of ideal general relativistic hydrodynamics is reviewed. Different formulations of the equations are presented, with special mention of conservative and hyperbolic formulations well-adapted to advanced numerical methods. A representative sample of available numerical schemes is discussed and particular emphasis is paid to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. A comprehensive summary of relevant astrophysical simulations in strong gravitational fields, including gravitational collapse, accretion onto black holes and evolution of neutron stars, is also presented. Supplementary material is available for this article at 10.12942/lrr-2000-2.

  3. Numerical bifurcation analysis of immunological models with time delays

    NASA Astrophysics Data System (ADS)

    Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady

    2005-12-01

    In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.
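
    A fixed-step Euler integration of a scalar DDE shows why dedicated numerical methods matter here: the state is a whole history segment, not a single number. The delayed logistic equation below is a standard textbook example, not one of the models from the paper, and the parameter values are chosen only to sit past the Hopf bifurcation.

        import numpy as np

        # Method-of-steps flavored Euler integration of the delayed logistic
        # equation x'(t) = r x(t) (1 - x(t - tau)); Hopf point at r*tau = pi/2.
        dt, tau, r = 0.01, 1.0, 1.8
        n_delay = int(round(tau / dt))
        hist = [0.1] * (n_delay + 1)      # constant initial history on [-tau, 0]
        for _ in range(20_000):
            x, x_lag = hist[-1], hist[-1 - n_delay]
            hist.append(x + dt * r * x * (1.0 - x_lag))
        print(min(hist[-1000:]), max(hist[-1000:]))   # sustained oscillation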

  4. Numerical simulations of quantum devices

    NASA Astrophysics Data System (ADS)

    Sandu, Titus

    This work has been motivated by the tremendous effort toward the next generation of electron devices that will replace the present CMOS (Complementary Metal Oxide Semiconductor). Non-equilibrium Green's function formalism (NEGF) and empirical tight-binding (ETB) methods have been utilized in this dissertation. We studied the transport properties of Si/SiO2 resonant tunneling diodes (RTDs) by employing NEGF. We analyzed the physics of electron transport in Si/SiO2 RTDs and provided some guidelines for the fabrication of such devices by considering the effect of interface roughness scattering. Atomic scale roughness is shown to be acceptable. As the island size of the roughness increases, the peak-to-valley ratio degrades to less than 5 for 1 nm roughness and less than 2 for 2 nm roughness. By the ETB method we calculated electronic and optical properties of the relatively new Si/BeSe0.41Te0.59 system, more precisely Si/BeSe0.41Te0.59 [001] superlattices (SLs). Two interface bands were found in the band gap of bulk silicon. They were related to the polar Si/BeSe0.41Te0.59 interface. In addition, numerical calculations showed that the optical gap is close to the fundamental gap of bulk Si and the transitions are optically allowed. Two more aspects have been studied with NEGF: intrinsic bistability and off-zone center current flow of electrons in the RTD. We showed that broadening of the quasi-bound state in the emitter by scattering reduces intrinsic bistability. So far in different theoretical papers dealing with intrinsic bistability, only the scattering in the well has been considered. Finally, we demonstrated that scattering induces off-zone center current flow of electrons in RTDs. In RTDs electrons usually have a zone-center current flow. This is due to the coherent transport for which Tsu-Esaki formula is valid. On the contrary, holes have off-zone-center current flow. We show that, generally, carrier current flow is off-center, which means that the hole

  5. Numerical implementation, verification and validation of two-phase flow four-equation drift flux model with Jacobian-free Newton–Krylov method

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-08-24

    This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on the work of Ishii and his collaborators. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and the fully implicit backward Euler method were used as the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
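
    The essence of the Jacobian-free approach (only a residual function is coded, and Jacobian-vector products are approximated by finite differencing inside the Krylov solver) can be shown with SciPy's newton_krylov on a toy discretized boundary-value problem. The problem below is an illustrative stand-in, not the four-equation drift flux model.

        import numpy as np
        from scipy.optimize import newton_krylov

        # JFNK in miniature: u'' + u^2 = 1, u(0) = u(1) = 0, central differences.
        # No Jacobian is assembled; the solver differences the residual instead.
        n = 100
        h = 1.0 / (n + 1)

        def residual(u):
            upad = np.concatenate(([0.0], u, [0.0]))
            return (upad[2:] - 2 * upad[1:-1] + upad[:-2]) / h**2 + u**2 - 1.0

        u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
        print(np.abs(residual(u)).max())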

  6. Analyzing machine noise for real time maintenance

    NASA Astrophysics Data System (ADS)

    Yamato, Yoji; Fukumoto, Yoshifumi; Kumazaki, Hiroki

    2017-02-01

    Recently, IoT technologies have progressed, and applications in the maintenance area are expected. However, IoT maintenance applications are not yet widespread in Japan, because each case requires a one-off sensing and analysis solution, collecting sensing data is costly, and maintenance automation is insufficient. This paper proposes a maintenance platform which analyzes sound data at the edge, analyzes only anomaly data in the cloud, and orders maintenance automatically, to resolve these problems with existing technology. We also implement a sample application and compare it with related work.

  7. Systems Analyze Water Quality in Real Time

    NASA Technical Reports Server (NTRS)

    2010-01-01

    A water analyzer developed under Small Business Innovation Research (SBIR) contracts with Kennedy Space Center now monitors treatment processes at water and wastewater facilities around the world. Originally designed to provide real-time detection of nutrient levels in hydroponic solutions for growing plants in space, the ChemScan analyzer, produced by ASA Analytics Inc., of Waukesha, Wisconsin, utilizes spectrometry and chemometric algorithms to automatically analyze multiple parameters in the water treatment process with little need for maintenance, calibration, or operator intervention. The company has experienced a compound annual growth rate of 40 percent over its 15-year history as a direct result of the technology's success.

  8. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
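
    The loop described above (krige the known function values, search the surrogate on a grid, evaluate the expensive objective at the surrogate minimizer, refit) is sketched below. The objective, the Gaussian correlation with a fixed theta, and the zero-mean simple-kriging predictor are assumptions made for a compact example, not the authors' formulation.

        import numpy as np

        def expensive_f(x):                  # stand-in for a costly objective
            return (x - 0.7) ** 2 + 0.1 * np.sin(20.0 * x)

        def krige(x_data, y_data, x_query, theta=50.0):
            corr = lambda a, b: np.exp(-theta * (a[:, None] - b[None, :]) ** 2)
            R = corr(x_data, x_data) + 1e-8 * np.eye(len(x_data))  # small nugget
            w = np.linalg.solve(R, y_data)
            return corr(x_query, x_data) @ w # zero-mean simple-kriging predictor

        x_data = np.linspace(0.0, 1.0, 5)    # a handful of expensive evaluations
        y_data = expensive_f(x_data)
        grid = np.linspace(0.0, 1.0, 401)
        for _ in range(5):                   # surrogate-guided grid search
            x_new = grid[np.argmin(krige(x_data, y_data, grid))]
            x_data = np.append(x_data, x_new)
            y_data = np.append(y_data, expensive_f(x_new))
        print(x_data[np.argmin(y_data)])     # approaches the true minimizer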

  9. Trees, bialgebras and intrinsic numerical algorithms

    NASA Technical Reports Server (NTRS)

    Crouch, Peter; Grossman, Robert; Larson, Richard

    1990-01-01

    Preliminary work on intrinsic numerical integrators evolving on groups is described. Fix a finite dimensional Lie group G; let g denote its Lie algebra, and let Y(sub 1),...,Y(sub N) denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form dx/dt = F(x(t)), x(0) = p, where p is an element of G. The algorithms depend upon constants c(sub i) and c(sub ij), for i = 1,...,k and j less than i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R(N), then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c(sub i) and c(sub ij) must satisfy in order for the algorithm to yield an rth order numerical integrator and to analyze the resulting algorithms.
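
    The group-preservation property mentioned above is easy to demonstrate: update by right-multiplying with the exponential of a Lie algebra element, and the iterate cannot leave the group. A first-order sketch on the rotation group SO(3) follows; the choice of group and of a constant vector field are assumptions made for the example.

        import numpy as np
        from scipy.linalg import expm

        def hat(w):                          # R^3 -> so(3), the Lie algebra
            return np.array([[0.0, -w[2], w[1]],
                             [w[2], 0.0, -w[0]],
                             [-w[1], w[0], 0.0]])

        omega = np.array([0.3, -0.2, 0.9])   # constant body angular velocity
        x, dt = np.eye(3), 0.01
        step = expm(dt * hat(omega))         # exponential of the algebra element
        for _ in range(1000):
            x = x @ step                     # stays on SO(3) by construction
        print(np.abs(x.T @ x - np.eye(3)).max())   # orthogonality ~ 1e-15

    A first-order step like this reduces to explicit Euler on R(N); the higher-order constructions described in the abstract reduce to classical Runge-Kutta in the same abelian limit.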

  10. Numerical Hydrodynamics in General Relativity.

    PubMed

    Font, José A

    2003-01-01

    The current status of numerical solutions for the equations of ideal general relativistic hydrodynamics is reviewed. With respect to an earlier version of the article, the present update provides additional information on numerical schemes, and extends the discussion of astrophysical simulations in general relativistic hydrodynamics. Different formulations of the equations are presented, with special mention of conservative and hyperbolic formulations well-adapted to advanced numerical methods. A large sample of available numerical schemes is discussed, paying particular attention to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. A comprehensive summary of astrophysical simulations in strong gravitational fields is presented. These include gravitational collapse, accretion onto black holes, and hydrodynamical evolutions of neutron stars. The material contained in these sections highlights the numerical challenges of various representative simulations. It also follows, to some extent, the chronological development of the field, concerning advances on the formulation of the gravitational field and hydrodynamic equations and the numerical methodology designed to solve them. Supplementary material is available for this article at 10.12942/lrr-2003-4.

  11. System for analyzing coal liquefaction products

    DOEpatents

    Dinsmore, Stanley R.; Mrochek, John E.

    1984-01-01

    A system for analyzing constituents of coal-derived materials comprises three adsorption columns and a flow-control arrangement which permits separation of both aromatic and polar hydrocarbons by use of two eluent streams.

  12. Guide to analyzing investment options using TWIGS.

    Treesearch

    Charles R Blinn; Dietmar W. Rose; Monique L. Belli

    1988-01-01

    Describes methods for analyzing economic return of simulated stand management alternatives in TWIGS. Defines and discusses net present value, equivalent annual income, soil expectation value, and real vs. nominal analyses. Discusses risk and sensitivity analysis when comparing alternatives.

  13. Analyzing the economic impacts of transportation projects.

    DOT National Transportation Integrated Search

    2013-09-01

    The main goal of the study is to explore methods, approaches and analytical software tools for analyzing economic activity that results from large-scale transportation investments in Connecticut. The primary conclusion is that the transportation...

  14. A Numerical and Theoretical Study of Seismic Wave Diffraction in Complex Geologic Structure

    DTIC Science & Technology

    1989-04-14

    element methods for analyzing linear and nonlinear seismic effects in the surficial geologies relevant to several Air Force missions. The second...exact solution evaluated here indicates that edge-diffracted seismic wave fields calculated by discrete numerical methods probably exhibit significant...study is to demonstrate and validate some discrete numerical methods essential for analyzing linear and nonlinear seismic effects in the surficial

  15. Development of a numerical pump testing framework.

    PubMed

    Kaufmann, Tim A S; Gregory, Shaun D; Büsen, Martin R; Tansley, Geoff D; Steinseifer, Ulrich

    2014-09-01

    It has been shown that left ventricular assist devices (LVADs) increase the survival rate in end-stage heart failure patients. However, there is an ongoing demand for an increased quality of life, fewer adverse events, and more physiological devices. These challenges necessitate new approaches during the design process. In this study, computational fluid dynamics (CFD), lumped parameter (LP) modeling, mock circulatory loops (MCLs), and particle image velocimetry (PIV) are combined to develop a numerical Pump Testing Framework (nPTF) capable of analyzing local flow patterns and the systemic response of LVADs. The nPTF was created by connecting a CFD model of the aortic arch, including an LVAD outflow graft to an LP model of the circulatory system. Based on the same geometry, a three-dimensional silicone model was crafted using rapid prototyping and connected to an MCL. PIV studies of this setup were performed to validate the local flow fields (PIV) and the systemic response (MCL) of the nPTF. After validation, different outflow graft positions were compared using the nPTF. Both the numerical and the experimental setup were able to generate physiological responses by adjusting resistances and systemic compliance, with mean aortic pressures of 72.2-132.6 mm Hg for rotational speeds of 2200-3050 rpm. During LVAD support, an average flow to the distal branches (cerebral and subclavian) of 24% was found in the experiments and the nPTF. The flow fields from PIV and CFD were in good agreement. Numerical and experimental tools were combined to develop and validate the nPTF, which can be used to analyze local flow fields and the systemic response of LVADs during the design process. This allows analysis of physiological control parameters at early development stages and may, therefore, help to improve patient outcomes. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  16. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. Also, the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. The minimization of the objective function by the different candidate optimization methods indicates failure of some of the gradient-based methods, such as Newton Conjugated and BFGS. Others reveal partial convergence, such as Nelder-Mead, Polak and Ribière, L-BFGS-B, Truncated Newton Conjugated, and Trust-Region Newton Conjugated Gradient. Still others yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference methods produce the best optimization results. Keywords: automated calibration of a hydro-morphological dynamic numerical model, Bayesian inference theory, deterministic optimization methods.
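
    A tiny analogue of the synthetic-measurement setup can clarify the procedure: generate data with known "true" parameters so a solution is guaranteed to exist, then compare several optimizers on the same sum-of-squares objective. The model, noise level, and optimizer list below are assumptions made for illustration, far simpler than the bend-channel model of the abstract.

        import numpy as np
        from scipy.optimize import minimize

        # Synthetic measurements from known parameters guarantee an existing
        # solution; several optimizers then minimize the same misfit.
        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 1.0, 50)
        p_true = np.array([1.3, 0.4])
        data = p_true[0] * np.exp(-p_true[1] * t) + 1e-3 * rng.normal(size=t.size)

        def objective(p):                    # sum-of-squares misfit
            return np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

        for method in ("Nelder-Mead", "L-BFGS-B", "BFGS"):
            res = minimize(objective, x0=[0.5, 0.5], method=method)
            print(method, res.x, res.fun)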

  17. In-situ continuous water analyzing module

    DOEpatents

    Thompson, Cyril V.; Wise, Marcus B.

    1998-01-01

    An in-situ continuous liquid analyzing system for continuously analyzing volatile components contained in a water source comprises a carrier gas supply, an extraction container, and a mass spectrometer. The carrier gas supply continuously supplies the carrier gas to the extraction container, where it is mixed with a water sample that is continuously drawn into the extraction container. The carrier gas continuously extracts the volatile components out of the water sample. The water sample is returned to the water source after the volatile components are extracted from it. The extracted volatile components and the carrier gas are delivered continuously to the mass spectrometer, and the volatile components are continuously analyzed by the mass spectrometer.

  18. The Cosmic Dust Analyzer for Cassini

    NASA Technical Reports Server (NTRS)

    Bradley, James G.; Gruen, Eberhard; Srama, Ralf

    1996-01-01

    The Cosmic Dust Analyzer (CDA) is designed to characterize the dust environment in interplanetary space and in the Jovian and Saturnian systems. The instrument consists of two major components, the Dust Analyzer (DA) and the High Rate Detector (HRD). The DA has a large aperture to provide a large cross section for detection in low-flux environments. The DA has the capability of determining dust particle mass, velocity, flight direction, charge, and chemical composition. The chemical composition is determined by the Chemical Analyzer system, based on a time-of-flight mass spectrometer. The DA is capable of making full measurements at up to one impact/second. The HRD contains two smaller PVDF detectors and electronics designed to characterize dust particle masses at impact rates up to 10(exp 4) impacts/second. These high impact rates are expected during Saturn ring plane crossings.

  19. OASIS: Organics Analyzer for Sampling Icy Surfaces

    NASA Technical Reports Server (NTRS)

    Getty, S. A.; Dworkin, J. P.; Glavin, D. P.; Martin, M.; Zheng, Y.; Balvin, M.; Southard, A. E.; Ferrance, J.; Malespin, C.

    2012-01-01

    Liquid chromatography mass spectrometry (LC-MS) is a well-established laboratory technique for detecting and analyzing organic molecules. This approach has been especially fruitful in the analysis of nucleobases and amino acids and in establishing chiral ratios [1-3]. We are developing OASIS, the Organics Analyzer for Sampling Icy Surfaces, for future in situ landed missions to astrochemically important icy bodies, such as asteroids, comets, and icy moons. The OASIS design employs a microfabricated, on-chip analytical column to chromatographically separate liquid analytes using known LC stationary phase chemistries. The elution products are then interfaced through electrospray ionization (ESI) and analyzed by a time-of-flight mass spectrometer (TOF-MS). A particular advantage of this design is its suitability for microgravity environments, such as on a primitive small body.

  20. Analyzed DTS Data, Guelph, ON Canada

    DOE Data Explorer

    Coleman, Thomas

    2015-07-01

    Analyzed DTS datasets from active heat injection experiments in Guelph, ON, Canada are included. A .pdf file of images, including borehole temperature distributions, temperature difference distributions, temperature profiles, and flow interpretations, is included as the primary analyzed dataset. The analyzed data used to create the .pdf images are included as a MATLAB data file that contains the following 5 types of data: 1) Borehole Temperature (matrix of temperature data collected in the borehole), 2) Borehole Temperature Difference (matrix of temperature difference above ambient for each test), 3) Borehole Time (time in both min and sec since the start of a DTS test), 4) Borehole Depth (channel depth locations for the DTS measurements), 5) Temperature Profiles (ambient, active, active off early time, active off late time, and injection).

  1. The Deep Space Network stability analyzer

    NASA Technical Reports Server (NTRS)

    Breidenthal, Julian C.; Greenhall, Charles A.; Hamell, Robert L.; Kuhnle, Paul F.

    1995-01-01

    A stability analyzer for testing NASA Deep Space Network installations during flight radio science experiments is described. The stability analyzer provides real-time measurements of signal properties of general experimental interest: power, phase, and amplitude spectra; Allan deviation; and time series of amplitude, phase shift, and differential phase shift. Input ports are provided for up to four 100 MHz frequency standards and eight baseband analog signals (greater than 100 kHz bandwidth). Test results indicate the following upper bounds to noise floors when operating on 100 MHz signals: -145 dBc/Hz for the phase noise spectrum farther than 200 Hz from the carrier, 2.5 x 10(exp -15) (tau = 1 second) and 1.5 x 10(exp -17) (tau = 1000 seconds) for Allan deviation, and 1 x 10(exp -4) degrees for 1-second averages of phase deviation. Four copies of the stability analyzer have been produced, plus one transportable unit for use at non-NASA observatories.

  2. Numerical Package in Computer Supported Numeric Analysis Teaching

    ERIC Educational Resources Information Center

    Tezer, Murat

    2007-01-01

    In university faculties of Engineering, Sciences, Business, and Economics, as well as in higher education in Computing, it is stated that, because of the difficulty of the subject, calculators and computers can be used in Numerical Analysis (NA). In this study, computer-supported learning of NA will be discussed together with important usage of the…

  3. Numerical cognition: Adding it up.

    PubMed

    LeFevre, Jo-Anne

    2016-03-01

    In this article, I provide a historical overview of the field of numerical cognition. I first situate the evolution and development of this field in the more general context of the cognitive revolution, which started in the mid-1950s. I then discuss the genesis of numerical cognition from 6 areas: psychophysics, information processing, neuropsychology, mathematics education, psychometrics, and cognitive development. This history is personal: I discuss some of my own work over the last 30 years and describe how each of the authors of the articles in this collection originally connected with the field. One important goal of the article is to highlight the major findings, both for experts and for those who are less familiar with research on numerical processing. In sum, I sketch a context within which to appreciate the neural, computational, and behavioural work that the other 4 authors summarise in their articles in this special section.

  4. Numerical simulation of flood barriers

    NASA Astrophysics Data System (ADS)

    Srb, Pavel; Petrů, Michal; Kulhavý, Petr

    This paper deals with testing and numerical simulation of flood barriers. The Czech Republic has been hit by several very devastating floods in past years. These floods caused several dozen casualties, and property damage reached billions of Euros. The development of flood control measures is very important, especially for reducing the number of casualties and the amount of property damage. The aim of flood control measures is the detention of water outside populated areas and the drainage of water from populated areas as soon as possible. For a new flood barrier design it is very important to know its behaviour in case of a real flood. During the development of the barrier, several standardized tests have to be carried out. Based on the results from these tests, a numerical simulation was compiled using Abaqus software and several analyses were carried out. Based on these numerical simulations it will be possible to predict the behaviour of barriers and thus improve their design.

  5. Numerical Simulation of Black Holes

    NASA Astrophysics Data System (ADS)

    Teukolsky, Saul

    2003-04-01

    Einstein's equations of general relativity are prime candidates for numerical solution on supercomputers. There is some urgency in being able to carry out such simulations: Large-scale gravitational wave detectors are now coming on line, and the most important expected signals cannot be predicted except numerically. Problems involving black holes are perhaps the most interesting, yet also particularly challenging computationally. One difficulty is that inside a black hole there is a physical singularity that cannot be part of the computational domain. A second difficulty is the disparity in length scales between the size of the black hole and the wavelength of the gravitational radiation emitted. A third difficulty is that all existing methods of evolving black holes in three spatial dimensions are plagued by instabilities that prohibit long-term evolution. I will describe the ideas that are being introduced in numerical relativity to deal with these problems, and discuss the results of recent calculations of black hole collisions.

  6. Analyzing diffuse scattering with supercomputers. Corrigendum

    SciTech Connect

    Michels-Clark, Tara M.; Lynch, Vickie E.; Hoffmann, Christina M.

    2016-03-01

    The study by Michels-Clark et al. (2013 [Michels-Clark, T. M., Lynch, V. E., Hoffmann, C. M., Hauser, J., Weber, T., Harrison, R. & Bürgi, H. B. (2013). J. Appl. Cryst. 46, 1616-1625.]) contains misleading errors which are corrected here. The numerical results reported in that paper and the conclusions given there are not affected and remain unchanged. The transition probabilities in Table 1 (rows 4, 5, 7, 8) and Fig. 2 (rows 1 and 2) of the original paper were different from those used in the numerical calculations. Corrected transition probabilities as used in the computations are given in Table 1 and Fig. 1 of this article. The Δ parameter in the stacking model expresses the preference for the fifth layer in a five-layer stack to be eclipsed with respect to the first layer. This statement corrects the original text on p. 1622, lines 4–7. In the original Fig. 2 the helicity of the layer stacks b L and b R in rows 3 and 4 had been given as opposite to those in rows 1, 2 and 5. Fig. 1 of this article shows rows 3 and 4 corrected to correspond to rows 1, 2 and 5.

  7. High responsivity secondary ion energy analyzer

    NASA Astrophysics Data System (ADS)

    Belov, A. S.; Chermoshentsev, D. A.; Gavrilov, S. A.; Frolov, O. T.; Netchaeva, L. P.; Nikulin, E. S.; Zubets, V. N.

    2018-05-01

    The degree of space charge compensation of a 70 mA, 400 keV pulsed hydrogen ion beam has been measured with the use of an electrostatic energy analyzer of secondary ions. The large azimuthal angle of the analyzer enables a high responsivity, defined as the ratio of the slow secondary ion current emerging from the partially-compensated ion beam to the fast ion beam current. We measured 84% space charge compensation of the ion beam. The current from the slow ions and the rise time from the degree of space charge compensation were measured and compared with expected values.

  8. Frequency spectrum analyzer with phase-lock

    DOEpatents

    Boland, Thomas J.

    1984-01-01

    A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal is comprised of a voltage controlled oscillator (VCO) which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO, locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator which transforms the input signal into the frequency domain, while providing an accurate absolute amplitude measurement of each frequency component of the input signal.

  9. PARALYZER FOR PULSE HEIGHT DISTRIBUTION ANALYZER

    DOEpatents

    Fairstein, E.

    1960-01-19

    A paralyzer circuit is described for use with a pulse-height distribution analyzer to prevent the analyzer from counting overlapping pulses where they would serve to provide a false indication. The paralyzer circuit comprises a pair of cathode-coupled amplifiers for amplifying pulses of opposite polarity. Diodes are provided having their anodes coupled to the separate outputs of the amplifiers to produce only positive signals, and a trigger circuit is coupled to the diodes for operation by input pulses of either polarity from the amplifiers. A delay network couples the output of the trigger circuit for delaying the pulses.

  10. Empirical mode decomposition for analyzing acoustical signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    The present invention discloses a computer-implemented signal analysis method through the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal is decomposed into Intrinsic Mode Functions (IMFs). Once the invention decomposes the acoustic signal into its constituent components, all operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into the Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.
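
    A hedged sketch of the decomposition step, assuming the third-party PyEMD package (installable as EMD-signal); this illustrates EMD generically, not the patented implementation:

      import numpy as np
      from PyEMD import EMD

      t = np.linspace(0, 1, 1000)
      # Two-tone test signal; EMD should separate the components
      signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

      imfs = EMD().emd(signal, t)   # rows are intrinsic mode functions
      print("number of IMFs:", imfs.shape[0])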

  11. Analyzed Boise Data for Oscillatory Hydraulic Tomography

    DOE Data Explorer

    Lim, David

    2015-07-01

    Data here have been "pre-processed" and "analyzed" from the raw data submitted to the GDR previously (raw data files found at http://gdr.openei.org/submissions/479. doi:10.15121/1176944 after 30 September 2017). First, we submit .mat files which are the "pre-processed" data (MATLAB software is required to use them). Secondly, the csv files contain the submitted data in its final analyzed form before being used for inversion. Specifically, we have Fourier coefficients obtained from fast Fourier transform (FFT) algorithms.
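
    For illustration, the kind of quantity described, a complex Fourier coefficient at a known oscillation frequency, can be extracted with an FFT as follows. The data are synthetic, and the record length, sampling rate, and stimulation frequency below are assumptions:

      import numpy as np

      fs = 100.0                              # sampling rate [Hz], assumed
      t = np.arange(0, 600, 1 / fs)           # 10-minute record
      f_stim = 0.05                           # hypothetical oscillation frequency [Hz]
      p = 2.3 * np.cos(2 * np.pi * f_stim * t + 0.7) + 0.1 * np.random.randn(t.size)

      spectrum = np.fft.rfft(p)
      freqs = np.fft.rfftfreq(p.size, d=1 / fs)
      k = np.argmin(np.abs(freqs - f_stim))   # bin nearest the oscillation frequency
      coeff = 2 * spectrum[k] / p.size        # complex amplitude at f_stim
      print(abs(coeff), np.angle(coeff))      # ~2.3 and ~0.7 (cosine phase convention)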

  12. Numeral Incorporation in Japanese Sign Language

    ERIC Educational Resources Information Center

    Ktejik, Mish

    2013-01-01

    This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…

  13. Particle yields from numerical simulations

    NASA Astrophysics Data System (ADS)

    Homor, Marietta M.; Jakovác, Antal

    2018-04-01

    In this paper we use numerical field theoretical simulations to calculate particle yields. We demonstrate that in the model of local particle creation the deviation from the pure exponential distribution is natural even in equilibrium, and an approximate Tsallis-Pareto-like distribution function can be well fitted to the calculated yields, in accordance with the experimental observations. We present numerical simulations in the classical Φ4 model as well as in the SU(3) quantum Yang-Mills theory to clarify this issue.
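
    The Tsallis-Pareto-like form mentioned above is commonly written f(E) = A (1 + E/(nT))^(-n), which interpolates between an exponential (n -> infinity) and a power law. A hedged sketch of fitting it to a yield spectrum with SciPy; the data are synthetic, and none of the parameter values reproduce the paper's results:

      import numpy as np
      from scipy.optimize import curve_fit

      def tsallis(E, A, T, n):
          # Tsallis-Pareto-like spectrum: A * (1 + E/(n*T))**(-n)
          return A * (1.0 + E / (n * T)) ** (-n)

      E = np.linspace(0.1, 3.0, 40)   # energy grid, assumed scale
      y = tsallis(E, 50.0, 0.15, 7.0) * np.random.lognormal(0, 0.05, E.size)

      popt, pcov = curve_fit(tsallis, E, y, p0=(40.0, 0.1, 5.0))
      print("A, T, n =", popt)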

  14. Numerical studies of interacting vortices

    NASA Technical Reports Server (NTRS)

    Liu, G. C.; Hsu, C. H.

    1985-01-01

    To get a basic understanding of the physics of flowfields modeled by vortex filaments with finite vortical cores, systematic numerical studies of the interactions of two dimensional vortices and pairs of coaxial axisymmetric circular vortex rings were made. Finite difference solutions of the unsteady incompressible Navier-Stokes equations were carried out using vorticity and stream function as primary variables. Special emphasis was placed on the formulation of appropriate boundary conditions necessary for the calculations in a finite computational domain. Numerical results illustrate the interaction of vortex filaments, demonstrate when and how they merge with each other, and establish the region of validity for an asymptotic analysis.
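
    At each time step of such a vorticity-stream function method, a Poisson equation del^2 psi = -omega must be solved for the stream function. A minimal illustrative kernel of that step (Jacobi iteration on a unit square with psi = 0 on the boundary; a generic sketch, not the authors' code):

      import numpy as np

      def solve_stream_function(omega, h, n_iter=5000):
          """Jacobi iteration for del^2 psi = -omega on a uniform grid."""
          psi = np.zeros_like(omega)
          for _ in range(n_iter):
              psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                        psi[1:-1, 2:] + psi[1:-1, :-2] +
                                        h * h * omega[1:-1, 1:-1])
          return psi

      n, h = 65, 1.0 / 64
      x = np.linspace(0, 1, n)
      X, Y = np.meshgrid(x, x, indexing="ij")
      omega = np.exp(-200 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))  # compact vortex blob
      psi = solve_stream_function(omega, h)
      print(psi.max())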

  15. Development of a Video Coding Scheme for Analyzing the Usability and Usefulness of Health Information Systems.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    Usability has been identified as a key issue in health informatics. Worldwide, numerous projects have been carried out in an attempt to increase and optimize health system usability. Usability testing, involving observing end users interacting with systems, has been widely applied, and numerous publications have appeared describing such studies. However, to date, fewer works have been published describing methodological approaches to analyzing the rich data stream that results from usability testing. This includes analysis of video, audio and screen recordings. In this paper we describe our work on the development and application of a coding scheme for analyzing the usability of health information systems. The phases involved in such analyses are described.

  16. Numerical model updating technique for structures using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used for updating existing experimental models for structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be brought between the experimental and numerical models.
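
    For readers unfamiliar with the optimizer: in the standard firefly algorithm (due to Yang), each candidate solution moves toward brighter, i.e. lower-objective, candidates with an attractiveness that decays with distance, plus a shrinking random step. A minimal generic sketch, not the paper's implementation:

      import numpy as np

      def firefly_minimize(f, dim, n=25, iters=200, alpha=0.2, beta0=1.0,
                           gamma=1.0, bounds=(-5.0, 5.0), seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n, dim))          # firefly positions
          for _ in range(iters):
              val = np.apply_along_axis(f, 1, x)     # brightness = -objective
              for i in range(n):
                  for j in range(n):
                      if val[j] < val[i]:            # j is brighter: i moves toward j
                          r2 = np.sum((x[i] - x[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
              np.clip(x, lo, hi, out=x)
              alpha *= 0.97                          # gradually reduce randomness
          best = np.argmin(np.apply_along_axis(f, 1, x))
          return x[best], f(x[best])

      # Example: recover the minimum of a shifted sphere function
      x_best, f_best = firefly_minimize(lambda v: np.sum((v - 1.2) ** 2), dim=3)
      print(x_best, f_best)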

  17. Analyzing volatile compounds in dairy products

    USDA-ARS?s Scientific Manuscript database

    Volatile compounds give the first indication of the flavor in a dairy product. Volatiles are isolated from the sample matrix and then analyzed by chromatography, sensory methods, or an electronic nose. Isolation may be performed by solvent extraction or headspace analysis, and gas chromatography i...

  18. Analyzing the Teaching of Professional Practice

    ERIC Educational Resources Information Center

    Moss, Pamela A.

    2011-01-01

    Background/Context: Based on their case studies of preparation for professional practice in the clergy, teaching, and clinical psychology, Grossman and colleagues (2009) identified three key concepts for analyzing and comparing practice in professional education--representations, decomposition, and approximations--to support professional educators…

  19. Imaging thermal plasma mass and velocity analyzer

    NASA Astrophysics Data System (ADS)

    Yau, Andrew W.; Howarth, Andrew

    2016-07-01

    We present the design and principle of operation of the imaging ion mass and velocity analyzer on the Enhanced Polar Outflow Probe (e-POP), which measures low-energy (1-90 eV/e) ion mass composition (1-40 AMU/e) and velocity distributions using a hemispherical electrostatic analyzer (HEA), a time-of-flight (TOF) gate, and a pair of toroidal electrostatic deflectors (TED). The HEA and TOF gate measure the energy-per-charge and azimuth of each detected ion and the ion transit time inside the analyzer, respectively, providing the 2-D velocity distribution of each major ionospheric ion species and resolving the minor ion species under favorable conditions. The TED are in front of the TOF gate and optionally sample ions at different elevation angles up to ±60°, for measurement of 3-D velocity distribution. We present examples of observation data to illustrate the measurement capability of the analyzer, and show the occurrence of enhanced densities of heavy "minor" O++, N+, and molecular ions and intermittent, high-velocity (a few km/s) upward and downward flowing H+ ions in localized regions of the quiet time topside high-latitude ionosphere.

  20. Analyzing the Generality of Conflict Adaptation Effects

    ERIC Educational Resources Information Center

    Funes, Maria Jesus; Lupianez, Juan; Humphreys, Glyn

    2010-01-01

    Conflict adaptation effects refer to the reduction of interference when the incongruent stimulus occurs immediately after an incongruent trial, compared with when it occurs after a congruent trial. The present study analyzes the key conditions that lead to adaptation effects that are specific to the type of conflict involved versus those that are…

  1. A Teaching Procedure for the Diatype Analyzer

    ERIC Educational Resources Information Center

    Pringle, Girard F.; O'Brien, Michael James

    1973-01-01

    The Diatype Analyzer is a motor-driven typewriting device which attaches to the carriage of the typewriter and is used to diagnose students' typewriting difficulties (rhythm, difficult strokes, concentration, carriage return, operation of the shift key or space bar) more exactly than by teacher examination. (AG)

  2. Analyzing the Curriculum Alignment of Teachers

    ERIC Educational Resources Information Center

    Turan-Özpolat, Esen; Bay, Erdal

    2017-01-01

    The purpose of this research was to analyze the curriculum alignment of teachers in secondary education 5th grade Science course. Alignment levels of teachers in dimensions of acquisition, content, teaching methods and techniques, activity, material and measurement - assessment, and the reasons for their alignment/non-alignment to the curriculum…

  3. Studying Reliability Using Identical Handheld Lactate Analyzers

    ERIC Educational Resources Information Center

    Stewart, Mark T.; Stavrianeas, Stasinos

    2008-01-01

    Accusport analyzers were used to generate lactate performance curves in an investigative laboratory activity emphasizing the importance of reliable instrumentation. Both the calibration and testing phases of the exercise provided students with a hands-on opportunity to use laboratory-grade instrumentation while allowing for meaningful connections…

  4. Analyzing the Acoustic Beat with Mobile Devices

    ERIC Educational Resources Information Center

    Kuhn, Jochen; Vogt, Patrik; Hirth, Michael

    2014-01-01

    In this column, we have previously presented various examples of how physical relationships can be examined by analyzing acoustic signals using smartphones or tablet PCs. In this example, we will be exploring the acoustic phenomenon of beats, which is produced by the overlapping of two tones with a small difference in frequency Δf. The…

  5. Multichannel analyzers at high rates of input

    NASA Technical Reports Server (NTRS)

    Rudnick, S. J.; Strauss, M. G.

    1969-01-01

    Multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline-restoration, achieves good resolution at high rates of input. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.

  6. Fluidization quality analyzer for fluidized beds

    DOEpatents

    Daw, C. Stuart; Hawk, James A.

    1995-01-01

    A control loop and fluidization quality analyzer for a fluidized bed utilizes time varying pressure drop measurements. A fast-response pressure transducer measures the overall bed pressure drop, or over some segment of the bed, and the pressure drop signal is processed to produce an output voltage which changes with the degree of fluidization turbulence.

  7. Fluidization quality analyzer for fluidized beds

    DOEpatents

    Daw, C.S.; Hawk, J.A.

    1995-07-25

    A control loop and fluidization quality analyzer for a fluidized bed utilizes time varying pressure drop measurements. A fast-response pressure transducer measures the overall bed pressure drop, or over some segment of the bed, and the pressure drop signal is processed to produce an output voltage which changes with the degree of fluidization turbulence. 9 figs.

  8. Reference guide for the soil compactor analyzer.

    DOT National Transportation Integrated Search

    2009-07-01

    The Soil Compactor Analyzer (SCA) attaches to the automatic tamper used for Test Methods Tex-113-E and 114-E and uses rapid sampling of the hammer displacement to measure impact velocity. With the known mass of the hammer and the determined velocity,...

  9. How to Analyze Company Using Social Network?

    NASA Astrophysics Data System (ADS)

    Palus, Sebastian; Bródka, Piotr; Kazienko, Przemysław

    Every single company or institution wants to utilize its resources in the most efficient way. In order to do so, it has to have a good structure. This paper presents a new way to analyze company structure by utilizing the natural social network that exists within the company, together with an example of its application to the Enron company.

  10. Analyzing Faculty Salaries When Statistics Fail.

    ERIC Educational Resources Information Center

    Simpson, William A.

    The role played by nonstatistical procedures, in contrast to multivariate statistical approaches, in analyzing faculty salaries is discussed. Multivariate statistical methods are usually used to establish or defend against prima facie cases of gender and ethnic discrimination with respect to faculty salaries. These techniques are not applicable,…

  11. Strengthening 4-H by Analyzing Enrollment Data

    ERIC Educational Resources Information Center

    Hamilton, Stephen F.; Northern, Angela; Neff, Robert

    2014-01-01

    The study reported here used data from the ACCESS 4-H Enrollment System to gain insight into strengthening New York State's 4-H programming. Member enrollment lists from 2009 to 2012 were analyzed using Microsoft Excel to determine trends and dropout rates. The descriptive data indicate declining 4-H enrollment in recent years and peak enrollment…

  12. Photoelectron imaging using an ellipsoidal display analyzer

    NASA Astrophysics Data System (ADS)

    Dütemeyer, T.; Quitmann, C.; Kitz, M.; Dörnemann, K.; Johansson, L. S. O.; Reihl, B.

    2001-06-01

    We have built an ellipsoidal display analyzer (EDA) for angle-resolved photoelectron spectroscopy and related techniques. The instrument is an improved version of a design by Eastman et al. [Nucl. Instrum. Methods 172, 327 (1980)] and measures the angle-resolved intensity distribution of photoelectrons at fixed energy I(θ,φ)|E=const. Such two-dimensional cuts through the Brillouin zone are recorded using a position-sensitive detector. The large acceptance angle (Δθ=43° in the polar direction and Δφ=360° in the azimuthal direction) leads to a collection efficiency which exceeds that of conventional hemispherical analyzers by a factor of about 3000. Using ray-tracing calculations we analyze the electron optical properties of the various analyzer components and optimize their arrangement. This minimizes distortions and aberrations in the recorded images and greatly improves the performance compared to previous realizations of this analyzer. We present examples demonstrating the performance of the analyzer and its versatility. Using a commercial He-discharge lamp we are able to measure complete angular distribution patterns in less than 5 s. The energy and angular resolution are ΔE_EDA=85 meV and Δθ=1.2°, respectively. Complete stacks of such cuts through the Brillouin zone at different kinetic energies E can be acquired automatically using custom software. The raw data are processed leading to a three-dimensional data set I(EB, k∥) of photoelectron intensity versus binding energy EB and wave vector k∥. From this all relevant information, like the dispersion relations EB(k∥) along arbitrary directions of the Brillouin zone or Fermi-surface maps, can then be computed. An additional electron gun enables low-energy electron diffraction, Auger electron spectroscopy, and electron energy-loss spectroscopy. Switching between electrons and photons as the excitation source is possible without any movement of the sample or analyzer. Because of the high acquisition

  13. BOOK REVIEW: Introduction to 3+1 Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Gundlach, Carsten

    2008-11-01

    This is the first major textbook on the methods of numerical relativity. The selection of material is based on what is known to work reliably in astrophysical applications and would therefore be considered by many as the 'mainstream' of the field. This means spacelike slices, the BSSNOK or harmonic formulation of the Einstein equations, finite differencing for the spacetime variables, and high-resolution shock capturing methods for perfect fluid matter. (Arguably, pseudo-spectral methods also belong in this category, at least for elliptic equations, but are not covered in this book.) The account is self-contained, and comprehensive within its chosen scope. It could serve as a primer for the growing number of review papers on aspects of numerical relativity published in Living Reviews in Relativity (LRR). I will now discuss the contents by chapter. Chapter 1, an introduction to general relativity, is clearly written, but may be a little too concise to be used as a first text on this subject at postgraduate level, compared to the textbook by Schutz or the first half of Wald's book. Chapter 2 contains a good introduction to the 3+1 split of the field equations in the form mainly given by York. York's pedagogical presentation (in a 1979 conference volume) is still up to date, but Alcubierre makes a clearer distinction between the geometric split and its form in adapted coordinates, as well as filling in some derivations. Chapter 3 on initial data is close to Cook's 2001 LRR, but is beautifully unified by an emphasis on how different choices of conformal weights suit different purposes. Chapter 4 on gauge conditions covers a topic on which no review paper exists, and which is spread thinly over many papers. The presentation is both detailed and unified, making this an excellent resource also for experts. The chapter reflects the author's research interests while remaining canonical. Chapter 5 covers hyperbolic reductions of the field equations. Alcubierre's excellent

  14. Retarding potential analyzer for the Pioneer-Venus Orbiter Mission

    NASA Technical Reports Server (NTRS)

    Knudsen, W. C.; Bakke, J.; Spenner, K.; Novak, V.

    1979-01-01

    The retarding potential analyzer on the Pioneer-Venus Orbiter Mission has been designed to measure most of the thermal plasma parameters within and near the Venusian ionosphere. Parameters include total ion concentration, concentrations of the more abundant ions, ion temperatures, ion drift velocity, electron temperature, and low-energy (0-50 eV) electron distribution function. To accomplish these measurements on a spinning vehicle with a small telemetry bit rate, several functions, including decision functions not previously used in RPA's, have been developed and incorporated into this instrument. The more significant functions include automatic electrometer ranging with background current compensation; digital, quadratic retarding potential step generation for the ion and low-energy electron scans; a current sampling interval of 2 ms throughout all scans; digital logic inflection point detection and data selection; and automatic ram direction detection. Extensive numerical simulation and plasma chamber tests have been conducted to verify adequacy of the design for the Pioneer Mission.

  15. Using Tutte polynomials to analyze the structure of the benzodiazepines

    NASA Astrophysics Data System (ADS)

    Cadavid Muñoz, Juan José

    2014-05-01

    Graph theory in general, and Tutte polynomials in particular, are implemented for analyzing the chemical structure of the benzodiazepines. Similarity analyses are used with the Tutte polynomials for finding other molecules that are similar to the benzodiazepines and therefore might show similar psycho-active actions for medical purposes, in order to avoid the drawbacks associated with benzodiazepine-based medicine. For each type of benzodiazepine, Tutte polynomials are computed and some numeric characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using the Maple computer algebra system's GraphTheory package. The obtained analytical results are of great importance in pharmaceutical engineering. As a future research line, the computational chemistry program Spartan will be used to extend the results obtained from the Tutte polynomials of the benzodiazepines and to compare against them.
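
    As an aside for readers without Maple, the same numeric characteristics can be reproduced in Python, assuming NetworkX 2.8 or later (which provides tutte_polynomial): the number of spanning trees of a connected graph is the evaluation T(1, 1). The graph below is a generic ring, not an actual benzodiazepine structure:

      import networkx as nx
      import sympy

      G = nx.cycle_graph(6)        # stand-in molecular graph
      T = nx.tutte_polynomial(G)   # SymPy expression in x and y
      x, y = sympy.symbols("x y")
      print(T)
      print("spanning trees:", T.subs({x: 1, y: 1}))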

  16. Improvement of a respiratory ozone analyzer.

    PubMed

    Ultman, J S; Ben-Jebria, A; Mac Dougall, C S; Rigas, M L

    1997-10-01

    The breath-to-breath measurement of total respiratory ozone (O3) uptake requires monitoring O3 concentration at the airway opening with an instrument that responds rapidly relative to the breathing frequency. Our original chemiluminescent analyzer, using 2-methyl-2-butene as the reactant gas, had a 10% to 90% step-response time of 110 msec and a minimal detectable concentration of 0.018 parts per million (ppm) O3 (Ben-Jebria et al. 1990). This instrument was suitable for respiratory O3 monitoring during quiet breathing and light exercise. For this study, we constructed a more self-contained analyzer with a faster response time using ethylene as the reactant gas. When the analyzer was operated at a reaction chamber pressure of 350 torr, an ethylene-to-sample flow ratio of 4:1, and a sampling flow of 0.6 liters per minute (Lpm), it had a 10% to 90% step-response time of 70 msec and a minimal detectable concentration of 0.006 ppm. These specifications make respiratory O3 monitoring possible during moderate-to-heavy exercise. In addition, the nonlinear calibration and the carbon dioxide (CO2) interference exhibited by the original analyzer were eliminated. In breath-to-breath measurements in two healthy men, the fractional uptake of O3 during one minute of quiet breathing was comparable to the results obtained by using a slowly responding commercial analyzer with a quasi-steady material balance method (Wiester et al. 1996). In fact, fractional uptake was about 0.8 regardless of O3 exposure concentration (0.11 to 0.43 ppm) or ventilation rate (4 to 41 Lpm/m2).

  17. Numerical Calabi-Yau metrics

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, René

    2008-03-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.

  18. Numerical Mediation and American Governmentality

    ERIC Educational Resources Information Center

    Monea, Alexander Paul

    2016-01-01

    This project looks to fill a critical gap in our knowledge of the emergence of new forms of power, knowledge, and subjectivation that emerged during the industrial period in the United States and that continue to operate today. This critical hole is the role of what we will term "numerical mediation," which is the means by which the…

  19. Building Blocks for Reliable Complex Nonlinear Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi N. (Technical Monitor)

    2002-01-01

    This talk describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.

  20. Building Blocks for Reliable Complex Nonlinear Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    2005-01-01

    This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations.

  1. Building Blocks for Reliable Complex Nonlinear Numerical Simulations. Chapter 2

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.

  2. Analyzing public health policy: three approaches.

    PubMed

    Coveney, John

    2010-07-01

    Policy is an important feature of public and private organizations. Within the field of health as a policy arena, public health has emerged in which policy is vital to decision making and the deployment of resources. Public health practitioners and students need to be able to analyze public health policy, yet many feel daunted by the subject's complexity. This article discusses three approaches that simplify policy analysis: Bacchi's "What's the problem?" approach examines the way that policy represents problems. Colebatch's governmentality approach provides a way of analyzing the implementation of policy. Bridgman and Davis's policy cycle allows for an appraisal of public policy development. Each approach provides an analytical framework from which to rigorously study policy. Practitioners and students of public health gain much in engaging with the politicized nature of policy, and a simple approach to policy analysis can greatly assist one's understanding and involvement in policy work.

  3. CRIE: An automated analyzer for Chinese texts.

    PubMed

    Sung, Yao-Ting; Chang, Tao-Hsing; Lin, Wei-Chun; Hsieh, Kuan-Sheng; Chang, Kuo-En

    2016-12-01

    Textual analysis has been applied to various fields, such as discourse analysis, corpus studies, text leveling, and automated essay evaluation. Several tools have been developed for analyzing texts written in alphabetic languages such as English and Spanish. However, currently there is no tool available for analyzing Chinese-language texts. This article introduces a tool for the automated analysis of simplified and traditional Chinese texts, called the Chinese Readability Index Explorer (CRIE). Composed of four subsystems and incorporating 82 multilevel linguistic features, CRIE is able to conduct the major tasks of segmentation, syntactic parsing, and feature extraction. Furthermore, the integration of linguistic features with machine learning models enables CRIE to provide leveling and diagnostic information for texts in language arts, texts for learning Chinese as a foreign language, and texts with domain knowledge. The usage and validation of the functions provided by CRIE are also introduced.

  4. Real-time airborne particle analyzer

    DOEpatents

    Reilly, Peter T.A.

    2012-10-16

    An aerosol particle analyzer includes a laser ablation chamber, a gas-filled conduit, and a mass spectrometer. The laser ablation chamber can be operated at a low pressure, which can be from 0.1 mTorr to 30 mTorr. The ablated ions are transferred into a gas-filled conduit. The gas-filled conduit reduces the electrical charge and the speed of ablated ions as they collide and mix with buffer gases in the gas-filled conduit. Preferably, the gas-filled conduit includes an electromagnetic multipole structure that collimates the nascent ions into a beam, which is guided into the mass spectrometer. Because the gas-filled conduit allows storage of vast quantities of the ions from the ablated particles, the ions from a single ablated particle can be analyzed multiple times and by a variety of techniques to supply statistically meaningful analysis of composition and isotope ratios.

  5. Compact fast analyzer of rotary cuvette type

    DOEpatents

    Thacker, Louis H.

    1976-01-01

    A compact fast analyzer of the rotary cuvette type is provided for simultaneously determining concentrations in a multiplicity of discrete samples using either absorbance or fluorescence measurement techniques. A rigid, generally rectangular frame defines optical passageways for the absorbance and fluorescence measurement systems. The frame also serves as a mounting structure for various optical components as well as for the cuvette rotor mount and drive system. A single light source and photodetector are used in making both absorbance and fluorescence measurements. Rotor removal and insertion are facilitated by a swing-out drive motor and rotor mount.

  6. Calibration of optical particle-size analyzer

    DOEpatents

    Pechin, William H.; Thacker, Louis H.; Turner, Lloyd J.

    1979-01-01

    This invention relates to a system for the calibration of an optical particle-size analyzer of the light-intercepting type for spherical particles, wherein a rotary wheel or disc is provided with radially-extending wires of differing diameters, each wire corresponding to a particular equivalent spherical particle diameter. These wires are passed at an appropriate frequency between the light source and the light detector of the analyzer. The reduction of light as received at the detector is a measure of the size of the wire, and the electronic signal may then be adjusted to provide the desired signal for corresponding spherical particles. This calibrator may be operated at any time without interrupting other processing.

  7. Electric wind in a Differential Mobility Analyzer

    DOE PAGES

    Palo, Marus; Eller, Meelis; Uin, Janek; ...

    2015-10-25

    Electric wind -- the movement of gas, induced by ions moving in an electric field -- can be a distorting factor in size distribution measurements using Differential Mobility Analyzers (DMAs). The aim of this study was to determine the conditions under which electric wind occurs in the locally-built VLDMA (Very Long Differential Mobility Analyzer) and TSI Long-DMA (3081) and to describe the associated distortion of the measured spectra. Electric wind proved to be promoted by the increase of electric field strength, aerosol layer thickness, particle number concentration and particle size. The measured size spectra revealed three types of distortion: widening of the size distribution, shift of the mode of the distribution to smaller diameters and smoothing out of the peaks of the multiply charged particles. Electric wind may therefore be a source of severe distortion of the spectrum when measuring large particles at high concentrations.

  8. Real time speech formant analyzer and display

    DOEpatents

    Holland, George E.; Struve, Walter S.; Homer, John F.

    1987-01-01

    A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic use by the user.

  9. Real time speech formant analyzer and display

    DOEpatents

    Holland, G.E.; Struve, W.S.; Homer, J.F.

    1987-02-03

    A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic use by the user. 19 figs.

  10. Kranc: a Mathematica package to generate numerical codes for tensorial evolution equations

    NASA Astrophysics Data System (ADS)

    Husa, Sascha; Hinder, Ian; Lechner, Christiane

    2006-06-01

    towards handling very complex tensorial equations as they appear, e.g., in numerical relativity. The worked-out examples comprise the Klein-Gordon equations, the Maxwell equations, and the ADM formulation of the Einstein equations. Method of solution: The method of numerical solution is finite differencing and method of lines time integration; the numerical code is generated through a high-level Mathematica interface. Restrictions on the complexity of the program: Typical numerical relativity applications will contain up to several dozen evolution variables and thousands of source terms; Cactus applications have shown scaling up to several thousand processors and grid sizes exceeding 500^3. Typical running time: This depends on the number of variables and the grid size: the included ADM example takes approximately 100 seconds on a 1600 MHz Intel Pentium M processor. Unusual features of the program: based on Mathematica and Cactus
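
    The solution strategy named in the abstract, spatial finite differencing plus method-of-lines time integration, can be illustrated compactly. The sketch below treats the 1-D Klein-Gordon equation phi_tt = phi_xx - m^2 phi with periodic boundaries; it is illustrative only, since Kranc itself generates Cactus/C code from Mathematica, not Python:

      import numpy as np
      from scipy.integrate import solve_ivp

      N, L, m = 200, 10.0, 1.0
      dx = L / N
      x = np.arange(N) * dx

      def rhs(t, state):
          phi, pi = state[:N], state[N:]
          # second-order centered finite difference, periodic wrap-around
          phi_xx = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
          return np.concatenate([pi, phi_xx - m**2 * phi])

      phi0 = np.exp(-((x - L / 2) ** 2))   # Gaussian initial data
      state0 = np.concatenate([phi0, np.zeros(N)])
      sol = solve_ivp(rhs, (0.0, 5.0), state0, method="RK45", rtol=1e-8)
      print(sol.y[:N, -1].max())           # field amplitude at final time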

  11. Coordinating, Scheduling, Processing and Analyzing IYA09

    NASA Technical Reports Server (NTRS)

    Gipson, John; Behrend, Dirk; Gordon, David; Himwich, Ed; MacMillan, Dan; Titus, Mike; Corey, Brian

    2010-01-01

    The IVS scheduled a special astrometric VLBI session for the International Year of Astronomy 2009 (IYA09) commemorating 400 years of optical astronomy and 40 years of VLBI. The IYA09 session is the most ambitious geodetic session to date in terms of network size, number of sources, and number of observations. We describe the process of designing, coordinating, scheduling, pre-session station checkout, correlating, and analyzing this session.

  12. Analyzing Noise for the Muon Silicon Scanner

    SciTech Connect

    Marchan, Miguelangel; Utes, Michael

    2017-01-01

    The development of a silicon muon tomography detector is a joint project between Fermilab and National Security Technologies, LLC. The goal of this detector is to detect nuclear materials better than past technology. We have been developing the detector using silicon strip detectors and readout chips of the kind used by experiments at CERN. This summer we have been testing components of the detector and analyzing their noise characteristics.

  13. A Raman-Based Portable Fuel Analyzer

    NASA Astrophysics Data System (ADS)

    Farquharson, Stuart

    2010-08-01

    Fuel is the single most important supply during war. Consider that the US Military is employing over 25,000 vehicles in Iraq and Afghanistan. Most fuel is obtained locally and must be characterized to ensure proper operation of these vehicles. Fuel properties are currently determined using a deployed chemical laboratory. Unfortunately, each sample requires in excess of 6 hours to characterize. To overcome this limitation, we have developed a portable fuel analyzer capable of determining 7 fuel properties that establish whether a fuel is fit for use. The analyzer uses Raman spectroscopy to measure the fuel samples without preparation in 2 minutes. The challenge, however, is that as distilled fractions of crude oil, all fuels are composed of hundreds of hydrocarbon components that boil at similar temperatures, and performance properties cannot be simply correlated to a single component, and certainly not to specific Raman peaks. To meet this challenge, we measured over 800 diesel and jet fuels from around the world and used chemometrics to correlate the Raman spectra to fuel properties. Critical to the success of this approach is laser excitation at 1064 nm to avoid fluorescence interference (many fuels fluoresce) and a rugged interferometer that provides 0.1 cm-1 wavenumber (x-axis) accuracy to guarantee accurate correlations. Here we describe the portable fuel analyzer, the chemometric models, and the successful determination of these 7 fuel properties for over 100 unknown samples provided by the US Marine Corps, US Navy, and US Army.
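
    The chemometric step, correlating hundreds of spectral channels with a scalar fuel property, is commonly done with partial least squares (PLS) regression; the article does not state which model was used, so PLS here is an assumption. A sketch on synthetic stand-in spectra:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n_samples, n_wavenumbers = 200, 800
      X = rng.standard_normal((n_samples, n_wavenumbers))      # stand-in "spectra"
      w = rng.standard_normal(n_wavenumbers)
      y = X @ w * 0.01 + rng.standard_normal(n_samples) * 0.1  # stand-in "property"

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
      print("R^2 on held-out samples:", pls.score(X_te, y_te))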

  14. Harry Mergler with His Modified Differential Analyzer

    NASA Image and Video Library

    1951-06-21

    Harry Mergler stands at the control board of a differential analyzer in the new Instrument Research Laboratory at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory. The differential analyzer was a multi-variable analog computation machine devised in 1931 by Massachusetts Institute of Technology researcher and future NACA Committee member Vannevar Bush. The mechanical device could solve computations up to the sixth order, but had to be rewired before each new computation. Mergler modified Bush’s differential analyzer in the late 1940s to calculate droplet trajectories for Lewis’ icing research program. In four days Mergler’s machine could calculate what previously required weeks. NACA Lewis built the Instrument Research Laboratory in 1950 and 1951 to house the large analog computer equipment. The two-story structure also provided offices for the Mechanical Computational Analysis, and Flow Physics sections of the Physics Division. The division had previously operated from the lab’s hangar because of its icing research and flight operations activities. Mergler joined the Instrument Research Section of the Physics Division in 1948 after earning an undergraduate degree in Physics from the Case Institute of Technology. Mergler’s focus was on the synthesis of analog computers with the machine tools used to create compressor and turbine blades for jet engines.

  15. Thermo Scientific Sulfur Dioxide Analyzer Instrument Handbook

    SciTech Connect

    Springston, S. R.

    The Sulfur Dioxide Analyzer measures sulfur dioxide based on absorbance of UV light at one wavelength by SO2 molecules which then decay to a lower energy state by emitting UV light at a longer wavelength. Specifically, SO2 + hν1 → SO2* → SO2 + hν2. The emitted light is proportional to the concentration of SO2 in the optical cell. External communication with the analyzer is available through an Ethernet port configured through the instrument network of the AOS systems. The Model 43i-TLE is part of the i-series of Thermo Scientific instruments. The i-series instruments are designed to interface with external computers through the proprietary Thermo Scientific iPort Software. However, this software is somewhat cumbersome and inflexible. BNL has written an interface program in National Instruments LabVIEW that both controls the Model 43i-TLE Analyzer and queries the unit for all measurement and housekeeping data. The LabVIEW vi (the software program written by BNL) ingests all raw data from the instrument and outputs raw data files in a uniform data format similar to other instruments in the AOS, as described more fully in Section 6.0 below.

  16. Semantic analyzability in children's understanding of idioms.

    PubMed

    Gibbs, R W

    1991-06-01

    This study investigated the role of semantic analyzability in children's understanding of idioms. Kindergartners and first, third, and fourth graders listened to idiomatic expressions either alone or at the end of short story contexts. Their task was to explain verbally the intended meanings of these phrases and then to choose their correct idiomatic interpretations. The idioms presented to the children differed in their degree of analyzability. Some idioms were highly analyzable or decomposable, with the meanings of their parts contributing independently to their overall figurative meanings. Other idioms were nondecomposable because it was difficult to see any relation between a phrase's individual components and the idiom's figurative meaning. The results showed that younger children (kindergartners and first graders) understood decomposable idioms better than they did nondecomposable phrases. Older children (third and fourth graders) understood both kinds of idioms equally well in supporting contexts, but were better at interpreting decomposable idioms than they were at understanding nondecomposable idioms without contextual information. These findings demonstrate that young children better understand idiomatic phrases whose individual parts independently contribute to their overall figurative meanings.

  17. Handheld Fluorescence Microscopy based Flow Analyzer.

    PubMed

    Saxena, Manish; Jayakumar, Nitin; Gorthi, Sai Siva

    2016-03-01

    Fluorescence microscopy has the intrinsic advantages of favourable contrast characteristics and a high degree of specificity. Consequently, it has been a mainstay in modern biological inquiry and clinical diagnostics. Despite its reliable nature, fluorescence-based clinical microscopy and diagnostics is a manual, labour-intensive and time-consuming procedure. The article outlines a cost-effective, high-throughput alternative to conventional fluorescence imaging techniques. With system-level integration of custom-designed microfluidics and optics, we demonstrate a fluorescence microscopy based imaging flow analyzer. Using this system we have imaged more than 2900 FITC-labeled fluorescent beads per minute. This demonstrates the high-throughput characteristics of our flow analyzer in comparison to conventional fluorescence microscopy. The issue of motion blur at high flow rates limits the achievable throughput in image-based flow analyzers. Here we address the issue by computationally deblurring the images and show that this restores the morphological features otherwise affected by motion blur. By further optimizing the concentration of the sample solution and flow speeds, along with imaging multiple channels simultaneously, the system is capable of providing a throughput of about 480 beads per second.
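
    The deblurring step can be illustrated with Wiener deconvolution, one standard choice for removing a known motion-blur kernel; the article does not specify which algorithm was used, so this is an assumption. A sketch using scikit-image:

      import numpy as np
      from scipy.signal import convolve2d
      from skimage.restoration import wiener

      img = np.zeros((64, 64))
      img[28:36, 28:36] = 1.0                       # a bright "bead"

      psf = np.zeros((1, 9)); psf[0, :] = 1.0 / 9   # horizontal motion-blur kernel
      blurred = convolve2d(img, psf, mode="same", boundary="wrap")

      restored = wiener(blurred, psf, balance=0.01)
      print(abs(restored - img).mean())             # smaller than the blurred error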

  18. Numerical Study of Base Pressure Characteristic Curve for a Four-Engine Clustered Nozzle Configuration

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1993-01-01

    medium, accurate prediction of the base pressure distributions at high altitudes is the primary goal. Other factors which may influence the numerical results such as the effects of grid density, turbulence model, differencing scheme, and boundary conditions are also being addressed. Preliminary results of the computed base pressure agreed reasonably well with that of the measurement. Basic base flow features such as the reverse jet, wall jet, recompression shock, and static pressure field in plane of impingement have been captured.

  19. Numerical study of base pressure characteristic curve for a four-engine clustered nozzle configuration

    NASA Astrophysics Data System (ADS)

    Wang, Ten-See

    1993-07-01

    medium, accurate prediction of the base pressure distributions at high altitudes is the primary goal. Other factors which may influence the numerical results such as the effects of grid density, turbulence model, differencing scheme, and boundary conditions are also being addressed.

  20. Numerical simulations of regolith sampling processes

    NASA Astrophysics Data System (ADS)

    Schäfer, Christoph M.; Scherrer, Samuel; Buchwald, Robert; Maindl, Thomas I.; Speith, Roland; Kley, Wilhelm

    2017-07-01

    We present recent improvements in the simulation of regolith sampling processes in microgravity using the numerical particle method smoothed particle hydrodynamics (SPH). We use an elastic-plastic soil constitutive model for large deformation and failure flows to capture the dynamical behaviour of regolith. In the context of projected small body (asteroid or small moons) sample return missions, we investigate the efficiency and feasibility of a particular material sampling method: brushes sweep material from the asteroid's surface into a collecting tray. We analyze the influence of different material parameters of regolith, such as cohesion and angle of internal friction, on the sampling rate. Furthermore, we study the sampling process in two environments by varying the surface gravity (Earth's and Phobos') and we apply different rotation rates for the brushes. We find good agreement of our sampling simulations on Earth with experiments and provide estimations for the influence of the material properties on the collecting rate.
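
    A representative building block of any SPH code is the smoothing kernel; the standard cubic spline kernel in 3-D (Monaghan 1992) is sketched below. Whether the authors' solver uses exactly this kernel is an assumption here:

      import numpy as np

      def w_cubic_spline(r, h):
          """Cubic spline kernel W(r, h) in 3-D, compact support of radius 2h."""
          sigma = 1.0 / (np.pi * h**3)   # 3-D normalization constant
          q = np.asarray(r) / h
          w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
              np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
          return sigma * w

      r = np.linspace(0.0, 2.5, 6)
      print(w_cubic_spline(r, h=1.0))    # zero beyond r = 2h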

  1. Numerical Analysis of Convection/Transpiration Cooling

    NASA Technical Reports Server (NTRS)

    Glass, David E.; Dilley, Arthur D.; Kelly, H. Neale

    1999-01-01

    An innovative concept utilizing the natural porosity of refractory-composite materials and hydrogen coolant to provide CONvective and TRANspiration (CONTRAN) cooling and oxidation protection has been numerically studied for surfaces exposed to a high heat flux, high temperature environment such as hypersonic vehicle engine combustor walls. A boundary layer code and a porous media finite difference code were utilized to analyze the effect of convection and transpiration cooling on surface heat flux and temperature. The boundary layer code determined that transpiration flow is able to provide blocking of the surface heat flux only if it is above a minimum level, due to heat addition from combustion of the hydrogen transpirant. The porous media analysis indicated that cooling of the surface is attained with coolant flow rates that are in the same range as those required for blocking, indicating that a coupled analysis would be beneficial.

  2. Nozzle Numerical Analysis Of The Scimitar Engine

    NASA Astrophysics Data System (ADS)

    Battista, F.; Marini, M.; Cutrone, L.

    2011-05-01

    This work describes part of the activities on the LAPCAT-II A2 vehicle, in which, starting from the available conceptual vehicle design and the related pre-cooled turbo-ramjet engine called SCIMITAR, the assumptions made for the performance figures of different components during the iteration process within LAPCAT-I will be assessed in more detail. This paper presents a numerical analysis aimed at the design optimization of the nozzle contour of the LAPCAT A2 SCIMITAR engine designed by Reaction Engines Ltd. (REL) (see Figure 1). In particular, the nozzle shape optimization process is presented for cruise conditions. All the computations have been carried out using the CIRA C3NS code in non-equilibrium conditions. The effect of considering detailed or reduced chemical kinetic schemes has been analyzed, with a particular focus on the production of pollutants. An analysis of engine performance parameters, such as thrust and combustion efficiency, has been carried out.

  3. Changing Mental Representations Using Related Physical Models: The Effects of Analyzing Number Lines on Learner Internal Scale of Numerical Magnitude

    ERIC Educational Resources Information Center

    Bengtson, Barbara J.

    2013-01-01

    Understanding the linear relationship of numbers is essential for doing practical and abstract mathematics throughout education and everyday life. There is evidence that number line activities increase learners' number sense, improving the linearity of mental number line representations (Siegler & Ramani, 2009). Mental representations of…

  4. General purpose computer programs for numerically analyzing linear ac electrical and electronic circuits for steady-state conditions

    NASA Technical Reports Server (NTRS)

    Egebrecht, R. A.; Thorbjornsen, A. R.

    1967-01-01

    Digital computer programs determine the steady-state performance characteristics of active and passive linear circuits. The ac analysis program solves for the basic circuit parameters. The compiler program solves for these circuit parameters and, in addition, provides a more versatile program by allowing the user to perform mathematical and logical operations.

  5. A software tool for analyzing multichannel cochlear implant signals.

    PubMed

    Lai, Wai Kong; Bögli, Hans; Dillier, Norbert

    2003-10-01

    A useful and convenient means to analyze the radio frequency (RF) signals being sent by a speech processor to a cochlear implant would be to actually capture and display them with appropriate software. This is particularly useful for development or diagnostic purposes. sCILab (Swiss Cochlear Implant Laboratory) is such a PC-based software tool intended for the Nucleus family of Multichannel Cochlear Implants. Its graphical user interface provides a convenient and intuitive means for visualizing and analyzing the signals encoding speech information. Both numerical and graphic displays are available for detailed examination of the captured CI signals, as well as an acoustic simulation of these CI signals. sCILab has been used in the design and verification of new speech coding strategies, and has also been applied as an analytical tool in studies of how different parameter settings of existing speech coding strategies affect speech perception. As a diagnostic tool, it is also useful for troubleshooting problems with the external equipment of the cochlear implant systems.

  6. Analyzing symptom data in indoor air questionnaires for primary schools.

    PubMed

    Ung-Lanki, S; Lampi, J; Pekkanen, J

    2017-09-01

    Questionnaires on symptoms and perceived quality of the indoor environment are used to assess indoor environment problems, but mainly among adults. The aim of this article was to explore the best ways to analyze and report such symptom data, as part of a project to develop a parent-administered indoor air questionnaire for primary school pupils. An indoor air questionnaire with 25 questions on the child's symptoms in the last 4 weeks was sent to parents in five primary schools with indoor air problems and in five control schools. About 83% of parents (N=1470) in case schools and 82% (N=805) in control schools returned the questionnaire. In two schools, 351 (52%) parents answered the questionnaire twice with a 2-week interval. Based on the prevalence of symptoms, their test-retest repeatability (ICC), and principal component analysis (PCA), the number of symptoms was reduced to 17 and six symptom scores were developed. Six variants of these six symptom scores were then formed and their ability to rank schools compared. Four symptom scores (respiratory, lower respiratory, eye, and general symptoms), analyzed dichotomized, maintained sufficiently well the diversity of the symptom data and captured the between-school differences in symptom prevalence, when compared to more complex and more numerous scores.

  7. Numerical modeling of subsurface communication

    NASA Astrophysics Data System (ADS)

    Burke, G. J.; Dease, C. G.; Didwall, E. M.; Lytle, R. J.

    1985-02-01

    Techniques are described for numerical modeling of through-the-Earth communication. The basic problem considered is evaluation of the field at a surface or airborne station due to an antenna buried in the Earth. Equations are given for the field of a point source in a homogeneous or stratified earth. These expressions involve infinite integrals over wave number, sometimes known as Sommerfeld integrals. Numerical techniques used for evaluating these integrals are outlined. The problem of determining the current on a real antenna in the Earth, including the effect of insulation, is considered. Results are included for the fields of a point source in homogeneous and stratified earths and the field of a finite insulated dipole. The results are for electromagnetic propagation in the ELF-VLF range, but the codes can also address propagation problems at higher frequencies.
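
    A standard way to evaluate such integrals numerically is to partition the semi-infinite axis at the sign changes of the Bessel factor and sum the sub-integrals. The sketch below applies that idea to a toy spectral function with a known closed form; production codes use more elaborate partition-extrapolation schemes.

      import numpy as np
      from scipy.integrate import quad
      from scipy.special import j0, jn_zeros

      def sommerfeld_integral(f, rho, n_intervals=40):
          """Evaluate int_0^inf f(k) J0(k*rho) dk by integrating between
          successive zeros of J0(k*rho) and summing; adequate when f decays."""
          breaks = np.concatenate(([0.0], jn_zeros(0, n_intervals) / rho))
          return sum(quad(lambda k: f(k) * j0(k * rho), a, b)[0]
                     for a, b in zip(breaks[:-1], breaks[1:]))

      # Toy kernel with a closed form:
      # int_0^inf e^(-a k) J0(k rho) dk = 1 / sqrt(a^2 + rho^2)
      a, rho = 1.0, 2.0
      print(sommerfeld_integral(lambda k: np.exp(-a * k), rho),
            1.0 / np.hypot(a, rho))          # the two should agree closely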

  8. Numerical simulation of fire vortex

    NASA Astrophysics Data System (ADS)

    Barannikova, D. D.; Borzykh, V. E.; Obukhov, A. G.

    2018-05-01

    The article considers the numerical simulation of the swirling flow of air around a smoothly heated vertical cylindrical domain under the action of gravity and Coriolis forces. The complete system of Navier-Stokes equations is solved numerically at constant viscosity and heat-conductivity factors. Together with the proposed initial and boundary conditions, these solutions describe complex non-stationary 3D flows of a viscous, compressible, heat-conducting gas. Using an explicit finite-difference scheme, all gas-dynamic parameters (density, temperature, pressure, and the three velocity components of the gas particles) have been calculated for various instants of the initial flow-formation stage. Instantaneous flow lines corresponding to the particle trajectories in the emerging flow have been constructed. A negative swirl direction of the air flow, arising when the vertical cylindrical domain is heated, has been identified.

  9. A computer program for analyzing channel geometry

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.

    1985-01-01

    The Channel Geometry Analysis Program (CGAP) provides the capability to process, analyze, and format cross-sectional data for input to flow/transport simulation models or other computational programs. CGAP allows for a variety of cross-sectional data input formats through use of variable format specification. The program accepts data from various computer media and provides for modification of machine-stored parameter values. CGAP has been devised to provide a rapid and efficient means of computing and analyzing the physical properties of an open-channel reach defined by a sequence of cross sections. CGAP's 16 options provide a wide range of methods by which to analyze and depict a channel reach and its individual cross-sectional properties. The primary function of the program is to compute the area, width, wetted perimeter, and hydraulic radius of cross sections at successive increments of water surface elevation (stage) from data that consist of coordinate pairs of cross-channel distances and land surface or channel bottom elevations. Longitudinal rates-of-change of cross-sectional properties are also computed, as are the mean properties of a channel reach. Output products include tabular lists of cross-sectional area, channel width, wetted perimeter, hydraulic radius, average depth, and cross-sectional symmetry computed as functions of stage; plots of cross sections; plots of cross-sectional area and (or) channel width as functions of stage; tabular lists of cross-sectional area and channel width computed as functions of stage for subdivisions of a cross section; plots of cross sections in isometric projection; and plots of cross-sectional area at a fixed stage as a function of longitudinal distance along an open-channel reach. A Command Procedure Language program and Job Control Language procedure exist to facilitate program execution on the U.S. Geological Survey Prime and Amdahl computer systems, respectively. (Lantz-PTT)
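
    The core quantities CGAP computes (area, top width, wetted perimeter, hydraulic radius at a given stage) reduce to a trapezoidal clip against the water surface. The function below mirrors CGAP's station/elevation input convention but is an illustration, not CGAP's own algorithm.

      import numpy as np

      def section_properties(station, elevation, stage):
          """Area, top width, wetted perimeter and hydraulic radius of a
          cross section at a given water-surface elevation (stage)."""
          x = np.asarray(station, float)
          z = np.asarray(elevation, float)
          depth = stage - z
          area = width = perim = 0.0
          for i in range(len(x) - 1):
              d0, d1, dx = depth[i], depth[i + 1], x[i + 1] - x[i]
              if d0 <= 0 and d1 <= 0:              # segment fully dry
                  continue
              if d0 > 0 and d1 > 0:                # fully submerged trapezoid
                  area += 0.5 * (d0 + d1) * dx
                  width += dx
                  perim += np.hypot(dx, z[i + 1] - z[i])
              else:                                # clip at the waterline
                  wet = max(d0, d1)
                  frac = wet / (wet - min(d0, d1)) # wetted fraction of segment
                  area += 0.5 * wet * frac * dx
                  width += frac * dx
                  perim += np.hypot(frac * dx, wet)
          return area, width, perim, (area / perim if perim else 0.0)

      # Trapezoidal channel: bottom width 6, banks rising 3 over 2; stage 2.0
      print(section_properties([0, 2, 8, 10], [3, 0, 0, 3], 2.0))
      # -> approximately (14.67, 8.67, 10.81, 1.36)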

  10. Air sampling unit for breath analyzers

    NASA Astrophysics Data System (ADS)

    Szabra, Dariusz; Prokopiuk, Artur; Mikołajczyk, Janusz; Ligor, Tomasz; Buszewski, Bogusław; Bielecki, Zbigniew

    2017-11-01

    The paper presents a portable breath sampling unit (BSU) for human breath analyzers. The developed unit can be used to probe air from the upper airways and alveoli for clinical and scientific studies. The BSU is able to operate as a patient interface device for most types of breath analyzers. Its main task is to separate and collect the selected phases of the exhaled air. To monitor the so-called phase I, II, or III air and to identify the airflow from the upper and lower parts of the human respiratory system, the unit measures the exhaled CO2 (ECO2) in the concentration range of 0%-20% (0-150 mm Hg). It can work in both on-line and off-line modes according to American Thoracic Society/European Respiratory Society standards. A Tedlar bag with a volume of 5 dm3 is mounted as the BSU sample container. This volume allows ca. 1-25 selected breath phases to be collected. At the user panel, each step of the unit operation is visualized by LED indicators, which helps to regulate the natural breathing cycle of the patient. There is also an operator's panel to ensure monitoring and configuration setup of the unit parameters. The operation of the breath sampling unit was preliminarily verified using a gas chromatography/mass spectrometry (GC/MS) laboratory setup, in which volatile organic compounds were extracted by solid-phase microextraction. The tests compared GC/MS signals from exhaled nitric oxide and isoprene analyses for three breath phases. The functionality of the unit was proven by an observed increase in the signal level in the case of phase III (approximately 40%). The described work made it possible to construct a prototype of a very efficient breath sampling unit dedicated to breath sample analyzers.

  11. Multiscale field-aligned current analyzer

    NASA Astrophysics Data System (ADS)

    Bunescu, C.; Marghitu, O.; Constantinescu, D.; Narita, Y.; Vogt, J.; Blǎgǎu, A.

    2015-11-01

    The magnetosphere-ionosphere coupling is achieved, essentially, by a superposition of quasi-stationary and time-dependent field-aligned currents (FACs) over a broad range of spatial and temporal scales. The planarity of FAC structures observed in satellite data and the orientation of the planar FAC sheets can be investigated with the well-established minimum variance analysis (MVA) of the magnetic perturbation. However, such investigations are often constrained to a predefined time window, i.e., to a specific scale of the FAC. The multiscale field-aligned current analyzer, introduced here, relies on performing MVA continuously and over a range of scales by varying the width of the analyzing window, as appropriate for the complexity of the magnetic field signatures above the auroral oval. The proposed technique provides multiscale information on the planarity and orientation of the observed FACs. A new approach, based on the derivative of the largest eigenvalue of the magnetic variance matrix with respect to the length of the analysis window, makes possible the inference of the current structures' location (center) and scale (thickness). The capabilities of the FAC analyzer are explored analytically for the magnetic field profile of the Harris sheet and tested on synthetic FAC structures with uniform current density and infinite or finite geometry in the cross-section plane of the FAC. The method is illustrated with data observed by the Cluster spacecraft on crossing the nightside auroral region, and the results are cross-checked with optical observations from the Time History of Events and Macroscale Interactions during Substorms (THEMIS) ground network.
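
    The sliding-window idea is straightforward to prototype: perform MVA (an eigen-decomposition of the magnetic variance matrix) for every window width of interest and track the largest eigenvalue. The sketch below is a toy version on a synthetic Harris-sheet-like crossing; the published analyzer also tracks planarity and sheet orientation.

      import numpy as np

      def mva(B):
          """Eigenvalues/eigenvectors of the magnetic variance matrix
          of an (N, 3) field series (minimum variance analysis)."""
          dB = B - B.mean(axis=0)
          return np.linalg.eigh(dB.T @ dB / len(B))   # ascending eigenvalues

      def multiscale_max_eigenvalue(B, widths):
          """Largest MVA eigenvalue vs. window width, centred on each sample."""
          n = len(B)
          out = np.full((len(widths), n), np.nan)
          for i, w in enumerate(widths):              # use odd widths
              h = w // 2
              for c in range(h, n - h):
                  out[i, c] = mva(B[c - h:c + h + 1])[0][-1]
          return out    # its derivative w.r.t. width locates centres/scales

      t = np.arange(1000)
      B = np.zeros((1000, 3))
      B[:, 1] = np.tanh((t - 500) / 50.0)             # Harris-sheet-like rotation
      B += 0.02 * np.random.randn(1000, 3)            # measurement noise
      print(multiscale_max_eigenvalue(B, widths=[21, 51, 101, 201]).shape)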

  12. Volatile Analyzer for Lunar Polar Missions

    NASA Technical Reports Server (NTRS)

    Gibson, Everett K.; Pillinger, Colin T.; McKay, David S.; Waugh, Lester J.

    2011-01-01

    One of the major questions remaining for the future exploration of the Moon by humans concerns the presence of volatiles on our nearest neighbor in space. Observational studies, investigations involving returned lunar samples, and robotic spacecraft missions all infer the existence of volatile compounds, particularly water [1]. It seems very likely that a volatile component will be concentrated at the poles, where low temperatures provide cryogenic traps. However, the full inventory of species, their concentrations, and their origins and sources are unknown. Of particular importance is whether abundances are sufficient to act as a resource of consumables for future lunar expeditions, especially if a long-term base involving humans is to be established. Addressing some of these issues requires a lander designed specifically for operation at high lunar latitude. A vital part of the payload needs to be a volatile analyzer, such as the Gas Analysis Package, specifically designed for the identification and quantification of volatile substances and for collecting information that will allow the origin of these volatiles to be identified [1]. The equipment included, particularly the gas analyzer, must be capable of operating in the extreme environmental conditions to be encountered. No accurate information yet exists regarding volatile concentrations even for sites closer to the lunar equator (because of contamination). In this respect it will be important to understand (and thus limit) contamination of the lunar surface by extraneous material contributed from a variety of sources. The only data on the concentrations of volatiles at the poles come from orbiting spacecraft, and whilst the levels at high latitudes may be greater than at the equator, the volatile analyzer package under consideration will be designed to operate at the highest specifications possible and in a way that does not compromise the data.

  13. Mass spectrometer calibration of Cosmic Dust Analyzer

    NASA Astrophysics Data System (ADS)

    Ahrens, Thomas J.; Gupta, Satish C.; Jyoti, G.; Beauchamp, J. L.

    2003-02-01

    The time-of-flight (TOF) mass spectrometer (MS) of the Cosmic Dust Analyzer (CDA) instrument aboard the Cassini spacecraft is expected to be placed in orbit about Saturn to sample submicrometer-diameter ring particles and impact ejecta from Saturn's satellites. The CDA measures a mass spectrum of each particle that impacts the chemical analyzer sector of the instrument. Particles impact a Rh target plate at velocities of 1-100 km/s and produce some 10^-8 to 10^-5 times the particle mass of positive-valence, singly charged ions, which are analyzed via the TOF MS. Initial tests, employing a pulsed N2 laser acting on samples of kamacite, pyrrhotite, serpentine, olivine, and the Murchison meteorite, induced bursts of ions which were detected with a microchannel plate and a charge-sensitive amplifier (CSA). Pulses from the N2 laser (10^11 W/cm^2) are assumed to simulate particle impact. Using an aluminum alloy as a test sample, each pulse produces a charge of ~4.6 pC (mostly Al+1), whereas irradiation of a stainless steel target produces a ~2.8 pC (Fe+1) charge. Thus the present system yields ~10^-5 % of the laser energy in resulting ions. The CSA signal indicates that, at the position of the microchannel plate, the ion detector geometry is such that some 5% of the laser-induced ions are collected in the CDA geometry. Employing a multichannel plate detector in this MS yields, for Al-Mg-Cu alloy and kamacite targets, well-defined peaks at 24 (Mg+1), 27 (Al+1), and 64 (Cu+1) and at 56 (Fe+1), 58 (Ni+1), and 60 (Ni+1) dalton, respectively.
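
    In a linear TOF analyzer, flight time scales with the square root of mass per charge (ions of charge q accelerated through potential V reach speed v = sqrt(2qV/m)), so two reference peaks suffice to calibrate a spectrum. The flight times below are hypothetical, chosen only to make the arithmetic visible.

      import numpy as np

      # TOF relation: t = t0 + k * sqrt(m/q); t0 and k absorb the drift
      # length, accelerating voltage and electronics delay.

      def calibrate(t1, mq1, t2, mq2):
          """Fit (t0, k) from two reference peaks with known m/q."""
          k = (t2 - t1) / (np.sqrt(mq2) - np.sqrt(mq1))
          return t1 - k * np.sqrt(mq1), k

      def mass_per_charge(t, t0, k):
          return ((t - t0) / k) ** 2

      # Hypothetical flight times [us] for Al+ (27 Da) and Fe+ (56 Da):
      t0, k = calibrate(2.08, 27.0, 2.99, 56.0)
      print(round(mass_per_charge(2.53, t0, k), 1))   # unknown peak -> ~40 Da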

  14. Tools for Designing and Analyzing Structures

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Structural Design and Analysis Toolset is a collection of approximately 26 Microsoft Excel spreadsheet programs, each of which performs calculations within a different subdiscipline of structural design and analysis. These programs present input and output data in user-friendly, menu-driven formats. Although these programs cannot solve complex cases like those treated by larger finite element codes, these programs do yield quick solutions to numerous common problems more rapidly than the finite element codes, thereby making it possible to quickly perform multiple preliminary analyses - e.g., to establish approximate limits prior to detailed analyses by the larger finite element codes. These programs perform different types of calculations, as follows: 1. determination of geometric properties for a variety of standard structural components; 2. analysis of static, vibrational, and thermal-gradient loads and deflections in certain structures (mostly beams and, in the case of thermal gradients, mirrors); 3. kinetic energies of fans; 4. detailed analysis of stress and buckling in beams, plates, columns, and a variety of shell structures; and 5. temperature-dependent properties of materials, including figures of merit that characterize strength, stiffness, and deformation response to thermal gradients.

  15. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands, and they have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
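
    Because the routines share one calling convention, swapping algorithms is a one-word change in Cuba itself. The same cross-checking idea, transplanted to Python (not Cuba's API): integrate the same function with two unrelated methods and verify agreement within the combined error estimate.

      import numpy as np
      from scipy import integrate

      def f(x, y, z):
          return np.sin(x) * y * np.exp(z)

      # Method 1: deterministic adaptive quadrature over the unit cube.
      val1, err1 = integrate.nquad(f, [(0, 1)] * 3)

      # Method 2: plain Monte Carlo with a 1-sigma error estimate.
      rng = np.random.default_rng(0)
      samples = f(*rng.random((200_000, 3)).T)
      val2 = samples.mean()
      err2 = samples.std(ddof=1) / np.sqrt(samples.size)

      print(val1, val2, abs(val1 - val2) < 3 * (err1 + err2))   # cross-check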

  16. Real-Time Occupancy Change Analyzer

    SciTech Connect

    2005-03-30

    The Real-Time Occupancy Change Analyzer (ROCA) produces an occupancy grid map of an environment around the robot, scans the environment to generate a current obstacle map relative to a current robot position, and converts the current obstacle map to a current occupancy grid map. Changes in the occupancy grid can be reported in real time to support a number of tracking capabilities. The benefit of ROCA is that rather than only providing a vector to the detected change, it provides the actual x,y position of the change.
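
    A minimal version of the change-reporting step is easy to write down; the grid geometry (cell size, origin) below is illustrative rather than ROCA's.

      import numpy as np

      def report_changes(previous, current, origin=(0.0, 0.0), cell=0.05):
          """World-frame x, y of every changed cell between two occupancy
          grids, plus whether an obstacle appeared (True) or cleared."""
          rows, cols = np.nonzero(previous != current)
          return [(float(origin[0] + c * cell),     # x of the change
                   float(origin[1] + r * cell),     # y of the change
                   bool(current[r, c]))
                  for r, c in zip(rows, cols)]

      before = np.zeros((4, 4), dtype=int)
      after = before.copy()
      after[2, 3] = 1                               # an object moved into view
      print(report_changes(before, after))          # ~[(0.15, 0.1, True)]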

  17. The GSFC NASTRAN thermal analyzer new capabilities

    NASA Technical Reports Server (NTRS)

    Lee, H. P.; Harder, R. L.

    1976-01-01

    An overview is given of four analysis capabilities that were developed and integrated into the NASTRAN Thermal Analyzer (NTA). To broaden the scope of applications, these additions provide NTA users with the following capabilities: (1) simulating a thermal louver as a means of passive thermal control, (2) simulating a fluid loop for transporting energy as a means of active thermal control, (3) condensing a large finite element model for an efficient transient thermal analysis, and (4) entering multiple boundary condition sets in a single submission for execution in steady-state thermal analyses.

  18. Miniature integrated-optical wavelength analyzer chip

    NASA Astrophysics Data System (ADS)

    Kunz, R. E.; Dübendorfer, J.

    1995-11-01

    A novel integrated-optical chip suitable for realizing compact miniature wavelength analyzers with high linear dispersion is presented. The chip performs the complete task of converting the spectrum of an input beam into a corresponding spatial irradiance distribution without the need for an imaging function. We demonstrate the feasibility of this approach experimentally by monitoring the changes in the mode spectrum of a laser diode on varying its case temperature. Comparing the results with simultaneous measurements by a commercial spectrometer yielded a rms wavelength deviation of 0.01 nm.

  19. Using SCR methods to analyze requirements documentation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Morrison, Jeffery

    1995-01-01

    Software Cost Reduction (SCR) methods are being utilized to analyze and verify selected parts of NASA's EOS-DIS Core System (ECS) requirements documentation. SCR is being used as a spot-inspection tool. Through this formal and systematic application of the SCR requirements methods, insights are gained as to whether the requirements are internally inconsistent or incomplete as the scenarios of intended usage evolve in the Operations Concept (OC) documentation. Thus, by modelling the scenarios and requirements as mode charts using the SCR methods, we have been able to identify problems within and between the documents.

  20. MULTI-CHANNEL PULSE HEIGHT ANALYZER

    DOEpatents

    Boyer, K.; Johnstone, C.W.

    1958-11-25

    An improved multi-channel pulse height analyzer of the type where the device translates the amplitude of each pulse into a time duration electrical quantity which is utilized to control the length of a train of pulses forwarded to a scaler is described. The final state of the scaler for any one train of pulses selects the appropriate channel in a magnetic memory in which an additional count of one is placed. The improvement consists of a storage feature for storing a signal pulse so that in many instances when two signal pulses occur in rapid succession, the second pulse is preserved and processed at a later time.

  1. Thermo Scientific Ozone Analyzer Instrument Handbook

    SciTech Connect

    Springston, S. R.

    The primary measurement output from the Thermo Scientific Ozone Analyzer is the concentration of the analyte (O3) reported at 1-s resolution in units of ppbv in ambient air. Note that because of internal pneumatic switching limitations the instrument only makes an independent measurement every 4 seconds. Thus, the same concentration number is repeated roughly 4 times at the uniform, monotonic 1-s time base used in the AOS systems. Accompanying instrument outputs include sample temperatures, flows, chamber pressure, lamp intensities and a multiplicity of housekeeping information. There is also a field for operator comments made at any time while data is being collected.

  2. The SPAR thermal analyzer: Present and future

    NASA Astrophysics Data System (ADS)

    Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.

    The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.

  3. The SPAR thermal analyzer: Present and future

    NASA Technical Reports Server (NTRS)

    Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.

    1982-01-01

    The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.

  4. Nonlinear single-spin spectrum analyzer.

    PubMed

    Kotler, Shlomi; Akerman, Nitzan; Glickman, Yinnon; Ozeri, Roee

    2013-03-15

    Qubits have been used as linear spectrum analyzers of their environments. Here we solve the problem of nonlinear spectral analysis, required for discrete noise induced by a strongly coupled environment. Our nonperturbative analytical model shows a nonlinear signal dependence on noise power, resulting in a spectral resolution beyond the Fourier limit as well as frequency mixing. We develop a noise characterization scheme adapted to this nonlinearity. We then apply it using a single trapped ion as a sensitive probe of strong, non-Gaussian, discrete magnetic field noise. Finally, we experimentally compared the performance of equidistant vs Uhrig modulation schemes for spectral analysis.

  5. Light-weight analyzer for odor recognition

    DOEpatents

    Vass, Arpad A; Wise, Marcus B

    2014-05-20

    The invention provides a lightweight analyzer, e.g., detector, capable of locating clandestine graves. The detector utilizes the very specific and unique chemicals identified in the database of human decompositional odor. This detector, based on specific chemical compounds found relevant to human decomposition, is the next step forward in clandestine grave detection and will take the guesswork out of current methods using canines and ground-penetrating radar, which have historically been unreliable. The detector is self-contained, portable, and built for field use. Both visual and auditory cues are provided to the operator.

  6. CRISP90 - SOFTWARE DESIGN ANALYZER SYSTEM

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1994-01-01

    The CRISP90 Software Design Analyzer System, an update of CRISP-80, is a set of programs forming a software design and documentation tool which supports top-down, hierarchic, modular, structured design and programming methodologies. The quality of a computer program can often be significantly influenced by the design medium in which the program is developed. The medium must foster the expression of the programmer's ideas easily and quickly, and it must permit flexible and facile alterations, additions, and deletions to these ideas as the design evolves. The CRISP90 software design analyzer system was developed to provide the PDL (Programmer Design Language) programmer with such a design medium. A program design using CRISP90 consists of short, English-like textual descriptions of data, interfaces, and procedures that are imbedded in a simple, structured, modular syntax. The display is formatted into two-dimensional, flowchart-like segments for a graphic presentation of the design. Together with a good interactive full-screen editor or word processor, the CRISP90 design analyzer becomes a powerful tool for the programmer. In addition to being a text formatter, the CRISP90 system prepares material that would be tedious and error prone to extract manually, such as a table of contents, module directory, structure (tier) chart, cross-references, and a statistics report on the characteristics of the design. Referenced modules are marked by schematic logic symbols to show conditional, iterative, and/or concurrent invocation in the program. A keyword usage profile can be generated automatically and glossary definitions inserted into the output documentation. Another feature is the capability to detect changes that were made between versions. Thus, "change-bars" can be placed in the output document along with a list of changed pages and a version history report. Also, items may be marked as "to be determined" and each will appear on a special table until the item is

  7. Spectrum Analyzers Incorporating Tunable WGM Resonators

    NASA Technical Reports Server (NTRS)

    Savchenkov, Anatoliy; Matsko, Andrey; Strekalov, Dmitry; Maleki, Lute

    2009-01-01

    A photonic instrument is proposed to boost the resolution for ultraviolet/optical/infrared spectral analysis and spectral imaging, allowing the detection of narrow (0.00007-to-0.07-picometer wavelength resolution range) optical spectral signatures of chemical elements in space and planetary atmospheres. The idea underlying the proposal is to exploit the advantageous spectral characteristics of whispering-gallery-mode (WGM) resonators to obtain spectral resolutions at least three orders of magnitude greater than those of optical spectrum analyzers now in use. Such high resolutions would enable measurement of spectral features that could not be resolved by prior instruments.

  8. In Praise of Numerical Computation

    NASA Astrophysics Data System (ADS)

    Yap, Chee K.

    Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms’ design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.

  9. Numerical ability predicts mortgage default.

    PubMed

    Gerardi, Kristopher; Goette, Lorenz; Meier, Stephan

    2013-07-09

    Unprecedented levels of US subprime mortgage defaults precipitated a severe global financial crisis in late 2008, plunging much of the industrialized world into a deep recession. However, the fundamental reasons for why US mortgages defaulted at such spectacular rates remain largely unknown. This paper presents empirical evidence showing that the ability to perform basic mathematical calculations is negatively associated with the propensity to default on one's mortgage. We measure several aspects of financial literacy and cognitive ability in a survey of subprime mortgage borrowers who took out loans in 2006 and 2007, and match them to objective, detailed administrative data on mortgage characteristics and payment histories. The relationship between numerical ability and mortgage default is robust to controlling for a broad set of sociodemographic variables, and is not driven by other aspects of cognitive ability. We find no support for the hypothesis that numerical ability impacts mortgage outcomes through the choice of the mortgage contract. Rather, our results suggest that individuals with limited numerical ability default on their mortgage due to behavior unrelated to the initial choice of their mortgage.

  10. Numerical simulation of conservation laws

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; To, Wai-Ming

    1992-01-01

    A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well-established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one-dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.
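
    The paper's space-time scheme is not reproduced here, but the model equation it tests is easy to state and solve conventionally. For contrast, the sketch below integrates u_t + a*u_x = nu*u_xx with a standard forward-time centred-space (FTCS) discretization, whose time step is stability-limited.

      import numpy as np

      a, nu = 1.0, 0.01                             # convection speed, diffusivity
      nx, L = 200, 2 * np.pi
      dx = L / nx
      dt = 0.4 * min(dx / a, 0.5 * dx**2 / nu)      # stability-limited step

      x = np.arange(nx) * dx
      u = np.exp(-40 * (x - np.pi) ** 2)            # smooth initial pulse

      for _ in range(500):                          # periodic FTCS update
          up, um = np.roll(u, -1), np.roll(u, 1)    # u[i+1], u[i-1]
          u = (u - a * dt / (2 * dx) * (up - um)
                 + nu * dt / dx**2 * (up - 2 * u + um))

      print(u.max())                                # pulse decays as it convects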

  11. Constrained evolution in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  12. Numerical ability predicts mortgage default

    PubMed Central

    Gerardi, Kristopher; Goette, Lorenz; Meier, Stephan

    2013-01-01

    Unprecedented levels of US subprime mortgage defaults precipitated a severe global financial crisis in late 2008, plunging much of the industrialized world into a deep recession. However, the fundamental reasons for why US mortgages defaulted at such spectacular rates remain largely unknown. This paper presents empirical evidence showing that the ability to perform basic mathematical calculations is negatively associated with the propensity to default on one’s mortgage. We measure several aspects of financial literacy and cognitive ability in a survey of subprime mortgage borrowers who took out loans in 2006 and 2007, and match them to objective, detailed administrative data on mortgage characteristics and payment histories. The relationship between numerical ability and mortgage default is robust to controlling for a broad set of sociodemographic variables, and is not driven by other aspects of cognitive ability. We find no support for the hypothesis that numerical ability impacts mortgage outcomes through the choice of the mortgage contract. Rather, our results suggest that individuals with limited numerical ability default on their mortgage due to behavior unrelated to the initial choice of their mortgage. PMID:23798401

  13. Numerical models as interactive art

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; van de Pas, B.; Joling, A.

    2017-12-01

    We capture our understanding of the environment in advanced computer models. We use these numerical models to simulate the growth of deltas, meandering rivers, dune erosion, river floods, and the effects of interventions. If presented with care, models can help us understand the complexity of our environment and show the beautiful patterns of nature. While the topics are relevant and appealing to the general public, the use of numerical models has been limited to technical users. Not many people have an appreciation for the plethora of options, esoteric user interfaces, manual editing of configuration files, and extensive jargon. The models are static: you can start them, but then you have to wait, usually hours or more, for the results to become available; not something that you could imagine resulting in an immersive, interactive experience for the general public. How can we go beyond just using results? How can we adapt existing numerical models so they can be used in an interactive environment? How can we touch them and feel them? Here we show how we adapted existing models (Delft3D, Lisflood, XBeach) and reused them as the basis for interactive exhibitions in museums with an educative goal. We present our structured approach, which consists of combining a story, inspiration, a canvas, colors, shapes, and interactive elements. We show the progression from simple presentation forms to interactive art installations.

  14. Vector Beam Polarization State Spectrum Analyzer.

    PubMed

    Moreno, Ignacio; Davis, Jeffrey A; Badham, Katherine; Sánchez-López, María M; Holland, Joseph E; Cottrell, Don M

    2017-05-22

    We present a proof of concept for a vector beam polarization state spectrum analyzer based on the combination of a polarization diffraction grating (PDG) and an encoded harmonic q-plate grating (QPG). As a result, a two-dimensional polarization diffraction grating is formed that generates six different q-plate channels with topological charges from -3 to +3 in the horizontal direction, and each is split in the vertical direction into the six polarization channels at the cardinal points of the corresponding higher-order Poincaré sphere. Consequently, 36 different channels are generated in parallel. This special polarization diffractive element is experimentally demonstrated using a single phase-only spatial light modulator in a reflective optical architecture. Finally, we show that this system can be used as a vector beam polarization state spectrum analyzer, where both the topological charge and the state of polarization of an input vector beam can be simultaneously determined in a single experiment. We expect that these results would be useful for applications in optical communications.

  15. Modular thermal analyzer routine, volume 1

    NASA Technical Reports Server (NTRS)

    Oren, J. A.; Phillips, M. A.; Williams, D. R.

    1972-01-01

    The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those requiring complicated flow and radiation analysis, with each problem type being analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieved a high degree of computer utilization efficiency in terms of the computer execution time and storage space required for a given problem. The computer time required to perform a given problem on MOTAR is approximately 40 to 50 percent of that required by the currently existing, widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than that of the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.

  16. Improving respiration measurements with gas exchange analyzers.

    PubMed

    Montero, R; Ribas-Carbó, M; Del Saz, N F; El Aou-Ouad, H; Berry, J A; Flexas, J; Bota, J

    2016-12-01

    Dark respiration measurements with open-flow gas exchange analyzers are often questioned for their low accuracy, as the low values often reach the precision limit of the instrument. Respiration was measured in five species: two hypostomatous (Vitis vinifera L. and Acanthus mollis) and three amphistomatous, one with a similar amount of stomata on both sides (Eucalyptus citriodora) and two with different stomatal densities (Brassica oleracea and Vicia faba). The CO2 differential (ΔCO2) increased two-fold, with no change in apparent Rd, when the two leaves with higher stomatal density faced outside. These results showed a clear effect of the position of stomata on ΔCO2. Therefore, it can be concluded that leaf position is important for improving respiration measurements, increasing ΔCO2 without affecting the respiration results per leaf or mass unit. This method will help to increase the accuracy of leaf respiration measurements using gas exchange analyzers.

  17. Developments on the Toroid Ion Trap Analyzer

    SciTech Connect

    Lammert, S.A.; Thompson, C.V.; Wise, M.B.

    1999-06-13

    Investigations into several areas of research have been undertaken to address the performance limitations of the toroid analyzer. The Simion 3D6 (2) ion optics simulation program was used to determine whether the potential well minimum of the toroid trapping field is in the physical center of the trap electrode structure. The results (Figure 1) indicate that the minimum of the potential well is shifted towards the inner ring electrode by an amount approximately equal to 10% of the r0 dimension. A simulation of the standard 3D ion trap under similar conditions was performed as a control. In this case, the ions settle to the minimum of the potential well at a point that is coincident with the physical center (both radial and axial) of the trapping electrodes. It is proposed that, by using simulation programs, a set of new analyzer electrodes can be fashioned that will correct for the non-linear fields introduced by curving the substantially quadrupolar field about the toroid axis, in order to provide a trapping field similar to the 3D ion trap cross-section. A new toroid electrode geometry has been devised to allow the use of channeltron-style detectors in place of the more expensive multichannel plate detector. Two different versions have been designed and constructed: one using the current ion trap cross-section (Figure 2) and another using the linear quadrupole cross-section design first reported by Bier and Syka (3).

  18. Solar Probe ANalyzer for Ions - Laboratory Performance

    NASA Astrophysics Data System (ADS)

    Livi, R.; Larson, D. E.; Kasper, J. C.; Korreck, K. E.; Whittlesey, P. L.

    2017-12-01

    The Parker Solar Probe (PSP) mission is a heliospheric satellite that will orbit the Sun closer than any prior mission to date, with an initial perihelion of 35 solar radii (RS) that will be lowered over the mission to below 10 RS. PSP includes the Solar Wind Electrons Alphas and Protons (SWEAP) instrument suite, which in turn consists of four instruments: the Solar Probe Cup (SPC) and three Solar Probe ANalyzers (SPAN) for ions and electrons. Together, this suite will take local measurements of particles and electromagnetic fields within the Sun's corona. SPAN-Ai has completed flight calibration and spacecraft integration and is set to be launched in July of 2018. The main mode of operation consists of an electrostatic analyzer (ESA) at its aperture followed by a time-of-flight section to measure the energy and mass per charge (m/q) of the ambient ions. SPAN-Ai's main objective is to measure solar wind ions within an energy range of 5 eV - 20 keV, a mass/q between 1-60 [amu/q], and a field of view of 240°x120°. Here we will show flight calibration results and performance.

  19. Analyzing endocrine system conservation and evolution.

    PubMed

    Bonett, Ronald M

    2016-08-01

    Analyzing variation in rates of evolution can provide important insights into the factors that constrain trait evolution, as well as those that promote diversification. Metazoan endocrine systems exhibit apparent variation in evolutionary rates of their constituent components at multiple levels, yet relatively few studies have quantified these patterns and analyzed them in a phylogenetic context. This may be in part due to historical and current data limitations for many endocrine components and taxonomic groups. However, recent technological advancements such as high-throughput sequencing provide the opportunity to collect large-scale comparative data sets for even non-model species. Such ventures will produce a fertile data landscape for evolutionary analyses of nucleic acid and amino acid based endocrine components. Here I summarize evolutionary rate analyses that can be applied to categorical and continuous endocrine traits, and also those for nucleic acid and protein-based components. I emphasize analyses that could be used to test whether other variables (e.g., ecology, ontogenetic timing of expression, etc.) are related to patterns of rate variation and endocrine component diversification. The application of phylogenetic-based rate analyses to comparative endocrine data will greatly enhance our understanding of the factors that have shaped endocrine system evolution.

  20. Analyzing linear spatial features in ecology.

    PubMed

    Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W

    2018-06-01

    The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that the topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of the mathematical methods of physics to problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyses of fiber processes in space.
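
    The vector machinery involved is compact: treat each log's fall azimuth and the plot's downslope azimuth as unit vectors, and use the mean dot product as an alignment score. The bearings below are hypothetical.

      import numpy as np

      def downhill_alignment(fall_az_deg, downslope_az_deg):
          """Mean dot product between unit vectors along each log's fall
          azimuth and the downslope azimuth: +1 all downhill, ~0 random."""
          fall = np.radians(np.asarray(fall_az_deg, float))
          down = np.radians(downslope_az_deg)
          return np.cos(fall - down).mean()   # cos(angle) = unit-vector dot

      logs = [100, 135, 120, 90, 150, 115, 300, 125]    # hypothetical bearings
      print(f"{downhill_alignment(logs, 120.0):+.2f}")  # > 0: downhill bias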

  1. The Solar Wind Ion Analyzer for MAVEN

    NASA Astrophysics Data System (ADS)

    Halekas, J. S.; Taylor, E. R.; Dalton, G.; Johnson, G.; Curtis, D. W.; McFadden, J. P.; Mitchell, D. L.; Lin, R. P.; Jakosky, B. M.

    2015-12-01

    The Solar Wind Ion Analyzer (SWIA) on the MAVEN mission will measure the solar wind ion flows around Mars, both in the upstream solar wind and in the magnetosheath and tail regions inside the bow shock. The solar wind flux provides one of the key energy inputs that can drive atmospheric escape from the Martian system, as well as in part controlling the structure of the magnetosphere through which non-thermal ion escape must take place. SWIA measurements contribute to the top-level MAVEN goals of characterizing the upper atmosphere and the processes that operate there, and parameterizing the escape of atmospheric gases to extrapolate the total loss to space throughout Mars' history. To accomplish these goals, SWIA utilizes a toroidal energy analyzer with electrostatic deflectors to provide a broad 360°×90° field of view on a 3-axis spacecraft, with a mechanical attenuator to enable a very high dynamic range. SWIA provides high-cadence measurements of ion velocity distributions with high energy resolution (14.5%) and angular resolution (3.75°×4.5° in the sunward direction, 22.5°×22.5° elsewhere), over a broad energy range of 5 eV to 25 keV. Onboard computation of bulk moments and energy spectra enables measurements of the basic properties of the solar wind at 0.25 Hz.
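
    The onboard moment computation reduces each measured distribution to a few numbers by discrete sums over the instrument bins. Below is a one-dimensional cartoon of that reduction (SWIA's real moments are integrals over its full 3D field of view):

      import numpy as np

      v = np.linspace(200e3, 800e3, 64)           # bin speeds [m/s]
      dv = v[1] - v[0]
      f = np.exp(-((v - 400e3) / 40e3) ** 2)      # toy 1D distribution [s/m^4]

      n = np.sum(f) * dv                          # density    ~ int f dv
      u = np.sum(v * f) * dv / n                  # bulk speed ~ int v f dv / n
      w2 = np.sum((v - u) ** 2 * f) * dv / n      # thermal spread [m^2/s^2]
      print(n, u, np.sqrt(w2))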

  2. The MAVEN Solar Wind Electron Analyzer

    NASA Astrophysics Data System (ADS)

    Mitchell, D. L.; Mazelle, C.; Sauvaud, J.-A.; Thocaven, J.-J.; Rouzaud, J.; Fedorov, A.; Rouger, P.; Toublanc, D.; Taylor, E.; Gordon, D.; Robinson, M.; Heavner, S.; Turin, P.; Diaz-Aguado, M.; Curtis, D. W.; Lin, R. P.; Jakosky, B. M.

    2016-04-01

    The MAVEN Solar Wind Electron Analyzer (SWEA) is a symmetric hemispheric electrostatic analyzer with deflectors that is designed to measure the energy and angular distributions of 3-4600-eV electrons in the Mars environment. This energy range is important for impact ionization of planetary atmospheric species, and encompasses the solar wind core and halo populations, shock-energized electrons, auroral electrons, and ionospheric primary photoelectrons. The instrument is mounted at the end of a 1.5-meter boom to provide a clear field of view that spans nearly 80% of the sky with ~20° resolution. With an energy resolution of 17% (ΔE/E), SWEA readily distinguishes electrons of solar wind and ionospheric origin. Combined with a 2-second measurement cadence and on-board real-time pitch angle mapping, SWEA determines magnetic topology with high (~8 km) spatial resolution, so that local measurements of the plasma and magnetic field can be placed into global context.

  3. Optoacoustic 13C-breath test analyzer

    NASA Astrophysics Data System (ADS)

    Harde, Hermann; Helmrich, Günther; Wolff, Marcus

    2010-02-01

    The composition and concentration of exhaled volatile gases reflect the physiological condition of a patient. A breath analysis therefore allows one to recognize an infectious disease in an organ or even to identify a tumor. One of the most prominent breath tests is the 13C-urea breath test, applied to ascertain the presence of the bacterium Helicobacter pylori in the stomach wall as an indication of a gastric ulcer. In this contribution we present a new optical analyzer that employs a compact and simple set-up based on photoacoustic spectroscopy. It consists of two identical photoacoustic cells containing two breath samples, one taken before and one after ingestion of an isotope-marked substrate in which the most common isotope 12C is replaced to a large extent by 13C. The analyzer measures simultaneously the relative CO2 isotopologue concentrations in both samples by exciting the molecules on specially selected absorption lines with a semiconductor laser operating at a wavelength of 2.744 μm. For a reliable diagnosis, changes of 1% in the 13CO2 concentration of the exhaled breath have to be detected at a concentration level of this isotopologue of about 500 ppm.
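
    The arithmetic behind such a test is standard isotope-ratio bookkeeping (an assumption here; the paper's own processing may differ): ratios are expressed as per-mil delta values against the VPDB standard, and the diagnostic quantity is the delta over baseline (DOB) between the two cells.

      R_VPDB = 0.0111802          # 13C/12C ratio of the VPDB reference standard

      def delta13C(ratio):
          """Per-mil deviation of a measured 13C/12C ratio from VPDB."""
          return (ratio / R_VPDB - 1.0) * 1000.0

      r_before = 0.0110950        # hypothetical 13CO2/12CO2, baseline sample
      r_after = 0.0111510         # hypothetical ratio after labelled substrate

      dob = delta13C(r_after) - delta13C(r_before)
      print(f"DOB = {dob:.1f} per mil")   # a few per mil is commonly read positive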

  4. Analyzing Virtual Physics Simulations with Tracker

    NASA Astrophysics Data System (ADS)

    Claessens, Tom

    2017-12-01

    In the physics teaching community, Tracker is well known as user-friendly open-source video-analysis software, authored by Douglas Brown. With this tool, the user can trace markers indicated on a video or on stroboscopic photos and perform kinematic analyses. Tracker also includes a data modeling tool that allows one to fit theoretical equations of motion onto experimentally obtained data. In the field of particle mechanics, Tracker has been effectively used for learning and teaching about projectile motion, "toss up" and free-fall vertical motion, and to explain the principle of mechanical energy conservation. Also, Tracker has been successfully used in rigid body mechanics to interpret the results of experiments with rolling/slipping cylinders and moving rods. In this work, I propose an original method in which Tracker is used to analyze virtual computer simulations created with a physics-based motion solver, instead of analyzing video recordings or stroboscopic photos. This could be an interesting approach to studying kinematics and dynamics problems in physics education, in particular when there is no or limited access to physical labs. I demonstrate the working method with a typical (but quite challenging) problem in classical mechanics: a slipping/rolling cylinder on a rough surface.

  5. Entropy Splitting and Numerical Dissipation

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Vinokur, M.; Djomehri, M. J.

    1999-01-01

    A rigorous stability estimate for arbitrary order of accuracy of spatial central difference schemes for initial-boundary value problems of nonlinear symmetrizable systems of hyperbolic conservation laws was established recently by Olsson and Oliger (1994) and Olsson (1995) and was applied to the two-dimensional compressible Euler equations for a perfect gas by Gerritsen and Olsson (1996) and Gerritsen (1996). The basic building block in developing the stability estimate is a generalized energy approach based on a special splitting of the flux derivative via a convex entropy function and certain homogeneous properties. Due to some of the unique properties of the compressible Euler equations for a perfect gas, the splitting results in the sum of a conservative portion and a non-conservative portion of the flux derivative, hereafter referred to as the "entropy splitting." There are several potentially desirable attributes and side benefits of the entropy splitting for the compressible Euler equations that were not fully explored in Gerritsen and Olsson. The paper has several objectives. The first is to investigate the choice of the arbitrary parameter that determines the amount of splitting, and its dependence on the type of physics of current interest to computational fluid dynamics. The second is to investigate in what manner the splitting affects the nonlinear stability of the central schemes for long-time integrations of unsteady flows such as in nonlinear aeroacoustics and turbulence dynamics. If numerical dissipation indeed is needed to stabilize the central scheme, can the splitting help minimize the numerical dissipation compared to its unsplit cousin? An extensive numerical study of the vortex preservation capability of the splitting in conjunction with central schemes for long-time integrations will be presented. The third is to study the effect of the non-conservative portion of the splitting in obtaining the correct shock location for high speed complex shock

  6. Analyzing milestoning networks for molecular kinetics: definitions, algorithms, and examples.

    PubMed

    Viswanath, Shruthi; Kreuzer, Steven M; Cardenas, Alfredo E; Elber, Ron

    2013-11-07

    Network representations are becoming increasingly popular for analyzing kinetic data from techniques like Milestoning, Markov State Models, and Transition Path Theory. Mapping continuous phase space trajectories into a relatively small number of discrete states helps in visualization of the data and in dissecting complex dynamics into concrete mechanisms. However, not only are molecular networks derived from molecular dynamics simulations growing in number, they are also getting increasingly complex, owing partly to the growth in computer power that allows us to generate longer and better-converged trajectories. The increased complexity of the networks makes simple interpretation and qualitative insight of the molecular systems more difficult to achieve. In this paper, we focus on various network representations of kinetic data and algorithms to identify important edges and pathways in these networks. The kinetic data can be local and partial (such as the value of rate coefficients between states) or an exact solution to kinetic equations for the entire system (such as the stationary flux between vertices). In particular, we focus on the Milestoning method that provides fluxes as the main output. We propose Global Maximum Weight Pathways as a useful tool for analyzing molecular mechanisms in Milestoning networks. A closely related definition was made in the context of Transition Path Theory. We consider three algorithms to find Global Maximum Weight Pathways: Recursive Dijkstra's, Edge-Elimination, and Edge-List Bisection. The asymptotic efficiency of the algorithms is analyzed and numerical tests on finite networks show that Edge-List Bisection and Recursive Dijkstra's algorithms are most efficient for sparse and dense networks, respectively. Pathways are illustrated for two examples: helix unfolding and membrane permeation. Finally, we illustrate that networks based on local kinetic information can lead to incorrect interpretation of molecular mechanisms.
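
    One common reading of a maximum-weight pathway in a flux network is the maximin ("widest") path, which maximizes the smallest edge weight encountered; the paper gives the precise Milestoning definition. A Dijkstra variant finds it, as sketched below on a hypothetical toy network.

      import heapq

      def max_weight_pathway(graph, source, target):
          """Maximin path over positive edge weights (fluxes):
          graph is {u: [(v, weight), ...]}; max-heap on the bottleneck."""
          best = {source: float("inf")}
          heap = [(-float("inf"), source, [source])]
          while heap:
              neg_b, u, path = heapq.heappop(heap)
              if u == target:
                  return -neg_b, path
              for v, w in graph.get(u, []):
                  b = min(-neg_b, w)          # bottleneck through u then v
                  if b > best.get(v, 0.0):
                      best[v] = b
                      heapq.heappush(heap, (-b, v, path + [v]))
          return 0.0, None

      net = {"A": [("B", 5.0), ("C", 2.0)],   # toy milestone network
             "B": [("D", 1.0)],
             "C": [("D", 2.0)]}
      print(max_weight_pathway(net, "A", "D"))   # (2.0, ['A', 'C', 'D'])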

  7. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT - FIELD PORTABLE X-RAY FLUORESCENCE ANALYZER - SCITEC, MAP SPECTRUM ANALYZER

    EPA Science Inventory

    In April 1995, the U.S. Environmental Protection Agency (EPA) sponsored a demonstration of field portable X-ray fluorescence (FPXRF) analyzers. The primary objectives of this demonstration were (1) to determine how well FPXRF analyzers perform in comparison to standard reference...

  8. Analyzing Screening Policies for Childhood Obesity

    PubMed Central

    Yang, Yan; Goldhaber-Fiebert, Jeremy D.; Wein, Lawrence M.

    2013-01-01

    Due to the health and economic costs of childhood obesity, coupled with studies suggesting the benefits of comprehensive (dietary, physical activity and behavioral counseling) intervention, the United States Preventive Services Task Force recently recommended childhood screening and intervention for obesity beginning at age six. Using a longitudinal data set consisting of the body mass index of 3164 children up to age 18 and another longitudinal data set containing the body mass index at ages 18 and 40 and the presence or absence of disease (hypertension and diabetes) at age 40 for 747 people, we formulate and numerically solve – separately for boys and girls – a dynamic programming problem for the optimal biennial (i.e., at ages 2, 4, …, 16) obesity screening thresholds. Unlike most screening problem formulations, we take a societal viewpoint, where the state of the system at each age is the population-wide probability density function of the body mass index. Compared to the biennial version of the task force’s recommendation, the screening thresholds derived from the dynamic program achieve a relative reduction in disease prevalence of 3% at the same screening (and treatment) cost, or – due to the flatness of the disease vs. screening tradeoff curve – achieve the same disease prevalence at a 28% relative reduction in cost. Compared to the task force’s policy, which uses the 95th percentile of body mass index (from cross-sectional growth charts tabulated by the Centers for Disease Control and Prevention) as the screening threshold for each age, the dynamic programming policy treats mostly 16 year olds (including many who are not obese) and very few males under 14 years old. While our results suggest that adult hypertension and diabetes are minimized by focusing childhood obesity screening and treatment on older adolescents, the shortcomings in the available data and the narrowness of the medical outcomes considered prevent us from making a
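
    To make the formulation concrete, here is a toy forward evaluation of threshold policies in Python; the BMI bins, transition matrices, treatment effect, and costs are invented placeholders (the paper estimates these quantities from the longitudinal data and optimizes the thresholds by dynamic programming over the population density):

      import numpy as np

      n = 5                                 # toy BMI bins, lowest to highest
      ages = [2, 4, 6, 8, 10, 12, 14, 16]   # biennial screening ages
      rng = np.random.default_rng(0)

      def stochastic(m):                    # normalize rows to probabilities
          return m / m.sum(axis=1, keepdims=True)

      T = {a: stochastic(rng.random((n, n)) + 4 * np.eye(n)) for a in ages}
      risk = np.linspace(0.05, 0.60, n)     # toy disease risk at 40 by final bin
      treat_cost = 1.0

      def evaluate(policy, p0):
          """Expected cost and disease prevalence for {age: first treated bin};
          treatment is modeled as dropping one BMI bin."""
          p, cost = p0.copy(), 0.0
          for a in ages:
              k = policy[a]
              cost += treat_cost * p[k:].sum()
              q = p.copy()
              for j in range(k, n):         # treated mass moves down one bin
                  q[j] -= p[j]
                  q[max(j - 1, 0)] += p[j]
              p = q @ T[a]
          return cost, float(p @ risk)

      p0 = np.full(n, 1 / n)
      for k in range(n):                    # sweep flat policies; traces the
          c, d = evaluate({a: k for a in ages}, p0)   # cost-vs-prevalence curve
          print(f"threshold bin {k}: cost {c:.2f}, prevalence {d:.3f}")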

  9. Automated Root Tracking with "Root System Analyzer"

    NASA Astrophysics Data System (ADS)

    Schnepf, Andrea; Jin, Meina; Ockert, Charlotte; Bol, Roland; Leitner, Daniel

    2015-04-01

    Crucial factors for plant development are water and nutrient availability in soils. Thus, root architecture is a main aspect of plant productivity and needs to be accurately considered when describing root processes. Images of root architecture contain a huge amount of information, and image analysis helps to recover parameters describing certain root architectural and morphological traits. The majority of imaging systems for root systems are designed for two-dimensional images, such as RootReader2, GiA Roots, SmartRoot, EZ-Rhizo, and Growscreen, but most of them are semi-automated and require the user to click on each root. "Root System Analyzer" is a new, fully automated approach for recovering root architectural parameters from two-dimensional images of root systems. Individual roots can still be corrected manually in a user interface if required. The algorithm starts with a sequence of segmented two-dimensional images showing the dynamic development of a root system. For each image, morphological operators are used for skeletonization. Based on this, a graph representation of the root system is created. A dynamic root architecture model helps to determine which edges of the graph belong to an individual root. The algorithm elongates each root at the root tip and simulates growth confined within the already existing graph representation. The increment of root elongation is calculated assuming constant growth. For each root, the algorithm finds all possible paths and elongates the root in the direction of the optimal path. In this way, each edge of the graph is assigned to one or more coherent roots. Image sequences of root systems are handled in such a way that the previous image is used as a starting point for the current image. The algorithm is implemented in a set of Matlab m-files. Output of Root System Analyzer is a data structure that includes for each root an identification number, the branching order, the time of emergence, the parent
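
    Root System Analyzer itself is a set of Matlab m-files; purely to illustrate the first two steps (skeletonization and graph construction), a minimal Python analogue using scikit-image and networkx might look like this (function names are ours):

      import numpy as np
      import networkx as nx
      from skimage.morphology import skeletonize

      def skeleton_graph(binary_image):
          """Skeletonize a segmented root image and return the 8-connected
          pixel graph on which root tracking operates."""
          skel = skeletonize(binary_image.astype(bool))
          ys, xs = np.nonzero(skel)
          pixels = set(zip(ys.tolist(), xs.tolist()))
          g = nx.Graph()
          for (y, x) in pixels:
              for dy in (-1, 0, 1):
                  for dx in (-1, 0, 1):
                      if (dy, dx) != (0, 0) and (y + dy, x + dx) in pixels:
                          g.add_edge((y, x), (y + dy, x + dx))
          return g

      # Root tips appear as degree-1 nodes, branch points as degree >= 3;
      # the dynamic root model then assigns graph edges to individual roots.
      # g = skeleton_graph(img > 0.5)
      # tips = [p for p in g if g.degree(p) == 1]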

  10. Gaseous trace impurity analyzer and method

    DOEpatents

    Edwards, Jr., David; Schneider, William

    1980-01-01

    Simple apparatus for analyzing trace impurities in a gas, such as helium or hydrogen, comprises means for drawing a measured volume of the gas as sample into a heated zone. A segregable portion of the zone is then chilled to condense trace impurities in the gas in the chilled portion. The gas sample is evacuated from the heated zone including the chilled portion. Finally, the chilled portion is warmed to vaporize the condensed impurities in the order of their boiling points. As the temperature of the chilled portion rises, pressure will develop in the evacuated, heated zone by the vaporization of an impurity. The temperature at which the pressure increase occurs identifies that impurity and the pressure increase attained until the vaporization of the next impurity causes a further pressure increase is a measure of the quantity of the preceding impurity.
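
    The identification step lends itself to a short analysis routine. A toy Python sketch (boiling points are standard reference values; the step threshold, data format, and the absence of step merging are simplifications of ours):

      import numpy as np

      BOILING_K = {"N2": 77.4, "Ar": 87.3, "O2": 90.2, "CH4": 111.7}

      def identify_impurities(T, P, step=1e-3):
          """Scan a (temperature, pressure) warming record for pressure steps;
          the temperature of a step names the impurity (nearest boiling
          point) and the step height measures its quantity."""
          hits = []
          for i in np.nonzero(np.diff(P) > step)[0]:
              name = min(BOILING_K, key=lambda k: abs(BOILING_K[k] - T[i]))
              hits.append((name, float(T[i]), float(P[i + 1] - P[i])))
          return hits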

  11. Ultrasonic interface level analyzer shop test procedure

    SciTech Connect

    STAEHR, T.W.

    1999-05-24

    The Royce Instrument Corporation Model 2511 Interface Level Analyzer (URSILLA) system uses an ultrasonic ranging technique (SONAR) to measure sludge depths in holding tanks. Three URSILLA instrument assemblies provided by the W-151 project are planned to be used during mixer pump testing to provide data for determining sludge mobilization effectiveness of the mixer pumps and sludge settling rates. The purpose of this test is to provide a documented means of verifying that the functional components of the three URSILLA instruments operate properly. Successful completion of this Shop Test Procedure (STP) is a prerequisite for installation in the AZ-101 tank. The objective of the test is to verify the operation of the URSILLA instruments and to verify data collection using a stand-alone software program.

  12. Analyzing the generality of conflict adaptation effects.

    PubMed

    Funes, Maria Jesús; Lupiáñez, Juan; Humphreys, Glyn

    2010-02-01

    Conflict adaptation effects refer to the reduction of interference when an incongruent stimulus occurs immediately after an incongruent trial, compared with when it occurs after a congruent trial. The present study analyzes the key conditions that lead to adaptation effects that are specific to the type of conflict involved versus those that are conflict-general. In the first 2 experiments, we combined 2 types of conflict for which compatibility arises from clearly different sources in terms of dimensional overlap, while keeping the task context constant across conflict types. We found a clear pattern of specificity of conflict adaptation across conflict types. In subsequent experiments, we tested whether this pattern could be accounted for in terms of feature integration processes contributing differently to repetition versus alternation of conflict types. The results clearly indicated that feature integration was not key to generating conflict-type specificity in conflict adaptation. The data are consistent with there being separate modes of control for different types of cognitive conflict.

  13. Analyzing Strategic Business Rules through Simulation Modeling

    NASA Astrophysics Data System (ADS)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the utilization of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal aims to help find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to help business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.

  14. METCAN: The metal matrix composite analyzer

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Murthy, Pappu L. N.

    1988-01-01

    Metal matrix composites (MMC) are the subject of intensive study and are receiving serious consideration for critical structural applications in advanced aerospace systems. MMC structural analysis and design methodologies are studied. Predicting the mechanical and thermal behavior and the structural response of components fabricated from MMC requires the use of a variety of mathematical models. These models relate stresses to applied forces, stress intensities at the tips of cracks to nominal stresses, buckling resistance to applied force, or vibration response to excitation forces. The extensive research in computational mechanics methods for predicting the nonlinear behavior of MMC is described. This research has culminated in the development of the METCAN (METal Matrix Composite ANalyzer) computer code.

  15. The Improvement Cycle: Analyzing Our Experience

    NASA Technical Reports Server (NTRS)

    Pajerski, Rose; Waligora, Sharon

    1996-01-01

    NASA's Software Engineering Laboratory (SEL), one of the earliest pioneers in the areas of software process improvement and measurement, has had a significant impact on the software business at NASA Goddard. At the heart of the SEL's improvement program is a belief that software products can be improved by optimizing the software engineering process used to develop them and a long-term improvement strategy that facilitates small incremental improvements that accumulate into significant gains. As a result of its efforts, the SEL has incrementally reduced development costs by 60%, decreased error rates by 85%, and reduced cycle time by 25%. In this paper, we analyze the SEL's experiences on three major improvement initiatives to better understand the cyclic nature of the improvement process and to understand why some improvements take much longer than others.

  16. Composite blade structural analyzer (COBSTRAN) user's manual

    NASA Technical Reports Server (NTRS)

    Aiello, Robert A.

    1989-01-01

    The installation and use of a computer code, COBSTRAN (COmposite Blade STRuctural ANalyzer), developed for the design and analysis of composite turbofan and turboprop blades and also for composite wind turbine blades, are described. This code combines composite mechanics and laminate theory with an internal data base of fiber and matrix properties. Inputs to the code are constituent fiber and matrix material properties, factors reflecting the fabrication process, composite geometry, and blade geometry. COBSTRAN performs the micromechanics, macromechanics and laminate analyses of these fiber composites. COBSTRAN generates a NASTRAN model with equivalent anisotropic homogeneous material properties. Stress output from NASTRAN is used to calculate individual ply stresses, strains, interply stresses, thru-the-thickness stresses and failure margins. Curved panel structures may be modeled provided the curvature of a cross-section is defined by a single-valued function. COBSTRAN is written in FORTRAN 77.

  17. Digital avionics design and reliability analyzer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionic computer designs that are developed. It has been established that hardware emulation at the gate level will be utilized. The primary benefit of emulation to reliability analysis is the fact that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur. This allows for controlled and accelerated testing of system reaction to hardware failures. A trade study led to the decision to specify a two-machine system, consisting of an emulation computer connected to a general-purpose computer. Potential computers to serve as the emulation computer are also evaluated.

  18. Methods for automatically analyzing humpback song units.

    PubMed

    Rickwood, Peter; Taylor, Andrew

    2008-03-01

    This paper presents mathematical techniques for automatically extracting and analyzing bioacoustic signals. Automatic techniques are described for isolation of target signals from background noise, extraction of features from target signals, and unsupervised classification (clustering) of the target signals based on these features. The only user-provided inputs, other than raw sound, are an initial set of signal processing and control parameters. Of particular note is that the number of signal categories is determined automatically. The techniques, applied to hydrophone recordings of humpback whales (Megaptera novaeangliae), produce promising initial results, suggesting that they may be of use in automated analysis of not only humpbacks, but possibly also in other bioacoustic settings where automated analysis is desirable.
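
    As a rough sketch of the classification stage, assuming song units have already been isolated from background noise, per-unit features can be clustered in a few lines of Python; note that the paper determines the number of signal categories automatically, whereas this sketch fixes k (all names are ours):

      import numpy as np
      from sklearn.cluster import KMeans

      def unit_features(unit, fs):
          """Crude per-unit features: duration, spectral centroid, bandwidth."""
          spec = np.abs(np.fft.rfft(unit)) ** 2
          freqs = np.fft.rfftfreq(len(unit), 1 / fs)
          centroid = (freqs * spec).sum() / spec.sum()
          bandwidth = np.sqrt(((freqs - centroid) ** 2 * spec).sum() / spec.sum())
          return [len(unit) / fs, centroid, bandwidth]

      def cluster_units(units, fs, k=8):
          feats = np.array([unit_features(u, fs) for u in units])
          feats = (feats - feats.mean(0)) / feats.std(0)   # z-score each feature
          return KMeans(n_clusters=k, n_init=10).fit_predict(feats)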

  19. Buccal microbiology analyzed by infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    de Abreu, Geraldo Magno Alves; da Silva, Gislene Rodrigues; Khouri, Sônia; Favero, Priscila Pereira; Raniero, Leandro; Martin, Airton Abrahão

    2012-01-01

    Rapid microbiological identification and characterization are very important in dentistry and medicine. In addition to dental diseases, pathogens are directly linked to cases of endocarditis, premature delivery, low birth weight, and loss of organ transplants. Fourier Transform Infrared Spectroscopy (FTIR) was used to analyze the oral pathogens Aggregatibacter actinomycetemcomitans ATCC 29523, Aggregatibacter actinomycetemcomitans-JP2, and a clinical isolate of Aggregatibacter actinomycetemcomitans from human blood (CI). Significant spectral differences were found among the organisms, allowing the identification and characterization of each bacterial species. Vibrational modes in the 3500-2800 cm-1, 1484-1420 cm-1, and 1000-750 cm-1 regions were used in this differentiation. The identification and classification of each strain were performed by cluster analysis, achieving 100% separation of strains. This study demonstrated that FTIR can be used to decrease the identification time of fastidious buccal microorganisms associated with the etiology of periodontitis, compared to traditional methods.
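
    The cluster-analysis step can be sketched compactly. Assuming each row of spectra is one FTIR spectrum on a common wavenumber axis, Ward-linkage clustering restricted to the three diagnostic windows above might look like this in Python (the axis endpoints and all names are assumptions of ours):

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      def cluster_spectra(spectra, n_clusters=3):
          wn = np.linspace(4000, 600, spectra.shape[1])     # wavenumbers, cm^-1
          mask = ((wn <= 3500) & (wn >= 2800)) \
               | ((wn <= 1484) & (wn >= 1420)) \
               | ((wn <= 1000) & (wn >= 750))
          X = spectra[:, mask]
          X = X / np.linalg.norm(X, axis=1, keepdims=True)  # normalize spectra
          return fcluster(linkage(X, method="ward"), t=n_clusters,
                          criterion="maxclust")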

  20. Method for network analyzation and apparatus

    DOEpatents

    Bracht, Roger B.; Pasquale, Regina V.

    2001-01-01

    A portable network analyzer and method having multiple channel transmit and receive capability for real-time monitoring of processes which maintains phase integrity, requires low power, is adapted to provide full vector analysis, provides output frequencies of up to 62.5 MHz and provides fine sensitivity frequency resolution. The present invention includes a multi-channel means for transmitting and a multi-channel means for receiving, both in electrical communication with a software means for controlling. The means for controlling is programmed to provide a signal to a system under investigation which steps consecutively over a range of predetermined frequencies. The resulting received signal from the system provides complete time domain response information by executing a frequency transform of the magnitude and phase information acquired at each frequency step.
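
    The closing frequency-transform step is, in outline, an inverse Fourier transform of the per-step magnitude and phase. A minimal Python sketch (assuming a uniformly stepped sweep beginning near DC; names are ours, not the patent's):

      import numpy as np

      def time_domain_response(freqs_hz, mag, phase_rad):
          """Turn a stepped-frequency record (magnitude and phase at each
          step) into an impulse-response estimate."""
          H = mag * np.exp(1j * phase_rad)        # complex response per step
          h = np.fft.irfft(H)
          df = freqs_hz[1] - freqs_hz[0]          # uniform step spacing
          t = np.arange(len(h)) / (len(h) * df)   # record length is 1/df
          return t, h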

  1. Stackable differential mobility analyzer for aerosol measurement

    DOEpatents

    Cheng, Meng-Dawn [Oak Ridge, TN]; Chen, Da-Ren [Creve Coeur, MO]

    2007-05-08

    A multi-stage differential mobility analyzer (MDMA) for aerosol measurements includes a first electrode or grid including at least one inlet or injection slit for receiving an aerosol including charged particles for analysis. A second electrode or grid is spaced apart from the first electrode. The second electrode has at least one sampling outlet disposed at a plurality of different distances along its length. A volume between the first and the second electrode or grid between the inlet or injection slit and a distal one of the plurality of sampling outlets forms a classifying region, the first and second electrodes for charging to suitable potentials to create an electric field within the classifying region. At least one inlet or injection slit in the second electrode receives a sheath gas flow into an upstream end of the classifying region, wherein each sampling outlet functions as an independent DMA stage and classifies different size ranges of charged particles based on electric mobility simultaneously.
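
    The patent's geometry is a stacked planar one, but the quantities involved follow the textbook differential-mobility relations; as a rough quantitative companion, here are the classical cylindrical-DMA formulas in Python (standard constants; names are ours):

      import numpy as np

      E, MU, MFP = 1.602e-19, 1.81e-5, 68e-9   # charge (C), air viscosity (Pa s),
                                               # mean free path of air (m)

      def slip(d):
          """Cunningham slip correction, standard coefficients."""
          kn = 2 * MFP / d
          return 1 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))

      def mobility(d, n=1):
          """Electrical mobility of an n-charged particle of diameter d (m)."""
          return n * E * slip(d) / (3 * np.pi * MU * d)

      def selected_mobility(q_sheath, V, L, r1, r2):
          """Centroid mobility passed by a cylindrical DMA stage of length L;
          in the stacked design each outlet acts as its own stage with its
          own effective L, so several size ranges are classified at once."""
          return q_sheath * np.log(r2 / r1) / (2 * np.pi * V * L)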

  2. Rotating field mass and velocity analyzer

    NASA Technical Reports Server (NTRS)

    Smith, Steven Joel (Inventor); Chutjian, Ara (Inventor)

    1998-01-01

    A rotating field mass and velocity analyzer having a cell with four walls, time dependent RF potentials that are applied to each wall, and a detector. The time dependent RF potentials create an RF field in the cell which effectively rotates within the cell. An ion beam is accelerated into the cell and the rotating RF field disperses the incident ion beam according to the mass-to-charge (m/e) ratio and velocity distribution present in the ion beam. The ions of the beam either collide with the ion detector or deflect away from the ion detector, depending on the m/e, RF amplitude, and RF frequency. The detector counts the incident ions to determine the m/e and velocity distribution in the ion beam.

  3. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
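
    For scale, the complete-enumeration baseline that conditional factoring improves upon fits in a few lines; this Python sketch (our names; independent discrete arc lengths) is exponential in the number of arcs, which is exactly the cost the factoring algorithm avoids:

      import itertools
      from collections import defaultdict

      def shortest_path_distribution(arc_dists, paths):
          """arc_dists: {arc: {length: probability}}; paths: one tuple of
          arcs per s-t path. Returns {shortest-path length: probability}."""
          arcs = list(arc_dists)
          out = defaultdict(float)
          for combo in itertools.product(*(arc_dists[a].items() for a in arcs)):
              length = dict(zip(arcs, (l for l, _ in combo)))
              prob = 1.0
              for _, p in combo:
                  prob *= p
              out[min(sum(length[a] for a in pth) for pth in paths)] += prob
          return dict(out)

      # two parallel arcs, each of length 1 or 3 with equal odds:
      # shortest_path_distribution({"a": {1: .5, 3: .5}, "b": {1: .5, 3: .5}},
      #                            [("a",), ("b",)])  ->  {1: 0.75, 3: 0.25}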

  4. Drug stability analyzer for long duration spaceflights

    NASA Astrophysics Data System (ADS)

    Shende, Chetan; Smith, Wayne; Brouillette, Carl; Farquharson, Stuart

    2014-06-01

    Crewmembers of current and future long duration spaceflights require drugs to overcome the deleterious effects of weightlessness, sickness and injuries. Unfortunately, recent studies have shown that some of the drugs currently used may degrade more rapidly in space, losing their potency well before their expiration dates. To complicate matters, the degradation products of some drugs can be toxic. Consequently there is a need for an analyzer that can determine if a drug is safe at the time of use, as well as to monitor and understand space-induced degradation, so that drug types, formulations, and packaging can be improved. Towards this goal we have been investigating the ability of Raman spectroscopy to monitor and quantify drug degradation. Here we present preliminary data by measuring acetaminophen, and its degradation product, p-aminophenol, as pure samples, and during forced degradation reactions.

  5. Diffractive interference optical analyzer (DiOPTER)

    NASA Astrophysics Data System (ADS)

    Sasikumar, Harish; Prasad, Vishnu; Pal, Parama; Varma, Manoj M.

    2016-03-01

    This report demonstrates a method for high-resolution refractometric measurements using what we have termed a Diffractive Interference Optical Analyzer (DiOpter). The setup consists of a laser, a polarizer, a transparent diffraction grating, and Si photodetectors. The sensor is based on the differential response of diffracted orders to bulk refractive index changes. In these setups, the differential read-out of the diffracted orders suppresses signal drifts and enables time-resolved determination of refractive index changes in the sample cell. A remarkable feature of this device is that under appropriate conditions, the measurement sensitivity of the sensor can be enhanced by more than two orders of magnitude due to interference between multiply reflected diffracted orders. A noise-equivalent limit of detection (LoD) of 6x10^-7 RIU was achieved in glass. This work focuses on devices with an integrated sample well, made of low-cost PDMS. As the detection methodology is experimentally straightforward, it can be used across a wide array of applications, ranging from detecting changes in surface adsorbates via binding reactions to estimating refractive index (and hence concentration) variations in bulk samples. An exciting prospect of this technique is the potential integration of this device with smartphones using a simple interface based on a transmission-mode configuration. In a transmission configuration, we were able to achieve an LoD of 4x10^-4 RIU, which is sufficient to explore several applications in food quality testing and related fields. We envision the future of this platform as a personal handheld optical analyzer for applications ranging from environmental sensing to healthcare and quality testing of food products.
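
    The drift-suppression claim is easy to check numerically. In this toy formulation (ours, not the authors'), the read-out is the normalized difference of two diffracted-order intensities, so any common-mode factor cancels:

      import numpy as np

      def differential_signal(i1, i2):
          """Normalized difference of two diffracted-order intensities;
          source-power or temperature drift multiplies both orders equally
          and cancels, while an index change moves them oppositely."""
          return (i1 - i2) / (i1 + i2)

      i1, i2, drift = 1.00, 0.80, 1.01   # a 1% common-mode drift
      assert np.isclose(differential_signal(i1, i2),
                        differential_signal(i1 * drift, i2 * drift))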

  6. Finite difference methods for reducing numerical diffusion in TEACH-type calculations. [Teaching Elliptic Axisymmetric Characteristics Heuristically

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.

    1985-01-01

    A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Upwind Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Elliptic Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to the criteria than QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.
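
    The artifact at issue is easy to reproduce. A few lines of Python (a toy setup of ours, not the TEACH code) advect a step profile with first-order upwind differencing; the smearing of the front is the numerical diffusion that higher-order schemes such as BSUDS and QUDS are designed to reduce:

      import numpy as np

      nx, c, steps = 200, 0.5, 100              # cells, Courant number, steps
      u = np.where(np.arange(nx) < nx // 4, 1.0, 0.0)
      for _ in range(steps):
          u[1:] = u[1:] - c * (u[1:] - u[:-1])  # upwind update, flow to the right
      # The exact solution is the same step shifted by c*steps = 50 cells;
      # the computed front is instead smeared over tens of cells.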

  7. Stability of numerical integration techniques for transient rotor dynamics

    NASA Technical Reports Server (NTRS)

    Kascak, A. F.

    1977-01-01

    A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
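
    A toy calculator for the two rules of thumb in this abstract (our formulation; the paper's numbers apply to its specific rotor model):

      import numpy as np

      def rotor_dt_limits(M, K, omega_shaft, points_per_period=20):
          """(1) The highest natural frequency of the mass/stiffness model
          governs stability, so dt should resolve its period; (2) near the
          first critical speed, dt equivalent to ~1 degree of shaft rotation
          was the stable maximum for Runge-Kutta/Milne/Adams."""
          w_max = np.sqrt(np.abs(np.linalg.eigvals(np.linalg.solve(M, K))).max())
          return 2 * np.pi / (points_per_period * w_max), np.deg2rad(1.0) / omega_shaft

    Minimizing the number of mass elements lowers the highest modal frequency and so directly relaxes the first limit, which is the abstract's recommendation.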

  8. Numerical expression of color emotion and its application

    NASA Astrophysics Data System (ADS)

    Sato, Tetsuya; Kajiwara, Kanji; Xin, John H.; Hansuebsai, Aran; Nobbs, Jim

    2002-06-01

    Human emotions induced by colors are various, but these emotions are expressed through words and languages. In order to analyze the emotions expressed through words and languages, visual assessment tests of color emotions expressed by twelve word pairs were carried out in Japan, Thailand, Hong Kong, and the UK. The numerical expression of each color emotion is attempted as an ellipsoidal formula resembling a color-difference formula. In this paper, the numerical expression of the 'Soft-Hard' color emotion is mainly discussed. The application of color emotions via the empirical color-emotion formulae derived from a kansei database (a database of sensory assessments) is also briefly reported.

  9. Gyrotactic trapping: A numerical study

    NASA Astrophysics Data System (ADS)

    Ghorai, S.

    2016-04-01

    Gyrotactic trapping is a mechanism proposed by Durham et al. ["Disruption of vertical motility by shear triggers formation of thin phytoplankton layers," Science 323, 1067-1070 (2009)] to explain the formation of thin phytoplankton layers just below the ocean surface. This mechanism is examined numerically using a rational model based on the generalized Taylor dispersion theory. The crucial role of sedimentation speed in the thin layer formation is demonstrated. The effects of variation in different parameters on the thin layer formation are also investigated.

  10. Results from Numerical General Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2011-01-01

    For several years numerical simulations have been revealing the details of general relativity's predictions for the dynamical interactions of merging black holes. I will review what has been learned of the rich phenomenology of these mergers and the resulting gravitational wave signatures. These wave forms provide a potentially observable record of the powerful astronomical events, a central target of gravitational wave astronomy. Asymmetric radiation can produce a thrust on the system which may accelerate the single black hole resulting from the merger to high relative velocity.

  11. Time's arrow: A numerical experiment

    NASA Astrophysics Data System (ADS)

    Fowles, G. Richard

    1994-04-01

    The dependence of time's arrow on initial conditions is illustrated by a numerical example in which plane waves produced by an initial pressure pulse are followed as they are multiply reflected at internal interfaces of a layered medium. Wave interactions at interfaces are shown to be analogous to the retarded and advanced waves of point sources. The model is linear and the calculation is exact and demonstrably time reversible; nevertheless the results show most of the features expected of a macroscopically irreversible system, including the approach to the Maxwell-Boltzmann distribution, ergodicity, and concomitant entropy increase.

  12. On numerically pluricanonical cyclic coverings

    NASA Astrophysics Data System (ADS)

    Kulikov, V. S.; Kharlamov, V. M.

    2014-10-01

    We investigate some properties of cyclic coverings f: Y → X (where X is a complex surface of general type) branched along smooth curves B ⊂ X that are numerically equivalent to a multiple of the canonical class of X. Our main results concern coverings of surfaces of general type with p_g = 0 and Miyaoka-Yau surfaces. In particular, such coverings provide new examples of multi-component moduli spaces of surfaces with given Chern numbers and new examples of surfaces that are not deformation equivalent to their complex conjugates.

  13. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We

  14. Multi-Pass Quadrupole Mass Analyzer

    NASA Technical Reports Server (NTRS)

    Prestage, John D.

    2013-01-01

    Analysis of the composition of planetary atmospheres is one of the most important and fundamental measurements in planetary robotic exploration. Quadrupole mass analyzers (QMAs) are the primary tool used to execute these investigations, but reductions in the size of these instruments have sacrificed mass resolving power, so that the best present-day QMA devices are still large, expensive, and do not deliver the performance of laboratory instruments. An ultra-high-resolution QMA was developed to resolve N2+/CO+ by trapping ions in a linear trap quadrupole filter. Because N2 and CO are resolved, gas chromatography columns used to separate species before analysis are eliminated, greatly simplifying gas analysis instrumentation. For highest performance, the ion trap mode is used. High-resolution (or narrow-band) mass selection is carried out in the central region, but near the DC electrodes at each end, RF/DC field settings are adjusted to allow broadband ion passage. This is to prevent ion loss during ion reflection at each end. Ions are created inside the trap so that low-energy particles are selected by low-voltage settings on the end electrodes. This is beneficial to good mass resolution since low-energy particles traverse many cycles of the RF filtering fields. Through Monte Carlo simulations, it is shown that ions are reflected at each end many tens of times, each time being sent back through the central section of the quadrupole where ultrahigh mass filtering is carried out. An analyzer was produced with electrical length orders of magnitude longer than its physical length. Since the selector fields are sized as in conventional devices, the loss of sensitivity inherent in miniaturizing quadrupole instruments is avoided. The no-loss, multi-pass QMA architecture will improve mass resolution of planetary QMA instruments while reducing demands on the RF electronics for high-voltage/high-frequency production since ion transit time is no longer limited to a single pass. The

  15. Optimally analyzing and implementing of bolt fittings in steel structure based on ANSYS

    NASA Astrophysics Data System (ADS)

    Han, Na; Song, Shuangyang; Cui, Yan; Wu, Yongchun

    2018-03-01

    Owing to its excellent performance, the ANSYS simulation software has become an outstanding member of the computer-aided engineering (CAE) family; it is committed to innovation in engineering simulation to help users shorten the design process. First, a typical procedure for implementing CAE was designed, and a framework for structural numerical analysis based on ANSYS technology was proposed. Then, an optimal analysis and implementation of the bolt fittings in a beam-column joint of a steel structure was carried out in ANSYS, displaying the cloud charts of the XY shear stress, the YZ shear stress, and the Y component of stress. Finally, the ANSYS simulation results were compared with results measured in an experiment; the ANSYS simulation and analysis proved reliable, efficient, and optimal. Through this process, a numerical model for simulating and analyzing structural performance was explored for the practice of engineering enterprises.

  16. Basis-neutral Hilbert-space analyzers

    PubMed Central

    Martin, Lane; Mardani, Davood; Kondakci, H. Esat; Larson, Walker D.; Shabahang, Soroush; Jahromi, Ali K.; Malhotra, Tanya; Vamivakas, A. Nick; Atia, George K.; Abouraddy, Ayman F.

    2017-01-01

    Interferometry is one of the central organizing principles of optics. Key to interferometry is the concept of optical delay, which facilitates spectral analysis in terms of time-harmonics. In contrast, when analyzing a beam in a Hilbert space spanned by spatial modes – a critical task for spatial-mode multiplexing and quantum communication – basis-specific principles are invoked that are altogether distinct from that of ‘delay’. Here, we extend the traditional concept of temporal delay to the spatial domain, thereby enabling the analysis of a beam in an arbitrary spatial-mode basis – exemplified using Hermite-Gaussian and radial Laguerre-Gaussian modes. Such generalized delays correspond to optical implementations of fractional transforms; for example, the fractional Hankel transform is the generalized delay associated with the space of Laguerre-Gaussian modes, and an interferometer incorporating such a ‘delay’ obtains modal weights in the associated Hilbert space. By implementing an inherently stable, reconfigurable spatial-light-modulator-based polarization-interferometer, we have constructed a ‘Hilbert-space analyzer’ capable of projecting optical beams onto any modal basis. PMID:28344331
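
    Numerically, the modal weights the interferometer obtains optically are just inner products with the basis modes. A small Python check for a 1-D Hermite-Gaussian basis (our discretization, not the authors' optical implementation):

      import numpy as np
      from numpy.polynomial.hermite import hermval

      def hg_mode(n, x, w0=1.0):
          """1-D Hermite-Gaussian mode HG_n, normalized on the grid."""
          c = np.zeros(n + 1); c[n] = 1.0
          psi = hermval(np.sqrt(2) * x / w0, c) * np.exp(-(x / w0) ** 2)
          return psi / np.sqrt((psi ** 2).sum() * (x[1] - x[0]))

      def modal_weights(field, x, n_modes=6):
          dx = x[1] - x[0]
          return np.array([(field * hg_mode(n, x)).sum() * dx
                           for n in range(n_modes)])

      x = np.linspace(-6, 6, 4001)
      beam = hg_mode(0, x) + 0.5 * hg_mode(2, x)   # two-mode test beam
      print(np.round(modal_weights(beam, x), 3))   # ~ [1. 0. 0.5 0. 0. 0.]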

  17. Analyzing neural responses with vector fields.

    PubMed

    Buneo, Christopher A

    2011-04-15

    Analyzing changes in the shape and scale of single cell response fields is a key component of many neurophysiological studies. Typical analyses of shape change involve correlating firing rates between experimental conditions or "cross-correlating" single cell tuning curves by shifting them with respect to one another and correlating the overlapping data. Such shifting results in a loss of data, making interpretation of the resulting correlation coefficients problematic. The problem is particularly acute for two dimensional response fields, which require shifting along two axes. Here, an alternative method for quantifying response field shape and scale based on correlation of vector field representations is introduced. The merits and limitations of the methods are illustrated using both simulated and experimental data. It is shown that vector correlation provides more information on response field changes than scalar correlation without requiring field shifting and concomitant data loss. An extension of this vector field approach is also demonstrated which can be used to identify the manner in which experimental variables are encoded in studies of neural reference frames. Copyright © 2011 Elsevier B.V. All rights reserved.
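
    One simple variant of the idea, sketched in Python: encode each 2-D vector field as a complex array (vx + i*vy); the magnitude of the complex correlation then measures similarity of shape and scale without any shifting, and its phase captures a common rotation. The paper develops its own vector-correlation measure; this sketch only conveys the flavor of the approach:

      import numpy as np

      def vector_correlation(f, g):
          """Complex correlation of two vector fields given as vx + 1j*vy."""
          f, g = f - f.mean(), g - g.mean()
          return np.vdot(f, g) / np.sqrt(np.vdot(f, f).real * np.vdot(g, g).real)

      vx, vy = np.random.randn(2, 20, 20)
      f = vx + 1j * vy
      rho = vector_correlation(f, f * np.exp(1j * np.deg2rad(30)))
      print(abs(rho), np.degrees(np.angle(rho)))   # ~ 1.0 and ~ 30 degrees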

  18. Analyzing the acoustic beat with mobile devices

    NASA Astrophysics Data System (ADS)

    Kuhn, Jochen; Vogt, Patrik; Hirth, Michael

    2014-04-01

    In this column, we have previously presented various examples of how physical relationships can be examined by analyzing acoustic signals using smartphones or tablet PCs. In this example, we will be exploring the acoustic phenomenon of beats, which is produced by the overlapping of two tones with a small difference in frequency Δf. The resulting auditory sensation is a tone with a volume that varies periodically. Acoustic beats can be perceived repeatedly in day-to-day life and have some interesting applications. For example, string instruments are still tuned with the help of an acoustic beat, even with modern technology. If a reference tone (e.g., 440 Hz) and, for example, a slightly out-of-tune violin string produce a tone simultaneously, a beat can be perceived. The more similar the frequencies, the longer the duration of the beat. In the extreme case, when the frequencies are identical, a beat no longer arises; the string is therefore correctly tuned. Using the Oscilloscope app, it is possible to capture and save acoustic signals of this kind and determine the beat frequency fS of the signal, which represents the difference in frequency Δf of the two overlapping tones (for Android smartphones, the app OsciPrime Oscilloscope can be used).
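
    The measurement is also easy to simulate. Two slightly detuned tones, an envelope via the analytic signal, and a peak pick recover fS = Δf (a sketch of ours, not the app's algorithm):

      import numpy as np
      from scipy.signal import hilbert

      fs, dur, f1, f2 = 8000, 4.0, 440.0, 443.0    # reference vs. detuned string
      t = np.arange(int(fs * dur)) / fs
      x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

      env = np.abs(hilbert(x))                     # beat envelope |2 cos(pi*df*t)|
      spec = np.abs(np.fft.rfft(env - env.mean()))
      freqs = np.fft.rfftfreq(len(env), 1 / fs)
      print(f"beat frequency ~ {freqs[np.argmax(spec)]:.2f} Hz")  # ~ 3 Hz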

  19. Walking-age analyzer for healthcare applications.

    PubMed

    Jin, Bo; Thu, Tran Hoai; Baek, Eunhye; Sakong, SungHwan; Xiao, Jin; Mondal, Tapas; Deen, M Jamal

    2014-05-01

    This paper describes a walking-age pattern analysis and identification system using a 3-D accelerometer and a gyroscope. First, a walking pattern database from 79 volunteers of ages ranging from 10 to 83 years is constructed. Second, using feature extraction and clustering, three distinct walking-age groups, children aged 10 and below, adults in their 20s to 60s, and elders in their 70s and 80s, were identified. For this study, low-pass filtering, empirical mode decomposition, and K-means were used to process and analyze the experimental results. Analysis shows that volunteers' walking-ages can be categorized into distinct groups based on simple walking pattern signals. This grouping can then be used to detect persons with walking patterns outside their age groups. If the walking pattern puts an individual in a higher "walking age" grouping, then this could be an indicator of potential health/walking problems, such as weak joints, a poor musculoskeletal support system, or a tendency to fall.

  20. Analyzing costs of space debris mitigation methods

    NASA Astrophysics Data System (ADS)

    Wiedemann, C.; Krag, H.; Bendisch, J.; Sdunnus, H.

    2004-01-01

    The steadily increasing number of space objects poses a considerable hazard to all kinds of spacecraft. To reduce the risks to future space missions, different debris mitigation measures and spacecraft protection techniques have been investigated during the last years. However, the economic efficiency has not yet been considered in this context. Current studies have the objective of evaluating the mission costs due to space debris in a business-as-usual (no mitigation) scenario compared to the mission costs when debris mitigation is considered. The aim is an estimation of the time until the investment in debris mitigation will lead to an effective reduction of mission costs. This paper presents the results of investigations on the key issues of cost estimation for spacecraft and the influence of debris mitigation and shielding on cost. Mitigation strategies like the reduction of orbital lifetime and de- or re-orbit of non-operational satellites are methods to control the space debris environment. These methods result in an increase of costs. In a first step, the overall costs of different types of unmanned satellites are analyzed. A selected cost model is simplified and generalized for an application to all operational satellites. In a next step, the influence of space debris on cost is treated, when the implementation of mitigation strategies is considered.

  1. MERCURY MEASUREMENTS USING DIRECT-ANALYZER ...

    EPA Pesticide Factsheets

    Under EPA's Water Quality Research Program, exposure studies are needed to determine how well control strategies and guidance are working. Consequently, reliable and convenient techniques that minimize waste production are of special interest. While traditional methods for determining mercury in solid samples involve the use of aggressive chemicals to dissolve the matrix and the use of other chemicals to properly reduce the mercury to the volatile elemental form, pyrolysis-based analyzers can be used by directly weighing the solid in a sampling boat and initiating the instrumental analysis for total mercury. The research focused on in the subtasks is the development and application of state-of-the-art technologies to meet the needs of the public, Office of Water, and ORD in the area of Water Quality. Located in the subtasks are the various research projects being performed in support of this Task and more in-depth coverage of each project. Briefly, each project's objective is stated below. Subtask 1: To integrate state-of-the-art technologies (polar organic chemical integrative samplers, advanced solid-phase extraction methodologies with liquid chromatography/electrospray/mass spectrometry) and apply them to studying the sources and fate of a select list of PPCPs. Application and improvement of analytical methodologies that can detect non-volatile, polar, water-soluble pharmaceuticals in source waters at levels that could be environmentally significant (at con

  2. A Methodology to Analyze Photovoltaic Tracker Uptime

    SciTech Connect

    Muller, Matthew T; Ruth, Dan

    A metric is developed to analyze the daily performance of single-axis photovoltaic (PV) trackers. The metric relies on comparing correlations between the daily time series of the PV power output and an array of simulated plane-of-array irradiances for the given day. Mathematical thresholds and a logic sequence are presented, so the daily tracking metric can be applied in an automated fashion on large-scale PV systems. The results of applying the metric are visually examined against the time series of the power output data for a large number of days and for various systems. The visual inspection results suggest that overall, the algorithm is accurate in identifying stuck or functioning trackers on clear-sky days. Visual inspection also shows that there are days that are not classified by the metric where the power output data may be sufficient to identify a stuck tracker. Based on the daily tracking metric, uptime results are calculated for 83 different inverters at 34 PV sites. The mean tracker uptime is calculated at 99% based on 2 different calculation methods. The daily tracking metric clearly has limitations, but as there are no existing metrics in the literature, it provides a valuable tool for flagging stuck trackers.
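
    In outline, the daily decision reduces to comparing correlations. A toy Python version (our simplification; the paper's thresholds and clear-sky logic are more involved):

      import numpy as np

      def daily_tracking_metric(power, poa_tracking, poa_stuck, r_min=0.95):
          """Correlate a day of measured power with simulated plane-of-array
          irradiance for a functioning tracker and for a stuck surface;
          whichever correlates better classifies the day."""
          r_track = np.corrcoef(power, poa_tracking)[0, 1]
          r_stuck = np.corrcoef(power, poa_stuck)[0, 1]
          if max(r_track, r_stuck) < r_min:
              return "unclassified"        # e.g., a cloudy day
          return "tracking" if r_track >= r_stuck else "stuck"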

  3. Complete denture analyzed by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Negrutiu, Meda L.; Sinescu, Cosmin; Todea, Carmen; Podoleanu, Adrian G.

    2008-02-01

    Complete dentures are currently made using different technologies. In order to avoid deficiencies of the prostheses made using the classical technique, several alternative systems and procedures were devised, directly related to the material used and also to the manufacturing technology. Thus, at the present time, there are several injection systems and technologies on the market that use chemoplastic materials, which are heat cured (90-100°C), in a dry or wet environment, or cold cured (below 60°C). There are also technologies that plasticize a hard cured material by thermoplastic processing (without any chemical changes) and then inject it into a mold. The purpose of this study was to analyze the existence of possible defects in several dental prostheses using a non-invasive method, before their insertion in the mouth. Different dental prostheses, fabricated from various materials, were investigated using en-face optical coherence tomography. In order to discover the defects, the scanning was made in three planes, obtaining images at different depths, from 0.01 μm to 2 mm. In several of the investigated prostheses we found defects which may cause their fracture. These defects are totally included in the prosthesis material and cannot be visualized with other imaging methods. In conclusion, en-face OCT is an important investigative tool for dental practice.

  4. Stackable differential mobility analyzer for aerosol measurement

    SciTech Connect

    Cheng, Meng-Dawn; Chen, Da-Ren

    2007-05-08

    A multi-stage differential mobility analyzer (MDMA) for aerosol measurements includes a first electrode or grid including at least one inlet or injection slit for receiving an aerosol including charged particles for analysis. A second electrode or grid is spaced apart from the first electrode. The second electrode has at least one sampling outlet disposed at a plurality of different distances along its length. A volume between the first and the second electrode or grid between the inlet or injection slit and a distal one of the plurality of sampling outlets forms a classifying region, the first and second electrodes for charging to suitable potentials to create an electric field within the classifying region. At least one inlet or injection slit in the second electrode receives a sheath gas flow into an upstream end of the classifying region, wherein each sampling outlet functions as an independent DMA stage and classifies different size ranges of charged particles based on electric mobility simultaneously.

  5. Update on Integrated Optical Design Analyzer

    NASA Technical Reports Server (NTRS)

    Moore, James D., Jr.; Troy, Ed

    2003-01-01

    Updated information on the Integrated Optical Design Analyzer (IODA) computer program has become available. IODA was described in Software for Multidisciplinary Concurrent Optical Design (MFS-31452), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 8a. To recapitulate: IODA facilitates multidisciplinary concurrent engineering of highly precise optical instruments. The architecture of IODA was developed by reviewing design processes and software in an effort to automate design procedures. IODA significantly reduces design iteration cycle time and eliminates many potential sources of error. IODA integrates the modeling efforts of a team of experts in different disciplines (e.g., optics, structural analysis, and heat transfer) working at different locations and provides seamless fusion of data among thermal, structural, and optical models used to design an instrument. IODA is compatible with data files generated by the NASTRAN structural-analysis program and the Code V (Registered Trademark) optical-analysis program, and can be used to couple analyses performed by these two programs. IODA supports multiple-load-case analysis for quickly accomplishing trade studies. IODA can also model the transient response of an instrument under the influence of dynamic loads and disturbances.

  6. On geometric factors for neutral particle analyzers

    SciTech Connect

    Stagner, L.; Heidbrink, W. W.

    2014-11-15

    Neutral particle analyzers (NPA) detect neutralized energetic particles that escape from plasmas. Geometric factors relate the counting rate of the detectors to the intensity of the particle source. Accurate geometric factors enable quick simulation of geometric effects without the need to resort to slower Monte Carlo methods. Previously derived expressions [G. R. Thomas and D. M. Willis, “Analytical derivation of the geometric factor of a particle detector having circular or rectangular geometry,” J. Phys. E: Sci. Instrum. 5(3), 260 (1972); J. D. Sullivan, “Geometric factor and directional response of single and multi-element particle telescopes,” Nucl. Instrum. Methods 95(1), 5–11 (1971)] for the geometric factor implicitly assume that the particle source is very far away from the detector (far-field); this excludes applications close to the detector (near-field). The far-field assumption does not hold in most fusion applications of NPA detectors. We derive, from probability theory, a generalized framework for deriving geometric factors that are valid for both near and far-field applications as well as for non-isotropic sources and nonlinear particle trajectories.
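
    For a feel of the quantities involved, the slow Monte Carlo baseline for the classic far-field case (two coaxial disks under isotropic flux) is a few lines of Python (our sketch; the paper's framework generalizes to near-field, non-isotropic sources, and curved trajectories):

      import numpy as np

      def geometric_factor_mc(r1, r2, sep, n=1_000_000, seed=1):
          """Geometric factor (area x solid angle) of a two-disk telescope."""
          rng = np.random.default_rng(seed)
          rad = r1 * np.sqrt(rng.random(n))            # uniform point on disk 1
          ang = 2 * np.pi * rng.random(n)
          x, y = rad * np.cos(ang), rad * np.sin(ang)
          z = np.sqrt(rng.random(n))                   # cosine-weighted direction
          phi = 2 * np.pi * rng.random(n)
          s = np.sqrt(1 - z ** 2)
          t = sep / z                                  # advance to second plane
          hit = (x + t * s * np.cos(phi)) ** 2 + (y + t * s * np.sin(phi)) ** 2 <= r2 ** 2
          return np.pi * (np.pi * r1 ** 2) * hit.mean()

      # sanity check against the far-field limit pi^2 r1^2 r2^2 / sep^2:
      print(geometric_factor_mc(1.0, 1.0, 20.0))       # ~ 0.0246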

  7. Analyzing the extrusion mould for aluminum profile

    NASA Astrophysics Data System (ADS)

    Yun, Wang; Xu, Zhenying; Dai, Yachun; Dong, Peilong; Yuan, Guoding; Lan, Cai

    2007-12-01

    The die or mould used for extruding aluminum wallboard profiles works under severe conditions, so defects such as non-uniform stress and strain distributions, crack initiation and propagation, elastic warping, and even plastic distortion readily appear in it. The extrusion die or mould is subject to complex loads, including the extrusion pressure, friction, and thermal load, which make it complicated and difficult to design and analyze using conventional analytical methods. In this paper, we applied Deform-3D, finite element analysis (FEA) software used frequently across engineering fields, to simulate the three-dimensional extrusion of an aluminum profile. The simulation results show that the deformation increases gradually from inside to outside. The exterior deformation contour distribution is relatively uniform, since the influence of the inner holes on deformation is small, and the contour form is regular and similar to the shape of the mould. However, the interior deformation contours are irregular, owing to the influence of the holes, with basically symmetric equivalent curves. The deformation is largest at the middle of the mould, reaching 0.633 mm. The deformation of the mould can be reduced by increasing the distance between two holes or by increasing the thickness of the mould. Experimental results accord with the simulation. The simulation process and results confirm the feasibility of the finite element method, providing support for mould design and structural optimization.

  8. Development of the electric vehicle analyzer

    NASA Astrophysics Data System (ADS)

    Dickey, Michael R.; Klucz, Raymond S.; Ennix, Kimberly A.; Matuszak, Leo M.

    1990-06-01

    The increasing technological maturity of high power (greater than 20 kW) electric propulsion devices has led to renewed interest in their use as a means of efficiently transferring payloads between earth orbits. Several systems and architecture studies have identified the potential cost benefits of high performance Electric Orbital Transfer Vehicles (EOTVs). These studies led to the initiation of the Electric Insertion Transfer Experiment (ELITE) in 1988. Managed by the Astronautics Laboratory, ELITE is a flight experiment designed to sufficiently demonstrate key technologies and options to pave the way for the full-scale development of an operational EOTV. An important consideration in the development of the ELITE program is the capability of available analytical tools to simulate the orbital mechanics of a low thrust, electric propulsion transfer vehicle. These tools are necessary not only for ELITE mission planning exercises but also for continued, efficient, accurate evaluation of DoD space transportation architectures which include EOTVs. This paper presents such a tool: the Electric Vehicle Analyzer (EVA).
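
    To give a sense of the trajectories such a tool must capture: for continuous low thrust between circular orbits with a plane change, Edelbaum's classical approximation gives the Delta-V in closed form. A Python sketch (the standard formula; EVA's actual propagation is a full orbital-mechanics simulation):

      import numpy as np

      MU_EARTH = 3.986004418e14                  # m^3/s^2

      def edelbaum_dv(a0, a1, di_deg):
          """Low-thrust circle-to-circle transfer Delta-V with plane change."""
          v0, v1 = np.sqrt(MU_EARTH / a0), np.sqrt(MU_EARTH / a1)
          di = np.radians(di_deg)
          return np.sqrt(v0**2 + v1**2 - 2 * v0 * v1 * np.cos(np.pi / 2 * di))

      # LEO (7000 km radius) to GEO with a 28.5-degree plane change:
      print(edelbaum_dv(7.0e6, 42.164e6, 28.5) / 1e3)  # ~ 5.8 km/s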

  9. Citizen scientists analyzing tropical cyclone intensities

    NASA Astrophysics Data System (ADS)

    Hennon, Christopher C.

    2012-10-01

    A new crowd sourcing project called CycloneCenter enables the public to analyze historical global tropical cyclone (TC) intensities. The primary goal of CycloneCenter, which launched in mid-September, is to resolve discrepancies in the recent global TC record arising principally from inconsistent development of tropical cyclone intensity data. The historical TC record is composed of data sets called "best tracks," which contain a forecast agency's best assessment of TC tracks and intensities. Best track data have improved in quality since the beginning of the geostationary satellite era in the 1960s (because TCs could no longer disappear from sight). However, a global compilation of best track data (International Best Track Archive for Climate Stewardship (IBTrACS)) has brought to light large interagency differences between some TC best track intensities, even in the recent past [Knapp et al., 2010]. For example, maximum wind speed estimates for Tropical Cyclone Gay (1989) differed by as much as 70 knots as it was tracked by three different agencies.

  10. Signal processing and analyzing works of art

    NASA Astrophysics Data System (ADS)

    Johnson, Don H.; Johnson, C. Richard, Jr.; Hendriks, Ella

    2010-08-01

    In examining paintings, art historians use a wide variety of physico-chemical methods to determine, for example, the paints, the ground (canvas primer) and any underdrawing the artist used. However, the art world has been little touched by signal processing algorithms. Our work develops algorithms to examine x-ray images of paintings, not to analyze the artist's brushstrokes but to characterize the weave of the canvas that supports the painting. The physics of radiography indicates that linear processing of the x-rays is most appropriate. Our spectral analysis algorithms have an accuracy superior to human spot-measurements and have the advantage that, through "short-space" Fourier analysis, they can be readily applied to entire x-rays. We have found that variations in the manufacturing process create a unique pattern of horizontal and vertical thread density variations in the bolts of canvas produced. In addition, we measure the thread angles, providing a way to determine the presence of cusping and to infer the location of the tacks used to stretch the canvas on a frame during the priming process. We have developed weave matching software that employs a new correlation measure to find paintings that share canvas weave characteristics. Using a corpus of over 290 paintings attributed to Vincent van Gogh, we have found several weave match cliques that we believe will refine the art historical record and provide more insight into the artist's creative processes.
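
    The core spectral step is compact. Given a small x-ray patch at a known scan resolution, the dominant spatial-frequency peaks along the two axes estimate the horizontal and vertical thread densities; the full method applies this in overlapping windows ("short-space" Fourier analysis) and adds thread-angle estimation. A Python sketch with assumed names:

      import numpy as np

      def thread_density(patch, dpi):
          """Threads per cm along each axis from the 2-D spectrum peaks."""
          f = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
          fy = np.fft.fftshift(np.fft.fftfreq(patch.shape[0], d=2.54 / dpi))
          fx = np.fft.fftshift(np.fft.fftfreq(patch.shape[1], d=2.54 / dpi))
          cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
          iy = cy + 1 + np.argmax(f[cy + 1:, cx])   # strongest vertical peak
          ix = cx + 1 + np.argmax(f[cy, cx + 1:])   # strongest horizontal peak
          return abs(fy[iy]), abs(fx[ix])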

  11. Analyzing the development of Indonesia shrimp industry

    NASA Astrophysics Data System (ADS)

    Wati, L. A.

    2018-04-01

    This research aimed to analyze the development of the shrimp industry in Indonesia. Porter's Diamond theory, a framework for industry analysis and business-strategy development, was used for the analysis. (Porter's related five-forces framework identifies five determinants of competitive intensity in an industry: (1) the threat of substitute products, (2) the threat of competition, (3) the threat of new entrants, (4) the bargaining power of suppliers, and (5) the bargaining power of consumers.) The development of the Indonesian shrimp industry is fairly good, as the Porter Diamond analysis explains. The analysis proceeds through the four main components of the diamond, namely factor conditions; demand conditions; related and supporting industries; and firm strategy, structure, and rivalry, coupled with two supporting components (government regulation and the factor of chance). The results show that the two supporting components have a positive effect; related and supporting industries have a negative effect; firm strategy and structure have a negative effect, while rivalry has a positive effect; and factor conditions have a positive effect (except for science and technology resources).

  12. A Method for Analyzing Volunteered Geographic Information ...

    EPA Pesticide Factsheets

    Volunteered geographic information (VGI) can be used to identify public valuation of ecosystem services in a defined geographic area using photos as a representation of lived experiences. This method can help researchers better survey and report on the values and preferences of stakeholders involved in rehabilitation and revitalization projects. Current research utilizes VGI in the form of geotagged social media photos from three platforms: Flickr, Instagram, and Panoramio. Social media photos have been obtained for the neighborhoods next to the St. Louis River in Duluth, Minnesota, and are being analyzed along several dimensions. These dimensions include the spatial distribution of each platform, the characteristics of the physical environment portrayed in the photos, and finally, the ecosystem service depicted. In this poster, we focus on the photos from the Irving and Fairmount neighborhoods of Duluth, MN to demonstrate the method at the neighborhood scale. This study demonstrates a method for translating the values expressed in social media photos into ecosystem services and spatially-explicit data to be used in multiple settings, including the City of Duluth’s Comprehensive Planning and community revitalization efforts, habitat restoration in a Great Lakes Area of Concern, and the USEPA’s Office of Research and Development.

  13. Multiple capillary biochemical analyzer with barrier member

    DOEpatents

    Dovichi, Norman J.; Zhang, Jian Z.

    1996-01-01

    A multiple capillary biochemical analyzer for sequencing DNA and performing other analyses, in which a set of capillaries extends from wells in a microtiter plate into a cuvette. In the cuvette the capillaries are held on fixed closely spaced centers by passing through a sandwich construction having a pair of metal shims which squeeze between them a rubber gasket, forming a leak proof seal for an interior chamber in which the capillary ends are positioned. Sheath fluid enters the chamber and entrains filament sample streams from the capillaries. The filament sample streams, and sheath fluid, flow through aligned holes in a barrier member spaced close to the capillary ends, into a collection chamber having a lower glass window. The filament streams are illuminated above the barrier member by a laser, causing them to fluoresce. The fluorescence is viewed end-on by a CCD camera chip located below the glass window. The arrangement ensures an equal optical path length from all fluorescing spots to the CCD chip and also blocks scattered fluorescence illumination, providing more uniform results and an improved signal to noise ratio.

  14. Multiple capillary biochemical analyzer with barrier member

    DOEpatents

    Dovichi, N.J.; Zhang, J.Z.

    1996-10-22

    A multiple capillary biochemical analyzer is disclosed for sequencing DNA and performing other analyses, in which a set of capillaries extends from wells in a microtiter plate into a cuvette. In the cuvette the capillaries are held on fixed closely spaced centers by passing through a sandwich construction having a pair of metal shims which squeeze between them a rubber gasket, forming a leak proof seal for an interior chamber in which the capillary ends are positioned. Sheath fluid enters the chamber and entrains filament sample streams from the capillaries. The filament sample streams, and sheath fluid, flow through aligned holes in a barrier member spaced close to the capillary ends, into a collection chamber having a lower glass window. The filament streams are illuminated above the barrier member by a laser, causing them to fluoresce. The fluorescence is viewed end-on by a CCD camera chip located below the glass window. The arrangement ensures an equal optical path length from all fluorescing spots to the CCD chip and also blocks scattered fluorescence illumination, providing more uniform results and an improved signal-to-noise ratio. 12 figs.

  15. A framework to analyze emissions implications of ...

    EPA Pesticide Factsheets

    Future year emissions depend strongly on the evolution of the economy, technology, and current and future regulatory drivers. A scenario framework was adopted to analyze various technology development pathways and societal changes, while considering existing regulations and future regulatory uncertainty, and to evaluate the resulting emissions growth patterns. The framework integrates EPA's energy systems model with an economic Input-Output (I/O) Life Cycle Assessment model. The EPAUS9r MARKAL database is assembled from a set of technologies to represent the U.S. energy system within the MARKAL bottom-up, technology-rich energy modeling framework. The general state of the economy and the consequent demands for goods and services from these sectors are taken exogenously in MARKAL. It is important to characterize exogenous inputs about the economy to appropriately represent the industrial sector outlook for each of the scenarios and case studies evaluated. An economic input-output (I/O) model of the US economy is constructed to link up with MARKAL. The I/O model enables the user to change input requirements (e.g. energy intensity) for different sectors or the share of consumer income expended on a given good. This gives end-users a mechanism for modeling change along the two dimensions of technological progress and consumer preferences that define the future scenarios. The framework will then be extended to include an environmental I/O framework to track life cycle emissions associated
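
    The economic I/O step has a standard mathematical core: total sector output x satisfies x = Ax + d for a technical-coefficient matrix A and final demand d, so x = (I - A)^-1 d (the Leontief relation). A toy sketch with an invented three-sector matrix, not EPA's actual model:

        import numpy as np

        # A[i, j]: dollars of input from sector i per dollar of sector j's output.
        A = np.array([[0.10, 0.20, 0.05],
                      [0.15, 0.05, 0.10],
                      [0.05, 0.10, 0.15]])
        d = np.array([100.0, 250.0, 80.0])       # final demand by sector

        x = np.linalg.solve(np.eye(3) - A, d)    # total output: x = A x + d
        print("total sector output:", x)

        # Changing an input requirement (e.g. a sector's energy intensity)
        # propagates through the whole economy:
        A_eff = A.copy()
        A_eff[0, 1] *= 0.9                       # 10% cut in one coefficient
        print("after efficiency gain:", np.linalg.solve(np.eye(3) - A_eff, d))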

  16. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
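
    The Pareto analysis mentioned here amounts to ranking error categories by frequency and tracking their cumulative share. A sketch with invented category counts (only the 1580 total and the roughly 38 percent human-error share echo the abstract):

        from collections import Counter

        counts = Counter(human=600, software=500, hardware=300, other=180)
        total = sum(counts.values())             # 1580 reports
        cumulative = 0.0
        for cause, n in counts.most_common():    # largest categories first
            share = 100.0 * n / total
            cumulative += share
            print(f"{cause:9s} {n:5d} {share:5.1f}%  cumulative {cumulative:5.1f}%")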

  17. An informatics approach to analyzing the incidentalome.

    PubMed

    Berg, Jonathan S; Adams, Michael; Nassar, Nassib; Bizon, Chris; Lee, Kristy; Schmitt, Charles P; Wilhelmsen, Kirk C; Evans, James P

    2013-01-01

    Next-generation sequencing has transformed genetic research and is poised to revolutionize clinical diagnosis. However, the vast amount of data and inevitable discovery of incidental findings require novel analytic approaches. We therefore implemented for the first time a strategy that utilizes an a priori structured framework and a conservative threshold for selecting clinically relevant incidental findings. We categorized 2,016 genes linked with Mendelian diseases into "bins" based on clinical utility and validity, and used a computational algorithm to analyze 80 whole-genome sequences in order to explore the use of such an approach in a simulated real-world setting. The algorithm effectively reduced the number of variants requiring human review and identified incidental variants with likely clinical relevance. Incorporation of the Human Gene Mutation Database improved the yield for missense mutations but also revealed that a substantial proportion of purported disease-causing mutations were misleading. This approach is adaptable to any clinically relevant bin structure, scalable to the demands of a clinical laboratory workflow, and flexible with respect to advances in genomics. We anticipate that application of this strategy will facilitate pretest informed consent, laboratory analysis, and posttest return of results in a clinical context.
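
    The bin-based triage can be pictured as a simple filter: a variant is surfaced only when its gene falls in a clinically actionable bin and its evidence clears a conservative threshold. The bin assignments, variant fields and threshold below are hypothetical placeholders, not the study's actual criteria:

        # Hypothetical gene-to-bin assignments.
        GENE_BINS = {"BRCA2": "bin1_actionable", "TTN": "bin3_uncertain"}

        def triage(variants, reportable_bins=("bin1_actionable",), min_evidence=0.95):
            """Keep variants in actionable bins with strong evidence, shrinking
            the set that requires human review."""
            return [v for v in variants
                    if GENE_BINS.get(v["gene"]) in reportable_bins
                    and v["evidence"] >= min_evidence]

        variants = [
            {"gene": "BRCA2", "variant": "c.5946del", "evidence": 0.99},
            {"gene": "TTN",   "variant": "c.2T>C",    "evidence": 0.97},
            {"gene": "BRCA2", "variant": "c.68A>G",   "evidence": 0.40},
        ]
        print(triage(variants))    # only the first variant survives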

  18. Software Analyzes Complex Systems in Real Time

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Expert system software programs, also known as knowledge-based systems, are computer programs that emulate the knowledge and analytical skills of one or more human experts, related to a specific subject. SHINE (Spacecraft Health Inference Engine) is one such program, a software inference engine (expert system) designed by NASA for the purpose of monitoring, analyzing, and diagnosing both real-time and non-real-time systems. It was developed to meet many of the Agency's demanding and rigorous artificial intelligence goals for current and future needs. NASA developed the sophisticated and reusable software based on the experience and requirements of its Jet Propulsion Laboratory's (JPL) Artificial Intelligence Research Group in developing expert systems for space flight operations, specifically the diagnosis of spacecraft health. It was designed to be efficient enough to operate in demanding real time and in limited hardware environments, and to be utilized by non-expert systems applications written in conventional programming languages. The technology is currently used in several ongoing NASA applications, including the Mars Exploration Rovers and the Spacecraft Health Automatic Reasoning Pilot (SHARP) program for the diagnosis of telecommunication anomalies during the Neptune Voyager Encounter. It is also finding applications outside of the Space Agency.

  19. Analyzing handwriting biometrics in metadata context

    NASA Astrophysics Data System (ADS)

    Scheidat, Tobias; Wolf, Franziska; Vielhauer, Claus

    2006-02-01

    In this article, methods for user recognition by online handwriting are experimentally analyzed using a combination of demographic data of users in relation to their handwriting habits. Online handwriting as a biometric method is characterized by high variations of characteristics that influence the reliability and security of this method. These variations have not been researched in detail so far. Especially in cross-cultural applications it is urgent to reveal the impact of personal background on security aspects in biometrics. Metadata represent the background of writers by introducing cultural, biological and conditional (changing) aspects like first language, country of origin, gender, handedness, and experience that influence handwriting and language skills. The goal is the revelation of intercultural impacts on handwriting in order to achieve higher security in biometric systems. In our experiments, in order to achieve relatively high coverage, 48 different handwriting tasks accomplished by 47 users from three countries (Germany, India and Italy) have been investigated with respect to the relations between metadata and biometric recognition performance. For this purpose, hypotheses have been formulated and evaluated using well-known recognition error rates from biometrics. The evaluation addressed both system reliability and security threats from skilled forgeries. For the latter purpose, a novel forgery type is introduced, which applies the personal metadata to security aspects and includes new methods of security tests. Finally, we formulate recommendations for specific user groups and handwriting samples.
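
    The recognition error rates referred to here are the standard biometric quantities: the false rejection rate (genuine samples rejected) and the false acceptance rate (forgeries accepted), often summarized by the equal error rate where the two are balanced. A minimal sketch, assuming higher scores mean better matches:

        import numpy as np

        def error_rates(genuine, impostor, threshold):
            genuine, impostor = np.asarray(genuine), np.asarray(impostor)
            frr = np.mean(genuine < threshold)     # genuine attempts rejected
            far = np.mean(impostor >= threshold)   # forgeries accepted
            return far, frr

        def equal_error_rate(genuine, impostor):
            """Scan observed scores as thresholds; report where FAR ~ FRR."""
            thresholds = np.unique(np.concatenate([genuine, impostor]))
            best = min(thresholds, key=lambda t: abs(
                np.subtract(*error_rates(genuine, impostor, t))))
            far, frr = error_rates(genuine, impostor, best)
            return (far + frr) / 2.0, best

        genuine = [0.82, 0.91, 0.78, 0.88, 0.95]
        impostor = [0.40, 0.55, 0.61, 0.35, 0.70]
        print(equal_error_rate(genuine, impostor))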

  20. Numerical Propulsion System Simulation Architecture

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia G.

    2004-01-01

    The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.

  1. Numerical Analysis in Fracture Mechanics.

    DTIC Science & Technology

    1983-01-20

    pressurization has also been solved [66] by the HEMP code. The advantage of such a supercode, however, lies in its ability to analyze elastic-plastic ... analyzing the elasto-dynamic and elastic-plastic dynamic states in fracturing 2- and 3-D problems. The use of a super finite difference code to study ... the finite difference elastic-plastic result of Jacobs in 1950 [2], which was followed by others in the 1960's [3-5]. Swedlow et al [6], on the other hand ...

  2. Long-range temporal correlations in the Kardar-Parisi-Zhang growth: numerical simulations

    NASA Astrophysics Data System (ADS)

    Song, Tianshu; Xia, Hui

    2016-11-01

    To analyze long-range temporal correlations in surface growth, we study numerically the (1  +  1)-dimensional Kardar-Parisi-Zhang (KPZ) equation driven by temporally correlated noise, and obtain the scaling exponents based on two different numerical methods. Our simulations show that the numerical results are in good agreement with the dynamic renormalization group (DRG) predictions, and are also consistent with the simulation results of the ballistic deposition (BD) model.
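
    For readers unfamiliar with the model, the (1+1)-dimensional KPZ equation is dh/dt = nu*h_xx + (lambda/2)*(h_x)^2 + eta. Below is a bare-bones explicit Euler integrator with periodic boundaries; it draws uncorrelated (white) noise by default, so reproducing the paper's temporally correlated driving would require passing in a suitably generated eta sequence:

        import numpy as np

        rng = np.random.default_rng(0)

        def kpz_step(h, dt, dx, nu=1.0, lam=1.0, eta=None):
            """One explicit Euler step of 1+1 KPZ with periodic boundaries."""
            hp, hm = np.roll(h, -1), np.roll(h, 1)
            lap = (hp - 2.0 * h + hm) / dx**2      # central second difference
            grad = (hp - hm) / (2.0 * dx)          # central first difference
            if eta is None:                        # white-noise default
                eta = rng.standard_normal(h.size) / np.sqrt(dt * dx)
            return h + dt * (nu * lap + 0.5 * lam * grad**2 + eta)

        L, dx, dt = 256, 1.0, 0.01
        h = np.zeros(L)
        for _ in range(10000):
            h = kpz_step(h, dt, dx)
        print("interface width:", h.std())   # grows as t^beta in the KPZ class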

  3. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Li, J.; Zhang, J.; Wang, W.

    2015-12-01

    Both the National Research Council Decadal Survey and the latest Intergovernmental Panel on Climate Change Assessment Report stressed the need for the comprehensive and innovative evaluation of climate models with the synergistic use of global satellite observations in order to improve our weather and climate simulation and prediction capabilities. The abundance of satellite observations for fundamental climate parameters and the availability of coordinated model outputs from CMIP5 for the same parameters offer a great opportunity to understand and diagnose model biases in climate models. In addition, the Obs4MIPs efforts have created several key global observational datasets that are readily usable for model evaluations. However, a model diagnostic evaluation process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. In response, we have developed a novel methodology to diagnose model biases in contemporary climate models and have implemented the methodology as a web-service based, cloud-enabled, provenance-supported climate-model evaluation system. The evaluation system is named Climate Model Diagnostic Analyzer (CMDA), which is the product of the research and technology development investments of several current and past NASA ROSES programs. The current technologies and infrastructure of CMDA are designed and selected to address several technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. In particular, we have three key technology components: (1) diagnostic analysis methodology; (2) web-service based, cloud-enabled technology; (3) provenance-supported technology. The diagnostic analysis methodology includes random forest feature importance ranking, conditional probability distribution function, conditional sampling, and time-lagged correlation map. We have implemented the
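
    One of the named diagnostics, the time-lagged correlation map, reduces to correlating one variable against lagged copies of another. A minimal sketch for two 1-D time series (the actual tool operates on gridded fields and produces maps):

        import numpy as np

        def lagged_correlation(x, y, max_lag):
            """Correlation at each lag; positive lag means y lags x."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            out = {}
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    a, b = x[:len(x) - lag], y[lag:]
                else:
                    a, b = x[-lag:], y[:lag]
                out[lag] = float(np.mean(a * b))
            return out

        t = np.arange(500)
        x = np.sin(0.1 * t) + 0.1 * np.random.default_rng(0).standard_normal(500)
        y = np.roll(x, 5)                       # y lags x by 5 samples
        corr = lagged_correlation(x, y, 10)
        print(max(corr, key=corr.get))          # recovers the lag: 5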

  4. Analyzing petabytes of data with Hadoop

    ScienceCinema

    Hammerbacher, Jeff

    2018-05-14

    The open source Apache Hadoop project provides a powerful suite of tools for storing and analyzing petabytes of data using commodity hardware. After several years of production use inside of web companies like Yahoo! and Facebook and nearly a year of commercial support and development by Cloudera, the technology is spreading rapidly through other disciplines, from financial services and government to life sciences and high energy physics. The talk will motivate the design of Hadoop and discuss some key implementation details in depth. It will also cover the major subprojects in the Hadoop ecosystem, go over some example applications, highlight best practices for deploying Hadoop in your environment, discuss plans for the future of the technology, and provide pointers to the many resources available for learning more. In addition to providing more information about the Hadoop platform, a major goal of this talk is to begin a dialogue with the ATLAS research team on how the tools commonly used in their environment compare to Hadoop, and how Hadoop could be improved to better serve the high energy physics community. Short Biography: Jeff Hammerbacher is Vice President of Products and Chief Scientist at Cloudera. Jeff was an Entrepreneur in Residence at Accel Partners immediately prior to founding Cloudera. Before Accel, he conceived, built, and led the Data team at Facebook. The Data team was responsible for driving many of the applications of statistics and machine learning at Facebook, as well as building out the infrastructure to support these tasks for massive data sets. The team produced two open source projects: Hive, a system for offline analysis built above Hadoop, and Cassandra, a structured storage system on a P2P network. Before joining Facebook, Jeff was a quantitative analyst on Wall Street. Jeff earned his Bachelor's Degree in Mathematics from Harvard University and recently served as contributing editor to the book "Beautiful Data", published by O'Reilly in

  5. Analyzing petabytes of data with Hadoop

    SciTech Connect

    Hammerbacher, Jeff

    The open source Apache Hadoop project provides a powerful suite of tools for storing and analyzing petabytes of data using commodity hardware. After several years of production use inside of web companies like Yahoo! and Facebook and nearly a year of commercial support and development by Cloudera, the technology is spreading rapidly through other disciplines, from financial services and government to life sciences and high energy physics. The talk will motivate the design of Hadoop and discuss some key implementation details in depth. It will also cover the major subprojects in the Hadoop ecosystem, go over some example applications, highlight best practices for deploying Hadoop in your environment, discuss plans for the future of the technology, and provide pointers to the many resources available for learning more. In addition to providing more information about the Hadoop platform, a major goal of this talk is to begin a dialogue with the ATLAS research team on how the tools commonly used in their environment compare to Hadoop, and how Hadoop could be improved to better serve the high energy physics community. Short Biography: Jeff Hammerbacher is Vice President of Products and Chief Scientist at Cloudera. Jeff was an Entrepreneur in Residence at Accel Partners immediately prior to founding Cloudera. Before Accel, he conceived, built, and led the Data team at Facebook. The Data team was responsible for driving many of the applications of statistics and machine learning at Facebook, as well as building out the infrastructure to support these tasks for massive data sets. The team produced two open source projects: Hive, a system for offline analysis built above Hadoop, and Cassandra, a structured storage system on a P2P network. Before joining Facebook, Jeff was a quantitative analyst on Wall Street. Jeff earned his Bachelor's Degree in Mathematics from Harvard University and recently served as contributing editor to the book "Beautiful Data", published by O

  6. Analyzing costs of space debris mitigation methods

    NASA Astrophysics Data System (ADS)

    Wiedemann, C.; Krag, H.; Bendisch, J.; Sdunnus, H.

    The steadily increasing number of space objects poses a considerable hazard to all kinds of spacecraft. To reduce the risks to future space missions, different debris mitigation measures and spacecraft protection techniques have been investigated during the last years. However, the economic efficiency has not yet been considered in this context, and the economic background is not always clear to satellite operators and the space industry. Current studies have the objective of evaluating the mission costs due to space debris in a business-as-usual (no mitigation) scenario compared to the mission costs when debris mitigation is considered. The aim is an estimation of the time until the investment in debris mitigation will lead to an effective reduction of mission costs. This paper presents the results of investigations on the key problems of cost estimation for spacecraft and the influence of debris mitigation and shielding on cost. The shielding of a satellite can be an effective method to protect the spacecraft against debris impact. Mitigation strategies like the reduction of orbital lifetime and de- or re-orbit of non-operational satellites are methods to control the space debris environment. These methods result in an increase of costs. In a first step, the overall costs of different types of unmanned satellites are analyzed. The key problem is that it is not possible to provide a simple cost model that can be applied to all types of satellites. Unmanned spacecraft differ very much in mission, complexity of design, payload and operational lifetime. It is important to classify relevant cost parameters and investigate their influence on the respective mission. The theory of empirical cost estimation and existing cost models are discussed. A selected cost model is simplified and generalized for application to all operational satellites. In a next step, the influence of space debris on cost is treated, considering the implementation of mitigation strategies.

  7. Analyzing Personalized Policies for Online Biometric Verification

    PubMed Central

    Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M.

    2014-01-01

    Motivated by India’s nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident’s biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India’s program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India’s biometric program. The mean delay is sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32–41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident. PMID:24787752
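
    The decision rule itself is compact: multiply per-modality likelihood ratios and compare against a threshold chosen to meet the FAR constraint. The Gaussian score models and threshold below are invented for illustration and assume independent scores, whereas the paper fits a joint distribution over all 12 similarity scores:

        import math

        # Hypothetical (mean, std) score models per modality.
        GENUINE = {"finger": (70.0, 10.0), "iris": (80.0, 8.0)}
        IMPOSTOR = {"finger": (30.0, 12.0), "iris": (25.0, 10.0)}

        def gauss_pdf(x, mu, sigma):
            return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

        def likelihood_ratio(scores):
            """Product of p(score | genuine) / p(score | imposter) over the
            acquired modalities, assuming independence."""
            lr = 1.0
            for modality, s in scores.items():
                lr *= gauss_pdf(s, *GENUINE[modality]) / gauss_pdf(s, *IMPOSTOR[modality])
            return lr

        def verify(scores, tau=100.0):          # tau tuned to the FAR target
            return "genuine" if likelihood_ratio(scores) >= tau else "imposter"

        print(verify({"finger": 65.0, "iris": 78.0}))   # -> genuine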

  8. Analyzing wildfire exposure on Sardinia, Italy

    NASA Astrophysics Data System (ADS)

    Salis, Michele; Ager, Alan A.; Arca, Bachisio; Finney, Mark A.; Alcasena, Fermin; Bacciu, Valentina; Duce, Pierpaolo; Munoz Lozano, Olga; Spano, Donatella

    2014-05-01

    We used simulation modeling based on the minimum travel time algorithm (MTT) to analyze wildfire exposure of key ecological, social and economic features on Sardinia, Italy. Sardinia is the second largest island of the Mediterranean Basin, and in the last fifty years has experienced large and dramatic wildfires, which caused losses and threatened urban interfaces, forests and natural areas, and agricultural production. Historical fires and environmental data for the period 1995-2009 were used as input to estimate fine scale burn probability, conditional flame length, and potential fire size in the study area. For this purpose, we simulated 100,000 wildfire events within the study area, randomly drawing from the observed frequency distribution of burn periods and wind directions for each fire. Estimates of burn probability, excluding non-burnable fuels, ranged from 0 to 1.92x10^-3, with a mean value of 6.48x10^-5. Overall, the outputs provided a quantitative assessment of wildfire exposure at the landscape scale and captured landscape properties of wildfire exposure. We then examined how the exposure profiles varied among and within selected features and assets located on the island. Spatial variation in modeled outputs showed a strong effect of fuel models, coupled with slope and weather. In particular, the combined effect of Mediterranean maquis, woodland areas and complex topography on flame length was relevant, mainly in north-east Sardinia, whereas areas with herbaceous fuels and flat areas were in general characterized by lower fire intensity but higher burn probability. The simulation modeling proposed in this work provides a quantitative approach to inform wildfire risk management activities, and represents one of the first applications of burn probability modeling to capture fire risk and exposure profiles in the Mediterranean basin.
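
    Burn probability estimation of this kind is, at heart, bookkeeping over many simulated fires: each cell's estimate is the fraction of simulations in which it burned. The sketch below substitutes a crude wind-stretched elliptical footprint for a real minimum-travel-time spread model, and all distributions are invented:

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_fire(shape, wind_dir, burn_hours):
            """Stub footprint: a wind-elongated ellipse of burned cells."""
            y0, x0 = rng.integers(shape[0]), rng.integers(shape[1])
            ry = 1.0 + burn_hours * (1.0 + 0.5 * np.cos(wind_dir))
            rx = 1.0 + burn_hours * (1.0 + 0.5 * np.sin(wind_dir))
            yy, xx = np.ogrid[:shape[0], :shape[1]]
            return ((yy - y0) / ry) ** 2 + ((xx - x0) / rx) ** 2 <= 1.0

        shape, n_fires = (200, 200), 2000
        burn_count = np.zeros(shape)
        for _ in range(n_fires):
            wind = rng.uniform(0.0, 2.0 * np.pi)               # wind-direction draw
            hours = rng.choice([2, 4, 8], p=[0.5, 0.3, 0.2])   # burn-period draw
            burn_count += simulate_fire(shape, wind, hours)
        burn_probability = burn_count / n_fires
        print("max cell burn probability:", burn_probability.max())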

  9. Analyzing personalized policies for online biometric verification.

    PubMed

    Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M

    2014-01-01

    Motivated by India's nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident's biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India's program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India's biometric program. The mean delay is [Formula: see text] sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32-41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident.

  10. Numerical modelling in biosciences using delay differential equations

    NASA Astrophysics Data System (ADS)

    Bocharov, Gennadii A.; Rihan, Fathalla A.

    2000-12-01

    Our principal purposes here are (i) to consider, from the perspective of applied mathematics, models of phenomena in the biosciences that are based on delay differential equations and for which numerical approaches are a major tool in understanding their dynamics, (ii) to review the application of numerical techniques to investigate these models. We show that there are prima facie reasons for using such models: (i) they have a richer mathematical framework (compared with ordinary differential equations) for the analysis of biosystem dynamics, (ii) they display better consistency with the nature of certain biological processes and predictive results. We analyze both the qualitative and quantitative role that delays play in basic time-lag models proposed in population dynamics, epidemiology, physiology, immunology, neural networks and cell kinetics. We then indicate suitable computational techniques for the numerical treatment of mathematical problems emerging in the biosciences, comparing them with those implemented by the bio-modellers.
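
    As a concrete instance of the time-lag models discussed, Hutchinson's delayed logistic equation N'(t) = r N(t) (1 - N(t - tau)/K) can be integrated with a plain Euler scheme and a history buffer; for r*tau > pi/2 the equilibrium at K loses stability and sustained oscillations appear, behavior the ordinary logistic equation cannot produce. A minimal sketch:

        import numpy as np

        r, K, tau = 1.8, 1.0, 1.0           # r*tau = 1.8 > pi/2: oscillatory
        dt, t_end = 0.001, 40.0
        lag = int(round(tau / dt))          # the delay measured in steps

        n_steps = int(t_end / dt)
        N = np.empty(n_steps + 1)
        N[0] = 0.1                          # constant history N(t <= 0) = 0.1

        for i in range(n_steps):
            delayed = N[i - lag] if i >= lag else N[0]   # history lookup
            N[i + 1] = N[i] + dt * r * N[i] * (1.0 - delayed / K)

        print("late-time min/max:", N[-lag:].min(), N[-lag:].max())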

  11. Numerical calculations of velocity and pressure distribution around oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.; Kobiske, M.

    1974-01-01

    An analytical procedure based on the Navier-Stokes equations was developed for analyzing and representing properties of unsteady viscous flow around oscillating obstacles. A variational formulation of the vorticity transport equation was discretized in finite element form and integrated numerically. At each time step of the numerical integration, the velocity field around the obstacle was determined for the instantaneous vorticity distribution from the finite element solution of Poisson's equation. The time-dependent boundary conditions around the oscillating obstacle were introduced as external constraints, using the Lagrangian Multiplier Technique, at each time step of the numerical integration. The procedure was then applied for determining pressures around obstacles oscillating in unsteady flow. The obtained results for a cylinder and an airfoil were illustrated in the form of streamlines and vorticity and pressure distributions.

  12. Thromboelastography platelet mapping in healthy dogs using 1 analyzer versus 2 analyzers.

    PubMed

    Blois, Shauna L; Banerjee, Amrita; Wood, R Darren; Park, Fiona M

    2013-07-01

    The objective of this study was to describe the results of thromboelastography platelet mapping (TEG-PM) carried out using 2 techniques in 20 healthy dogs. Maximum amplitudes (MA) generated by thrombin (MAthrombin), fibrin (MAfibrin), adenosine diphosphate (ADP) receptor activity (MAADP), and thromboxane A2 (TxA2) receptor activity (stimulated by arachidonic acid, MAAA) were recorded. Thromboelastography platelet mapping was carried out according to the manufacturer's guidelines (2-analyzer technique) and using a variation of this method employing only 1 analyzer (1-analyzer technique) on 2 separate blood samples obtained from each dog. Mean [± standard deviation (SD)] MA values for the 1-analyzer/2-analyzer techniques were: MAthrombin = 51.9 mm (± 7.1)/52.5 mm (± 8.0); MAfibrin = 20.7 mm (± 21.8)/23.0 mm (± 26.1); MAADP = 44.5 mm (± 15.6)/45.6 mm (± 17.0); and MAAA = 45.7 mm (± 11.6)/45.0 mm (± 15.4). Mean (± SD) percentage aggregation due to ADP receptor activity was 70.4% (± 32.8)/67.6% (± 33.7). Mean percentage aggregation due to TxA2 receptor activity was 77.3% (± 31.6)/78.1% (± 50.2). Results of TEG-PM were not significantly different for the 1-analyzer and 2-analyzer methods. High correlation was found between the 2 methods for MAfibrin [concordance correlation coefficient (r) = 0.930]; moderate correlation was found for MAthrombin (r = 0.70) and MAADP (r = 0.57); correlation between the 2 methods for MAAA was lower (r = 0.32). Thromboelastography platelet mapping (TEG-PM) should be further investigated to determine if it is a suitable method for measuring platelet dysfunction in dogs with thrombopathy.
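
    The agreement statistic quoted here, Lin's concordance correlation coefficient, is straightforward to compute from paired measurements: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A sketch with invented MA pairs (the study's data are not reproduced):

        import numpy as np

        def concordance_ccc(x, y):
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxy = np.mean((x - x.mean()) * (y - y.mean()))
            return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        one_analyzer = [51.2, 44.9, 60.3, 48.7, 55.0]   # hypothetical MA (mm)
        two_analyzer = [52.0, 46.1, 59.8, 47.5, 56.2]
        print(round(concordance_ccc(one_analyzer, two_analyzer), 3))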

  13. Thromboelastography platelet mapping in healthy dogs using 1 analyzer versus 2 analyzers

    PubMed Central

    Blois, Shauna L.; Banerjee, Amrita; Wood, R. Darren; Park, Fiona M.

    2013-01-01

    The objective of this study was to describe the results of thromboelastography platelet mapping (TEG-PM) carried out using 2 techniques in 20 healthy dogs. Maximum amplitudes (MA) generated by thrombin (MAthrombin), fibrin (MAfibrin), adenosine diphosphate (ADP) receptor activity (MAADP), and thromboxane A2 (TxA2) receptor activity (stimulated by arachidonic acid, MAAA) were recorded. Thromboelastography platelet mapping was carried out according to the manufacturer’s guidelines (2-analyzer technique) and using a variation of this method employing only 1 analyzer (1-analyzer technique) on 2 separate blood samples obtained from each dog. Mean [± standard deviation (SD)] MA values for the 1-analyzer/2-analyzer techniques were: MAthrombin = 51.9 mm (± 7.1)/52.5 mm (± 8.0); MAfibrin = 20.7 mm (± 21.8)/23.0 mm (± 26.1); MAADP = 44.5 mm (± 15.6)/45.6 mm (± 17.0); and MAAA = 45.7 mm (± 11.6)/45.0 mm (± 15.4). Mean (± SD) percentage aggregation due to ADP receptor activity was 70.4% (± 32.8)/67.6% (± 33.7). Mean percentage aggregation due to TxA2 receptor activity was 77.3% (± 31.6)/78.1% (± 50.2). Results of TEG-PM were not significantly different for the 1-analyzer and 2-analyzer methods. High correlation was found between the 2 methods for MAfibrin [concordance correlation coefficient (r) = 0.930]; moderate correlation was found for MAthrombin (r = 0.70) and MAADP (r = 0.57); correlation between the 2 methods for MAAA was lower (r = 0.32). Thromboelastography platelet mapping (TEG-PM) should be further investigated to determine if it is a suitable method for measuring platelet dysfunction in dogs with thrombopathy. PMID:24101802

  14. Numerical Modeling in Geodynamics: Success, Failure and Perspective

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.

    2005-12-01

    A real success in numerical modeling of dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. The success in numerical modeling is based on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and move then to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true and erroneous solutions to the geodynamic problem, especially when your problem is complex enough. (iii) Test your model versus analytical and asymptotic solutions, simple 2D and 3D model examples. Develop benchmark analysis of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore the testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you can solve an improperly-posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as less as possible tuning model variables. Already two tuning variables give enough possibilities to constrain your model well enough with respect to observations. The data fitting sometimes is quite attractive and can take you far from a principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of

  15. Human erythrocytes analyzed by generalized 2D Raman correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Wesełucha-Birczyńska, Aleksandra; Kozicki, Mateusz; Czepiel, Jacek; Łabanowska, Maria; Nowak, Piotr; Kowalczyk, Grzegorz; Kurdziel, Magdalena; Birczyńska, Malwina; Biesiada, Grażyna; Mach, Tomasz; Garlicki, Aleksander

    2014-07-01

    Erythrocytes, the most numerous of the blood cells, consist mainly of two components: a homogeneous interior filled with hemoglobin and an enclosing cell membrane. To gain insight into their specific properties we studied the process of disintegration of these two constituents, comparing against the natural aging process of healthy human blood cells. MicroRaman spectra of hemoglobin within single RBCs were recorded using the 514.5 and 785 nm laser lines. The generalized 2D correlation method was applied to analyze the collected spectra. The time elapsed since blood donation was regarded as an external perturbation. The time was no more than 40 days, according to the current storage limit of blood banks, although the average RBC life span is 120 days. An analysis of the prominent synchronous and asynchronous cross peaks allows us to gain insight into the mechanism of hemoglobin decomposition. Emerging asynchronous cross peaks point towards separation of globin and heme from each other, while synchronous peaks indicate globin already broken down into individual amino acids. Raman scattering analysis of the hemoglobin "wrapping", i.e. healthy erythrocyte ghosts, reveals a peculiarity of their behavior: increasing the power of the excitation laser induced alterations in the assemblage of membrane lipids. 2D correlation maps, obtained with increasing laser power treated as the external perturbation, allow for the consideration of alterations in erythrocyte membrane structure and composition, which occur first in the proteins. Cross peaks were observed indicating an asynchronous correlation between the senescent-cell antigen (SCA) and heme or protein vibrations. The EPR spectra of whole blood were analyzed regarding time as an external stimulus. The 2D correlation spectra point towards participation of the selected metal ion centers in the disintegration process.
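
    Generalized 2D correlation analysis (Noda's method) computes a synchronous spectrum directly from the mean-centered dynamic spectra and an asynchronous spectrum via the Hilbert-Noda transformation. A compact sketch on synthetic spectra with two out-of-phase bands, the situation that produces asynchronous cross peaks:

        import numpy as np

        def two_d_correlation(Y):
            """Y: one row per perturbation point (e.g. storage time), one
            column per wavenumber, mean spectrum already subtracted."""
            m = Y.shape[0]
            sync = Y.T @ Y / (m - 1)
            j, k = np.indices((m, m))
            with np.errstate(divide="ignore"):
                noda = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
            asyn = Y.T @ noda @ Y / (m - 1)
            return sync, asyn

        t = np.linspace(0.0, 1.0, 8)[:, None]
        wn = np.linspace(0.0, 1.0, 200)[None, :]
        Y = np.exp(-(wn - 0.3) ** 2 / 0.002) * t          # band 1 grows linearly
        Y = Y + np.exp(-(wn - 0.7) ** 2 / 0.002) * t**2   # band 2 lags behind
        Y -= Y.mean(axis=0)
        sync, asyn = two_d_correlation(Y)
        print(asyn[60, 140])    # nonzero: the two bands change out of phase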

  16. Physical and Relativistic Numerical Cosmology.

    PubMed

    Anninos, Peter

    1998-01-01

    In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations addressing specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusive, is on those calculations designed to test different models of cosmology against the observed Universe.

  17. Numerical experiments in homogeneous turbulence

    NASA Technical Reports Server (NTRS)

    Rogallo, R. S.

    1981-01-01

    The direct simulation methods developed by Orszag and Patterson (1972) for isotropic turbulence were extended to homogeneous turbulence in an incompressible fluid subjected to uniform deformation or rotation. The results of simulations for irrotational strain (plane and axisymmetric), shear, rotation, and relaxation toward isotropy following axisymmetric strain are compared with linear theory and experimental data. Emphasis is placed on the shear flow because of its importance and because of the availability of accurate and detailed experimental data. The computed results are used to assess the accuracy of two popular models used in the closure of the Reynolds-stress equations. Data from a variety of the computed fields and the details of the numerical methods used in the simulation are also presented.

  18. Numerical classification of coding sequences

    NASA Technical Reports Server (NTRS)

    Collins, D. W.; Liu, C. C.; Jukes, T. H.

    1992-01-01

    DNA sequences coding for protein may be represented by counts of nucleotides or codons. A complete reading frame may be abbreviated by its base count, e.g. A76C158G121T74, or with the corresponding codon table, e.g. (AAA)0(AAC)1(AAG)9 ... (TTT)0. We propose that these numerical designations be used to augment current methods of sequence annotation. Because base counts and codon tables do not require revision as knowledge of function evolves, they are well-suited to act as cross-references, for example to identify redundant GenBank entries. These descriptors may be compared, in place of DNA sequences, to extract homologous genes from large databases. This approach permits rapid searching with good selectivity.
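
    Both descriptors are easy to compute. A short sketch producing the base-count abbreviation and the in-frame codon table for a reading frame:

        from collections import Counter
        from itertools import product

        def base_count(seq):
            """Abbreviate a reading frame by its base count, e.g. A76C158G121T74."""
            c = Counter(seq.upper())
            return "".join(f"{b}{c.get(b, 0)}" for b in "ACGT")

        def codon_table(seq):
            """In-frame codon counts rendered as (AAA)n(AAC)n...(TTT)n."""
            seq = seq.upper()
            codons = Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))
            return "".join(f"({cod}){codons.get(cod, 0)}"
                           for cod in ("".join(p) for p in product("ACGT", repeat=3)))

        seq = "ATGGCCAAGTAA"
        print(base_count(seq))          # A5C2G3T2
        print(codon_table(seq)[:48])    # first few entries of the codon table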

  19. On numerically accurate finite element solutions in the fully plastic range

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  20. Numerical computation of Pop plot

    SciTech Connect

    Menikoff, Ralph

    The Pop plot — distance-of-run to detonation versus initial shock pressure — is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, which is the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plot can be the basis for a validation test or as an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.

  1. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
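
    A generic flavor of the idea can be sketched by deforming a convex surrogate potential into the target function while integrating the gradient flow, so a minimizer is tracked from an easy problem to the hard one. This is only a homotopy-continuation caricature: it omits the paper's maximum-entropy prior update, and the test function and schedule are invented:

        import numpy as np

        def grad(f, x, eps=1e-6):
            """Forward-difference gradient (sketch quality)."""
            g, f0 = np.zeros_like(x), f(x)
            for i in range(x.size):
                xp = x.copy()
                xp[i] += eps
                g[i] = (f(xp) - f0) / eps
            return g

        def homotopy_flow_minimize(f, x0, steps=20000, dt=1e-3):
            x = x0.copy()
            for n in range(steps):
                s = min(1.0, n / (0.8 * steps))   # homotopy parameter ramp
                # Potential (1-s)*|x-x0|^2/2 + s*f(x); extrema of the target
                # correspond to fixed points of the flow once s reaches 1.
                x -= dt * ((1.0 - s) * (x - x0) + s * grad(f, x))
            return x

        f = lambda x: (x[0] - 1.0) ** 2 * (x[0] + 1.0) ** 2 + 0.1 * x[0]
        print(homotopy_flow_minimize(f, np.array([0.3])))   # settles in a well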

  2. Numerical optimization using flow equations

    NASA Astrophysics Data System (ADS)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  3. Numerical Modeling of Turbulent Combustion

    NASA Technical Reports Server (NTRS)

    Ghoneim, A. F.; Chorin, A. J.; Oppenheim, A. K.

    1983-01-01

    The work in numerical modeling is focused on the use of the random vortex method to treat turbulent flow fields associated with combustion while flame fronts are considered as interfaces between reactants and products, propagating with the flow and at the same time advancing in the direction normal to themselves at a prescribed burning speed. The latter is associated with the generation of specific volume (the flame front acting, in effect, as the locus of volumetric sources) to account for the expansion of the flow field due to the exothermicity of the combustion process. The model was applied to the flow in a channel equipped with a rearward facing step. The results obtained revealed the mechanism of the formation of large scale turbulent structure in the wake of the step, while it showed the flame to stabilize on the outer edges of these eddies.

  4. Analyzing the attributes of Indiana's STEM schools

    NASA Astrophysics Data System (ADS)

    Eltz, Jeremy

    "Primary and secondary schools do not seem able to produce enough students with the interest, motivation, knowledge, and skills they will need to compete and prosper in the emerging world" (National Academy of Sciences [NAS], 2007a, p. 94). This quote indicated that there are changing expectations for today's students which have ultimately led to new models of education, such as charters, online and blended programs, career and technical centers, and for the purposes of this research, STEM schools. STEM education as defined in this study is a non-traditional model of teaching and learning intended to "equip them [students] with critical thinking, problem solving, creative and collaborative skills, and ultimately establishes connections between the school, work place, community and the global economy" (Science Foundation Arizona, 2014, p. 1). Focusing on science, technology, engineering, and math (STEM) education is believed by many educational stakeholders to be the solution for the deficits many students hold as they move on to college and careers. The National Governors Association (NGA; 2011) believes that building STEM skills in the nation's students will lead to the ability to compete globally with a new workforce that has the capacity to innovate and will in turn spur economic growth. In order to accomplish the STEM model of education, a group of educators and business leaders from Indiana developed a comprehensive plan for STEM education as an option for schools to use in order to close this gap. This plan has been promoted by the Indiana Department of Education (IDOE, 2014a) with the goal of increasing STEM schools throughout Indiana. To determine what Indiana's elementary STEM schools are doing, this study analyzed two of the elementary schools that were certified STEM by the IDOE. This qualitative case study described the findings and themes from two elementary STEM schools. Specifically, the research looked at the vital components to accomplish STEM

  5. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2013-12-01

    The latest Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report stressed the need for the comprehensive and innovative evaluation of climate models with newly available global observations. The traditional approach to climate model evaluation, which compares a single parameter at a time, identifies symptomatic model biases and errors but fails to diagnose the model problems. The model diagnosis process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. To address these challenges, we are developing a parallel, distributed web-service system that enables the physics-based multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation and (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, and (4) the calculation of difference between two variables. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use

  6. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2014-12-01

    We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same
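
    The wrapping pattern described here, exposing existing science code through a Python web framework, can be illustrated in a few lines of Flask. The endpoint name, payload format, and toy computation below are hypothetical, not CMDA's actual API; in production the app would sit behind Gunicorn or Tornado, as the abstract notes:

        import numpy as np
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        def seasonal_mean(values):
            """Stand-in for existing science code."""
            return float(np.mean(values))

        @app.route("/mean", methods=["POST"])
        def mean_endpoint():
            payload = request.get_json()          # e.g. {"values": [1.2, 3.4]}
            return jsonify({"mean": seasonal_mean(payload["values"])})

        if __name__ == "__main__":
            app.run(port=8080)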

  7. The Albuquerque Seismological Laboratory Data Quality Analyzer

    NASA Astrophysics Data System (ADS)

    Ringler, A. T.; Hagerty, M.; Holland, J.; Gee, L. S.; Wilson, D.

    2013-12-01

    The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several efforts underway to improve data quality at its stations. The Data Quality Analyzer (DQA) is one such development. The DQA is designed to characterize station data quality in a quantitative and automated manner. Station quality is based on the evaluation of various metrics, such as timing quality, noise levels, sensor coherence, and so on. These metrics are aggregated into a measurable grade for each station. The DQA consists of a website, a metric calculator (Seedscan), and a PostgreSQL database. The website allows the user to make requests for various time periods, review specific networks and stations, adjust weighting of the station's grade, and plot metrics as a function of time. The website dynamically loads all station data from a PostgreSQL database. The database is central to the application; it acts as a hub where metric values and limited station descriptions are stored. Data is stored at the level of one sensor's channel per day. The database is populated by Seedscan. Seedscan reads and processes miniSEED data, to generate metric values. Seedscan, written in Java, compares hashes of metadata and data to detect changes and perform subsequent recalculations. This ensures that the metric values are up to date and accurate. Seedscan can be run in a scheduled task or on demand by way of a config file. It will compute metrics specified in its configuration file. While many metrics are currently in development, some are completed and being actively used. These include: availability, timing quality, gap count, deviation from the New Low Noise Model, deviation from a station's noise baseline, inter-sensor coherence, and data-synthetic fits. In all, 20 metrics are planned, but any number could be added. ASL is actively using the DQA on a daily basis for station diagnostics and evaluation. As Seedscan is scheduled to run every night, data quality analysts are able to then use the
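
    The aggregation into a measurable, user-weightable station grade reduces to a weighted average over metric scores. A sketch with invented metric names, values and weights (not ASL's actual scheme):

        def station_grade(metrics, weights):
            """Weighted average of per-metric scores (0-100 scale assumed)."""
            total_weight = sum(weights[m] for m in metrics)
            return sum(metrics[m] * weights[m] for m in metrics) / total_weight

        metrics = {"availability": 99.2, "timing_quality": 95.0,
                   "nlnm_deviation": 78.5, "coherence": 88.0}
        weights = {"availability": 2.0, "timing_quality": 1.0,
                   "nlnm_deviation": 1.5, "coherence": 1.0}
        print(round(station_grade(metrics, weights), 1))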

  8. METCAN-PC - METAL MATRIX COMPOSITE ANALYZER

    NASA Technical Reports Server (NTRS)

    Murthy, P. L.

    1994-01-01

    High temperature metal matrix composites offer great potential for use in advanced aerospace structural applications. The realization of this potential however, requires concurrent developments in (1) a technology base for fabricating high temperature metal matrix composite structural components, (2) experimental techniques for measuring their thermal and mechanical characteristics, and (3) computational methods to predict their behavior. METCAN (METal matrix Composite ANalyzer) is a computer program developed to predict this behavior. METCAN can be used to computationally simulate the non-linear behavior of high temperature metal matrix composites (HT-MMC), thus allowing the potential payoff for the specific application to be assessed. It provides a comprehensive analysis of composite thermal and mechanical performance. METCAN treats material nonlinearity at the constituent (fiber, matrix, and interphase) level, where the behavior of each constituent is modeled accounting for time-temperature-stress dependence. The composite properties are synthesized from the constituent instantaneous properties by making use of composite micromechanics and macromechanics. Factors which affect the behavior of the composite properties include the fabrication process variables, the fiber and matrix properties, the bonding between the fiber and matrix and/or the properties of the interphase between the fiber and matrix. The METCAN simulation is performed as point-wise analysis and produces composite properties which are readily incorporated into a finite element code to perform a global structural analysis. After the global structural analysis is performed, METCAN decomposes the composite properties back into the localized response at the various levels of the simulation. At this point the constituent properties are updated and the next iteration in the analysis is initiated. This cyclic procedure is referred to as the integrated approach to metal matrix composite analysis. METCAN

  9. Numerical study of chemically reacting viscous flow relevant to pulsed detonation engines

    NASA Astrophysics Data System (ADS)

    Yi, Tae-Hyeong

    2005-11-01

    A computational fluid dynamics code for the two-dimensional, multi-species, laminar Navier-Stokes equations is developed to simulate a recently proposed engine concept for a pulsed detonation based propulsion system and to investigate the feasibility of the engine concept. The governing equations, which include transport phenomena such as viscosity, thermal conduction, and diffusion, are coupled with chemical reactions. The gas is assumed to be thermally perfect and in chemical non-equilibrium. The stiffness arising from coupling the fluid dynamics and the chemical kinetics is handled with a time-operator splitting method and a variable-coefficient ordinary differential equation solver. A second-order Roe scheme with a minmod limiter is used for spatial discretization, while a second-order, two-step Runge-Kutta method is used for time integration. In the spatial integration, a finite volume method and a cell-centered scheme are employed. The first-order derivatives in the equations for the transport properties are discretized by central differencing with Green's theorem. Detailed chemistry is employed in this study: two chemical reaction mechanisms are extracted from GRI-Mech, namely forty elementary reactions with thirteen species for a hydrogen-air mixture and twenty-seven reactions with eight species for a hydrogen-oxygen mixture. The code is ported to a high-performance parallel machine with the Message-Passing Interface. Code validation is performed with chemical kinetic modeling for a stoichiometric hydrogen-air mixture, a one-dimensional detonation tube, a two-dimensional inviscid flow over a wedge, and a viscous flow over a flat plate. Detonation is initiated using a numerically simulated arc-ignition or shock-induced ignition system. Various freestream conditions are utilized to study the propagation of the detonation in the proposed engine concept. Investigation of the detonation propagation is performed for a pulsed detonation
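
    The minmod limiter mentioned above suppresses spurious oscillations near steep gradients such as the detonation front by reducing the reconstruction slope to zero at extrema. A generic Python sketch (not the author's code) of the limiter and a second-order slope reconstruction:

      import numpy as np

      def minmod(a, b):
          # Smaller-magnitude argument where the signs agree, zero otherwise.
          return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

      def limited_slopes(u):
          # MUSCL-type limited slopes for a 1D array of cell averages.
          backward = u[1:-1] - u[:-2]
          forward = u[2:] - u[1:-1]
          return minmod(backward, forward)

      u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # a step profile
      print(limited_slopes(u))  # slopes vanish at the discontinuity: no overshoot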

  10. Improved data visualization techniques for analyzing macromolecule structural changes.

    PubMed

    Kim, Jae Hyun; Iyer, Vidyashankara; Joshi, Sangeeta B; Volkin, David B; Middaugh, C Russell

    2012-10-01

    The empirical phase diagram (EPD) is a colored representation of the overall structural integrity and conformational stability of macromolecules in response to various environmental perturbations. Numerous proteins and macromolecular complexes have been analyzed by EPDs to summarize results from large data sets collected with multiple biophysical techniques. The current EPD method suffers from a number of deficiencies, including the lack of a meaningful relationship between color and actual molecular features, difficulty in identifying the contributions of individual techniques, and limited interpretability for color-blind individuals. In this work, three improved data visualization approaches are proposed as techniques complementary to the EPD. The secondary, tertiary, and quaternary structural changes of multiple proteins as a function of environmental stress were first measured using circular dichroism, intrinsic fluorescence spectroscopy, and static light scattering, respectively. Data sets were then visualized as (1) RGB colors using three-index EPDs, (2) equiangular polygons using radar charts, and (3) human facial features using Chernoff face diagrams. Data as a function of temperature and pH for bovine serum albumin, aldolase, and chymotrypsin, as well as candidate protein vaccine antigens including a serine threonine kinase protein (SP1732) and surface antigen A (SP1650) from S. pneumoniae and hemagglutinin from an H1N1 influenza virus, are used to illustrate the advantages and disadvantages of each type of data visualization technique. Copyright © 2012 The Protein Society.
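
    The three-index EPD maps one normalized index per technique onto one RGB channel, so each measurement retains an identifiable color contribution. A brief hypothetical Python illustration, assuming each technique's signal has already been normalized to [0, 1]:

      import numpy as np

      def three_index_rgb(cd, fluorescence, scattering):
          # Secondary, tertiary, quaternary structure indices -> R, G, B channels.
          rgb = np.clip(np.array([cd, fluorescence, scattering]), 0.0, 1.0)
          return (rgb * 255).astype(np.uint8)

      # One stress condition: intact secondary structure, partially perturbed
      # tertiary structure, early aggregation (values invented for illustration).
      print(three_index_rgb(0.9, 0.5, 0.2))   # -> [229 127  51]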

  12. Pollen embryogenesis to induce, detect, and analyze mutants

    SciTech Connect

    Constantin, M.J.

    The development of fully differentiated plants from individual pollen grains, through a series of developmental phases that resemble embryogenesis beginning with the zygote, was demonstrated during the mid-1960s. This technology opened the door to the use of haploid plants (sporophytes with the gametic number of chromosomes) for plant breeding and genetic studies, biochemical and metabolic studies, and the selection of mutations. Although pollen embryogenesis has been demonstrated successfully in numerous plant genera, the procedure cannot as yet be used routinely to generate large populations of plants for experiments. Practical results from use of the technology in genetic toxicology research to detect mutations have failed to fully realize the theoretical potential; further development of the technology could overcome the limitations. Pollen embryogenesis could be used to develop plants from mutant pollen grains to verify that genetic changes are involved. Through either spontaneous or induced chromosome doubling, these plants can be made homozygous and used to analyze the mutants genetically. The success of this approach will depend on the mutant frequency relative to the fraction of pollen grains that undergo embryogenesis; these two factors will dictate the population size needed for success. Research effort is needed to further develop pollen embryogenesis for use in the detection of genotoxins under both laboratory and in situ conditions.

  13. Comparison of Numerical Modeling Methods for Soil Vibration Cutting

    NASA Astrophysics Data System (ADS)

    Jiang, Jiandong; Zhang, Enguang

    2018-01-01

    In this paper, we study suitable numerical simulation methods for vibratory soil cutting. Three methods commonly used for constant-speed soil cutting, Lagrange, ALE, and DEM, are analyzed, and a simulation model for each is established in LS-DYNA. The applicability of the three methods to this problem is assessed from both the modeling assumptions and the simulation results. The Lagrange and DEM methods both capture the oscillating tool force and the large soil deformation that characterize vibratory cutting, with the Lagrange method representing soil fragmentation more convincingly. The ALE method, owing to its poor stability for this problem, is not suitable for simulating vibratory soil cutting.

  14. Experimental and Numerical analysis of Metallic Bellow for Acoustic Performance

    NASA Astrophysics Data System (ADS)

    Panchwadkar, Amit A.; Awasare, Pradeep J., Dr.; Ingle, Ravidra B., Dr.

    2017-08-01

    Industrial machinery produces noise that interferes with communication between workers and poses a health hazard, which motivates noise attenuation along the transmission path rather than modification of the machine itself, since such modification may affect its performance. A Helmholtz resonator is well suited to this purpose: its design variables determine the resonant frequency and hence the frequency range to be targeted. This paper deals with a metallic bellows, which behaves like an inertial mass under an incident sound wave; the sound wave energy is affected by the hard boundary conditions of the resonator and bellows. The bellows is used in combination with a resonator to determine the transmission loss (TL). Microphone measurements with an FFT analyzer establish the frequency range for the numerical analysis, which is then carried out to characterize the acoustic behavior of the bellows, including its noise attenuation for a centrifugal blower. An impedance tube measurement technique is used to validate the numerical results for the assembly, and dimensional and shape modifications can be made to tune the acoustic performance of the bellows.
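
    Since the resonator's design variables fix its resonant frequency, the classical Helmholtz estimate f0 = (c / 2*pi) * sqrt(S / (V * L_eff)) is the natural sizing formula, with neck area S, cavity volume V, and end-corrected neck length L_eff. A small Python sketch with illustrative dimensions (not taken from this paper):

      import math

      def helmholtz_frequency(c, neck_area, cavity_volume, neck_length, neck_radius):
          # Classical lumped estimate with a common end correction of ~1.7 r.
          l_eff = neck_length + 1.7 * neck_radius
          return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

      c = 343.0                         # speed of sound in air, m/s
      r = 0.01                          # neck radius, m (illustrative)
      f0 = helmholtz_frequency(c, math.pi * r**2, 1.0e-3, 0.05, r)
      print(f"resonance near {f0:.0f} Hz")  # ~118 Hz for these dimensions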

  15. Teaching Mathematics with Technology: Numerical Relationships.

    ERIC Educational Resources Information Center

    Bright, George W.

    1989-01-01

    Developing numerical relationships with calculators is emphasized. Calculators furnish some needed support for students as they investigate the value of fractions as the numerators or denominators change. An example with Logo programming for computers is also included. (MNS)
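
    The numerical relationship at issue, how a fraction's value responds as the denominator grows while the numerator is held fixed, is the kind of pattern such a calculator investigation surfaces. A short Python sketch of the same exploration:

      from fractions import Fraction

      # Fix the numerator and watch the value shrink as the denominator grows.
      for d in range(1, 9):
          print(f"3/{d} = {float(Fraction(3, d)):.4f}")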

  16. Micro-Analyzer: automatic preprocessing of Affymetrix microarray data.

    PubMed

    Guzzi, Pietro Hiram; Cannataro, Mario

    2013-08-01

    A current trend in genomics is the investigation of cell mechanisms using different technologies, in order to explain the relationships among genes, molecular processes, and diseases. For instance, the combined use of gene-expression arrays and genomic arrays has been demonstrated as an effective instrument in clinical practice. Consequently, in a single experiment different kinds of microarrays may be used, resulting in the production of different types of binary data (images and textual raw data). The analysis of microarray data requires an initial preprocessing phase that makes raw data suitable for use on existing analysis platforms, such as the TIGR M4 (TM4) Suite. An additional challenge for emerging data analysis platforms is the ability to treat these different microarray formats, coupled with clinical data, in a combined way. The resulting integrated data may include both numerical and symbolic data (e.g. gene expression and SNPs regarding molecular data), as well as temporal data (e.g. the response to a drug, time to progression, and survival rate) regarding clinical data. Raw data preprocessing is a crucial step in analysis, but it is often performed in a manual and error-prone way using different software tools. Thus novel, platform-independent, and possibly open-source tools enabling the semi-automatic preprocessing and annotation of different microarray data are needed. The paper presents Micro-Analyzer (Microarray Analyzer), a cross-platform tool for the automatic normalization, summarization, and annotation of Affymetrix gene expression and SNP binary data. It represents the evolution of the μ-CS tool, extending the preprocessing to SNP arrays that were not supported in μ-CS. Micro-Analyzer is provided as a standalone Java tool and enables users to read, preprocess, and analyze binary microarray data (gene expression and SNPs) by invoking the TM4 platform. It avoids: (i) the manual invocation of external tools (e.g. the Affymetrix Power
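
    The preprocessing flow described here (read raw binary data, normalize, summarize, annotate, then hand off to an analysis platform) can be outlined schematically. The Python sketch below mirrors only the pipeline's shape; every function is a hypothetical stand-in, not the Micro-Analyzer API, which is a Java tool.

      def read_binary(path):
          # Stand-in for parsing an Affymetrix binary file (e.g. a CEL file).
          return {"path": path, "probes": [1.2, 0.8, 3.4, 2.1]}

      def normalize(probes):
          # Stand-in for a real normalization step (e.g. quantile or RMA).
          mean = sum(probes) / len(probes)
          return [p / mean for p in probes]

      def summarize(values):
          # Stand-in for summarizing probe-level values into one expression value.
          return sum(values) / len(values)

      def annotate(value, probe_set="hypothetical_probe_set"):
          # Stand-in for attaching gene-level annotation.
          return {probe_set: value}

      raw = read_binary("sample.CEL")
      result = annotate(summarize(normalize(raw["probes"])))
      print(result)  # ready to hand off to an analysis platform such as TM4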

  17. Analyzers Measure Greenhouse Gases, Airborne Pollutants

    NASA Technical Reports Server (NTRS)

    2012-01-01

    In complete darkness, a NASA observatory waits. When an eruption of boiling water billows from a nearby crack in the ground, the observatory's sensors seek particles in the fluid, measure shifts in carbon isotopes, and analyze samples for biological signatures. NASA has landed the observatory in this remote location, far removed from air and sunlight, to find life unlike any that scientists have ever seen. It might sound like a scene from a distant planet, but this NASA mission is actually exploring an ocean floor right here on Earth. NASA established a formal exobiology program in 1960, which expanded into the present-day Astrobiology Program. The program, which celebrated its 50th anniversary in 2010, not only explores the possibility of life elsewhere in the universe, but also examines how life begins and evolves, and what the future may hold for life on Earth and other planets. Answers to these questions may be found not only by launching rockets skyward, but by sending probes in the opposite direction. Research here on Earth can revise prevailing concepts of life and biochemistry and point to the possibilities for life on other planets, as was demonstrated in December 2010, when NASA researchers discovered microbes in Mono Lake in California that subsist and reproduce using arsenic, a toxic chemical. The Mono Lake discovery may be the first of many that could reveal possible models for extraterrestrial life. One primary area of interest for NASA astrobiologists lies with the hydrothermal vents on the ocean floor. These vents expel jets of water heated and enriched with chemicals from off-gassing magma below the Earth's crust. Also potentially within the vents: microbes that, like the Mono Lake microorganisms, defy the common characteristics of life on Earth. Basically all organisms on our planet generate energy through the Krebs Cycle, explains Mike Flynn, research scientist at NASA's Ames Research Center. This metabolic process breaks down sugars for energy

  18. Using Simulation to Analyze Acoustic Environments

    NASA Technical Reports Server (NTRS)

    Wood, Eric J.

    2016-01-01

    One of the main projects that was worked on this semester was creating an acoustic model of the Advanced Space Suit in Comsol Multiphysics. The geometry tools built into the software were used to create an accurate model of the helmet and upper torso of the suit. After running the simulation, plots of the sound pressure level within the suit were produced, as seen below in Figure 1. These plots show significant nulls which should be avoided when placing microphones inside the suit. In the future, this model can be easily adapted to changes in the suit design to determine optimal microphone placements and other acoustic properties. Another major project was creating an acoustic diverter that will potentially be used to route audio into the Space Station's Node 1. The concept of the project was to create geometry to divert sound from a neighboring module, the US Lab, into Node 1. By doing this, no new audio equipment would need to be installed in Node 1. After creating an initial design for the diverter, analysis was performed in Comsol to determine how changes in geometry would affect acoustic performance, as shown in Figure 2. These results were used to produce a physical prototype diverter on a 3D printer. With the physical prototype, testing was conducted in an anechoic chamber to determine the true effectiveness of the design, as seen in Figure 3. The results from this testing have been compared to the Comsol simulation results to analyze how close the Comsol results are to real-world performance. While the Comsol results do not closely resemble the real-world performance, this testing has provided valuable insight into how much trust can be placed in the results of Comsol simulations. A final project that was worked on during this tour was the Audio Interface Unit (AIU) design for the Orion program. The AIU is a small device that will be used as an audio communication device both during launch and on-orbit. The unit will have functions

  19. Numerical analysis of a microwave torch with axial gas injection

    SciTech Connect

    Gritsinin, S. I.; Davydov, A. M.; Kossyi, I. A., E-mail: kossyi@fpl.gpi.ru

    2013-07-15

    The characteristics of a microwave discharge in an argon jet injected axially into a coaxial channel with a shortened inner electrode are numerically analyzed using a self-consistent equilibrium gas-dynamic model. The specific features of the excitation and maintenance of the microwave discharge are determined, and the dependences of the discharge characteristics on the supplied electromagnetic power and gas flow rate are obtained. The calculated results are compared with experimental data.

  20. 3D numerical simulation of transient processes in hydraulic turbines

    NASA Astrophysics Data System (ADS)

    Cherny, S.; Chirkov, D.; Bannikov, D.; Lapin, V.; Skorospelov, V.; Eshkunova, I.; Avdushenko, A.

    2010-08-01

    An approach for the numerical simulation of 3D hydraulic turbine flows in transient operating regimes is presented. The method is based on a coupled solution of the incompressible RANS equations, the runner rotation equation, and the water hammer equations. The issue of setting appropriate boundary conditions is considered in detail. As an illustration, simulation results for a runaway process are presented. The evolution of the vortex structure and its effect on the computed runaway traces are analyzed.
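
    The coupling with the runner rotation equation can be made concrete: at each time step the flow solution supplies a hydraulic torque T, and the angular speed advances via I * d(omega)/dt = T - T_load, with T_load = 0 during runaway. A minimal Python sketch in which the RANS/water-hammer solution is replaced by a toy torque model (all constants are illustrative):

      I = 5.0e3        # runner moment of inertia, kg m^2
      omega = 15.0     # initial angular speed, rad/s
      dt = 0.01        # time step, s

      def hydraulic_torque(omega, head=100.0, k=1.2e4, c=600.0):
          # Toy stand-in for the flow solver: torque drops as the runner spins up.
          return k * head / 100.0 - c * omega

      for step in range(20000):
          T = hydraulic_torque(omega)   # in the real method: from the RANS flow field
          omega += dt * T / I           # forward Euler update of the rotation equation
          if abs(T) < 1.0:              # torque balance reached: runaway speed
              break
      print(f"runaway speed ~ {omega:.1f} rad/s after {step * dt:.2f} s")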