Sample records for reduced basis techniques

  1. Approximate techniques of structural reanalysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1974-01-01

    A study is made of two approximate techniques for structural reanalysis: Taylor series expansions of response variables in terms of design variables, and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen consisting of the original analyzed design and the first-order sensitivity analysis vectors. Using the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle, thereby extending the range of applicability of the reanalysis technique. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.
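    As an illustrative sketch of the reanalysis idea (all matrices below are synthetic, not from the paper), the reduced basis consisting of the original design solution and its first-order sensitivity vector can be used to reanalyze a modified design through a small Galerkin-projected system:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
K0 = A @ A.T + 4 * np.eye(4)        # baseline stiffness (SPD), hypothetical
B = rng.standard_normal((4, 4))
K1 = B @ B.T                        # sensitivity of K w.r.t. design variable d
f = rng.standard_normal(4)          # load vector

u0 = np.linalg.solve(K0, f)         # full analysis at the original design d = 0
du = np.linalg.solve(K0, -K1 @ u0)  # first-order sensitivity vector

Phi, _ = np.linalg.qr(np.column_stack([u0, du]))  # normalized reduced basis

d = 0.5                             # modified design
K = K0 + d * K1
q = np.linalg.solve(Phi.T @ K @ Phi, Phi.T @ f)   # small (2x2) reduced system
u_rb = Phi @ q                      # reduced-basis reanalysis
u_full = np.linalg.solve(K, f)      # reference full reanalysis
err = np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full)
```

    The Galerkin condition makes the residual orthogonal to the reduced basis, so the 2x2 reduced solve stands in for the full system solve at each modified design.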

  2. Reduced basis technique for evaluating the sensitivity coefficients of the nonlinear tire response

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.

    1992-01-01

    An efficient reduced-basis technique is proposed for calculating the sensitivity of nonlinear tire response to variations in the design variables. The tire is modeled using a 2-D, moderate rotation, laminated anisotropic shell theory, including the effects of variation in material and geometric parameters. The vector of structural response and its first-order and second-order sensitivity coefficients are each expressed as a linear combination of a small number of basis vectors. The effectiveness of the basis vectors used in approximating the sensitivity coefficients is demonstrated by a numerical example involving the Space Shuttle nose-gear tire, which is subjected to uniform inflation pressure.

  3. Jacobian projection reduced-order models for dynamic systems with contact nonlinearities

    NASA Astrophysics Data System (ADS)

    Gastaldi, Chiara; Zucca, Stefano; Epureanu, Bogdan I.

    2018-02-01

    In structural dynamics, the prediction of the response of systems with localized nonlinearities, such as friction dampers, is of particular interest. This task becomes especially cumbersome when high-resolution finite element models are used. While state-of-the-art techniques such as Craig-Bampton component mode synthesis are employed to generate reduced-order models, the interface (nonlinear) degrees of freedom must still be solved in full. For this reason, a new generation of specialized techniques capable of reducing linear and nonlinear degrees of freedom alike is emerging. This paper proposes a new technique that exploits spatial correlations in the dynamics to compute a reduction basis. The basis is composed of a set of vectors obtained using the Jacobian of partial derivatives of the contact forces with respect to nodal displacements. These basis vectors correspond to specifically chosen boundary conditions at the contacts over one cycle of vibration. The technique is shown to be effective in the reduction of several models studied using multiple harmonics with a coupled static solution. In addition, this paper addresses another challenge common to all reduction techniques: it presents and validates a novel a posteriori error estimate capable of evaluating the quality of the reduced-order solution without involving a comparison with the full-order solution.

  4. POD/MAC-Based Modal Basis Selection for a Reduced Order Nonlinear Response Analysis

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2007-01-01

    A feasibility study was conducted to explore the applicability of a POD/MAC basis selection technique to a nonlinear structural response analysis. For the case studied, the application of the POD/MAC technique resulted in a substantial improvement of the reduced-order simulation when compared to a classic approach utilizing only low-frequency modes present in the excitation bandwidth. Further studies aim to expand the application of the presented technique to more complex structures, including non-planar and two-dimensional configurations. For non-planar structures the separation of different displacement components may not be necessary or desirable.
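    A minimal sketch of the POD/MAC selection idea (synthetic snapshots; the projection-fraction form of the MAC used here is an assumption for illustration): extract the dominant POD modes from snapshot data via the SVD, then keep the candidate modes that correlate strongly with the POD subspace:

```python
import numpy as np

n, m = 64, 200
x = np.linspace(0.0, np.pi, n)
candidates = np.column_stack([np.sin(k * x) for k in range(1, 6)])  # candidate modes

rng = np.random.default_rng(1)
# synthetic response dominated by candidate modes 1 and 3
snapshots = candidates[:, [0, 2]] @ rng.standard_normal((2, m))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
pod = U[:, :2]                      # dominant (orthonormal) POD modes

def subspace_mac(v, basis):
    # fraction of v captured by an orthonormal basis (a subspace form of the MAC)
    return float(((basis.T @ v) ** 2).sum() / (v @ v))

keep = [j for j in range(candidates.shape[1])
        if subspace_mac(candidates[:, j], pod) > 0.9]
```

    With this synthetic data, only the two candidate modes actually present in the response survive the MAC threshold.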

  5. Localized basis functions and other computational improvements in variational nonorthogonal basis function methods for quantum mechanical scattering problems involving chemical reactions

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Truhlar, Donald G.

    1990-01-01

    The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.

  6. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  7. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
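    A toy sketch of the splitting step (1D grid and deterministic k-means initialization assumed): a parent basis vector is split into children whose disjoint supports come from k-means clusters of the state coordinates, so the children sum exactly back to the parent:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100)      # state (grid) coordinates
v = np.sin(2 * np.pi * x)           # parent basis vector

def kmeans_1d(pts, k=2, iters=50):
    centers = np.linspace(pts.min(), pts.max(), k)  # deterministic initialization
    for _ in range(iters):
        labels = np.argmin(np.abs(pts[:, None] - centers[None, :]), axis=1)
        centers = np.array([pts[labels == j].mean() for j in range(k)])
    return labels

labels = kmeans_1d(x, k=2)
# children have disjoint support and sum exactly to the parent vector
children = [np.where(labels == j, v, 0.0) for j in range(2)]
```

    Splitting therefore never loses information: the parent remains representable, while the children add localized resolution.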

  8. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    PubMed

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.

  9. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is thus accomplished, resulting in a simple optimization problem subject only to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
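    A much simpler toy version of the null-space reduction idea (the problem data below are assumed for illustration): parameterizing the feasible set of a linear equality constraint with a null-space basis removes the constraint entirely, leaving an unconstrained problem in the reduced space:

```python
import numpy as np

# minimize ||x - x0||^2 subject to A x = b (a toy equality-constrained problem)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x0 = np.array([1.0, 2.0, 3.0])

x_p, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular feasible solution
_, _, Vt = np.linalg.svd(A)
Z = Vt[1:].T                                  # orthonormal null-space basis of A
# reduced unconstrained problem in y: x = x_p + Z y satisfies A x = b for any y
y, *_ = np.linalg.lstsq(Z, x0 - x_p, rcond=None)
x = x_p + Z @ y
```

    Every candidate x built this way is feasible by construction, which is what lets the reduced SQP iteration drop the equality constraints.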

  10. Hybrid Grid and Basis Set Approach to Quantum Chemistry DMRG

    NASA Astrophysics Data System (ADS)

    Stoudenmire, Edwin Miles; White, Steven

    We present a new approach for using DMRG for quantum chemistry that combines the advantages of a basis set with those of a grid approximation. Because DMRG scales linearly for quasi-one-dimensional systems, it is feasible to approximate the continuum with a fine grid in one direction while using a standard basis set approach for the transverse directions. Compared to standard basis set methods, we reach larger systems and achieve better scaling when approaching the basis set limit. The flexibility and reduced costs of our approach even make it feasible to incorporate advanced DMRG techniques such as simulating real-time dynamics. Supported by the Simons Collaboration on the Many-Electron Problem.

  11. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
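    A hedged sketch of the linear estimation step (Gaussian bumps standing in for the report's multi-resolution basis functions): because the unknowns enter linearly as basis coefficients, ordinary least squares fits the model, and basis functions with insignificant coefficients can be pruned for a compact model:

```python
import numpy as np

# Fit y = tanh(3x) (the "unknown" system) with coarse + fine Gaussian bumps.
x = np.linspace(-1.0, 1.0, 200)
y = np.tanh(3 * x)

def design(x, centers, width):
    # one Gaussian bump per center, evaluated at all sample points
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

coarse = design(x, np.linspace(-1, 1, 5), 0.5)   # low-resolution bumps
fine = design(x, np.linspace(-1, 1, 17), 0.12)   # high-resolution bumps
Phi = np.hstack([coarse, fine])                  # multi-resolution basis

c, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear parameter estimation
# prune basis functions whose coefficients are insignificant, then refit
significant = np.abs(c) > 1e-3 * np.abs(c).max()
c_small, *_ = np.linalg.lstsq(Phi[:, significant], y, rcond=None)
y_hat = Phi[:, significant] @ c_small
err = np.max(np.abs(y_hat - y))
```

    The pruning threshold here is an assumed heuristic; the report's systematic significance test would replace the simple coefficient-magnitude cutoff.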

  12. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  13. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    NASA Astrophysics Data System (ADS)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.

  14. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.

  15. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space in which the feature classes have complementary eigenvectors. The dimensionality reduction based on this complementary eigenvector analysis can be described in terms of two classes, the desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting the few eigenvectors most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
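    An illustrative FKT sketch on synthetic data (the setup is assumed, not from the paper): whitening by the summed class covariances makes the two classes share eigenvectors, with eigenvalues that sum to one, so eigenvectors with eigenvalues near one best represent the desired class and carry the least clutter information:

```python
import numpy as np

rng = np.random.default_rng(4)
X1 = rng.standard_normal((500, 3)) * np.array([3.0, 1.0, 0.2])  # desired class
X2 = rng.standard_normal((500, 3)) * np.array([0.2, 1.0, 3.0])  # background clutter

S1 = np.cov(X1, rowvar=False)
S2 = np.cov(X2, rowvar=False)

w, V = np.linalg.eigh(S1 + S2)
P = V @ np.diag(w ** -0.5) @ V.T        # whitening transform (S1 + S2)^(-1/2)
lam1, U = np.linalg.eigh(P @ S1 @ P.T)  # shared eigenvectors; class-1 eigenvalues
lam2 = 1.0 - lam1                       # complementary class-2 eigenvalues
# keep the eigenvectors most relevant to the desired class
target_axes = U[:, lam1 > 0.5]
```

    Because the whitened class covariances sum to the identity, the same eigenvectors diagonalize both classes, which is the complementary-eigenvector property the abstract describes.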

  16. Advances in dental materials.

    PubMed

    Fleming, Garry J P

    2014-05-01

    The dental market is replete with new restorative materials marketed on the basis of novel technological advances in materials chemistry, bonding capability, or reduced operator time and/or technique sensitivity. This paper aims to consider advances in current materials, with an emphasis on their role in supporting contemporary clinical practice.

  17. 78 FR 13856 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-01

    ..., electronic, mechanical, or other technological collection techniques or other forms of information technology... to determine children's eligibility for free and reduced-price meals on the basis of each child's... other discrimination against, or overt identification of children unable to pay the full price for meals...

  18. Structural reanalysis via a mixed method [using Taylor series for accuracy improvement]

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1975-01-01

    A study is made of the approximate structural reanalysis technique based on the use of Taylor series expansion of response variables in terms of design variables in conjunction with the mixed method. In addition, comparisons are made with two reanalysis techniques based on the displacement method. These techniques are the Taylor series expansion and the modified reduced basis. It is shown that the use of the reciprocals of the sizing variables as design variables (which is the natural choice in the mixed method) can result in a substantial improvement in the accuracy of the reanalysis technique. Numerical results are presented for a space truss structure.

  19. Less is Better. Laboratory Chemical Management for Waste Reduction.

    ERIC Educational Resources Information Center

    American Chemical Society, Washington, DC.

    An objective of the American Chemical Society is to promote alternatives to landfilling for the disposal of laboratory chemical wastes. One method is to reduce the amount of chemicals that become wastes. This is the basis for the "less is better" philosophy. This bulletin discusses various techniques involved in purchasing control,…

  20. Ensembles of radial basis function networks for spectroscopic detection of cervical precancer

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Ramanujam, N.; Ghosh, J.; Richards-Kortum, R.

    1998-01-01

    The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337-, 380-, and 460-nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from precancerous tissue samples. The use of connectionist methods such as multilayered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated and near real-time implementation of precancer detection in the hands of nonexperts. The results are more reliable, direct, and accurate than those achieved by either human experts or multivariate statistical algorithms.
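    A toy sketch of the RBF-network ensemble idea (synthetic two-class data; the center selection and widths are assumptions, not the paper's method): each network uses fixed Gaussian centers with least-squares output weights, and the ensemble averages the network outputs:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)  # ring-shaped two-class labels

def rbf_features(X, centers, width=1.0):
    # Gaussian radial basis activations for every sample/center pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

def train_rbf(X, y, n_centers=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    w, *_ = np.linalg.lstsq(rbf_features(X, centers), y, rcond=None)
    return centers, w

# ensemble: average the outputs of networks trained with different random centers
nets = [train_rbf(X, y, seed=s) for s in range(5)]
scores = np.mean([rbf_features(X, c) @ w for c, w in nets], axis=0)
acc = float(np.mean((scores > 0.5) == (y > 0.5)))
```

    Averaging several networks with different centers reduces the variance of any single network's decision, which is the motivation for the ensembles studied in the paper.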

  1. An Integrated Approach to Change the Outcome Part II: Targeted Neuromuscular Training Techniques to Reduce Identified ACL Injury Risk Factors

    PubMed Central

    Myer, Gregory D.; Ford, Kevin R.; Brent, Jensen L.; Hewett, Timothy E.

    2014-01-01

    Prior reports indicate that female athletes who demonstrate high knee abduction moments (KAMs) during landing are more responsive to neuromuscular training designed to reduce KAM. Identification of female athletes who demonstrate high KAM, which accurately identifies those at risk for noncontact anterior cruciate ligament (ACL) injury, may be ideal for targeted neuromuscular training. Specific neuromuscular training targeted to the underlying biomechanical components that increase KAM may provide the most efficient and effective training strategy to reduce noncontact ACL injury risk. The purpose of the current commentary is to provide an integrative approach to identify and target mechanistic underpinnings to increased ACL injury in female athletes. Specific neuromuscular training techniques will be presented that address individual algorithm components related to high knee load landing patterns. If these integrated techniques are employed on a widespread basis, prevention strategies for noncontact ACL injury among young female athletes may prove both more effective and efficient. PMID:22580980

  2. Use of a spiral rectal diaphragm technique to control anal sphincter incontinence in a cat.

    PubMed

    Pavletic, Michael; Mahn, Matt; Duddy, Jean

    2012-09-15

    A 10-year-old castrated male domestic shorthair cat was examined for a mass involving the right anal sac region. The mass was diagnosed as a fibrosarcoma, and resulted in progressive tenesmus, requiring repeated resection. Surgical removal of the fibrosarcoma was performed on 4 occasions, including complete resection of the anal sphincter muscles and portions of the rectum. A perineal urethrostomy was required during the third surgical procedure secondary to tumor invasion of the preputial tissues. To reduce involuntary loss of feces, the remaining rectal wall was rotated approximately 225° prior to surgical closure during the second, third, and fourth surgical procedures. This procedure created a natural spiral diaphragm within the rectal lumen. The elastic spiral barrier reduced inadvertent fecal loss and facilitated fecal distention of the terminal portion of the colon, allowing the patient to anticipate the impending passage of feces and to use the litter tray on a daily basis. With complete loss of the terminal portion of the rectum and anal sphincter muscles, spiraling the rectum created a deformable threshold barrier to reduce excessive loss of stool secondary to fecal incontinence. On the basis of the positive outcome in this patient, this novel technique may be a useful option to consider for the treatment of cats with loss of anal sphincter function.

  3. Preconditioned MoM Solutions for Complex Planar Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Jackson, D; Champagne, N

    2004-01-23

    The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is the FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach which takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered material Green's Functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported in; this contribution is limited to presenting new results.

  4. The Reflective Macintosh: A Computer-Assisted Approach to Understanding and Improving Managerial Practice. Project Report.

    ERIC Educational Resources Information Center

    Kerchner, Charles; And Others

    The early stages of a microcomputer-based project to integrate managerial knowledge and practice are described in this report. Analysis of the problem-framing process that effective principals use to reduce complex problems into more manageable ones forms the basis of the project. Three cognitive-mapping techniques are used to understand the…

  5. Optimum coding techniques for MST radars

    NASA Technical Reports Server (NTRS)

    Sulzer, M. P.; Woodman, R. F.

    1986-01-01

    The optimum coding technique for MST (mesosphere stratosphere troposphere) radars is that which gives the lowest possible sidelobes in practice and can be implemented without too much computing power. Coding techniques are described in Farley (1985). A technique mentioned briefly there but not fully developed and not in general use is discussed here. This is decoding by means of a filter which is not matched to the transmitted waveform, in order to reduce sidelobes below the level obtained with a matched filter. This is the first part of the technique discussed here; the second part consists of measuring the transmitted waveform and using it as the basis for the decoding filter, thus reducing errors due to imperfections in the transmitter. There are two limitations to this technique. The first is a small loss in signal to noise ratio (SNR), which usually is not significant. The second problem is related to incomplete information received at the lowest ranges. An appendix shows a technique for handling this problem. Finally, it is shown that the use of complementary codes on transmission and nonmatched decoding gives the lowest possible sidelobe level and the minimum loss in SNR due to mismatch.
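    A small sketch of the complementary-code property mentioned in the final sentence (a length-4 Golay pair): the autocorrelation sidelobes of the two codes cancel when summed, which is why complementary codes achieve zero range sidelobes under matched decoding:

```python
import numpy as np

a = np.array([1, 1, 1, -1])   # Golay complementary pair, length 4
b = np.array([1, 1, -1, 1])

def acorr(c):
    # full autocorrelation (all lags)
    return np.correlate(c, c, mode="full")

total = acorr(a) + acorr(b)   # sidelobes cancel; only the zero-lag peak remains
```

    Each code individually has nonzero sidelobes, but their sum is a single peak of height 8 at zero lag.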

  6. An extended Kalman filter approach to non-stationary Bayesian estimation of reduced-order vocal fold model parameters.

    PubMed

    Hadwin, Paul J; Peterson, Sean D

    2017-04-01

    The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when greater than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
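    A minimal scalar EKF sketch (a toy model, not the vocal fold model): a slowly drifting parameter observed through a nonlinear measurement is tracked by linearizing the measurement function around the current estimate at each step:

```python
import math
import random

random.seed(0)
theta_true, theta_hat, P = 2.0, 1.0, 1.0   # true value, estimate, covariance
Q, R = 1e-4, 0.05 ** 2                     # process and measurement noise variances

for _ in range(200):
    theta_true += random.gauss(0.0, math.sqrt(Q))          # slow random-walk drift
    y = theta_true ** 2 + random.gauss(0.0, math.sqrt(R))  # nonlinear measurement
    P += Q                                  # predict step (random-walk model)
    H = 2 * theta_hat                       # linearize h(theta) = theta**2
    K = P * H / (H * P * H + R)             # Kalman gain
    theta_hat += K * (y - theta_hat ** 2)   # measurement update of the estimate
    P *= (1 - K * H)                        # measurement update of the covariance

err = abs(theta_hat - theta_true)
```

    Unlike a particle filter, each step is a handful of scalar operations, which is the source of the computational savings the abstract reports.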

  7. A method for reducing the order of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Masri, S. F.; Miller, R. K.; Sassi, H.; Caughey, T. K.

    1984-06-01

    An approximate method that uses conventional condensation techniques for linear systems together with the nonparametric identification of the reduced-order model generalized nonlinear restoring forces is presented for reducing the order of discrete multidegree-of-freedom dynamic systems that possess arbitrary nonlinear characteristics. The utility of the proposed method is demonstrated by considering a redundant three-dimensional finite-element model half of whose elements incorporate hysteretic properties. A nonlinear reduced-order model, of one-third the order of the original model, is developed on the basis of wideband stationary random excitation and the validity of the reduced-order model is subsequently demonstrated by its ability to predict with adequate accuracy the transient response of the original nonlinear model under a different nonstationary random excitation.

  8. An adaptive technique for a redundant-sensor navigation system.

    NASA Technical Reports Server (NTRS)

    Chien, T.-T.

    1972-01-01

    An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with the capability to utilize its full potential in reliability and performance. This adaptive system is structured as a multistage stochastic process of detection, identification, and compensation. It is shown that the detection system can be effectively constructed on the basis of a design value, specified by mission requirements, of the unknown parameter in the actual system, and of a degradation mode in the form of a constant bias jump. A suboptimal detection system based on Wald's sequential analysis is developed using the concepts of information value and information feedback. The developed system is easily implemented and demonstrates performance remarkably close to that of the optimal nonlinear detection system. An invariant transformation is derived to eliminate the effect of nuisance parameters, such that the ambiguous identification system can be reduced to a set of disjoint simple hypothesis tests. By applying a technique of decoupled bias estimation in the compensation system, the adaptive system can be operated without any complicated reorganization.

  9. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, an SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to evaluate multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions together with 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid).
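
    The 1D Lagrange-type functions at the heart of such interpolants are easy to sketch. The following is a minimal illustration (our own helper, not the authors' code) of cardinal Lagrange functions built on Chebyshev nodes:

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Evaluate the 1D Lagrange cardinal functions l_j(x): l_j(node_m) = delta_jm."""
    nodes = np.asarray(nodes, dtype=float)
    L = np.ones(len(nodes))
    for j in range(len(nodes)):
        for m in range(len(nodes)):
            if m != j:
                L[j] *= (x - nodes[m]) / (nodes[j] - nodes[m])
    return L

# An interpolant of f is sum_j f(node_j) * l_j(x); with 4 Chebyshev nodes it
# reproduces any cubic exactly, e.g. f(x) = x**3 at x = 0.3.
nodes = np.cos(np.pi * (2 * np.arange(4) + 1) / 8)  # Chebyshev nodes on [-1, 1]
interp = (nodes ** 3) @ lagrange_basis(nodes, 0.3)
```

    A Smolyak interpolant combines tensor products of such 1D rules across dimensions, keeping only a sparse subset of the products.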

  10. Rerating the Movie Scores in Douban through Word Embedding

    NASA Astrophysics Data System (ADS)

    Cui, Mingyu

    2018-04-01

    Movie scores on social networking websites such as IMDb, Rotten Tomatoes, and Douban are important references for evaluating movies, and they often directly influence box-office performance. However, public ratings carry strong biases that depend on movie type, release time, and the age and background of the audience. Correcting this bias to give a movie a fair judgement is an important problem. In this paper, we focus on movie scores on Douban, one of the most popular Chinese movie network communities. We decompose each movie score into two parts: a basis score determined by the basic properties of the movie, and an extra score representing its excess value. We use a word-embedding technique to map the movies into a small, dense subspace; then, in the reduced subspace, we use the k-means method to assign similar movies a common basis score.
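
    As a rough sketch of the scoring idea (with synthetic data standing in for real Douban embeddings), one can cluster movie embeddings with k-means and use each cluster's mean rating as the basis score:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy "movie embeddings": two well-separated groups in a 2D reduced subspace.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
raw = np.concatenate([np.full(10, 6.0), np.full(10, 8.0)])

labels = kmeans(emb, 2)
# Basis score of a movie = mean raw score of its cluster of similar movies;
# the extra score is the residual "excess value".
basis = np.array([raw[labels == l].mean() for l in labels])
extra = raw - basis
```

    With real data the embedding would come from word-embedding vectors of movie metadata rather than random points.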

  11. Advances in dental local anesthesia techniques and devices: An update

    PubMed Central

    Saxena, Payal; Gupta, Saurabh K.; Newaskar, Vilas; Chandra, Anil

    2013-01-01

    Although local anesthesia remains the backbone of pain control in dentistry, research continues to seek new and better means of managing pain. Most of this research focuses on improving anesthetic agents, delivery devices, and the techniques involved. Newer technologies have been developed that can assist the dentist in providing enhanced pain relief with reduced injection pain and fewer adverse effects. This overview acquaints practicing dentists with newer devices and methods of pain control, comparing them with earlier ones on the basis of available research and clinical studies. PMID:24163548

  12. A new operational approach for solving fractional variational problems depending on indefinite integrals

    NASA Astrophysics Data System (ADS)

    Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.

    2018-04-01

    In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals with a type of fixed Riemann-Liouville fractional integral. The proposed technique uses shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency, and applicability of the proposed algorithm.
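
    A minimal sketch of the basis functions involved: shifted Chebyshev polynomials T*_k on [0, 1], generated by the standard three-term recurrence (our own helper, not the authors' implementation):

```python
import numpy as np

def shifted_chebyshev(n, x):
    """Evaluate T*_0..T*_n at x in [0, 1], where T*_k(x) = T_k(2x - 1),
    via the recurrence T*_{k+1}(x) = 2(2x - 1) T*_k(x) - T*_{k-1}(x)."""
    u = 2.0 * x - 1.0
    T = [1.0, u]
    for k in range(1, n):
        T.append(2.0 * u * T[k] - T[k - 1])
    return np.array(T[: n + 1])

# Closed form for a quick check: T*_2(x) = 8x^2 - 8x + 1.
vals = shifted_chebyshev(3, 0.25)
```

    Operational-matrix methods expand the unknown solution in such a basis, so the fractional integral acts as a matrix on the expansion coefficients.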

  13. Assessment of multireference approaches to explicitly correlated full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersten, J. A. F., E-mail: jennifer.kersten@cantab.net; Alavi, Ali, E-mail: a.alavi@fkf.mpg.de; Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart

    2016-08-07

    The Full Configuration Interaction Quantum Monte Carlo (FCIQMC) method has proved able to provide near-exact solutions to the electronic Schrödinger equation within a finite orbital basis set, without relying on an expansion about a reference state. A drawback of the approach, however, is that because it is based on an expansion in Slater determinants, the FCIQMC method suffers from a basis set incompleteness error that decays very slowly with the size of the employed single-particle basis. FCIQMC results obtained in a small basis set can be improved significantly with explicitly correlated techniques. Here, we present a study that assesses and compares two contrasting “universal” explicitly correlated approaches that fit into the FCIQMC framework: the [2]_R12 method of Kong and Valeev [J. Chem. Phys. 135, 214105 (2011)] and the explicitly correlated canonical transcorrelation approach of Yanai and Shiozaki [J. Chem. Phys. 136, 084107 (2012)]. The former is an a posteriori internally contracted perturbative approach, while the latter transforms the Hamiltonian prior to the FCIQMC simulation. These comparisons are made across the 55 molecules of the G1 standard set. We found that both methods consistently reduce the basis set incompleteness error, yielding accurate atomization energies in small basis sets and reducing the error from 28 mE_h to 3-4 mE_h. While many of the conclusions hold in general for any combination of multireference approaches with these methodologies, we also consider FCIQMC-specific advantages of each approach.

  14. Time Domain Propagation of Quantum and Classical Systems using a Wavelet Basis Set Method

    NASA Astrophysics Data System (ADS)

    Lombardini, Richard; Nowara, Ewa; Johnson, Bruce

    2015-03-01

    The use of an orthogonal wavelet basis set (Optimized Maximum-N Generalized Coiflets) to effectively model physical systems in the time domain, in particular the electromagnetic (EM) pulse and the quantum mechanical (QM) wavefunction, is examined in this work. Although past research has demonstrated the benefits of wavelet basis sets for computationally expensive problems due to their multiresolution properties, the overlapping supports of neighboring wavelet basis functions pose problems when dealing with boundary conditions, especially at material interfaces in the EM case. Specifically, this talk addresses the issue using the idea of derivative matching with fictitious grid points (T.A. Driscoll and B. Fornberg), but replaces the latter element with fictitious wavelet projections in conjunction with wavelet reconstruction filters. Two-dimensional (2D) systems are analyzed, an EM pulse incident on silver cylinders and a QM electron wave packet circling the proton in a hydrogen atom system (reduced to 2D), and the new wavelet method is compared to the popular finite-difference time-domain technique.

  15. Drude conductivity exhibited by chemically synthesized reduced graphene oxide

    NASA Astrophysics Data System (ADS)

    Younas, Daniyal; Javed, Qurat-ul-Ain; Fatima, Sabeen; Kalsoom, Riffat; Abbas, Hussain; Khan, Yaqoob

    2017-09-01

    Electrical conductance in graphene layers, which exhibits a Drude-like response due to massless Dirac fermions, has been well explained both theoretically and experimentally. In this paper, the Drude-like electrical conductivity response of reduced graphene oxide synthesized by a chemical route is presented. A method slightly different from conventional ones is used to synthesize graphene oxide, which is then converted to reduced graphene oxide. Various analytic techniques were employed to verify the successful oxidation and reduction steps in the process, and were also used to measure parameters such as layer thickness and conductivity. The obtained graphene oxide has very thin layers, around 13 nm thick on average, and the reduced graphene oxide has an average thickness below 20 nm. The conductivity of the reduced graphene oxide was observed to have a Drude-like response, which is explained on the basis of the Drude model for conductors.

  16. Evolutionary optimization of radial basis function classifiers for data mining applications.

    PubMed

    Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard

    2005-10-01

    In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.
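
    The "hybrid training" of RBF networks mentioned above typically means fixing the centers and widths, then solving the output weights by linear least squares. A small self-contained sketch (toy XOR data, not the paper's evolutionary algorithm):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activation matrix: Phi[i, j] = exp(-|x_i - c_j|^2 / (2 w^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# XOR-like toy classification problem with one center per training point.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])
Phi = rbf_design(X, X, width=0.5)
# Hybrid step: centers/widths held fixed, output weights from least squares.
w = np.linalg.lstsq(Phi, y, rcond=None)[0]
pred = Phi @ w
```

    In the paper's setting, the EA would additionally select which input features and centers to keep; the least-squares step makes each candidate cheap to evaluate.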

  17. [The essence of Professor Wu Lian-Zhong's acupuncture manipulation].

    PubMed

    Liu, Jing; Guo, Yi; Wu, Lian-Zhong

    2014-05-01

    The painless needle insertion technique, summarized by Professor WU Lian-zhong during his decades of acupuncture clinical practice is introduced in this article, which is characterized as soft, flexible, fast, plucking and activating antipathogenic qi. The Sancai (three layers) lifting and thrusting manipulation technique is adopted by Professor WU for getting the qi sensation. And features of 10 kinds of needling sensation such as soreness, numbness, heaviness, distension, pain, cold, hot, radiation, jumping and contracture are summarized. Finger force, amplitude, speed and time length are also taken as the basis of reinforcing and reducing manipulations. Moreover, examples are also given to explain the needling technique on some specific points which further embodies Professor WU's unique experiences and understandings on acupuncture.

  18. A sparse representation of gravitational waves from precessing compact binaries

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Szilagyi, Bela; Galley, Chad; Tiglio, Manuel

    2014-03-01

    With the advanced generation of gravitational wave detectors coming online in the near future, there is a need for accurate models of the gravitational waveforms emitted by binary neutron stars and/or black holes. Post-Newtonian approximations work well for the early inspiral, and there are models covering the late inspiral as well as merger and ringdown for the non-precessing case. While numerical relativity simulations have no difficulty with precession and can now provide accurate waveforms for a broad range of parameters, covering the 7-dimensional precessing parameter space with ~10^7 simulations is not feasible. There is still hope, as reduced order modelling techniques have been highly successful in reducing the impact of the curse of dimensionality for lower dimensional cases. We construct a reduced basis of Post-Newtonian waveforms for the full parameter space with mass ratios up to 10 and spins up to 0.9, and find that for the last 100 orbits only ~50 waveforms are needed. The huge compression relies heavily on a reparametrization which seeks to reduce the non-linearity of the waveforms. We also show that the addition of merger and ringdown only mildly increases the size of the basis.
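
    The reduced-basis construction can be illustrated with a greedy sweep over a snapshot set. The sketch below (synthetic signals standing in for actual waveforms) selects snapshots until the worst projection error falls below a tolerance:

```python
import numpy as np

def greedy_reduced_basis(snapshots, tol=1e-8):
    """Greedily add the worst-represented snapshot until every snapshot is
    captured by the span of the (orthonormalized) selected ones."""
    residual = np.array(snapshots, dtype=float)
    basis = []
    while True:
        errs = (residual ** 2).sum(axis=1)
        i = int(np.argmax(errs))
        if errs[i] < tol:
            break
        v = residual[i] / np.linalg.norm(residual[i])
        basis.append(v)
        residual -= np.outer(residual @ v, v)  # deflate the new direction
    return np.array(basis)

# Synthetic "waveform" family lying in a hidden 2D span: a*sin(t) + b*cos(t).
t = np.linspace(0.0, 2.0 * np.pi, 200)
rng = np.random.default_rng(0)
snaps = np.array([a * np.sin(t) + b * np.cos(t) for a, b in rng.normal(size=(20, 2))])
B = greedy_reduced_basis(snaps)  # 20 snapshots compress to a 2-element basis
```

    The dramatic compression reported in the abstract reflects the same phenomenon: the waveform family, suitably reparametrized, spans a space of far lower dimension than the number of parameter samples.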

  19. Groebner Basis Methods for Stationary Solutions of a Low-Dimensional Model for a Shear Flow

    NASA Astrophysics Data System (ADS)

    Pausch, Marina; Grossmann, Florian; Eckhardt, Bruno; Romanovski, Valery G.

    2014-10-01

    We use Groebner basis methods to extract all stationary solutions for the nine-mode shear flow model described in Moehlis et al. (New J Phys 6:56, 2004). Using rational approximations to irrational wave numbers and algebraic manipulation techniques we reduce the problem of determining all stationary states to finding roots of a polynomial of order 30. The coefficients differ by 30 powers of 10, so that algorithms for extended precision are needed to extract the roots reliably. We find that there are eight stationary solutions consisting of two distinct states, each of which appears in four symmetry-related phases. We discuss extensions of these results for other flows.
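
    The elimination step can be reproduced in miniature with a computer algebra system. With SymPy, a lexicographic Groebner basis reduces a toy two-equation stationarity system to one univariate polynomial (this illustrates the method, not the nine-mode model itself):

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# Toy stationarity conditions: intersect the unit circle with the line x = y.
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
# Lex order eliminates x: the basis contains a polynomial in y alone,
# whose roots give all stationary states.
univariate = [g for g in G.exprs if not g.has(x)]
```

    For the nine-mode model the same triangularization yields the order-30 polynomial mentioned above, whose roots must then be extracted with extended-precision arithmetic.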

  20. A review on acidifying treatments for vegetable canned food.

    PubMed

    Derossi, A; Fiore, A G; De Pilli, T; Severini, C

    2011-12-01

    As is well known, pasteurization treatments are not sufficient to destroy heat-resistant spore-forming microorganisms, whose germination and growth must instead be prevented by reducing pH. The acidification process has thus become one of the most important pre-treatments in the canning industry. It is commonly applied before pasteurization with the purpose of inhibiting spore germination and reducing the heat resistance of microorganisms, thereby allowing the time or temperature of the heat treatment to be reduced. Several techniques are available for reducing the pH of vegetables, but their application is not easy to plan. Often, industries define operating conditions only on the basis of empirical experience, thus increasing the risk of microbial growth or of imparting an unpleasant sour taste. With the aim of highlighting the correct planning and management of acidification treatments to reach safety without degrading the quality of canned fruit and vegetables, the topics reviewed and discussed here are the effects of low pH on the heat resistance of the most important microorganisms, acidification techniques and significant process variables, the effect of low pH on sensorial properties, and future trends.

  1. Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)

    1998-01-01

    The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results across all extrapolation techniques. In the variable-alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple- and quadruple-zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable when basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
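
    A common member of the family of extrapolation procedures discussed is the two-point inverse-power formula E(X) = E_CBS + A/X^alpha. A minimal sketch (generic textbook formula, not necessarily one of the paper's six variants):

```python
def cbs_two_point(e_lo, e_hi, x_lo, x_hi, alpha=3.0):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A * X**(-alpha) for cardinal number X."""
    a = (e_lo - e_hi) / (x_lo ** -alpha - x_hi ** -alpha)
    return e_lo - a * x_lo ** -alpha

# Synthetic check: energies generated exactly from E_CBS = -1.0, A = 0.05,
# using cardinal numbers X = 3 (triple zeta) and X = 4 (quadruple zeta).
e = lambda X: -1.0 + 0.05 * X ** -3.0
e_cbs = cbs_two_point(e(3), e(4), 3, 4)
```

    The variable-alpha approach mentioned above instead fits alpha from three or more points; the paper's caution applies when that fitted alpha drifts far from the nominal value of 3.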

  2. Basis function models for animal movement

    USGS Publications Warehouse

    Hooten, Mevin B.; Johnson, Devin S.

    2017-01-01

    Advances in satellite-based data collection techniques have served as a catalyst for new statistical methodology to analyze these data. In wildlife ecological studies, satellite-based data and methodology have provided a wealth of information about animal space use and the investigation of individual-based animal–environment relationships. With the technology for data collection improving dramatically over time, we are left with massive archives of historical animal telemetry data of varying quality. While many contemporary statistical approaches for inferring movement behavior are specified in discrete time, we develop a flexible continuous-time stochastic integral equation framework that is amenable to reduced-rank second-order covariance parameterizations. We demonstrate how the associated first-order basis functions can be constructed to mimic behavioral characteristics in realistic trajectory processes using telemetry data from mule deer and mountain lion individuals in western North America. Our approach is parallelizable and provides inference for heterogenous trajectories using nonstationary spatial modeling techniques that are feasible for large telemetry datasets. Supplementary materials for this article are available online.

  3. Model Order Reduction for the fast solution of 3D Stokes problems and its application in geophysical inversion

    NASA Astrophysics Data System (ADS)

    Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro

    2017-04-01

    The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals of modern lithospheric research. These data are essential for building models of Earth's evolution and for reproducing many geophysical observables (e.g. elevation, gravity anomalies, travel time data, heat flow, etc.), as well as for understanding the relationships between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, requires the solution of many direct models. The main objective of this work is to improve existing inversion techniques in their estimation of elevation (topography) by including a dynamic component arising from sub-lithospheric mantle flow. To do so, we implement an efficient Reduced Order Method (ROM) built upon classic finite elements. ROM significantly reduces the computational cost of solving a family of problems, for example all the direct models required in the solution of the inverse problem. The strategy of the method is to create a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. To test the reduced basis approach, we implemented the method in a 3D domain reproducing a portion of the Earth down to 400 km depth. Within the domain, the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) are created by varying viscosities and densities in a similar way as would happen in an inversion problem. The reduced basis method proves to be an extremely efficient solver for the Stokes equation in this context.

  4. Correction of energy-dependent systematic errors in dual-energy X-ray CT using a basis material coefficients transformation method

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Liew, S. C.; Hasegawa, B. H.

    1997-12-01

    Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high atomic number element (e.g., iodine from radiographic contrast media) is present in the body. The errors were reduced when the basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients are first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients are then numerically transformed to those for a more desirable set of materials. The transformation is done at the energies of the low- and high-energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as the desired basis set, computer simulation results showed that an accuracy of better than 2% can be achieved even when iodine is present in the body at a concentration as high as 10% by mass. Simulation work has also been carried out on a more inhomogeneous 2D thorax phantom derived from the 3D MCAT phantom. The quantitative accuracy results are presented here.
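
    The coefficient transformation amounts to matching attenuation at the two energy windows and solving a 2x2 linear system per voxel. A sketch with made-up attenuation values (real ones would come from calibration or tabulated data):

```python
import numpy as np

# Hypothetical attenuation values of each basis material at the low- and
# high-energy windows (illustrative numbers only).
mu_acrylic = np.array([0.25, 0.20])
mu_aluminum = np.array([0.80, 0.50])
mu_iodine_water = np.array([1.50, 0.60])

# A voxel reconstructed in the calibration (acrylic, aluminum) basis:
a = np.array([0.7, 0.3])
mu_voxel = a[0] * mu_acrylic + a[1] * mu_aluminum  # attenuation at both windows

# Transform to the desired (acrylic, iodine-water) basis by matching the
# attenuation at the two energy windows: a 2x2 linear solve.
M = np.column_stack([mu_acrylic, mu_iodine_water])
b = np.linalg.solve(M, mu_voxel)
mu_check = b[0] * mu_acrylic + b[1] * mu_iodine_water
```

    By construction the transformed coefficients reproduce the voxel's attenuation at both energies, which is why the desired-basis representation can be recovered without re-reconstructing the images.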

  5. Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics

    DOE PAGES

    Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul

    2015-03-11

    Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's “Lagrangian ingredients” (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.

  7. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.
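
    The Padua points themselves are simple to generate; a sketch of the first family on [-1, 1]^2 (our own helper):

```python
import numpy as np

def padua_points(n):
    """First-family Padua points of degree n on [-1, 1]^2:
    (cos(j*pi/n), cos(k*pi/(n+1))) for j = 0..n, k = 0..n+1 with j + k even."""
    return np.array([(np.cos(j * np.pi / n), np.cos(k * np.pi / (n + 1)))
                     for j in range(n + 1) for k in range(n + 2)
                     if (j + k) % 2 == 0])

# Degree n gives (n + 1)(n + 2) / 2 points, matching the dimension of the
# space of bivariate polynomials of total degree <= n.
P = padua_points(4)
```

    Their optimality for 2D polynomial interpolation is what lets the tomography scheme reconstruct the phase-space function from comparatively few measurement settings.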

  8. Effectiveness of a dental gel to reduce plaque in beagle dogs.

    PubMed

    Hennet, Philippe

    2002-03-01

    Tooth brushing is considered a superior technique for reducing plaque accumulation. Chemical agents may be used to reduce plaque accumulation on tooth surfaces since many owners may not be willing or able to brush their dog's teeth. Following a professional teeth cleaning procedure, a dental gel containing chlorhexidine was applied in 11 dogs BID for 7-days, while 11 other dogs received a control dental gel applied in the same manner. Dogs in the treatment group had significantly less plaque accumulation during the trial period compared with dogs in the control group. The dental gel applied in the study reported here decreases plaque accumulation in the short-term and may be beneficial in reducing the severity of gingivitis and associated periodontal disease if provided on a long-term basis.

  9. Prevention of the Posttraumatic Fibrotic Response in Joints

    DTIC Science & Technology

    2015-10-01

    surgical procedures and subsequent collection of tissues have been developed and are currently used on a regular basis. Major Task 4: Evaluating the...needed to evaluate the utility of the inhibitory antibody to reduce the flexion contracture of injured knee joints. The employed techniques include...second surgery to remove a pin, and it did not change by the end of the 32nd week 1. Major Task 5: Task 4. Data analysis and statistical evaluation

  10. Application of Non-destructive Methods of Stress-strain State at Hazardous Production Facilities

    NASA Astrophysics Data System (ADS)

    Shram, V.; Kravtsova, Ye; Selsky, A.; Bezborodov, Yu; Lysyannikova, N.; Lysyannikov, A.

    2016-06-01

    The paper deals with the sources of accidents in distillation columns, on the basis of which the most dangerous defects are identified. An analysis of the currently existing methods of non-destructive testing of the stress-strain state is performed. It is proposed to apply strain and acoustic emission techniques to continuously monitor hazardous objects, which helps prevent accidents and reduces the required work.

  11. [Application of rational ant colony optimization to improve the reproducibility degree of laser three-dimensional copy].

    PubMed

    Cui, Xiao-Yan; Huo, Zhong-Gang; Xin, Zhong-Hua; Tian, Xiao; Zhang, Xiao-Dong

    2013-07-01

    Three-dimensional (3D) copying of artificial ears and pistol printing are pushing the laser three-dimensional copying technique to a new level. Laser three-dimensional scanning is a fresh field in laser applications and plays an irreplaceable part in three-dimensional copying; its accuracy is the highest among all present copying techniques. Reproducibility degree marks the agreement of the copied object with the original object in geometry, and is the most important performance index in the laser three-dimensional copying technique. In the present paper, the error of laser three-dimensional copying is analyzed. The conclusion is that the processing of the point cloud from laser scanning is the key to reducing the error and increasing the reproducibility degree. The main innovation of this paper is as follows: on the basis of traditional ant colony optimization, the rational ant colony optimization algorithm proposed by the authors was applied to laser three-dimensional copying as a new algorithm and put into practice. Compared with the customary algorithm, rational ant colony optimization shows distinct advantages in the data processing of laser three-dimensional copying, reducing the error and increasing the reproducibility degree of the copy.

  12. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
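
    At its core, the GFR posterior resembles standard Gaussian-process regression. The following simplified sketch (squared-exponential kernel, single fidelity, our own notation, not the paper's multi-fidelity formulation) shows how a posterior mean and variance are formed from training outputs:

```python
import numpy as np

def gp_predict(X, y, Xs, length=0.3, noise=1e-8):
    """GP posterior mean/variance at test points Xs (squared-exponential kernel)."""
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    # Prior variance (1.0 for this kernel) minus the explained part.
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Train on a cheap model's outputs, then predict at a new input with uncertainty.
X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X)
mean, var = gp_predict(X, y, np.array([0.5]))
```

    In the paper's setting, the training outputs would come from reduced-basis solves of the low-fidelity model, and the greedy algorithm would choose where to place the high-fidelity training inputs.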

  13. Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...

    2016-08-04

    This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.
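
    The hyperspherical coordinate representation underpinning the framework is the standard map from R^d to a radius and d-1 angles. A round-trip sketch (our own helper functions):

```python
import numpy as np

def to_hyperspherical(x):
    """Map x in R^d (d >= 2) to (r, theta_1..theta_{d-1})."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    thetas = [np.arccos(x[i] / np.linalg.norm(x[i:])) for i in range(len(x) - 2)]
    thetas.append(np.arctan2(x[-1], x[-2]))  # final angle covers the full circle
    return r, np.array(thetas)

def from_hyperspherical(r, thetas):
    """Inverse map: x_i = r * cos(theta_i) * prod_{m<i} sin(theta_m)."""
    x = np.empty(len(thetas) + 1)
    s = float(r)
    for i, th in enumerate(thetas):
        x[i] = s * np.cos(th)
        s *= np.sin(th)
    x[-1] = s
    return x

# Round trip through the transform used to re-parameterize the surface.
v = np.array([1.0, 2.0, 2.0, 4.0])
r, th = to_hyperspherical(v)
back = from_hyperspherical(r, th)
```

    Representing a (star-shaped) discontinuity surface as the radius r as a function of the angles turns a d-dimensional detection problem into a smooth function of d-1 angular variables.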

  14. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  15. Application effectiveness of the microtremor survey method in the exploration of geothermal resources

    NASA Astrophysics Data System (ADS)

    Tian, Baoqing; Xu, Peifen; Ling, Suqun; Du, Jianguo; Xu, Xueqiu; Pang, Zhonghe

    2017-10-01

    Geophysical techniques are critical tools of geothermal resource surveys. In recent years, the microtremor survey method, which has two branch techniques (the microtremor sounding technique and the two-dimensional (2D) microtremor profiling technique), has become a common method for geothermal resource exploration. The results of microtremor surveys provide important deep information for probing the structures of geothermal storing basins and researching the heat-controlling structures, as well as providing the basis for positioning geothermal wells. In this paper, the southern Jiangsu geothermal resources area is taken as a study example. By comparing the results of microtremor surveys with drilling conclusions, and analyzing microtremor survey effectiveness and geological and technical factors such as observation radius and sampling frequency, we study the applicability of the microtremor survey method and the optimal way of working with this method to achieve better detection results. A comparative study of survey results and geothermal drilling results shows that the microtremor sounding technique effectively distinguishes sub-layers and determines the depth of geothermal reservoirs in areas with excellent layer conditions. The error in depth is generally no more than 8% compared with the results of drilling, and greater depths can be reached by adjusting the size of the probing radius. The 2D microtremor profiling technique accurately delineates buried structures, which appear as low-velocity anomalies in the apparent S-wave velocity profile. Such anomalies are the critical signature used by the 2D microtremor profiling technique to distinguish and explain buried geothermal structures. 2D microtremor profiling results provide an important basis for precisely siting geothermal wells and reducing the risk of drilling dry wells.

  16. Reduced Order Modeling of Combustion Instability in a Gas Turbine Model Combustor

    NASA Astrophysics Data System (ADS)

    Arnold-Medabalimi, Nicholas; Huang, Cheng; Duraisamy, Karthik

    2017-11-01

    Hydrocarbon-fuel-based propulsion systems are expected to remain relevant in aerospace vehicles for the foreseeable future. Design of these devices is complicated by combustion instabilities. The capability to model and predict these effects at reduced computational cost is a requirement for both design and control of these devices. This work focuses on computational studies of a dual-swirl model gas turbine combustor in the context of reduced-order model development. Full-fidelity simulations are performed utilizing URANS and hybrid RANS-LES with finite-rate chemistry. Data decomposition techniques are then used to extract a reduced-basis representation of the unsteady flow field. These bases are first used to identify sensor locations to guide experimental interrogations and controller feedback. Following this, initial results on developing a control-oriented reduced-order model (ROM) will be presented. The capability of the ROM will be further assessed for different operating conditions and geometric configurations.
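The reduced-basis extraction step described above is commonly done with proper orthogonal decomposition (POD) via an SVD of a snapshot matrix. The following sketch uses synthetic snapshots with three planted coherent modes rather than simulation data, and the 99.9% energy threshold is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: 200 spatial points x 40 time snapshots of an
# unsteady field that is (by construction) dominated by 3 coherent modes.
n, m, k = 200, 40, 3
modes = np.linalg.qr(rng.standard_normal((n, k)))[0]
amps = rng.standard_normal((k, m))
snapshots = modes @ amps + 1e-3 * rng.standard_normal((n, m))

# POD: the left singular vectors of the snapshot matrix form the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1   # keep 99.9% of snapshot energy
basis = U[:, :r]

# Project a snapshot onto the reduced basis and measure reconstruction error.
q = snapshots[:, 0]
q_hat = basis @ (basis.T @ q)
print(r, float(np.linalg.norm(q - q_hat) / np.linalg.norm(q)))
```

The retained basis size r adapts to the actual rank of the dynamics; the same columns of `basis` could then serve as candidate sensor locations (rows with large mode amplitude) or as the trial space of a control-oriented ROM.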

  17. Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts

    NASA Astrophysics Data System (ADS)

    hong, Zhou; Wenhua, Lu

    2017-01-01

    Augmented reality technology is introduced into the maintenance field to strengthen information in real-world scenarios through the integration of virtual assistant maintenance information with real-world scenes. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. The architecture of an augmented reality virtual maintenance guiding system is proposed on the basis of the definition of augmented reality and an analysis of the characteristics of augmented reality virtual maintenance. The key techniques involved, such as standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information, and virtual maintenance man-machine interaction, are elaborated in detail, and solutions are given.

  18. A parametric model order reduction technique for poroelastic finite element models.

    PubMed

    Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico

    2017-10-01

    This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.

  19. Resonant fiber optic gyro based on a sinusoidal wave modulation and square wave demodulation technique.

    PubMed

    Wang, Linglan; Yan, Yuchao; Ma, Huilian; Jin, Zhonghe

    2016-04-20

    New developments are made in the resonant fiber optic gyro (RFOG), which is an optical sensor for the measurement of rotation rate. The digital signal processing system based on the phase modulation technique is capable of detecting the weak frequency difference induced by the Sagnac effect and suppressing the reciprocal noise in the circuit, which determines the detection sensitivity of the RFOG. A new technique based on the sinusoidal wave modulation and square wave demodulation is implemented, and the demodulation curve of the system is simulated and measured. Compared with the past technique using sinusoidal modulation and demodulation, it increases the slope of the demodulation curve by a factor of 1.56, improves the spectrum efficiency of the modulated signal, and reduces the occupancy of the field-programmable gate array resource. On the basis of this new phase modulation technique, the loop is successfully locked and achieves a short-term bias stability of 1.08°/h, which is improved by a factor of 1.47.

  20. One-step sub-10 μm patterning of carbon-nanotube thin films for transparent conductor applications.

    PubMed

    Fukaya, Norihiro; Kim, Dong Young; Kishimoto, Shigeru; Noda, Suguru; Ohno, Yutaka

    2014-04-22

    We propose a technique for one-step micropatterning of as-grown carbon-nanotube films on a plastic substrate with sub-10 μm resolution on the basis of the dry transfer process. By utilizing this technique, we demonstrated a novel high-performance flexible carbon-nanotube transparent conductive film with a microgrid structure, which overcomes the trade-off between the sheet resistance and transmittance of a conventional uniform carbon-nanotube film. The sheet resistance was reduced by up to 46% by adding the microgrid, leading to a value of 53 Ω/sq at a transmittance of 80%. We also demonstrated easy fabrication of multitouch projected capacitive sensors with 12 × 12 electrodes. The technique is quite promising for energy-saving production of transparent conductor devices with 100% material utilization.

  1. Behavioural cues of reproductive status in seahorses Hippocampus abdominalis.

    PubMed

    Whittington, C M; Musolf, K; Sommer, S; Wilson, A B

    2013-07-01

    A method is described to assess the reproductive status of male Hippocampus abdominalis on the basis of behavioural traits. The non-invasive nature of this technique minimizes handling stress and reduces sampling requirements for experimental work. It represents a useful tool to assist researchers in sample collection for studies of reproduction and development in viviparous syngnathids, which are emerging as important model species. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.

  2. Initial postbuckling analysis of elastoplastic thin-shear structures

    NASA Technical Reports Server (NTRS)

    Carnoy, E. G.; Panosyan, G.

    1984-01-01

    The design of thin shell structures with respect to elastoplastic buckling requires an extended analysis of the influence of initial imperfections. For conservative design, the most critical defect should be assumed with the maximum allowable magnitude. This defect is closely related to the initial postbuckling behavior. An algorithm is given for the quasi-static analysis of the postbuckling behavior of structures that exhibit multiple buckling points. The algorithm, based upon an energy criterion, allows the computation of the critical perturbation which will be employed for the definition of the critical defect. For computational efficiency, the algorithm uses the reduced basis technique with automatic update of the modal basis. The method is applied to the axisymmetric buckling of cylindrical shells under axial compression, and conclusions are given for future research.

  3. Biased three-intensity decoy-state scheme on the measurement-device-independent quantum key distribution using heralded single-photon sources.

    PubMed

    Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin

    2018-02-19

    At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources and are limited in transmission distance under realistic experimental conditions, e.g., when finite-size-key effects are considered. Hence, in this paper, we propose a new biased decoy-state scheme using heralded single-photon sources for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both the collective-constraint and joint parameter estimation techniques. Compared with former schemes using weak coherent sources (WCS) or heralded single-photon sources (HSPS), after implementing full parameter optimizations, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.

  4. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to approximating high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots' (generated with the high-fidelity model), to project the entire set of input and state variables of the model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
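A minimal sketch of the sparse-loading idea, using a truncated power iteration (one simple SPCA variant, not necessarily the algorithm used by the authors) on synthetic data where only the first five of fifty variables drive the dominant mode:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical snapshots: 50 variables, but only the first 5 drive the
# dominant mode (the situation where sparse loadings aid interpretation).
n_samples, n_vars = 300, 50
latent = rng.standard_normal(n_samples)
X = 0.1 * rng.standard_normal((n_samples, n_vars))
X[:, :5] += np.outer(latent, np.ones(5))
X -= X.mean(axis=0)

def sparse_pc(X, k, iters=100):
    # Truncated power iteration: keep only the k largest loadings each step,
    # yielding a sparse analogue of the leading principal component.
    C = X.T @ X
    v = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    for _ in range(iters):
        v = C @ v
        keep = np.argsort(np.abs(v))[-k:]
        mask = np.zeros_like(v)
        mask[keep] = v[keep]
        v = mask / np.linalg.norm(mask)
    return v

v = sparse_pc(X, k=5)
support = np.nonzero(v)[0]
print(sorted(support))
```

A dense PCA loading vector would spread small weights over all fifty variables; the sparse variant concentrates the loadings on the few variables that actually carry the mode, which is exactly the interpretability argument made in the abstract.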

  5. A Review of Dissimilar Welding Techniques for Magnesium Alloys to Aluminum Alloys.

    PubMed

    Liu, Liming; Ren, Daxin; Liu, Fei

    2014-05-08

    Welding of dissimilar magnesium alloys and aluminum alloys is an important issue because of their increasing applications in industries. In this document, the research and progress of a variety of welding techniques for joining dissimilar Mg alloys and Al alloys are reviewed from different perspectives. Welding of dissimilar Mg and Al is challenging due to the formation of brittle intermetallic compound (IMC) such as Mg 17 Al 12 and Mg₂Al₃. In order to increase the joint strength, three main research approaches were used to eliminate or reduce the Mg-Al intermetallic reaction layer. First, solid state welding techniques which have a low welding temperature were used to reduce the IMCs. Second, IMC variety and distribution were controlled to avoid the degradation of the joining strength in fusion welding. Third, techniques which have relatively controllable reaction time and energy were used to eliminate the IMCs. Some important processing parameters and their effects on weld quality are discussed, and the microstructure and metallurgical reaction are described. Mechanical properties of welds such as hardness, tensile, shear and fatigue strength are discussed. The aim of the report is to review the recent progress in the welding of dissimilar Mg and Al to provide a basis for follow-up research.

  6. A Review of Dissimilar Welding Techniques for Magnesium Alloys to Aluminum Alloys

    PubMed Central

    Liu, Liming; Ren, Daxin; Liu, Fei

    2014-01-01

    Welding of dissimilar magnesium alloys and aluminum alloys is an important issue because of their increasing applications in industries. In this document, the research and progress of a variety of welding techniques for joining dissimilar Mg alloys and Al alloys are reviewed from different perspectives. Welding of dissimilar Mg and Al is challenging due to the formation of brittle intermetallic compound (IMC) such as Mg17Al12 and Mg2Al3. In order to increase the joint strength, three main research approaches were used to eliminate or reduce the Mg-Al intermetallic reaction layer. First, solid state welding techniques which have a low welding temperature were used to reduce the IMCs. Second, IMC variety and distribution were controlled to avoid the degradation of the joining strength in fusion welding. Third, techniques which have relatively controllable reaction time and energy were used to eliminate the IMCs. Some important processing parameters and their effects on weld quality are discussed, and the microstructure and metallurgical reaction are described. Mechanical properties of welds such as hardness, tensile, shear and fatigue strength are discussed. The aim of the report is to review the recent progress in the welding of dissimilar Mg and Al to provide a basis for follow-up research. PMID:28788646

  7. Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases

    NASA Astrophysics Data System (ADS)

    Morifuji, Masato

    2018-01-01

    We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic-structure calculations using plane wave basis functions, a large number of plane waves are often required to obtain precise results, so that, even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing the high-energy bases. We demonstrate numerically that a significant speedup in the evaluation of eigenstates is achieved without losing accuracy.
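The renormalization of high-energy bases described here is, in spirit, Löwdin partitioning: the Hamiltonian is split into low- and high-energy blocks, and the high block is folded into an energy-dependent effective Hamiltonian on the low block. A minimal sketch on a random symmetric matrix (a stand-in for a plane-wave Hamiltonian; the 50-unit energy shift and the block sizes are arbitrary choices, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Hermitian Hamiltonian: a low-energy block weakly coupled to a
# high-energy block that we want to renormalize away.
nL, nH = 4, 20
A = rng.standard_normal((nL + nH, nL + nH))
Hfull = 0.5 * (A + A.T)
Hfull[nL:, nL:] += 50.0 * np.eye(nH)      # push the high-energy bases far away

HLL = Hfull[:nL, :nL]
HLH = Hfull[:nL, nL:]
HHL = Hfull[nL:, :nL]
HHH = Hfull[nL:, nL:]

def downfold(E):
    # Loewdin partitioning: fold the high-energy block into an energy-dependent
    # effective Hamiltonian acting only on the small low-energy block.
    return HLL + HLH @ np.linalg.solve(E * np.eye(nH) - HHH, HHL)

# Self-consistency for one target state: iterate E -> lowest eig of downfold(E).
E = 0.0
for _ in range(30):
    E = np.linalg.eigvalsh(downfold(E))[0]

exact = np.linalg.eigvalsh(Hfull)[0]
print(float(E), float(exact))
```

Diagonalizing the 4x4 effective problem reproduces the lowest eigenvalue of the full 24x24 matrix, which is the sense in which the reduction loses no accuracy while shrinking the diagonalization cost.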

  8. Classification of high-resolution multi-swath hyperspectral data using Landsat 8 surface reflectance data as a calibration target and a novel histogram based unsupervised classification technique to determine natural classes from biophysically relevant fit parameters

    NASA Astrophysics Data System (ADS)

    McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.

    2016-12-01

    Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques are needed that operate on radiometrically-calibrated data and cluster on biophysical similarity rather than simply spectral similarity. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets, using the Landsat surface reflectance data product as a calibration target, was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectrum of each pixel with a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.

  9. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
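The Galerkin/LSPG distinction can be made concrete on a linear model problem with one backward-Euler step: Galerkin projects the governing equations and then discretizes in time, while LSPG minimizes the time-discrete residual over the trial subspace. Everything below (the random stable system, the POD-like basis, the step sizes) is a hypothetical illustration, not the turbulent-flow setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical full-order linear model dx/dt = A x (A stable) with a POD-like
# orthonormal trial basis V: 10 states, 2 reduced coordinates.
n, r = 10, 2
B = rng.standard_normal((n, n))
A = -(B @ B.T) / n                        # symmetric negative definite: stable
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
x0 = V @ rng.standard_normal(r)           # initial state inside the subspace

def rom_solutions(dt):
    # One backward-Euler step; full-order residual R(x) = (I - dt*A) x - x0.
    M = np.eye(n) - dt * A
    # Galerkin: project the time-continuous equations, then discretize.
    xg = np.linalg.solve(V.T @ M @ V, V.T @ x0)
    # LSPG (discrete-optimal): minimize the time-discrete residual ||M V q - x0||.
    xl = np.linalg.lstsq(M @ V, x0, rcond=None)[0]
    return xg, xl

xg, xl = rom_solutions(0.1)
xg2, xl2 = rom_solutions(1e-6)
print(np.linalg.norm(xg - xl), np.linalg.norm(xg2 - xl2))
```

At dt = 0.1 the two reduced solutions differ; shrinking the step drives M toward the identity and the two coincide, consistent with the equivalence of the two projections in the time-continuous limit discussed in the abstract.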

  10. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  11. Revision Stapedectomy with Necrosis of the Long Process of the Incus: Does the Degree of Necrosis Matter? A Retrospective Clinical Study.

    PubMed

    Ghonim, Mohamed; Shabana, Yousef; Ashraf, Bassem; Salem, Mohamed

    2017-04-01

    To discuss the different modalities for managing necrosis of the long process of the incus in revision stapedectomy on the basis of the degree of necrosis and compare the results with those reported in the literature. Thirty-six patients underwent revision stapedectomy with necrosis of the long process of the incus from 2009 to 2016. The patients were divided into three groups on the basis of the degree of necrosis. For group A (minimal necrosis), an augmentation technique with bone cement was performed. For group B (partial necrosis), the cement plug technique was performed. For group C (severe necrosis), malleus relocation with malleovestibulopexy was performed using the reshaped necrosed incus. Air and bone conduction thresholds at frequencies of 500-3000 Hz were reviewed pre- and postoperatively using conventional audiometry. The air-bone gap (ABG) and bone conduction thresholds were measured. Postoperative ABG was reduced to <10 dB in 28 cases (77.8%) and <20 dB in all cases (100%). There was no significant change in postoperative bone conduction thresholds. The mean patient follow-up duration was 23 (range, 18-36) months. The cement plug technique was used in 75% of cases. Managing necrosis of the long process of the incus in revision stapedectomy should be considered according to the degree of necrosis. The cement plug technique is considered to be a reasonable option in most cases. Malleus relocation with malleovestibulopexy is an effective alternative to prosthesis.

  12. Transesterification of Waste Activated Sludge for Biosolids Reduction and Biodiesel Production.

    PubMed

    Maeng, Min Ho; Cha, Daniel K

    2018-02-01

    Transesterification of waste activated sludge (WAS) was evaluated as a cost-effective technique to reduce excess biosolids and recover biodiesel feedstock from activated sludge treatment processes. A laboratory-scale sequencing batch reactor (SBR) was operated with recycling transesterification-treated WAS back to the aeration basin. Seventy percent recycling of WAS resulted in a 48% reduction of excess biosolids in comparison with a conventional SBR, which was operated in parallel as the control SBR. Biodiesel recovery of 8.0% (dried weight basis) was achieved at an optimum transesterification condition using acidic methanol and xylene as cosolvent. Average effluent soluble chemical oxygen demand (COD) and total suspended solids (TSS) concentrations from the test SBR and control SBR were comparable, indicating that the recycling of transesterification-treated WAS did not have detrimental effect on the effluent quality. This study demonstrated that transesterification and recycling of WAS may be a feasible technique for reducing excess biosolids, while producing valuable biodiesel feedstock from the activated sludge process.

  13. GRACE L1b inversion through a self-consistent modified radial basis function approach

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Kusche, Juergen; Rietbroek, Roelof; Eicker, Annette

    2016-04-01

    Implementing a regional geopotential representation such as mascons or, more generally, RBFs (radial basis functions) has been widely accepted as an efficient and flexible approach to recover the gravity field from GRACE (Gravity Recovery and Climate Experiment), especially at higher-latitude regions like Greenland. This is because RBFs allow for regionally specific regularizations over areas which have sufficient and dense GRACE observations. Although existing RBF solutions show a better resolution than classical spherical harmonic solutions, the applied regularizations cause spatial leakage which should be carefully dealt with. It has been shown that leakage is a main error source which leads to an evident underestimation of the yearly trend of ice melting over Greenland. Unlike popular post-processing techniques to mitigate leakage signals, this study, for the first time, attempts to reduce the leakage directly in the GRACE L1b inversion by constructing an innovative modified RBF (MRBF) basis in place of the standard RBFs to retrieve a more realistic temporal gravity signal along the coastline. Our point of departure is that the surface mass loading associated with standard RBFs is smooth but disregards physical consistency between continental mass and the passive ocean response. In this contribution, based on earlier work by Clarke et al. (2007), a physically self-consistent MRBF representation is constructed from standard RBFs with the help of the sea level equation: for a given standard RBF basis, the corresponding MRBF basis is first obtained by keeping the surface load over the continent unchanged, but imposing global mass conservation and equilibrium response of the oceans. Then, the updated set of MRBFs as well as the standard RBFs are individually employed as basis functions to determine the temporal gravity field from GRACE L1b data. In this way, in the MRBF GRACE solution, the passive (e.g. ice melting and land hydrology response) sea level is automatically separated from ocean dynamic effects, and our hypothesis is that this improves the partitioning of the GRACE signals into land and ocean contributions along the coastline. In particular, we inspect the ice melting over Greenland from real GRACE data, and we evaluate the ability of the MRBF approach to recover true mass variations along the coastline. Finally, using independent measurements from multiple techniques including GPS vertical motion and altimetry, a validation will be presented to quantify to what extent it is possible to reduce the leakage through the MRBF approach.

  14. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large-scale simulation with application to engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to, and trustworthy within, the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation; iterative solution techniques; and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly, with a pointer to an Appendix which describes, in some detail, the current "state of the art."

  15. A new technique for fire risk estimation in the wildland urban interface

    NASA Astrophysics Data System (ADS)

    Dasgupta, S.; Qu, J. J.; Hao, X.

    A novel technique based on the physical variable of pre-ignition energy is proposed for assessing fire risk in the Grassland-Urban-Interface. The physical basis lends the method a site- and season-independent applicability, as well as possibilities for computing spread rates and ignition probabilities, features that contemporary fire risk indices usually lack. The method requires estimates of grass moisture content and temperature. A constrained radiative-transfer inversion scheme on MODIS NIR-SWIR reflectances, which reduces solution ambiguity, is used for grass moisture retrieval, while MODIS land surface temperature and emissivity products are used for retrieving grass temperature. Subpixel urban contamination of the MODIS reflective and thermal signals over a Grassland-Urban-Interface pixel is corrected using periodic estimates of urban influence from high-spatial-resolution ASTER data.

  16. ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction

    PubMed Central

    Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo

    2015-01-01

    By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302
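
The search-space reduction above replaces an exhaustive 2D search with an analytically derived 1D one. For contrast, the brute-force baseline the paper improves on looks like the sketch below; the toy `odf` function is an illustrative antipodally symmetric stand-in, not a real q-ball reconstruction. Note that antipodal symmetry alone already halves the domain, since only a hemisphere needs searching.

```python
import numpy as np

def odf(u):
    """Toy antipodally symmetric function, f(u) = f(-u): even powers only."""
    return u[2] ** 2 + 0.3 * u[0] ** 4

# brute-force baseline: because f(u) = f(-u), searching the upper
# hemisphere alone suffices
best, best_u = -np.inf, None
for t in np.linspace(0.0, np.pi / 2, 101):          # polar angle
    for p in np.linspace(0.0, 2 * np.pi, 200, endpoint=False):  # azimuth
        u = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
        if odf(u) > best:
            best, best_u = odf(u), u
```

The paper's contribution is to replace this dense 2D grid with a one-dimensional curve derived analytically from the fourth-order spherical harmonic coefficients.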

  17. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
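
The split between components (i) and (iii) can be sketched for a one-parameter problem with the affine form A(mu) = A0 + mu*A1. This is a minimal numpy illustration of the offline (snapshot + basis) and online (Galerkin projection) stages, not the authors' implementation; all matrices here are an assumed toy problem.

```python
import numpy as np

def offline(A0, A1, f, mus):
    """Offline stage: solve the full problem at N sample parameters and
    orthonormalize the snapshots into a reduced basis Z."""
    snapshots = [np.linalg.solve(A0 + mu * A1, f) for mu in mus]
    Z, _ = np.linalg.qr(np.column_stack(snapshots))
    return Z

def online(A0, A1, f, Z, mu):
    """Online stage: Galerkin projection onto span(Z); the reduced solve
    is N x N, where N = Z.shape[1] is small."""
    AN = Z.T @ (A0 + mu * A1) @ Z
    fN = Z.T @ f
    return Z @ np.linalg.solve(AN, fN)   # reduced solution, lifted back

# toy symmetric coercive problem: 1D Laplacian plus a parameter-scaled term
n = 50
A0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A1 = np.eye(n)
f = np.ones(n)

Z = offline(A0, A1, f, mus=[0.1, 1.0, 10.0])
u_rb = online(A0, A1, f, Z, mu=0.5)               # new parameter value
u_full = np.linalg.solve(A0 + 0.5 * A1, f)        # full solve, for comparison
```

In a genuine off-line/on-line decomposition one would precompute Z.T @ A0 @ Z and Z.T @ A1 @ Z offline, so the online cost is independent of the full dimension n; the sketch keeps the full matrices only for brevity.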

  18. Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems

    NASA Technical Reports Server (NTRS)

    Cerro, J. A.; Scotti, S. J.

    1991-01-01

    Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently, attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.

  19. Model based Computerized Ionospheric Tomography in space and time

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2018-04-01

    Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite-receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions in both space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.

  20. Digital image classification approach for estimating forest clearing and regrowth rates and trends

    NASA Technical Reports Server (NTRS)

    Sader, Steven A.

    1987-01-01

    A technique is presented to monitor vegetation changes for a selected study area in Costa Rica. A normalized difference vegetation index was computed for three dates of Landsat satellite data, and a modified parallelepiped classifier was employed to generate a multitemporal greenness image representing all three dates. A second-generation image was created by partitioning the intensity levels at each date into high, medium, and low, thereby reducing the number of classes to 21. A sampling technique was applied to describe forest and other land cover change occurring between time periods, based on interpretation of aerial photography that closely matched the dates of satellite acquisition. Comparison of the Landsat-derived classes with the photo-interpreted sample areas can provide a basis for evaluating the satellite monitoring technique and the accuracy of estimating forest clearing and regrowth rates and trends.

  1. A four-dimensional variational chemistry data assimilation scheme for Eulerian chemistry transport modeling

    NASA Astrophysics Data System (ADS)

    Elbern, Hendrik; Schmidt, Hauke

    1999-08-01

    The inverse problem of data assimilation of tropospheric trace gas observations into an Eulerian chemistry transport model has been solved by the four-dimensional variational technique including chemical reactions, transport, and diffusion. The University of Cologne European Air Pollution Dispersion Chemistry Transport Model 2 with the Regional Acid Deposition Model 2 gas phase mechanism is taken as the basis for developing a full four-dimensional variational data assimilation package, on the basis of the adjoint model version, which includes the adjoint operators of horizontal and vertical advection, implicit vertical diffusion, and the adjoint gas phase mechanism. To assess the potential and limitations of the technique without degrading the impact of nonperfect meteorological analyses and statistically not established error covariance estimates, artificial meteorological data and observations are used. The results are presented on the basis of a suite of experiments, where reduced records of artificial "observations" are provided to the assimilation procedure, while other "data" is retained for performance control of the analysis. The paper demonstrates that the four-dimensional variational technique is applicable for a comprehensive chemistry transport model in terms of computational and storage requirements on advanced parallel platforms. It is further shown that observed species can generally be analyzed, even if the "measurements" have unbiased random errors. More challenging experiments are presented, aiming to tax the skill of the method (1) by restricting available observations mostly to surface ozone observations for a limited assimilation interval of 6 hours and (2) by starting with poorly chosen first guess values. In this first such application to a three-dimensional chemistry transport model, success was also achieved in analyzing not only observed but also chemically closely related unobserved constituents.
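
The variational principle behind 4D-Var can be illustrated on a toy linear model: minimize the misfit between a model trajectory and observations over a window, with the gradient with respect to the initial state computed by the adjoint (transposed) model. The sketch below is an illustrative reduction of the idea, not the EURAD/RADM2 system; the model, observation operator, and dimensions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_steps = 4, 8
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy linear "transport" model
H = np.eye(n)[:2]                                    # only 2 of 4 species observed

# synthetic truth and noise-free observations along the true trajectory
x_true = rng.standard_normal(n)
obs, x = [], x_true.copy()
for _ in range(n_steps):
    obs.append(H @ x)
    x = M @ x

def cost_and_grad(x0):
    """J(x0) = 1/2 * sum_k ||H x_k - y_k||^2; gradient via the adjoint M^T."""
    xs, x = [], x0.copy()
    for _ in range(n_steps):
        xs.append(x)
        x = M @ x
    J, lam = 0.0, np.zeros(n)
    for k in reversed(range(n_steps)):
        d = H @ xs[k] - obs[k]
        J += 0.5 * d @ d
        lam = M.T @ lam + H.T @ d     # backward (adjoint) sweep
    return J, lam

# steepest descent on the initial state, starting from a poor first guess
x0 = np.zeros(n)
for _ in range(2000):
    _, g = cost_and_grad(x0)
    x0 -= 0.02 * g
```

As in the paper's experiments, the unobserved components are constrained only indirectly, through the model coupling in M; a real system would add a background term and use a quasi-Newton minimizer rather than fixed-step descent.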

  2. A robust component mode synthesis method for stochastic damped vibroacoustics

    NASA Astrophysics Data System (ADS)

    Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine

    2010-01-01

    In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach is to introduce visco- and poro-elastic materials either on the structure or on cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low-frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient for predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is further enriched by the static response of residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.

  3. Atomic orbital-based SOS-MP2 with tensor hypercontraction. II. Local tensor hypercontraction

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2017-01-01

    In the first paper of the series [Paper I, C. Song and T. J. Martinez, J. Chem. Phys. 144, 174111 (2016)], we showed how tensor-hypercontracted (THC) SOS-MP2 could be accelerated by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs). This reduced the formal scaling of the SOS-MP2 energy calculation to cubic with respect to system size. The computational bottleneck then becomes the THC metric matrix inversion, which scales cubically with a large prefactor. In this work, the local THC approximation is proposed to reduce the computational cost of inverting the THC metric matrix to linear scaling with respect to molecular size. By doing so, we have removed the primary bottleneck to THC-SOS-MP2 calculations on large molecules with O(1000) atoms. The errors introduced by the local THC approximation are less than 0.6 kcal/mol for molecules with up to 200 atoms and 3300 basis functions. Together with the graphical processing unit techniques and locality-exploiting approaches introduced in previous work, the scaled opposite spin MP2 (SOS-MP2) calculations exhibit O(N^2.5) scaling in practice up to 10 000 basis functions. The new algorithms make it feasible to carry out SOS-MP2 calculations on small proteins like ubiquitin (1231 atoms/10 294 atomic basis functions) on a single node in less than a day.

  4. Atomic orbital-based SOS-MP2 with tensor hypercontraction. II. Local tensor hypercontraction.

    PubMed

    Song, Chenchen; Martínez, Todd J

    2017-01-21

    In the first paper of the series [Paper I, C. Song and T. J. Martinez, J. Chem. Phys. 144, 174111 (2016)], we showed how tensor-hypercontracted (THC) SOS-MP2 could be accelerated by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs). This reduced the formal scaling of the SOS-MP2 energy calculation to cubic with respect to system size. The computational bottleneck then becomes the THC metric matrix inversion, which scales cubically with a large prefactor. In this work, the local THC approximation is proposed to reduce the computational cost of inverting the THC metric matrix to linear scaling with respect to molecular size. By doing so, we have removed the primary bottleneck to THC-SOS-MP2 calculations on large molecules with O(1000) atoms. The errors introduced by the local THC approximation are less than 0.6 kcal/mol for molecules with up to 200 atoms and 3300 basis functions. Together with the graphical processing unit techniques and locality-exploiting approaches introduced in previous work, the scaled opposite spin MP2 (SOS-MP2) calculations exhibit O(N^2.5) scaling in practice up to 10 000 basis functions. The new algorithms make it feasible to carry out SOS-MP2 calculations on small proteins like ubiquitin (1231 atoms/10 294 atomic basis functions) on a single node in less than a day.

  5. Mechanistic investigations in sono-hybrid techniques for rice straw pretreatment.

    PubMed

    Suresh, Kelothu; Ranjan, Amrita; Singh, Shuchi; Moholkar, Vijayanand S

    2014-01-01

    This paper reports a comparative study of two chemical techniques (viz. dilute acid/alkali treatment) and two physical techniques (viz. hot water bath and autoclaving) coupled with sonication, termed sono-hybrid techniques, for hydrolysis of rice straw. The efficacy of each sono-hybrid technique was assessed on the basis of total sugar and reducing sugar release. The biomass pretreatment system is revealed to be mass-transfer controlled. Higher sugar release is obtained with dilute acid treatment than with dilute alkali treatment. Autoclaving alone was found to increase sugar release only marginally compared to a hot water bath. Sonication of the biomass solution after autoclaving and stirring resulted in a significant rise in sugar release, which is attributed to the strong convection generated during sonication that assists effective transport of sugar molecules. Discrimination between the individual contributions of ultrasound and cavitation to mass transfer enhancement reveals that the contribution of ultrasound (through micro-streaming) is higher. Micro-turbulence as well as acoustic waves generated by cavitation did not contribute much to enhancing mass transfer in the system. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Accelerated defect visualization of microelectronic systems using binary search with fixed pitch-catch distance laser ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Park, Byeongjin; Sohn, Hoon

    2018-04-01

    The practicality of laser ultrasonic scanning is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Inspired by binary search, an accelerated defect visualization technique is developed to visualize defects with a reduced scanning time. The pitch-catch distance between the excitation point and the sensing point is also fixed during scanning to maintain a high signal-to-noise ratio of the measured ultrasonic responses. The approximate defect boundary is identified by examining the interactions between ultrasonic waves and the defect observed at scanning points that are sparsely selected by a binary search algorithm. Here, a time-domain laser ultrasonic response is transformed into a spatial ultrasonic domain response using a basis pursuit approach so that the interactions between ultrasonic waves and the defect can be better identified in the spatial ultrasonic domain. Then, the area inside the identified defect boundary is visualized as the defect. The performance of the proposed defect visualization technique is validated through an experiment on a semiconductor chip. The proposed technique accelerates defect visualization in three respects: (1) the number of measurements necessary for defect visualization is dramatically reduced by the binary search algorithm; (2) the number of averages needed to achieve a high signal-to-noise ratio is reduced by keeping the wave propagation distance short; and (3) defects can be identified at a lower spatial resolution than that required by full-field wave propagation imaging.
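
The binary-search idea in point (1) can be sketched in one dimension: instead of rastering every point, bisect between a point known to be intact and a point known to lie over the defect. The predicate `is_defective` below is a hypothetical stand-in for the actual ultrasonic wave-defect interaction test.

```python
def locate_boundary(is_defective, lo, hi, tol=1e-3):
    """Bisect for the defect boundary on [lo, hi], assuming
    is_defective(lo) is False and is_defective(hi) is True.
    Each probe of is_defective replaces one physical scan point."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_defective(mid):
            hi = mid          # boundary lies in [lo, mid]
        else:
            lo = mid          # boundary lies in [mid, hi]
    return 0.5 * (lo + hi)

# toy stand-in for the measurement: the defect occupies x >= 3.2 mm
boundary = locate_boundary(lambda x: x >= 3.2, 0.0, 10.0)
```

This needs O(log(range/tol)) probes, versus O(range/tol) for a dense raster, which is the source of the claimed reduction in measurement count.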

  7. Quantitative Seismic Interpretation: Applying Rock Physics Tools to Reduce Interpretation Risk

    NASA Astrophysics Data System (ADS)

    Sondergeld, Carl H.

    This book is divided into seven chapters that cover rock physics, statistical rock physics, seismic inversion techniques, case studies, and work flows. On balance, the emphasis is on rock physics. Included are 56 color figures that greatly help in the interpretation of more complicated plots and displays.The domain of rock physics falls between petrophysics and seismics. It is the basis for interpreting seismic observations and therefore is pivotal to the understanding of this book. The first two chapters are dedicated to this topic (109 pages).

  8. A new electromagnetic NDI-technique based on the measurement of source-sample reaction forces

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, G. L.; Skaugset, R. L.; Shih, W. C. L.

    2001-04-01

    Faraday's law of induction, Lenz's law, the Lorentz force law and Newton's third law, taken together, ensure that sources (e.g., coil sources) of time-dependent electromagnetic fields and nearby "nonmagnetic" electrical conductors (e.g., aluminum) always experience mutually repulsive (source-conductor) forces. This fact forms the basis for a new method for detecting cracks and corrosion in (aging) multi-layer airframes. The presence of cracks or corrosion (e.g., material thinning) in these structures is observed to reduce (second-harmonic) source-conductor reaction forces.

  9. Towards improving the NASA standard soil moisture retrieval algorithm and product

    NASA Astrophysics Data System (ADS)

    Mladenova, I. E.; Jackson, T. J.; Njoku, E. G.; Bindlish, R.; Cosh, M. H.; Chan, S.

    2013-12-01

    Soil moisture mapping using passive microwave remote sensing techniques has proven to be one of the most effective ways of acquiring reliable global soil moisture information on a routine basis. An important step in this direction was made by the launch of the Advanced Microwave Scanning Radiometer on NASA's Earth Observing System Aqua satellite (AMSR-E). Along with the standard NASA algorithm and operational AMSR-E product, the easy access and availability of the AMSR-E data promoted the development and distribution of alternative retrieval algorithms and products. Several evaluation studies have demonstrated issues with the standard NASA AMSR-E product, such as a dampened temporal response and a limited range of the final retrievals, and noted that the available global passive-based algorithms, even though based on the same electromagnetic principles, produce different results in terms of accuracy and temporal dynamics. Our goal is to identify the theoretical causes of the reduced sensitivity of the NASA AMSR-E product and outline ways to improve the operational NASA algorithm, if possible. Properly identifying the underlying reasons for the above-mentioned features of the NASA AMSR-E product and the differences between the alternative algorithms requires a careful examination of the theoretical basis of each approach, specifically the simplifying assumptions and parametrization approaches adopted by each algorithm to reduce the dimensionality of the unknowns and characterize the observing system. Statistically based error analyses, which are useful and necessary, provide information on the relative accuracy of each product but give very little information on the theoretical causes, knowledge that is essential for algorithm improvement.
    Thus, we are currently examining the possibility of improving the standard NASA AMSR-E global soil moisture product by conducting a thorough theoretically based review of, and inter-comparisons between, several well-established global retrieval techniques. A detailed discussion focused on the theoretical basis of each approach and each algorithm's sensitivity to assumptions and parametrization approaches will be presented.

  10. Digital super-resolution holographic data storage based on Hermitian symmetry for achieving high areal density.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2017-01-23

    Digital super-resolution holographic data storage based on Hermitian symmetry is proposed to store digital data in a tiny area of a medium. In general, reducing the recording area with an aperture improves the storage capacity of holographic data storage. Conventional holographic data storage systems, however, have a limit on how far the recording area can be reduced, called the Nyquist size. Unlike the conventional systems, our proposed system can overcome this limitation with the help of a digital holographic technique and digital signal processing. Experimental results show that the proposed system can record and retrieve a hologram in a smaller area than the Nyquist size on the basis of Hermitian symmetry.
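
The Hermitian symmetry being exploited is the standard Fourier property of real-valued data: F(-k) = conj(F(k)), so half of the spectrum determines the other half. A numpy sketch of that redundancy (not the authors' optical system) is:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)     # real-valued "data page"
F = np.fft.rfft(x)              # keep only the non-negative frequencies (9 bins)

# reconstruct the full 16-bin spectrum from the half via F(-k) = conj(F(k)),
# then invert; no information was lost by discarding the negative frequencies
full = np.concatenate([F, np.conj(F[-2:0:-1])])
x_rec = np.fft.ifft(full).real
```

In the storage system, the analogous redundancy lets a data page be recovered even though the aperture passes only part of the spectrum that a conventional system would need.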

  11. Techniques for fire detection

    NASA Technical Reports Server (NTRS)

    Bukowski, Richard W.

    1987-01-01

    An overview is given of the basis for an analysis of combustible materials and potential ignition sources in a spacecraft. First, the burning process is discussed in terms of the production of the fire signatures normally associated with detection devices. These include convected and radiated thermal energy, particulates, and gases. Second, the transport processes associated with the movement of these signatures from the fire to the detector, along with the important phenomena which cause the level of these signatures to be reduced, are described. Third, the operating characteristics of the individual types of detectors which influence their response to signals are presented. Finally, vulnerability analysis using predictive fire modeling techniques is discussed as a means to establish the necessary response of the detection system to provide the level of protection required in the application.

  12. Factorization in large-scale many-body calculations

    DOE PAGES

    Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.

    2013-08-07

    One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double-factorization’ which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
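
The additive-quantum-number idea can be shown on a toy two-sector basis: states with a fixed total Jz can be counted (and enumerated block-by-block) from per-sector histograms, without ever materializing the full product basis. This is only an illustration of the counting principle, not BIGSTICK; the Jz values are made up.

```python
from collections import Counter

proton_jz  = [-2, -1, -1, 0, 0, 1, 1, 2]   # Jz of each proton-sector state
neutron_jz = [-1, 0, 0, 1]                 # Jz of each neutron-sector state
M = 0                                      # target total Jz (additive)

# naive count: enumerate the full 8 x 4 product basis
naive = sum(1 for jp in proton_jz for jn in neutron_jz if jp + jn == M)

# factorized count: convolve the two sector histograms; only sector blocks
# with jp + jn = M ever pair up, so the full product set is never stored
hp, hn = Counter(proton_jz), Counter(neutron_jz)
factorized = sum(cp * hn[M - jp] for jp, cp in hp.items())
```

For realistic dimensions the product set is astronomically large, while the per-sector histograms stay small; the same block structure is what lets matrix elements be recreated on the fly.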

  13. A Reduced Basis Method with Exact-Solution Certificates for Symmetric Coercive Equations

    DTIC Science & Technology

    2013-11-06

    the energy associated with the infinite-dimensional weak solution of parametrized symmetric coercive partial differential equations with piecewise... builds bounds with respect to the infinite-dimensional weak solution, aims to entirely remove the issue of the "truth" within the certified reduced basis... framework. We in particular introduce a reduced basis method that provides rigorous upper and lower bounds

  14. A Primary Care Approach to the Diagnosis and Management of Peripheral Arterial Disease

    NASA Technical Reports Server (NTRS)

    Dawson, David L.

    2000-01-01

    The objectives of this work are to: (1) recognize the characteristic symptoms of intermittent claudication; (2) diagnose PAD on the basis of history, physical exam, and simple limb blood pressure measurements; (3) recognize the significance of peripheral artery disease as a marker for coronary or cerebrovascular atherosclerosis; (4) provide appropriate medical management of atherosclerosis risk factors, including use of antiplatelet therapy to reduce the risk of myocardial infarction, stroke, and death; and (5) manage symptoms of intermittent claudication with a program of smoking cessation, exercise, and medication. The diagnosis of intermittent claudication secondary to peripheral artery disease (PAD) can often be made on the basis of history and physical examination. Additional evaluation of PAD is multi-modal, and the techniques used will vary depending on the nature and severity of the patient's presenting problem. Most patients can be appropriately managed without referral for specialized diagnostic services or interventions.
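
The "simple limb blood pressure measurements" in objective (2) are conventionally combined into the ankle-brachial index (ABI); the 0.9 cutoff below is the widely used clinical convention, shown as an illustrative calculation rather than anything taken from this record.

```python
def ankle_brachial_index(ankle_systolic, brachial_systolic):
    """ABI = ankle systolic pressure / brachial systolic pressure
    (clinically, the higher of the two brachial readings is used)."""
    return ankle_systolic / brachial_systolic

abi = ankle_brachial_index(ankle_systolic=81.0, brachial_systolic=120.0)
suggests_pad = abi < 0.9    # common clinical threshold suggesting PAD
```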

  15. Final Technical Report: Development of an Abrasion-Resistant Antisoiling Coating for Front-Surface Reflectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gee, Randy C.

    A high-performance reflective film has been successfully developed for Concentrating Solar Power (CSP) solar concentrators. Anti-soiling properties and abrasion resistance have been incorporated into the reflector to reduce reflector cleaning costs and to enhance durability. This approach has also resulted in higher reflectance and improved specularity. From the outset of this project we focused on the use of established high-volume roll-to-roll manufacturing techniques to achieve low manufacturing costs on a per unit area basis. Roll-to-roll manufacturing equipment has a high capital cost, so there is an entire industry devoted to roll-to-roll "toll" manufacturing, where the equipment is operated around the clock to produce a multitude of products for a large variety of uses. Using this approach, the reflective film can be manufactured by toll coaters/converters on an as-needed basis.

  16. Performance Evaluation of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I.; Villacorta, Juan J.

    2011-01-01

    An acoustic electronic scanning array for acquiring images of a person for a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance, using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve, is evaluated. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708

  17. Reducing the Anaerobic Digestion Model No. 1 for its application to an industrial wastewater treatment plant treating winery effluent wastewater.

    PubMed

    García-Diéguez, Carlos; Bernard, Olivier; Roca, Enrique

    2013-03-01

    The Anaerobic Digestion Model No. 1 (ADM1) is a complex model which is widely accepted as a common platform for anaerobic process modeling and simulation. However, it has a large number of parameters and states that hinder its calibration and use in control applications. A principal component analysis (PCA) technique was extended and applied to simplify the ADM1 using data of an industrial wastewater treatment plant processing winery effluent. The method shows that the main model features could be obtained with a minimum of two reactions. A reduced stoichiometric matrix was identified and the kinetic parameters were estimated on the basis of representative known biochemical kinetics (Monod and Haldane). The obtained reduced model takes into account the measured states in the anaerobic wastewater treatment (AWT) plant and reproduces the dynamics of the process fairly accurately. The reduced model can support on-line control, optimization and supervision strategies for AWT plants. Copyright © 2013 Elsevier Ltd. All rights reserved.
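
The flavor of the PCA step can be sketched with synthetic data: the number of principal components needed to explain the (centered) data indicates how many independent reactions the dynamics require. This is a generic numpy illustration under assumed data, not the ADM1 reduction itself.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic "plant data": 8 measured states driven by only 2 underlying reactions
n_samples, n_states, n_reactions = 200, 8, 2
K = np.linalg.qr(rng.standard_normal((n_states, n_reactions)))[0]  # stoichiometry-like map
rates = rng.standard_normal((n_samples, n_reactions))              # reaction rates over time
data = rates @ K.T + 0.01 * rng.standard_normal((n_samples, n_states))

# PCA via SVD of the centered data matrix
X = data - data.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = s**2 / np.sum(s**2)

# number of components carrying more than 1% of the variance
n_dominant = int(np.sum(explained > 0.01))
```

For the winery-effluent data the analogous analysis is what indicates that two reactions suffice; the reduced stoichiometric matrix is then identified on that low-dimensional subspace.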

  18. Computer simulation of a space SAR using a range-sequential processor for soil moisture mapping

    NASA Technical Reports Server (NTRS)

    Fujita, M.; Ulaby, F. (Principal Investigator)

    1982-01-01

    The ability of a spaceborne synthetic aperture radar (SAR) to detect soil moisture was evaluated by means of a computer simulation technique. The computer simulation package includes coherent processing of the SAR data using a range-sequential processor, which can be set up through hardware implementations, thereby reducing the amount of telemetry involved. With such a processing approach, it is possible to monitor the earth's surface on a continuous basis, since data storage requirements can be easily met through the use of currently available technology. The development of the simulation package is described, followed by an examination of the application of the technique to actual environments. The results indicate that in estimating soil moisture content with a four-look processor, the difference between the assumed and estimated values of soil moisture is within ±20% of field capacity for 62% of the pixels for agricultural terrain and for 53% of the pixels for hilly terrain. The estimation accuracy for soil moisture may be improved by reducing the effect of fading through non-coherent averaging.
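
The fading-reduction arithmetic behind the closing remark: averaging L independent looks of speckle (exponentially distributed intensity) reduces the relative fluctuation by a factor of 1/sqrt(L), so four looks halve it. A numpy sketch of that statistic, not the SAR simulator itself:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, looks = 100_000, 4

# single-look speckle intensity: exponential with unit mean
single = rng.exponential(1.0, size=(n_pixels, looks))
multilook = single.mean(axis=1)     # non-coherent 4-look average per pixel

# coefficient of variation (std/mean): ~1.0 single-look, ~0.5 with 4 looks
cv_single = single[:, 0].std() / single[:, 0].mean()
cv_multi = multilook.std() / multilook.mean()
```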

  19. A technique for measuring vertically and horizontally polarized microwave brightness temperatures using electronic polarization-basis rotation

    NASA Technical Reports Server (NTRS)

    Gasiewski, Albin J.

    1992-01-01

    This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof of concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
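
Given the two orthogonal-linear brightness temperatures and the measured cross-correlation (third Stokes parameter), the rotation itself is a closed-form transformation. The sketch below uses the standard rotation of the first three modified Stokes parameters; it is an illustration of the EPBR arithmetic under that assumption, not code from the instrument.

```python
import numpy as np

def rotate_basis(tv, th, u, phi):
    """Rotate the linear-polarization basis by angle phi (radians);
    standard transformation of (Tv, Th, U), the first three modified
    Stokes parameters."""
    c2, s2, sc = np.cos(phi) ** 2, np.sin(phi) ** 2, np.sin(phi) * np.cos(phi)
    tv_r = tv * c2 + th * s2 + u * sc
    th_r = tv * s2 + th * c2 - u * sc
    u_r = (th - tv) * np.sin(2 * phi) + u * np.cos(2 * phi)
    return tv_r, th_r, u_r

# example: brightness temperatures in a basis rotated by 45 degrees
tv45, th45, u45 = rotate_basis(250.0, 200.0, 10.0, np.pi / 4)
```

Two sanity properties follow directly: a 90° rotation swaps Tv and Th (and negates U), and the total power Tv + Th is invariant under any rotation, which is why an accurate U measurement is the critical ingredient.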

  20. Reduced kernel recursive least squares algorithm for aero-engine degradation prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Haowen; Huang, Jinquan; Lu, Feng

    2017-10-01

    Kernel adaptive filters (KAFs) generate a radial basis function (RBF) network that grows linearly with the number of training samples, and therefore lack sparseness. To deal with this drawback, traditional sparsification techniques select a subset of the original training data based on a certain criterion to train the network and discard the redundant data directly. Although these methods curb the growth of the network effectively, the information conveyed by the redundant samples is lost, which may degrade accuracy. In this paper, we present a novel online sparsification method which requires much less training time without sacrificing accuracy. Specifically, a reduced kernel recursive least squares (RKRLS) algorithm is developed based on the reduction technique and linear independence. Unlike conventional methods, our methodology employs the redundant data to update the coefficients of the existing network. Due to this effective utilization of the redundant data, the algorithm achieves better accuracy even though the network size is significantly reduced. Experiments on time series prediction and online regression demonstrate that the RKRLS algorithm requires much less computation while maintaining satisfactory accuracy. Finally, we propose an enhanced multi-sensor prognostic model based on RKRLS and a Hidden Markov Model (HMM) for remaining useful life (RUL) estimation. A case study on a turbofan degradation dataset is performed to evaluate the performance of the novel prognostic approach.
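    The abstract does not spell out the RKRLS update equations, but the key idea of reusing redundant samples can be sketched in the spirit of quantized kernel adaptive filters: a sample close to an existing center refines that center's coefficient instead of growing the network. All parameters below are illustrative, and a simple gradient step stands in for the paper's recursive least squares update:

```python
import numpy as np

def gauss(x, c, width=1.0):
    """Gaussian RBF kernel (width is an illustrative choice)."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * width ** 2))

class SparseKernelFilter:
    """Online RBF regression that reuses redundant samples instead of
    discarding them, echoing the RKRLS idea (not the actual algorithm)."""

    def __init__(self, dist_threshold=0.3, step=0.5):
        self.centers, self.coeffs = [], []
        self.dist_threshold, self.step = dist_threshold, step

    def predict(self, x):
        return sum(a * gauss(x, c) for a, c in zip(self.coeffs, self.centers))

    def update(self, x, y):
        err = y - self.predict(x)
        if self.centers:
            dists = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.dist_threshold:
                # Redundant sample: refine the existing network.
                self.coeffs[j] += self.step * err
                return
        # Novel sample: grow the network by one center.
        self.centers.append(x)
        self.coeffs.append(self.step * err)

rng = np.random.default_rng(1)
f = SparseKernelFilter()
abs_errors = []
for x in rng.uniform(-2, 2, size=(400, 1)):
    abs_errors.append(abs(np.sin(x[0]) - f.predict(x)))
    f.update(x, np.sin(x[0]))
print(len(f.centers), "centers for 400 samples")
```

    The network size stays bounded by the packing of the input space at the chosen threshold, while every sample still contributes to the fit.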

  1. Satisfiability of logic programming based on radial basis function neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged

    2014-07-10

    In this paper, we propose a new technique to test the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic with exactly three variables in each clause. We used the prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean sum-squared-error function is used to measure the performance of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to solve many applications, such as electronic circuits and NP-complete problems.
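    To illustrate the general architecture (with least-squares output weights standing in for the prey-predator algorithm, and a trivial one-center-per-pattern "clustering" standing in for K-means), a small RBF network can represent the truth table of a three-variable clause such as (a OR b OR c):

```python
import numpy as np

# Truth table of the clause (a OR b OR c): 8 input patterns, 1 output.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = np.array([float(a or b or c) for a, b, c in X.astype(int)])

# Hidden layer: one Gaussian unit per pattern (a degenerate "clustering"
# in which every point is its own center); the width is chosen by hand.
centers, width = X, 0.5
H = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * width**2))

# Output weights by least squares (stand-in for the prey-predator step).
w, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ w
print(np.round(pred, 3))  # reproduces the clause's truth table
```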

  2. Rain Volume Estimation over Areas Using Satellite and Radar Data

    NASA Technical Reports Server (NTRS)

    Doneaud, A. A.; Miller, J. R., Jr.; Johnson, L. R.; Vonderhaar, T. H.; Laybe, P.

    1984-01-01

    The application of satellite data to a recently developed radar technique used to estimate convective rain volumes over areas in a dry environment (the northern Great Plains) is discussed. The area-time integral (ATI) technique provides a means of estimating total rain volumes over fixed and floating target areas of the order of 1,000 to 100,000 km(2) for clusters lasting 40 min. The basis of the method is the existence of a strong correlation between the area coverage integrated over the lifetime of the storm (the ATI) and the rain volume. One key element of this technique is that it does not require consideration of the structure of the radar intensities inside the area coverage to generate rain volumes, but only considers the rain event per se. This fact might reduce or eliminate some sources of error in applying the technique to satellite data. The second key element is that the ATI, once determined, can be converted to total rain volume by using a constant factor (an average rain rate) for a given locale.
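    The two key elements reduce to a very small computation: integrate echo-area coverage over the storm's lifetime, then scale by a constant locale-dependent rain rate. A sketch with purely illustrative numbers:

```python
import numpy as np

# Radar echo area (km^2) exceeding a threshold, sampled every 10 minutes.
areas_km2 = np.array([0.0, 150.0, 600.0, 900.0, 700.0, 300.0, 50.0])
dt_hours = 10.0 / 60.0

# Area-time integral: echo-area coverage integrated over the event
# lifetime (trapezoidal rule).
ati_km2_h = ((areas_km2[:-1] + areas_km2[1:]) / 2.0).sum() * dt_hours

# Convert to rain volume with a fixed, locale-dependent average rain rate.
avg_rain_rate_mm_h = 3.6      # illustrative climatological constant
rain_volume_m3 = ati_km2_h * avg_rain_rate_mm_h * 1e3  # km^2 * mm = 1e3 m^3

print(round(ati_km2_h, 1), "km^2 h ->", round(rain_volume_m3), "m^3")
```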

  3. Optimization of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In the first stage, an analysis determined which images provide the most information to the system, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system while reducing the acquisition time and computational burden, since the number of acoustic images was reduced. PMID:24616643

  4. Estimating propagation velocity through a surface acoustic wave sensor

    DOEpatents

    Xu, Wenyuan; Huizinga, John S.

    2010-03-16

    Techniques are described for estimating the propagation velocity through a surface acoustic wave sensor. In particular, techniques which measure and exploit a proper segment of phase frequency response of the surface acoustic wave sensor are described for use as a basis of bacterial detection by the sensor. As described, use of velocity estimation based on a proper segment of phase frequency response has advantages over conventional techniques that use phase shift as the basis for detection.
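    The phase-frequency-segment idea can be sketched as follows: for a delay path of length L, the phase response is phi(f) = 2*pi*f*L/v, so the slope of a linear fit over a proper segment gives v = 2*pi*L/slope. The path length, velocity, and frequency band below are illustrative, not taken from the patent:

```python
import numpy as np

L = 0.004                      # acoustic path length, m (illustrative)
v_true = 3990.0                # assumed propagation velocity, m/s

f = np.linspace(96e6, 104e6, 201)      # frequency segment, Hz
phase = 2 * np.pi * f * L / v_true     # ideal unwrapped phase, rad

# Linear fit of phase vs frequency over the chosen segment.
slope, _ = np.polyfit(f, phase, 1)

v_est = 2 * np.pi * L / slope
print(round(v_est, 1))  # recovers ~3990 m/s
```

    A shift of this estimated velocity (e.g., from mass loading by bound bacteria) is then the detection signal.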

  5. Geotechnical behaviour of low-permeability soils in surfactant-enhanced electrokinetic remediation.

    PubMed

    López-Vizcaíno, Rubén; Navarro, Vicente; Alonso, Juan; Yustres, Ángel; Cañizares, Pablo; Rodrigo, Manuel A; Sáez, Cristina

    2016-01-01

    Electrokinetic processes provide the basis of a range of very interesting techniques for the remediation of polluted soils. These techniques consist of the application of an electric field to the soil, which develops different transport mechanisms capable of mobilizing several types of pollutants. However, the use of these techniques can generate undesirable effects related to the geomechanical behavior of the soil, reducing the effectiveness of the process. In the remediation of polluted soils with a plasticity index higher than 35, excessive shrinkage can be observed in remediation tests. Continued evaporation at the top of the sample can then lead to the development of cracks, distorting the electrokinetic transport regime and, consequently, the progress of the operation. On the other hand, in silty soils, high seepage can be generated in the surroundings of surfactant injection wells, giving rise to piping processes. In this article, methods are described that reduce, or even eliminate, both problems.

  6. Wavelet-domain de-noising technique for THz pulsed spectroscopy

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Gavdush, Arsenii A.; Fokina, Irina N.; Karasik, Valeriy E.; Reshetov, Igor V.; Kudrin, Konstantin G.; Nosov, Pavel A.; Yurchenko, Stanislav O.

    2014-09-01

    De-noising of terahertz (THz) pulsed spectroscopy (TPS) data is an essential problem, since noise in the TPS data prevents correct reconstruction of the sample's spectral dielectric properties and hinders study of the sample's internal structure. There are certain regions of the TPS signal's Fourier spectrum where the Fourier-domain signal-to-noise ratio is relatively small. Effective de-noising might potentially expand the range of spectrometer spectral sensitivity and reduce the waveform registration time, which is an essential problem for biomedical applications of TPS. In this work, it is shown how recent progress in wavelet-domain signal processing can be used for de-noising TPS waveforms, demonstrating effective de-noising of TPS data using the fast wavelet transform (FWT). The results of selecting the optimal wavelet basis and the wavelet-domain thresholding technique are reported. The developed technique is applied to reconstruct the spectral characteristics of in vivo healthy and diseased skin samples in the THz frequency range.
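    A minimal sketch of the FWT-plus-thresholding idea, using a single-level Haar transform and soft thresholding; the wavelet basis and the threshold rule are exactly the knobs the authors report optimizing, and a real TPS pipeline would use deeper decompositions and carefully chosen bases:

```python
import numpy as np

def haar_forward(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
clean = np.exp(-((t - 0.5) / 0.05) ** 2)           # toy pulse-like waveform
noisy = clean + 0.05 * rng.standard_normal(t.size)

a, d = haar_forward(noisy)
denoised = haar_inverse(a, soft_threshold(d, 0.1))  # threshold detail band only

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy, mse_denoised)
```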

  7. Low-cost satellite mechanical design and construction

    NASA Astrophysics Data System (ADS)

    Boisjolie-Gair, Nathaniel; Straub, Jeremy

    2017-05-01

    This paper presents a discussion of techniques for low-cost design and construction of a CubeSat mechanical structure that can serve as a basis for academic programs and a starting point for government, military and commercial large-scale sensing networks, where the cost of each node must be minimized to facilitate system affordability and lower the cost and associated risk of losing any node. Spacecraft design plays a large role in manufacturability. An intentionally simplified mechanical design is presented which reduces machining costs, as compared to more intricate designs that were considered. Several fabrication approaches are evaluated relative to the low-cost goal.

  8. The feasibility and benefits of using volumetric arc therapy in patients with brain metastases: a systematic review.

    PubMed

    Andrevska, Adriana; Knight, Kellie A; Sale, Charlotte A

    2014-12-01

    Radiotherapy management of patients with brain metastases most commonly involves a whole-brain radiation therapy (WBRT) regime, as well as newer techniques such as stereotactic radiosurgery (SRS) and intensity modulated radiotherapy (IMRT). The long treatment times incurred by these techniques indicate the need for a novel technique with shorter treatment times that still produces highly conformal treatment with the potential to deliver escalated doses to the target area. Volumetric modulated arc therapy (VMAT) is a dynamic, highly conformal technique that may deliver high doses of radiation through a single gantry arc and reduce overall treatment times. The aim of this systematic review is to determine the feasibility and benefits of VMAT treatment in regard to overall survival rates and local control in patients with brain metastases, in comparison with patients treated with WBRT, SRS and IMRT. A search of the literature identified 23 articles for the purpose of this review. Articles were included on the basis that they were human-based studies with sample sizes of more than five patients who were receiving treatment for 1-10 metastatic brain lesions. VMAT was found to be highly conformal, to have a reduced treatment delivery time and to incur no significant toxicities in comparison with WBRT, SRS and IMRT. Compared to other treatment techniques, VMAT proved to have fewer toxicities than conventional WBRT, shorter treatment times than SRS and similar dose distributions to IMRT plans. Future prospective studies are needed to accurately assess the prognostic benefits of VMAT as well as the occurrence of late toxicities.

  9. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    PubMed

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. The results are also directly extensible to multiple application domains using linear subspace methods.
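    The on-chip LMS learning mentioned for the feature-extraction stage adapts combiner weights from an error signal, which is what allows some mismatch terms to be absorbed during training; a plain software sketch of an LMS adaptive linear combiner (dimensions and step size illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([0.5, -1.2, 2.0])   # unknown system the combiner tracks
w = np.zeros(3)                        # combiner weights, adapted on-line
mu = 0.05                              # LMS step size

errors = []
for _ in range(2000):
    x = rng.standard_normal(3)         # input feature vector
    target = w_true @ x                # desired combiner output
    e = target - w @ x                 # instantaneous error
    w += mu * e * x                    # LMS weight update
    errors.append(e * e)

print(np.round(w, 3))  # converges toward w_true
```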

  10. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626
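    Assuming the usual regression-without-truth linear model (measured = a*true + b + noise with standard deviation sigma), ranking by the noise-to-slope ratio reduces to comparing sigma/a across methods; a sketch with made-up method names and parameter estimates:

```python
# Hypothetical (slope, noise sd) estimates for three quantitative methods,
# as the NGS technique would produce; names and values are made up.
methods = {
    "method_A": {"slope": 0.95, "noise_sd": 2.0},
    "method_B": {"slope": 1.10, "noise_sd": 1.1},
    "method_C": {"slope": 0.80, "noise_sd": 2.4},
}

# Noise-to-slope ratio: smaller NSR means better precision.
nsr = {name: p["noise_sd"] / p["slope"] for name, p in methods.items()}
ranking = sorted(nsr, key=nsr.get)
print(ranking)  # best-precision method first
```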


  12. Caesarean Section: Could Different Transverse Abdominal Incision Techniques Influence Postpartum Pain and Subsequent Quality of Life? A Systematic Review

    PubMed Central

    Gizzo, Salvatore; Andrisani, Alessandra; Noventa, Marco; Di Gangi, Stefania; Quaranta, Michela; Cosmi, Erich; D’Antona, Donato; Nardelli, Giovanni Battista; Ambrosini, Guido

    2015-01-01

    The choice of the type of abdominal incision performed in caesarean delivery is made chiefly on the basis of the individual surgeon’s experience and preference. A general consensus on the most appropriate surgical technique has not yet been reached. The aim of this systematic review of the literature is to compare the two most commonly used transverse abdominal incisions for caesarean delivery, the Pfannenstiel incision and the modified Joel-Cohen incision, in terms of acute and chronic post-surgical pain and their subsequent influence on quality of life. Electronic database searches formed the basis of the literature search, and the following databases were searched in the time frame between January 1997 and December 2013: MEDLINE, EMBASE, ScienceDirect and the Cochrane Library. Key search terms included: “acute pain”, “chronic pain”, “Pfannenstiel incision”, “Misgav-Ladach”, “Joel Cohen incision”, in combination with “Caesarean Section”, “abdominal incision”, “numbness”, “neuropathic pain” and “nerve entrapment”. Data on 4771 patients who underwent caesarean section (CS) were collected with regard to the relation between surgical techniques and postoperative outcomes, defined as acute or chronic pain and future pregnancy desire. The Misgav-Ladach incision was associated with a significant advantage in terms of reduction of post-surgical acute and chronic pain. It was indicated as the optimal technique in view of its reduction of lower pelvic discomfort and pain, thus improving quality of life and future fertility desire. Further studies that are not subject to important biases such as pre-existing chronic pain, non-standardized analgesia administration, variable length of skin incision and previous abdominal surgery are required. PMID:25646621


  14. Real-time volcano monitoring using GNSS single-frequency receivers

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Woo; Yun, Sung-Hyo; Kim, Do Hyeong; Lee, Dukkee; Lee, Young J.; Schutz, Bob E.

    2015-12-01

    We present a real-time volcano monitoring strategy that uses the Global Navigation Satellite System (GNSS), and we examine the performance of the strategy by processing simulated and real data and comparing the results with published solutions. The cost of implementing the strategy is reduced greatly by using single-frequency GNSS receivers except for one dual-frequency receiver that serves as a base receiver. Positions of the single-frequency receivers are computed relative to the base receiver on an epoch-by-epoch basis using the high-rate double-difference (DD) GNSS technique, while the position of the base station is fixed to the values obtained with a deferred-time precise point positioning technique and updated on a regular basis. Since the performance of the single-frequency high-rate DD technique depends on the conditions of the ionosphere over the monitoring area, the ionospheric total electron content is monitored using the dual-frequency data from the base receiver. The surface deformation obtained with the high-rate DD technique is eventually processed by a real-time inversion filter based on the Mogi point source model. The performance of the real-time volcano monitoring strategy is assessed through a set of tests and case studies, in which the data recorded during the 2007 eruption of Kilauea and the 2005 eruption of Augustine are processed in a simulated real-time mode. The case studies show that the displacement time series obtained with the strategy seem to agree with those obtained with deferred-time, dual-frequency approaches at the level of 10-15 mm. Differences in the estimated volume change of the Mogi source between the real-time inversion filter and previously reported works were in the range of 11 to 13% of the maximum volume changes of the cases examined.
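    The real-time inversion filter rests on the Mogi point-source forward model, which has a closed form; for a Poisson ratio of 0.25 the vertical surface displacement is uz = 3*dV*d / (4*pi*(r^2 + d^2)^(3/2)) for a source of volume change dV at depth d. Since uz is linear in dV, a fixed-depth inversion is one-parameter least squares (all numbers below are illustrative, not taken from the Kilauea or Augustine case studies):

```python
import numpy as np

def mogi_uz(r, depth, dvol):
    """Vertical surface displacement of a Mogi source (Poisson ratio 0.25)."""
    return 3.0 * dvol * depth / (4.0 * np.pi * (r**2 + depth**2) ** 1.5)

depth = 3000.0                                   # assumed source depth, m
r = np.array([500.0, 1500.0, 3000.0, 5000.0])    # station distances, m

dvol_true = 2.0e6                                # true volume change, m^3
rng = np.random.default_rng(3)
obs = mogi_uz(r, depth, dvol_true) + rng.normal(0, 1e-4, r.size)  # + GNSS noise

# uz is linear in dvol, so least squares has a closed form.
g = mogi_uz(r, depth, 1.0)                       # Green's function column
dvol_est = (g @ obs) / (g @ g)
print(f"{dvol_est:.3e}")
```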

  15. Chopped random-basis quantum optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone

    2011-08-15

    In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
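    The essence of CRAB is to expand the control field in a small randomized truncated basis and optimize the few expansion coefficients with a gradient-free method. A toy sketch with a contrived pulse-matching figure of merit, and simple random search standing in for the simplex-type optimizer typically used:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200)
n_modes = 4

# Chopped random basis: a few Fourier modes with randomized frequencies.
freqs = 2 * np.pi * (np.arange(1, n_modes + 1) + rng.uniform(-0.5, 0.5, n_modes))
basis = np.concatenate([np.sin(np.outer(freqs, t)), np.cos(np.outer(freqs, t))])

target = np.exp(-((t - 0.5) / 0.15) ** 2)   # toy control objective

def cost(coeffs):
    pulse = coeffs @ basis
    return np.mean((pulse - target) ** 2)    # figure of merit to minimize

# Gradient-free optimization over the few CRAB coefficients.
best = np.zeros(2 * n_modes)
cost_zero = cost(best)
best_cost = cost_zero
for _ in range(3000):
    trial = best + 0.05 * rng.standard_normal(best.shape)
    c = cost(trial)
    if c < best_cost:
        best, best_cost = trial, c

print(round(cost_zero, 3), "->", round(best_cost, 3))
```

    The point is that the search space is a handful of coefficients rather than the full time-discretized pulse, which is what makes gradient-free optimization affordable.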

  16. Solar Activity Heading for a Maunder Minimum?

    NASA Astrophysics Data System (ADS)

    Schatten, K. H.; Tobiska, W. K.

    2003-05-01

    Long-range (few years to decades) solar activity prediction techniques vary greatly in their methods. They range from examining planetary orbits, to spectral analyses (e.g. Fourier, wavelet and spectral analyses), to artificial intelligence methods, to simply using general statistical techniques. Rather than concentrate on statistical/mathematical/numerical methods, we discuss a class of methods which appears to have a "physical basis." Not only does it have a physical basis, but this basis is rooted in both "basic" physics (dynamo theory), but also solar physics (Babcock dynamo theory). The class we discuss is referred to as "precursor methods," originally developed by Ohl, Brown and Williams and others, using geomagnetic observations. My colleagues and I have developed some understanding for how these methods work and have expanded the prediction methods using "solar dynamo precursor" methods, notably a "SODA" index (SOlar Dynamo Amplitude). These methods are now based upon an understanding of the Sun's dynamo processes- to explain a connection between how the Sun's fields are generated and how the Sun broadcasts its future activity levels to Earth. This has led to better monitoring of the Sun's dynamo fields and is leading to more accurate prediction techniques. Related to the Sun's polar and toroidal magnetic fields, we explain how these methods work, past predictions, the current cycle, and predictions of future of solar activity levels for the next few solar cycles. The surprising result of these long-range predictions is a rapid decline in solar activity, starting with cycle #24. If this trend continues, we may see the Sun heading towards a "Maunder" type of solar activity minimum - an extensive period of reduced levels of solar activity. For the solar physicists, who enjoy studying solar activity, we hope this isn't so, but for NASA, which must place and maintain satellites in low earth orbit (LEO), it may help with reboost problems. 
Space debris, and other aspects of objects in LEO will also be affected. This research is supported by the NSF and NASA.

  17. Configurational forces in electronic structure calculations using Kohn-Sham density functional theory

    NASA Astrophysics Data System (ADS)

    Motamarri, Phani; Gavini, Vikram

    2018-04-01

    We derive the expressions for configurational forces in Kohn-Sham density functional theory, which correspond to the generalized variational force computed as the derivative of the Kohn-Sham energy functional with respect to the position of a material point x. These configurational forces, which result from the inner variations of the Kohn-Sham energy functional, provide a unified framework to compute atomic forces as well as the stress tensor for geometry optimization. Importantly, owing to the variational nature of the formulation, these configurational forces inherently account for the Pulay corrections. The formulation presented in this work treats both pseudopotential and all-electron calculations in a single framework, and employs a local variational real-space formulation of Kohn-Sham density functional theory (DFT) expressed in terms of nonorthogonal wave functions that is amenable to reduced-order scaling techniques. We demonstrate the accuracy and performance of the proposed configurational force approach on benchmark all-electron and pseudopotential calculations conducted using higher-order finite-element discretization. To this end, we examine the rates of convergence of the finite-element discretization in the computed forces and stresses for various materials systems, and, further, verify the accuracy by finite differencing the energy. Wherever applicable, we also compare the forces and stresses with those obtained from Kohn-Sham DFT calculations employing a plane-wave basis (pseudopotential calculations) and a Gaussian basis (all-electron calculations). Finally, we verify the accuracy of the forces on large materials systems involving a metallic aluminum nanocluster containing 666 atoms and an alkane chain containing 902 atoms, where the Kohn-Sham electronic ground state is computed using a reduced-order scaling subspace projection technique [P. Motamarri and V. Gavini, Phys. Rev. B 90, 115127 (2014), 10.1103/PhysRevB.90.115127].
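    The finite-difference verification mentioned above is easy to illustrate outside DFT: for any energy function, the analytic force should match the negative central difference of the energy (here a toy Lennard-Jones pair energy, not the Kohn-Sham functional):

```python
import numpy as np

def energy(r):
    """Toy Lennard-Jones pair energy (epsilon = sigma = 1)."""
    return 4.0 * (r**-12 - r**-6)

def force(r):
    """Analytic force, F = -dE/dr."""
    return 4.0 * (12.0 * r**-13 - 6.0 * r**-7)

r = 1.2
h = 1e-5
f_fd = -(energy(r + h) - energy(r - h)) / (2.0 * h)   # central difference
print(force(r), f_fd)  # the two agree to O(h^2)
```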

  18. Reducing Time and Increasing Sensitivity in Sample Preparation for Adherent Mammalian Cell Metabolomics

    PubMed Central

    Lorenz, Matthew A.; Burant, Charles F.; Kennedy, Robert T.

    2011-01-01

    A simple, fast, and reproducible sample preparation procedure was developed for relative quantification of metabolites in adherent mammalian cells using the clonal β-cell line INS-1 as a model sample. The method was developed by evaluating the effect of different sample preparation procedures on high-performance liquid chromatography-mass spectrometry quantification of 27 metabolites involved in glycolysis and the tricarboxylic acid cycle on a directed basis, as well as for all detectable chromatographic features on an undirected basis. We demonstrate that a rapid water rinse step prior to quenching of metabolism reduces components that suppress electrospray ionization, thereby increasing signal for 26 of the 27 targeted metabolites and increasing the total number of detected features from 237 to 452 with no detectable change of metabolite content. A novel quenching technique is employed which involves addition of liquid nitrogen directly to the culture dish and allows samples to be stored at −80 °C for at least 7 d before extraction. Separation of the quenching and extraction steps provides the benefit of increased experimental convenience and sample stability while maintaining metabolite content similar to techniques that employ simultaneous quenching and extraction with cold organic solvent. The extraction solvent 9:1 methanol:chloroform was found to provide superior performance over acetonitrile, ethanol, and methanol with respect to metabolite recovery and extract stability. Maximal recovery was achieved using a single rapid (~1 min) extraction step. The utility of this rapid preparation method (~5 min) was demonstrated through precise metabolite measurements (11% average relative standard deviation without internal standards) associated with step changes in glucose concentration that evoke insulin secretion in the clonal β-cell line INS-1. PMID:21456517

  19. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches for solving the flow problem on a coarse grid and obtaining the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
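    The proper-orthogonal-decomposition step used to build a parameter-independent reduced basis can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' mixed-GMsFEM code; the synthetic snapshot matrix and the 99.99% energy tolerance are invented for the example:

    ```python
    import numpy as np

    # Snapshot matrix: each column is a full-order solution for one
    # parameter sample selected by the greedy algorithm (synthetic here).
    rng = np.random.default_rng(0)
    n_fine, n_samples = 200, 30
    # Build snapshots with a fast-decaying spectrum so a small basis suffices.
    modes = rng.standard_normal((n_fine, n_samples))
    decay = np.diag(2.0 ** -np.arange(n_samples))
    snapshots = modes @ decay @ rng.standard_normal((n_samples, n_samples))

    # POD: SVD of the snapshot matrix; keep modes capturing 99.99% of the energy.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.9999)) + 1
    basis = U[:, :r]          # reduced basis, independent of the parameters

    print(r, n_samples)       # reduced dimension is much smaller than n_samples
    ```

    The online solve then projects onto `basis`, so its cost scales with `r` rather than with the fine-grid dimension.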

  20. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches for solving the flow problem on a coarse grid and obtaining the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  1. Emerging Techniques for Dose Optimization in Abdominal CT

    PubMed Central

    Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit

    2014-01-01

Recent advances in computed tomographic (CT) scanning techniques such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of the imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with a further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277

  2. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

Often, computationally expensive engineering simulations can be prohibitive in the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
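    The linear variant of this pipeline (PCA for dimension reduction, then radial basis function interpolation in the reduced coordinates) can be sketched as below. This is a hedged illustration, not the thesis code: the toy parameter-to-output map, the Gaussian kernel width, and the small ridge term are all invented for the example:

    ```python
    import numpy as np

    def rbf_fit_predict(X_train, y_train, X_test, eps=3.0):
        """Gaussian RBF interpolation: fit weights on X_train, evaluate at X_test."""
        d_tt = np.linalg.norm(X_train[:, None] - X_train[None, :], axis=-1)
        phi = np.exp(-(eps * d_tt) ** 2)
        # Tiny ridge term stabilizes the notoriously ill-conditioned Gaussian kernel.
        w = np.linalg.solve(phi + 1e-8 * np.eye(len(X_train)), y_train)
        d_st = np.linalg.norm(X_test[:, None] - X_train[None, :], axis=-1)
        return np.exp(-(eps * d_st) ** 2) @ w

    rng = np.random.default_rng(1)
    # High-dimensional simulation outputs that actually live on a 2-D subspace.
    theta = rng.uniform(0, 1, size=(40, 1))              # design parameter
    A = rng.standard_normal((2, 100))
    Y = np.hstack([np.sin(2*np.pi*theta), np.cos(2*np.pi*theta)]) @ A  # (40, 100)

    # Step 1: linear PCA -> 2-D reduced coordinates of each simulation output.
    Y0 = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y0, full_matrices=False)
    Z = Y0 @ Vt[:2].T                                    # reduced coordinates

    # Step 2: RBF surrogate maps the design parameter to reduced coordinates,
    # then the PCA basis lifts the prediction back to full dimension.
    theta_new = np.array([[0.33]])
    z_pred = np.column_stack([rbf_fit_predict(theta, Z[:, k], theta_new)
                              for k in range(2)])
    y_pred = z_pred @ Vt[:2] + Y.mean(axis=0)

    y_true = np.hstack([np.sin(2*np.pi*theta_new), np.cos(2*np.pi*theta_new)]) @ A
    print(np.max(np.abs(y_pred - y_true)))
    ```

    The kernel-PCA variant replaces Step 1 with an eigendecomposition of a kernel matrix, at the cost of needing a pre-image approximation to lift predictions back to the full space.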

  3. The effect of sampling techniques used in the multiconfigurational Ehrenfest method

    NASA Astrophysics Data System (ADS)

    Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.

    2018-05-01

    In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.

  4. The effect of sampling techniques used in the multiconfigurational Ehrenfest method.

    PubMed

    Symonds, C; Kattirtzi, J A; Shalashilin, D V

    2018-05-14

    In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.

  5. Dose rate prediction methodology for remote handled transuranic waste workers at the waste isolation pilot plant.

    PubMed

    Hayes, Robert

    2002-10-01

An approach is described for estimating future dose rates to Waste Isolation Pilot Plant workers processing remote handled transuranic waste. The waste streams will come from the entire U.S. Department of Energy complex and can take on virtually any form resulting from the processing sequences for defense-related production, radiochemistry, activation, and related work. For this reason, the average waste matrix from all generator sites is used to estimate the average radiation fields over the facility lifetime. Innovative new techniques were applied to estimate expected radiation fields. Non-linear curve-fitting techniques were used to predict exposure rate profiles from cylindrical sources using closed-form equations for lines and disks. This information becomes the basis for Safety Analysis Report dose rate estimates and for present and future ALARA design reviews when attempts are made to reduce worker doses.
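    Fitting a closed-form source model to a measured exposure-rate profile is standard non-linear least squares. As a hedged sketch (not the paper's model or data): the on-axis exposure rate above a uniform disk source of radius R is proportional to ln(1 + R²/h²), and the scale k and radius R can be recovered with scipy; the "measurements" below are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def disk_dose_rate(h, k, R):
        """On-axis exposure rate at height h above a uniform disk source of radius R."""
        return k * np.log1p((R / h) ** 2)

    # Synthetic "measured" profile along the axis of a disk source.
    rng = np.random.default_rng(2)
    h = np.linspace(0.2, 3.0, 25)                 # detector heights (m)
    true_k, true_R = 5.0, 0.8
    data = disk_dose_rate(h, true_k, true_R) * (1 + 0.01 * rng.standard_normal(25))

    # Levenberg-Marquardt fit from a rough initial guess.
    (k_fit, R_fit), _ = curve_fit(disk_dose_rate, h, data, p0=(1.0, 1.0))
    print(k_fit, R_fit)
    ```

    Note the model depends on R only through R², so the fitted radius is determined up to sign; a line-source model would be handled the same way with its own closed form.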

  6. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. In our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. Sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method which adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated ultrasound three-dimensional image sequences.

  7. Microsample analyses via DBS: challenges and opportunities.

    PubMed

    Henion, Jack; Oliveira, Regina V; Chace, Donald H

    2013-10-01

    The use of DBS is an appealing approach to employing microsampling techniques for the bioanalysis of samples, as has been demonstrated for the past 50 years in the metabolic screening of metabolites and diseases. In addition to its minimally invasive sample collection procedures and its economical merits, DBS microsampling benefits from the very high sensitivity, selectivity and multianalyte capabilities of LC-MS, which has been especially well demonstrated in newborn screening applications. Only a few microliters of a biological fluid are required for analysis, which also translates to significantly reduced demands on clinical samples from patients or from animals. Recently, the pharmaceutical industry and other arenas have begun to explore the utility and practicality of DBS microsampling. This review discusses the basis for why DBS techniques are likely to be part of the future, as well as offering insights into where these benefits may be realized.

  8. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
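    The minimal-cut-set idea based on link failures only can be illustrated by brute force on a toy network. This is a hedged sketch of the concept, not the patented search algorithm (which avoids exhaustive enumeration); the 4-node ring-with-chord topology is invented for the example:

    ```python
    from itertools import combinations

    def connected(nodes, edges):
        """Union-find check that all nodes lie in a single component."""
        parent = {n: n for n in nodes}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in edges:
            parent[find(u)] = find(v)
        return len({find(n) for n in nodes}) == 1

    def minimal_cut_sets(nodes, edges):
        """All minimal sets of link failures that break all-terminal connectivity."""
        cuts = []
        for r in range(1, len(edges) + 1):        # smallest cuts found first
            for subset in combinations(edges, r):
                remaining = [e for e in edges if e not in subset]
                if connected(nodes, remaining):
                    continue
                # minimal iff no previously found (smaller) cut is contained in it
                if not any(set(c) <= set(subset) for c in cuts):
                    cuts.append(subset)
        return cuts

    # Toy network: 4-node ring 0-1-2-3-0 plus a chord 0-2.
    nodes = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    cuts = minimal_cut_sets(nodes, edges)
    print(len(cuts))   # 2 two-link cuts (isolating nodes 1 and 3) plus 4 three-link cuts
    ```

    Once the minimal cut sets are in hand, all-terminal unreliability follows from the link failure probabilities, which is where the patent's quantification scheme replaces combinatorial expansion.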

  9. A Radial Basis Function Approach to Financial Time Series Analysis

    DTIC Science & Technology

    1993-12-01

including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data... collection of practical techniques to address these issues for a modeling methodology, Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes

  10. Lessons from a primary-prevention program for lead poisoning among inner-city children.

    PubMed

    Dugbatey, Kwesi; Croskey, Valda; Evans, R Gregory; Narayan, Gopal; Osamudiamen, Osa-Edoh

    2005-12-01

    This study evaluated the impact on childhood lead poisoning of a primary-prevention educational-intervention program for pregnant women in St. Louis, Missouri. The women were predominantly poor and of African-American, Hispanic, Asian, and Caucasian backgrounds. The interventions, tailored for each woman on the basis of responses to a survey and environmental measurements, included case management with hands-on instruction on cleaning techniques, property maintenance, hygiene, and nutrition to reduce exposure of newborns to lead. It was hypothesized that the probability of lead poisoning (blood lead levels greater than 10 microg/dL) would be reduced among mothers who received the interventions compared with those who received only printed educational material. Contrary to expectations, none of the interventions reduced the likelihood of lead poisoning among participating children. In the process of the study, however, a number of valuable lessons related to recruitment and commitment of participants emerged that can inform future efforts of this nature.

  11. Reduced prefrontal connectivity in psychopathy.

    PubMed

    Motzkin, Julian C; Newman, Joseph P; Kiehl, Kent A; Koenigs, Michael

    2011-11-30

    Linking psychopathy to a specific brain abnormality could have significant clinical, legal, and scientific implications. Theories on the neurobiological basis of the disorder typically propose dysfunction in a circuit involving ventromedial prefrontal cortex (vmPFC). However, to date there is limited brain imaging data to directly test whether psychopathy may indeed be associated with any structural or functional abnormality within this brain area. In this study, we employ two complementary imaging techniques to assess the structural and functional connectivity of vmPFC in psychopathic and non-psychopathic criminals. Using diffusion tensor imaging, we show that psychopathy is associated with reduced structural integrity in the right uncinate fasciculus, the primary white matter connection between vmPFC and anterior temporal lobe. Using functional magnetic resonance imaging, we show that psychopathy is associated with reduced functional connectivity between vmPFC and amygdala as well as between vmPFC and medial parietal cortex. Together, these data converge to implicate diminished vmPFC connectivity as a characteristic neurobiological feature of psychopathy.

  12. Reduced Prefrontal Connectivity in Psychopathy

    PubMed Central

    Motzkin, Julian C.; Newman, Joseph P.; Kiehl, Kent A.; Koenigs, Michael

    2012-01-01

    Linking psychopathy to a specific brain abnormality could have significant clinical, legal, and scientific implications. Theories on the neurobiological basis of the disorder typically propose dysfunction in a circuit involving ventromedial prefrontal cortex (vmPFC). However, to date there is limited brain imaging data to directly test whether psychopathy may indeed be associated with any structural or functional abnormality within this brain area. In this study, we employ two complementary imaging techniques to assess the structural and functional connectivity of vmPFC in psychopathic and non-psychopathic criminals. Using diffusion tensor imaging, we show that psychopathy is associated with reduced structural integrity in the right uncinate fasciculus, the primary white matter connection between vmPFC and anterior temporal lobe. Using functional magnetic resonance imaging, we show that psychopathy is associated with reduced functional connectivity between vmPFC and amygdala as well as between vmPFC and medial parietal cortex. Together, these data converge to implicate diminished vmPFC connectivity as a characteristic neurobiological feature of psychopathy. PMID:22131397

  13. The effects of specified chemical meals on food intake.

    PubMed

    Koopmans, H S; Maggio, C A

    1978-10-01

    Rats received intragastric infusions of various specified chemical meals and were subsequently tested for a reduction in food intake. A second experiment, using a novel technique, tested for conditioned aversion to the meal infusions. The nonnutritive substances, kaolin clay and emulsified fluorocarbon, had no significant effect on food intake. Infusions of 1 M glucose and 1 M sorbitol reduced feeding behavior, but the 1 M sorbitol infusion also produced a conditioned aversion to flavored pellets paired with the sorbitol infusion, showing that the reduced feeding could have been caused by discomfort. Infusion of a high-fat meal consisting of emulsified triolein mixed with small amounts of sugar and protein or the rat's normal liquid diet, Nutrament, also reduced food intake, and both infusions failed to produce a conditioned aversion. The use of specified meals to understand the chemical basis of satiety requires a sensitive behavioral test to establish that the meal does not cause discomfort or other nonspecific effects.

  14. A model-reduction approach to the micromechanical analysis of polycrystalline materials

    NASA Astrophysics Data System (ADS)

    Michel, Jean-Claude; Suquet, Pierre

    2016-03-01

The present study is devoted to the extension to polycrystals of a model-reduction technique introduced by the authors, called the nonuniform transformation field analysis (NTFA). This new reduced model is obtained in two steps. First the local fields of internal variables are decomposed on a reduced basis of modes as in the NTFA. Second the dissipation potential of the phases is replaced by its tangent second-order (TSO) expansion. The reduced evolution equations of the model can be entirely expressed in terms of quantities which can be pre-computed once and for all. Roughly speaking, these pre-computed quantities depend only on the average and fluctuations per phase of the modes and of the associated stress fields. The accuracy of the new NTFA-TSO model is assessed by comparison with full-field simulations on two specific applications, creep of polycrystalline ice and the response of polycrystalline copper to a cyclic tension-compression test. The new reduced evolution equations are faster than the full-field computations by two orders of magnitude in both examples.

  15. Advances in locally constrained k-space-based parallel MRI.

    PubMed

    Samsonov, Alexey A; Block, Walter F; Arunachalam, Arjun; Field, Aaron S

    2006-02-01

In this article, several theoretical and methodological developments regarding k-space-based, locally constrained parallel MRI (pMRI) reconstruction are presented. A connection between Parallel MRI with Adaptive Radius in k-Space (PARS) and GRAPPA methods is demonstrated. The analysis provides a basis for unified treatment of both methods. Additionally, a weighted PARS reconstruction is proposed, which may absorb different weighting strategies for improved image reconstruction. Next, a fast and efficient method for pMRI reconstruction of data sampled on non-Cartesian trajectories is described. In the new technique, the computational burden associated with the numerous matrix inversions in the original PARS method is drastically reduced by limiting direct calculation of reconstruction coefficients to only a few reference points. The rest of the coefficients are found by interpolating between the reference sets, which is possible due to the similar configuration of points participating in reconstruction for highly symmetric trajectories, such as radial and spiral. As a result, the time requirements are greatly reduced, which makes it practical to use pMRI with non-Cartesian trajectories in many applications. The new technique was demonstrated with simulated and actual data sampled on radial trajectories. Copyright 2006 Wiley-Liss, Inc.

  16. Three-dimensional surface profile intensity correction for spatially modulated imaging

    NASA Astrophysics Data System (ADS)

    Gioux, Sylvain; Mazhar, Amaan; Cuccia, David J.; Durkin, Anthony J.; Tromberg, Bruce J.; Frangioni, John V.

    2009-05-01

We describe a noncontact profile correction technique for quantitative, wide-field optical measurement of tissue absorption (μa) and reduced scattering (μs') coefficients, based on geometric correction of the sample's Lambertian (diffuse) reflectance intensity. Because the projection of structured light onto an object is the basis for both phase-shifting profilometry and modulated imaging, we were able to develop a single instrument capable of performing both techniques. In so doing, the surface of the three-dimensional object could be acquired and used to extract the object's optical properties. The optical properties of flat polydimethylsiloxane (silicone) phantoms with homogeneous tissue-like optical properties were extracted, with and without profilometry correction, after vertical translation and tilting of the phantoms at various angles. Objects having a complex shape, including a hemispheric silicone phantom and human fingers, were acquired and similarly processed, with vascular constriction of a finger being readily detectable through changes in its optical properties. Using profilometry correction, the accuracy of extracted absorption and reduced scattering coefficients improved from two- to ten-fold for surfaces having height variations of as much as 3 cm and tilt angles as high as 40 deg. These data lay the foundation for employing structured light for quantitative imaging during surgery.

  17. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.

    1981-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.
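    The two-dimensional cyclic correlation at the heart of the algorithm can be illustrated with FFTs via the convolution theorem. This is a hedged sketch: the paper's contribution is computing the same quantity with a fast polynomial transform (fewer multiplications), which is not reproduced here, and the toy echo array and point-target location are invented:

    ```python
    import numpy as np

    def cyclic_correlate_2d(data, impulse):
        """2-D cyclic (circular) cross-correlation via the convolution theorem."""
        return np.real(np.fft.ifft2(np.fft.fft2(data)
                                    * np.conj(np.fft.fft2(impulse))))

    # Toy raw-echo array and an ideal point-target impulse response.
    rng = np.random.default_rng(3)
    data = rng.standard_normal((8, 8))
    impulse = np.zeros((8, 8))
    impulse[2, 3] = 1.0                 # point target at row 2, column 3

    image = cyclic_correlate_2d(data, impulse)
    # Correlation with a shifted delta cyclically shifts the data back.
    print(np.allclose(image, np.roll(data, (-2, -3), axis=(0, 1))))
    ```

    Because the correlation is cyclic in both dimensions, it avoids the edge distortions that appear when the two axes are correlated with separate one-dimensional transforms.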

  18. Penetrating keratoplasty in infancy and early childhood.

    PubMed

    Reidy, J J

    2001-08-01

    Penetrating keratoplasty in infants and young children is performed on an infrequent basis. The most common indication is visually significant congenital corneal opacity. Surgery must be performed early to avoid amblyopia. Surgical techniques differ from those used in adult penetrating keratoplasty because of the reduced ocular rigidity encountered in infants and young children. Use of a multispecialty team approach is important to improve visual outcome. Poor prognostic indicators include bilateral disease, concomitant infantile glaucoma, lensectomy and vitrectomy at the time of surgery, previous graft failure, extensive goniosynechiae, and extensive corneal vascularization. Prompt postoperative optical rehabilitation, combined with occlusion therapy when appropriate, is an important determinant of success.

  19. Current issues and future perspectives of gastric cancer screening

    PubMed Central

    Hamashima, Chisato

    2014-01-01

Gastric cancer remains the second leading cause of cancer death worldwide. About half of the incidence of gastric cancer is observed in East Asian countries, which show a higher mortality than other countries. The effectiveness of 3 new gastric cancer screening techniques, namely, upper gastrointestinal endoscopy, serological testing, and the “screen and treat” method, was extensively reviewed. Moreover, the phases of development for cancer screening were analyzed on the basis of the biomarker development road map. Several observational studies have reported the effectiveness of endoscopic screening in reducing mortality from gastric cancer. On the other hand, serologic testing has mainly been used for targeting the high-risk group for gastric cancer. To date, the effectiveness of new techniques for gastric cancer screening has remained limited. However, endoscopic screening is presently in the last trial phase of development before its introduction to population-based screening. To effectively introduce new techniques for gastric cancer screening in a community, incidence and mortality reduction from gastric cancer must be initially and thoroughly evaluated by conducting reliable studies. In addition to effectiveness evaluation, the balance of benefits and harms must be carefully assessed before introducing these new techniques for population-based screening. PMID:25320514

  20. Spectral estimation—What is new? What is next?

    NASA Astrophysics Data System (ADS)

    Tary, Jean Baptiste; Herrera, Roberto Henry; Han, Jiajun; van der Baan, Mirko

    2014-12-01

Spectral estimation, and corresponding time-frequency representation for nonstationary signals, is a cornerstone in geophysical signal processing and interpretation. The last 10-15 years have seen the development of many new high-resolution decompositions that are often fundamentally different from Fourier and wavelet transforms. These conventional techniques, like the short-time Fourier transform and the continuous wavelet transform, show some limitations in terms of resolution (localization) due to the trade-off between time and frequency localizations and smearing due to the finite length of their templates. Well-known techniques, like autoregressive methods and basis pursuit, and recently developed techniques, such as empirical mode decomposition and the synchrosqueezing transform, can achieve higher time-frequency localization due to reduced spectral smearing and leakage. We first review the theory of various established and novel techniques, pointing out their assumptions, adaptability, and expected time-frequency localization. We illustrate their performances on a provided collection of benchmark signals, including a laughing voice, a volcano tremor, a microseismic event, and a global earthquake, with the intention of providing a fair comparison of the pros and cons of each method. Finally, their outcomes are discussed and possible avenues for improvements are proposed.
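    The time-frequency trade-off of the short-time Fourier transform mentioned above can be demonstrated in a few lines of numpy. This is a hedged sketch with an invented two-tone test signal: tones 20 Hz apart are resolvable only when the analysis window is long enough for the bin spacing and mainlobe width to separate them:

    ```python
    import numpy as np

    def stft_mag(x, win_len, hop):
        """Magnitude STFT with a Hann window: rows = frames, cols = frequency bins."""
        win = np.hanning(win_len)
        frames = [x[i:i + win_len] * win
                  for i in range(0, len(x) - win_len + 1, hop)]
        return np.abs(np.fft.rfft(frames, axis=1))

    fs = 1000.0
    t = np.arange(2000) / fs
    # Two stationary tones 20 Hz apart.
    x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 120 * t)

    short = stft_mag(x, 64, 32)    # good time localization, smeared spectrum
    long_ = stft_mag(x, 512, 256)  # bin spacing fs/512 ≈ 2 Hz resolves the tones

    def n_peaks(row):
        """Count significant local maxima in one spectral frame."""
        return int(np.sum((row[1:-1] > row[:-2]) & (row[1:-1] > row[2:])
                          & (row[1:-1] > 0.25 * row.max())))

    # The short window typically merges the two tones; the long window shows two peaks.
    print(n_peaks(short[len(short) // 2]), n_peaks(long_[len(long_) // 2]))
    ```

    The high-resolution methods surveyed in the article (autoregressive estimation, synchrosqueezing, empirical mode decomposition) aim to sidestep exactly this window-imposed smearing.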

  1. Demonstration of landfill gas enhancement techniques in landfill simulators

    NASA Astrophysics Data System (ADS)

    Walsh, J. J.; Vogt, W. G.

    1982-02-01

Various techniques to enhance gas production in sanitary landfills were applied to landfill simulators. These techniques include (1) accelerated moisture addition, (2) leachate recycling, (3) buffer addition, (4) nutrient addition, and (5) combinations of the above. Results are compiled through ongoing operation and monitoring of sixteen landfill simulators. These test cells contain about 380 kg of municipal solid waste. Quantities of buffer and nutrient materials were placed in selected cells at the time of loading. Water is added to all test cells on a monthly basis; leachate is withdrawn from all cells (and recycled on selected cells) also on a monthly basis. Daily monitoring of gas volumes and refuse temperatures is performed. Gas and leachate samples are collected and analyzed on a monthly basis. Leachate and gas quality and quantity results are presented for the first 18 months of operation.

  2. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    NASA Astrophysics Data System (ADS)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for the different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset, and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts the projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
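    The Parallel Analysis step used to choose the number of projection vectors (Horn's method: retain a component only if its eigenvalue exceeds the mean eigenvalue obtained from random data of the same shape) can be sketched with numpy. This is a hedged illustration on synthetic data, not the paper's heart-disease pipeline; the two-factor structure and noise level are invented:

    ```python
    import numpy as np

    def parallel_analysis(X, n_iter=50, seed=0):
        """Number of components whose eigenvalue beats the random-data baseline."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
        rand = np.zeros((n_iter, p))
        for i in range(n_iter):
            R = rng.standard_normal((n, p))
            rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
        return int(np.sum(eig > rand.mean(axis=0)))

    # Synthetic data: 2 strong latent factors embedded in 10 noisy features.
    rng = np.random.default_rng(4)
    factors = rng.standard_normal((300, 2))
    loadings = rng.standard_normal((2, 10))
    X = factors @ loadings + 0.3 * rng.standard_normal((300, 10))

    k = parallel_analysis(X)
    print(k)   # number of components retained
    ```

    The retained components would then feed the downstream classifier (an RBF-kernel SVM in the paper).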

  3. The cardiac dose-sparing benefits of deep inspiration breath-hold in left breast irradiation: a systematic review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smyth, Lloyd M, E-mail: lloyd.smyth@epworth.org.au; Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, School of Biomedical Sciences, Nursing and Health Sciences, Monash University, Clayton, Victoria; Knight, Kellie A

    Despite technical advancements in breast radiation therapy, cardiac structures are still subject to significant levels of irradiation. As the use of adjuvant radiation therapy after breast-conserving surgery continues to improve survival for early breast cancer patients, the associated radiation-induced cardiac toxicities become increasingly relevant. Our primary aim was to evaluate the cardiac-sparing benefits of the deep inspiration breath-hold (DIBH) technique. An electronic literature search of the PubMed database from 1966 to July 2014 was used to identify articles published in English relating to the dosimetric benefits of DIBH. Studies comparing the mean heart dose of DIBH and free breathing treatment plans for left breast cancer patients were eligible to be included in the review. Studies evaluating the reproducibility and stability of the DIBH technique were also reviewed. Ten studies provided data on the benefits of DIBH during left breast irradiation. From these studies, DIBH reduced the mean heart dose by up to 3.4 Gy when compared to a free breathing approach. Four studies reported that the DIBH technique was stable and reproducible on a daily basis. According to current estimates of the excess cardiac toxicity associated with radiation therapy, a 3.4 Gy reduction in mean heart dose is equivalent to a 13.6% reduction in the projected increase in risk of heart disease. DIBH is a reproducible and stable technique for left breast irradiation showing significant promise in reducing the late cardiac toxicities associated with radiation therapy.

  4. An Application of Data Mining Techniques for Flood Forecasting: Application in Rivers Daya and Bhargavi, India

    NASA Astrophysics Data System (ADS)

    Panigrahi, Binay Kumar; Das, Soumya; Nath, Tushar Kumar; Senapati, Manas Ranjan

    2018-05-01

    In the present study, with a view to forecasting the water flow of two rivers in eastern India, namely the river Daya and the river Bhargavi, the focus was on developing a Cascaded Functional Link Artificial Neural Network (C-FLANN) model. Parameters of the C-FLANN architecture were updated using Harmony Search (HS) and Differential Evolution (DE). As the number of samples is very low, there is a risk of overfitting. To avoid this, a MapReduce-based ANOVA technique is used to select important features. The selected features were provided to the architecture to predict the water flow in both rivers one day, one week and two weeks ahead. The results of both techniques were compared with a Radial Basis Function Neural Network (RBFNN) and a Multilayer Perceptron (MLP), two widely used artificial neural networks for prediction. The results confirmed that C-FLANN trained with HS gives better predictions than C-FLANN trained with DE, RBFNN or MLP, and can be used for predicting water flow in different rivers.

  5. Sparse Gaussian elimination with controlled fill-in on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita; Jordan, Harry F.

    1989-01-01

    It is shown that in sparse matrices arising from electronic circuits, it is possible to do computations on many diagonal elements simultaneously. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. The ordering is based on the Markowitz number of the pivot candidates. This technique generates a set of compatible pivots with the property of generating few fills. A novel heuristic algorithm is presented that combines the idea of an order-compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. An elimination set for reducing the matrix is generated and selected on the basis of a minimum Markowitz sum number. The parallel pivoting technique presented is a stepwise algorithm and can be applied to any submatrix of the original matrix. Thus, it is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices using the HEP multiprocessor (Kowalik, 1985) are presented and analyzed.
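    The Markowitz-based pivot selection underlying the ordering can be sketched in a few lines. This is a hedged, minimal illustration of the selection criterion only, not the paper's full compatible-set algorithm; the function name and sparsity-pattern representation are assumptions for demonstration.

```python
def markowitz_pivot(nonzeros):
    """Select a pivot by the Markowitz criterion (illustrative sketch).

    `nonzeros` is a list of (row, col) positions of structural nonzeros.
    The Markowitz number (r_i - 1)*(c_j - 1), where r_i and c_j count
    nonzeros in row i and column j, bounds the fill-in that eliminating
    pivot (i, j) can create; we pick the candidate minimizing it.
    """
    row_counts, col_counts = {}, {}
    for i, j in nonzeros:
        row_counts[i] = row_counts.get(i, 0) + 1
        col_counts[j] = col_counts.get(j, 0) + 1
    return min(nonzeros,
               key=lambda p: (row_counts[p[0]] - 1) * (col_counts[p[1]] - 1))
```

    A compatible set would be built by repeatedly choosing such low-Markowitz pivots whose rows and columns do not intersect.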

  6. Measurement of total ultrasonic power using thermal expansion and change in buoyancy of an absorbing target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, P. K., E-mail: premkdubey@gmail.com; Kumar, Yudhisther; Gupta, Reeta

    2014-05-15

    The Radiation Force Balance (RFB) technique is well established and most widely used for the measurement of the total ultrasonic power radiated by an ultrasonic transducer. The technique is used as a primary standard for calibration of ultrasonic transducers with relatively fair uncertainty in the low-power (below 1 W) regime. In this technique, uncertainty increases comparatively in the range of a few watts, wherein effects such as thermal heating of the target, cavitation, and acoustic streaming dominate. In addition, error in the measurement of ultrasonic power is also caused by movement of the absorber at the relatively high radiated force which occurs at high power levels. In this article a new technique is proposed which does not measure the balance output while the transducer is energized, as is done in RFB. It utilizes the change in buoyancy of the absorbing target due to local thermal heating. The linear thermal expansion of the target changes its apparent mass in water due to the buoyancy change. This forms the basis for the measurement of ultrasonic power, particularly in the watts range. The proposed method comparatively reduces the uncertainty caused by various ultrasonic effects that occur at high power, such as overshoot due to the momentum of the target at higher radiated force. The functionality of the technique has been tested and compared with the existing internationally recommended RFB technique.

  7. Measurement of total ultrasonic power using thermal expansion and change in buoyancy of an absorbing target

    NASA Astrophysics Data System (ADS)

    Dubey, P. K.; Kumar, Yudhisther; Gupta, Reeta; Jain, Anshul; Gohiya, Chandrashekhar

    2014-05-01

    The Radiation Force Balance (RFB) technique is well established and most widely used for the measurement of the total ultrasonic power radiated by an ultrasonic transducer. The technique is used as a primary standard for calibration of ultrasonic transducers with relatively fair uncertainty in the low-power (below 1 W) regime. In this technique, uncertainty increases comparatively in the range of a few watts, wherein effects such as thermal heating of the target, cavitation, and acoustic streaming dominate. In addition, error in the measurement of ultrasonic power is also caused by movement of the absorber at the relatively high radiated force which occurs at high power levels. In this article a new technique is proposed which does not measure the balance output while the transducer is energized, as is done in RFB. It utilizes the change in buoyancy of the absorbing target due to local thermal heating. The linear thermal expansion of the target changes its apparent mass in water due to the buoyancy change. This forms the basis for the measurement of ultrasonic power, particularly in the watts range. The proposed method comparatively reduces the uncertainty caused by various ultrasonic effects that occur at high power, such as overshoot due to the momentum of the target at higher radiated force. The functionality of the technique has been tested and compared with the existing internationally recommended RFB technique.

  8. Inhibitory effects of sevoflurane on pacemaking activity of sinoatrial node cells in guinea-pig heart

    PubMed Central

    Kojima, Akiko; Kitagawa, Hirotoshi; Omatsu-Kanbe, Mariko; Matsuura, Hiroshi; Nosaka, Shuichi

    2012-01-01

    BACKGROUND AND PURPOSE The volatile anaesthetic sevoflurane affects heart rate in clinical settings. The present study investigated the effect of sevoflurane on sinoatrial (SA) node automaticity and its underlying ionic mechanisms. EXPERIMENTAL APPROACH Spontaneous action potentials and four ionic currents fundamental for pacemaking, namely, the hyperpolarization-activated cation current (If), T-type and L-type Ca2+ currents (ICa,T and ICa,L, respectively), and slowly activating delayed rectifier K+ current (IKs), were recorded in isolated guinea-pig SA node cells using perforated and conventional whole-cell patch-clamp techniques. Heart rate in guinea-pigs was recorded ex vivo in Langendorff mode and in vivo during sevoflurane inhalation. KEY RESULTS In isolated SA node cells, sevoflurane (0.12–0.71 mM) reduced the firing rate of spontaneous action potentials and its electrical basis, diastolic depolarization rate, in a qualitatively similar concentration-dependent manner. Sevoflurane (0.44 mM) reduced spontaneous firing rate by approximately 25% and decreased If, ICa,T, ICa,L and IKs by 14.4, 31.3, 30.3 and 37.1%, respectively, without significantly affecting voltage dependence of current activation. The negative chronotropic effect of sevoflurane was partly reproduced by a computer simulation of SA node cell electrophysiology. Sevoflurane reduced heart rate in Langendorff-perfused hearts, but not in vivo during sevoflurane inhalation in guinea-pigs. CONCLUSIONS AND IMPLICATIONS Sevoflurane at clinically relevant concentrations slowed diastolic depolarization and thereby reduced pacemaking activity in SA node cells, at least partly due to its inhibitory effect on If, ICa,T and ICa,L. These findings provide an important electrophysiological basis of alterations in heart rate during sevoflurane anaesthesia in clinical settings. PMID:22356456

  9. [Advances in studies on toxicity of aconite].

    PubMed

    Chen, Rong-Chang; Sun, Gui-Bo; Zhang, Qiang; Ye, Zu-Guang; Sun, Xiao-Bo

    2013-04-01

    Aconite has the efficacy of reviving yang for resuscitation and of dispelling cold and relieving pain; it is widely used in the clinic and shows unique efficacy in treating severe diseases. However, aconite is highly toxic, with obvious cardiotoxicity and neurotoxicity. Its toxicological mechanism mainly involves effects on voltage-dependent sodium channels, the release of neurotransmitters and changes in receptors, and the promotion of lipid peroxidation and cell apoptosis in heart, liver and other tissues. Aconite toxicity is reduced mainly through compatibility (combination with other herbs) and processing. Besides traditional processing methods, many new modern processing techniques can also achieve the objectives of detoxification and efficacy enhancement. In order to further develop the medicinal value of aconite and reduce its side effects in clinical application, this article gives comprehensive comments on aconite's toxicity characteristics, mechanisms and detoxification methods on the basis of relevant reports on aconite's toxicity and the authors' experimental studies.

  10. 7 CFR 1412.47 - Planting flexibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... payments will not be reduced for the planting or harvesting of the fruit, vegetable, or wild rice; (2) The... the payment acres for the farm will be reduced on an acre-for-acre basis; or (3) The producer has a...; and (ii) The payment acres for the farm will be reduced on an acre-for-acre basis. (e) Double-cropping...

  11. Accelerated damage visualization using binary search with fixed pitch-catch distance laser ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Park, Byeongjin; Sohn, Hoon

    2017-07-01

    Laser ultrasonic scanning, especially full-field wave propagation imaging, is attractive for damage visualization thanks to its noncontact nature, sensitivity to local damage, and high spatial resolution. However, its practicality is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Inspired by binary search, an accelerated damage visualization technique is developed to visualize damage with a reduced scanning time. The pitch-catch distance between the excitation point and the sensing point is also fixed during scanning to maintain a high signal-to-noise ratio (SNR) of the measured ultrasonic responses. The approximate damage boundary is identified by examining the interactions between ultrasonic waves and damage observed at the scanning points that are sparsely selected by a binary search algorithm. Here, a time-domain laser ultrasonic response is transformed into a spatial ultrasonic domain response using a basis pursuit approach so that the interactions between ultrasonic waves and damage, such as reflections and transmissions, can be better identified in the spatial ultrasonic domain. Then, the area inside the identified damage boundary is visualized as damage. The performance of the proposed damage visualization technique is validated using a numerical simulation performed on an aluminum plate with a notch and experiments performed on an aluminum plate with a crack and a wind turbine blade with delamination. The proposed damage visualization technique accelerates the damage visualization process in three aspects: (1) the number of measurements necessary for damage visualization is dramatically reduced by the binary search algorithm; (2) the number of averages necessary to achieve a high SNR is reduced by keeping the wave propagation distance short; and (3) with the proposed technique, the same damage can be identified at a lower spatial resolution than that required by full-field wave propagation imaging.
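    The measurement-saving idea can be illustrated in one dimension: if a scan line crosses a contiguous damaged region, its boundary can be bracketed with O(log n) measurements instead of a full-resolution sweep. The sketch below is a hedged, simplified stand-in (one scan line, a boolean damage indicator) for the paper's 2-D boundary search; the function names are assumptions.

```python
def find_boundary(measure, lo, hi):
    """Locate the first scan index at which `measure` reports damage.

    `measure(i)` stands in for one laser pitch-catch measurement and
    returns True inside the damaged region.  Assuming the region is
    contiguous, measure(lo) is False and measure(hi) is True, binary
    search needs only O(log(hi - lo)) measurements.
    """
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if measure(mid):
            hi = mid          # damage extends at least to `mid`
        else:
            lo = mid          # still intact at `mid`
    return hi
```

    For a 100-point scan line this localizes the boundary with about seven measurements rather than a hundred.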

  12. A Method to Improve Electron Density Measurement of Cone-Beam CT Using Dual Energy Technique

    PubMed Central

    Men, Kuo; Dai, Jian-Rong; Li, Ming-Hui; Chen, Xin-Yuan; Zhang, Ke; Tian, Yuan; Huang, Peng; Xu, Ying-Jie

    2015-01-01

    Purpose. To develop a dual energy imaging method to improve the accuracy of electron density measurement with a cone-beam CT (CBCT) device. Materials and Methods. The imaging system is the XVI CBCT system on an Elekta Synergy linac. Projection data were acquired with the high and low energy X-ray, respectively, to set up a basis material decomposition model. Virtual phantom simulations and phantom experiments were carried out for quantitative evaluation of the method. Phantoms were also scanned twice, with the high and low energy X-ray, respectively. The data were decomposed into projections of the two basis material coefficients according to the model set up earlier. The two sets of decomposed projections were used to reconstruct CBCT images of the basis material coefficients. Then, the images of electron densities were calculated from these CBCT images. Results. The difference between the calculated and theoretical values was within 2%, and their correlation coefficient was about 1.0. The dual energy imaging method obtained more accurate electron density values and clearly reduced beam-hardening artifacts. Conclusion. A novel dual energy CBCT imaging method to calculate electron densities was developed. It acquires more accurate values and potentially provides a platform for dose calculation. PMID:26346510
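    In its simplest (monoenergetic, image-domain) form, two-material basis decomposition reduces to solving a 2x2 linear system per measurement, mapping the attenuation values at the two energies to the two basis-material coefficients. The sketch below is a hedged simplification of that idea, not the paper's projection-domain model; the attenuation numbers are made-up illustrative values.

```python
import numpy as np

def decompose(mu_low, mu_high, basis):
    """Two-material basis decomposition for one measurement (sketch).

    `basis` is a 2x2 matrix whose columns hold the attenuation
    coefficients of the two basis materials at the low and high
    energies.  Solving the system yields coefficients (a1, a2) with
        mu(E) = a1 * mu1(E) + a2 * mu2(E),
    from which an electron density image can then be computed as a
    known weighted combination of the basis materials.
    """
    return np.linalg.solve(np.asarray(basis, float),
                           np.array([mu_low, mu_high], float))
```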

  13. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining a similar accuracy to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
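    The non-intrusive spectral projection at the heart of such methods can be shown in one dimension: for a standard normal input, the PC coefficients are projections onto probabilists' Hermite polynomials, evaluated by Gauss-Hermite quadrature, and a sparse basis keeps only the significant terms. This is a minimal, hedged illustration of NISP, not the FANISP algorithm; the truncation order and thresholds are illustrative choices.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coefficients(f, order, quad_points=40):
    """Non-intrusive spectral projection of f(X), X ~ N(0, 1).

    Uses probabilists' Hermite polynomials He_n, orthogonal under the
    standard normal with squared norm n!, so that
        c_n = E[f(X) He_n(X)] / n!
    with the expectation computed by Gauss-Hermite quadrature.
    """
    x, w = He.hermegauss(quad_points)       # nodes/weights for exp(-x^2/2)
    w = w / math.sqrt(2.0 * math.pi)        # normalize to the N(0,1) pdf
    fx = f(x)
    return np.array([
        np.sum(w * fx * He.hermeval(x, [0] * n + [1])) / math.factorial(n)
        for n in range(order + 1)
    ])

def sparsify(coeffs, tol=1e-10):
    """Keep only the significant terms of the expansion (sparse PC basis)."""
    return {n: c for n, c in enumerate(coeffs) if abs(c) > tol}
```

    For f(x) = x^2 the sparse expansion is exactly {He_0, He_2} with unit coefficients, since x^2 = He_2(x) + 1.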

  14. Ion-Exclusion Chromatography for Analyzing Organics in Water

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Rutz, Jeffrey A.; Schultz, John R.

    2006-01-01

    A liquid-chromatography technique has been developed for use in the quantitative analysis of urea (and of other nonvolatile organic compounds typically found with urea) dissolved in water. The technique involves the use of a column that contains an ion-exclusion resin; heretofore, this column has been sold for use in analyzing monosaccharides and food softeners, but not for analyzing water supplies. The prior technique commonly used to analyze water for urea content has been one of high-performance liquid chromatography (HPLC), with reliance on hydrophobic interactions between analytes in a water sample and long-chain alkyl groups bonded to an HPLC column. The prior technique has proven inadequate because of a strong tendency toward co-elution of urea with other compounds. Co-elution often causes the urea and other compounds to be crowded into a narrow region of the chromatogram (see left part of figure), thereby giving rise to low chromatographic resolution and misidentification of compounds. It is possible to quantitate urea or another analyte via ultraviolet- and visible-light absorbance measurements, but in order to perform such measurements, it is necessary to dilute the sample, causing a significant loss of sensitivity. The ion-exclusion resin used in the improved technique is sulfonated polystyrene in the calcium form. Whereas the alkyl-chain column used in the prior technique separates compounds on the basis of polarity only, the ion-exclusion-resin column used in the improved technique separates compounds on the basis of both molecular size and electric charge. 
As a result, the degree of separation is increased: instead of being crowded together into a single chromatographic peak only about 1 to 2 minutes wide as in the prior technique, the chromatographic peaks of different compounds are now separated from each other and spread out over a range about 33 minutes wide (see right part of figure), and the urea peak can readily be distinguished from the other peaks. Although the analysis takes more time in the improved technique, this disadvantage is offset by two important advantages: Sensitivity is increased. The minimum concentration of urea that can be measured is reduced (to between 1/5 and 1/3 of that of the prior technique) because it is not necessary to dilute the sample. The separation of peaks facilitates the identification and quantitation of the various compounds. The resolution of the compounds other than urea makes it possible to identify those compounds by use of mass spectrometry.

  15. A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Capolino, F; Wilton, D

    2005-02-02

    A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here that belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
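    The FFT acceleration in GIFFT-type solvers rests on the convolutional structure of translation-invariant interactions on a regular grid. As a minimal, hedged illustration (a 1-D circulant system rather than the Toeplitz-block matrices, embedded in circulants, of a real array code), the matrix-vector product needed by an iterative solver can be evaluated in O(n log n) without ever forming the dense matrix:

```python
import numpy as np

def circulant_matvec(first_col, x):
    """Multiply a circulant matrix by a vector via the FFT.

    A circulant matrix is diagonalized by the DFT, so C @ x equals
    ifft(fft(c) * fft(x)), where c is the first column of C.  Fast
    grid-based solvers exploit this convolution structure to accelerate
    the matrix-vector products inside an iterative solver.
    """
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))
```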

  16. Sympathetic Cooling of Lattice Atoms by a Bose-Einstein Condensate

    DTIC Science & Technology

    2010-08-13

    average out to zero net change in momentum. This type of cooling is the basis for techniques such as Zeeman slowing and magneto-optical traps. On a more basic level, an excited ... cause stimulated emission of a second excitation. A quantitative explanation requires the use of the density fluctuation operator.

  17. Note: Photopyroelectric measurement of thermal effusivity of transparent liquids by a method free of fitting procedures.

    PubMed

    Ivanov, R; Marín, E; Villa, J; Aguilar, C Hernández; Pacheco, A Domínguez; Garrido, S Hernández

    2016-02-01

    In a recent paper published in this journal [R. Ivanov et al., Rev. Sci. Instrum. 86, 064902 (2015)], a methodology free of fitting procedures for determining the thermal effusivity of liquids using the electropyroelectric technique was reported. Here the same measurement principle is extended to the well-known photopyroelectric technique. The theoretical and experimental bases of the method are presented and its usefulness is demonstrated with measurements on test samples.

  18. Taking a statistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wild, M.; Rouhani, S.

    1995-02-01

    A typical site investigation entails extensive sampling and monitoring. In the past, sampling plans have been designed on purely ad hoc bases, leading to significant expenditures and, in some cases, collection of redundant information. In many instances, sampling costs exceed the true worth of the collected data. The US Environmental Protection Agency (EPA) therefore has advocated the use of geostatistics to provide a logical framework for sampling and analysis of environmental data. Geostatistical methodology uses statistical techniques for the spatial analysis of a variety of earth-related data. The use of geostatistics was developed by the mining industry to estimate ore concentrations. The same procedure is effective in quantifying environmental contaminants in soils for risk assessments. Unlike classical statistical techniques, geostatistics offers procedures to incorporate the underlying spatial structure of the investigated field. Sample points spaced close together tend to be more similar than samples spaced further apart. This can guide sampling strategies and determine complex contaminant distributions. Geostatistical techniques can be used to evaluate site conditions on the basis of regular, irregular, random and even spatially biased samples. In most environmental investigations, it is desirable to concentrate sampling in areas of known or suspected contamination. The rigorous mathematical procedures of geostatistics allow for accurate estimates at unsampled locations, potentially reducing sampling requirements. The use of geostatistics serves as a decision-aiding and planning tool and can significantly reduce short-term site assessment costs and long-term sampling and monitoring needs, as well as lead to more accurate and realistic remedial design criteria.
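    The spatial-structure modeling that geostatistics relies on typically starts from the empirical semivariogram, which quantifies how dissimilarity between samples grows with separation distance. The sketch below is a minimal, hedged illustration; the pair-selection tolerance and function names are illustrative choices, not part of any standard package.

```python
import math

def semivariogram(coords, values, lag, tol):
    """Empirical semivariogram estimate at one lag distance.

    gamma(h) = (1 / (2 N(h))) * sum over pairs separated by ~h of
    (z_i - z_j)^2, where N(h) counts the pairs whose separation lies
    within `tol` of `lag`.  Fitting a model to gamma(h) over several
    lags is the basis for kriging estimates at unsampled locations.
    """
    total, count = 0.0, 0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])
            if abs(d - lag) <= tol:
                total += (values[i] - values[j]) ** 2
                count += 1
    return total / (2 * count) if count else float("nan")
```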

  19. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

    2007-01-01

    Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links basis function, Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leave the remaining terms analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handle both.

  20. A minimal approach to the scattering of physical massless bosons

    NASA Astrophysics Data System (ADS)

    Boels, Rutger H.; Luo, Hui

    2018-05-01

    Tree and loop level scattering amplitudes which involve physical massless bosons are derived directly from physical constraints such as locality, symmetry and unitarity, bypassing path integral constructions. Amplitudes can be projected onto a minimal basis of kinematic factors through linear algebra, by employing four-dimensional spinor helicity methods or, most generally, projection techniques. The linear algebra analysis is closely related to amplitude relations, especially the Bern-Carrasco-Johansson relations for gluon amplitudes and the Kawai-Lewellen-Tye relations between gluon and graviton amplitudes. Projection techniques are known to reduce the computation of loop amplitudes with spinning particles to scalar integrals. Unitarity, locality and integration-by-parts identities can then be used to fix complete tree and loop amplitudes efficiently. The loop amplitudes follow algorithmically from the trees. A number of proof-of-concept examples are presented. These include the planar four point two-loop amplitude in pure Yang-Mills theory as well as a range of one loop amplitudes with internal and external scalars, gluons and gravitons. Several interesting features of the results are highlighted, such as the vanishing of certain basis coefficients for gluon and graviton amplitudes. Effective field theories are naturally and efficiently included into the framework. Dimensional regularisation is employed throughout; different regularisation schemes are worked out explicitly. The presented methods appear most powerful in non-supersymmetric theories in cases with relatively few legs, but with potentially many loops. For instance, in the introduced approach iterated unitarity cuts of four point amplitudes for non-supersymmetric gauge and gravity theories can be computed by matrix multiplication, generalising the so-called rung-rule of maximally supersymmetric theories. 
The philosophy of the approach to kinematics also leads to a technique to control colour quantum numbers of scattering amplitudes with matter, especially efficient in the adjoint and fundamental representations.

  1. Historical shoreline mapping (I): improving techniques and reducing positioning errors

    USGS Publications Warehouse

    Thieler, E. Robert; Danforth, William W.

    1994-01-01

    A critical need exists among coastal researchers and policy-makers for a precise method to obtain shoreline positions from historical maps and aerial photographs. A number of methods that vary widely in approach and accuracy have been developed to meet this need. None of the existing methods, however, address the entire range of cartographic and photogrammetric techniques required for accurate coastal mapping. Thus, their application to many typical shoreline mapping problems is limited. In addition, no shoreline mapping technique provides an adequate basis for quantifying the many errors inherent in shoreline mapping using maps and air photos. As a result, current assessments of errors in air photo mapping techniques generally (and falsely) assume that errors in shoreline positions are represented by the sum of a series of worst-case assumptions about digitizer operator resolution and ground control accuracy. These assessments also ignore altogether other errors that commonly approach ground distances of 10 m. This paper provides a conceptual and analytical framework for improved methods of extracting geographic data from maps and aerial photographs. We also present a new approach to shoreline mapping using air photos that revises and extends a number of photogrammetric techniques. These techniques include (1) developing spatially and temporally overlapping control networks for large groups of photos; (2) digitizing air photos for use in shoreline mapping; (3) preprocessing digitized photos to remove lens distortion and film deformation effects; (4) simultaneous aerotriangulation of large groups of spatially and temporally overlapping photos; and (5) using a single-ray intersection technique to determine geographic shoreline coordinates and express the horizontal and vertical error associated with a given digitized shoreline. 
As long as historical maps and air photos are used in studies of shoreline change, there will be a considerable amount of error (on the order of several meters) present in shoreline position and rate-of-change calculations. The techniques presented in this paper, however, provide a means to reduce and quantify these errors so that realistic assessments of the technological noise (as opposed to geological noise) in geographic shoreline positions can be made.

  2. Optimization of Turbine Blade Design for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1998-01-01

    To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations of flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained. The performances of two different design functions for radial-basis networks are compared: one based on an accuracy requirement, the other on a limit on the network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate for large data sets (up to 765 simulations have been used) than the polynomial-based response surface method. For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of two different network types, a radial-basis network and a back-propagation network, depends on the number of input data, the number of iterations required for the radial-basis network is less than that for the back-propagation network.
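
    As an illustration of the radial-basis idea discussed above, the following sketch (with synthetic data and made-up names, not the report's actual aerodynamic samples) fits a Gaussian radial-basis network to a toy lift curve by linear least squares:

```python
import numpy as np

def rbf_design_matrix(x, centers, width):
    # Gaussian radial basis functions evaluated at each sample point
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Synthetic training data standing in for CFD samples (illustrative only)
x_train = np.linspace(-5.0, 15.0, 40)            # angle of attack, degrees
y_train = 0.1 * x_train - 0.002 * x_train ** 2   # toy lift curve

centers = x_train[::4]                           # network size capped by subsampling centers
Phi = rbf_design_matrix(x_train, centers, width=2.0)
weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

x_test = np.array([7.5])
y_pred = rbf_design_matrix(x_test, centers, 2.0) @ weights
```

    The two "design functions" compared in the report correspond here to either growing `centers` until a target residual is met or fixing the number of centers in advance.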

  3. Fabrication and characterization of a co-planar detector in diamond for low energy single ion implantation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, John Bishoy Sam; Pacheco, Jose L.; Aguirre, Brandon Adrian

    2016-08-09

    We demonstrate low energy single ion detection using a co-planar detector fabricated on a diamond substrate and characterized by ion beam induced charge collection. Histograms are taken with low fluence ion pulses illustrating quantized ion detection down to a single ion with a signal-to-noise ratio of approximately 10. We anticipate that this detection technique can serve as a basis to optimize the yield of single color centers in diamond. In conclusion, the ability to count ions into a diamond substrate is expected to reduce the uncertainty in the yield of color center formation by removing Poisson statistics from the implantation process.

  4. Future developments in aeronautical satellite communications

    NASA Technical Reports Server (NTRS)

    Wood, Peter

    1990-01-01

    Very shortly, aeronautical satellite communications will be introduced on a worldwide basis. By the end of the year, voice communications (both to the cabin and cockpit) and packet data communications will be available to both airlines and executive aircraft. During the decade following the introduction of the system, there will be many enhancements and developments which will increase the range of applications, expand the potential number of users, and reduce costs. A number of ways in which the system is expected to evolve over this period are presented. Among the issues which are covered are the impact of spot beam satellites, spectrum and power conservation techniques, and the expanding range of user services.

  5. Study of guidance techniques for aerial application of agricultural compounds

    NASA Technical Reports Server (NTRS)

    Caldwell, J. D.; Dimmock, P. B. A.; Watkins, R. H.

    1980-01-01

    Candidate systems were identified for evaluation of suitability in meeting specified accuracy requirements for a swath guidance system in an agricultural aircraft. Further examination reduced the list of potential candidates to a single category, i.e., transponder-type systems, for detailed evaluation. Within this category, three systems were found which met the basic accuracy requirements of the work statement: the Flying Flagman, the Electronic Flagging, and the Raydist Director systems. In addition to evaluating the systems against the specified requirements, each system was compared with the other two systems on a relative basis. The conclusions supported by the analyses show the Flying Flagman system to be the most suitable system currently available to meet the requirements.

  6. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.

    1984-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two-dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one-dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295.
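
    The core operation here, two-dimensional cyclic correlation of echo data with a point-target impulse response, can be sketched as follows. The paper computes it with a fast polynomial transform; this illustrative snippet instead uses ordinary FFTs, which produce the same cyclic result on a small synthetic array:

```python
import numpy as np

rng = np.random.default_rng(1)
echo = rng.standard_normal((8, 8))      # stand-in for raw SAR echo data
impulse = rng.standard_normal((8, 8))   # stand-in for point-target response

# Frequency-domain cyclic correlation: IFFT( conj(FFT(h)) * FFT(x) )
corr_fft = np.fft.ifft2(np.conj(np.fft.fft2(impulse)) * np.fft.fft2(echo)).real

# Direct definition for comparison:
# c[m, n] = sum_{j,k} h[j, k] * x[(j + m) mod M, (k + n) mod N]
M, N = echo.shape
corr_direct = np.zeros((M, N))
for m in range(M):
    for n in range(N):
        for j in range(M):
            for k in range(N):
                corr_direct[m, n] += impulse[j, k] * echo[(j + m) % M, (k + n) % N]
```

    The fast polynomial transform replaces the complex FFTs with exact integer-friendly polynomial arithmetic, but the correlation being computed is the same.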

  7. Improved lifetime high voltage switch electrode

    NASA Astrophysics Data System (ADS)

    Halverson, W.

    1985-06-01

    In this Phase 1 Small Business Innovation Research (SBIR) program, preliminary tests of ion implantation to increase the lifetime of spark switch electrodes have indicated that a 185 keV carbon ion implant into a tungsten-copper composite has reduced electrode erosion by a factor of two to four. Apparently, the thin layer of tungsten carbide (WC) has better thermal properties than pure tungsten; the WC may have penetrated into the unimplanted body of the electrode by liquid and/or solid phase diffusion during erosion testing. These encouraging results should provide the basis for a Phase 2 SBIR program to investigate further the physical and chemical effects of ion implantation on spark gap electrodes and to optimize the technique for applications.

  8. Efficient multiparty quantum-secret-sharing schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao Li; Deng Fuguo; Key Laboratory for Quantum Information and Measurements, MOE, Beijing 100084

    In this work, we generalize the quantum-secret-sharing scheme of Hillery, Buzek, and Berthiaume [Phys. Rev. A 59, 1829 (1999)] to arbitrary multiparties. Explicit expressions for the shared secret bit are given. It is shown that in the Hillery-Buzek-Berthiaume quantum-secret-sharing scheme the secret information is shared in the parity of binary strings formed by the measured outcomes of the participants. In addition, we have increased the efficiency of the quantum-secret-sharing scheme by generalizing two techniques from quantum key distribution. The favored-measuring-basis quantum-secret-sharing scheme is developed from the Lo-Chau-Ardehali technique [H. K. Lo, H. F. Chau, and M. Ardehali, e-print quant-ph/0011056], where all the participants choose their measuring basis asymmetrically, and the measuring-basis-encrypted quantum-secret-sharing scheme is developed from the Hwang-Koh-Han technique [W. Y. Hwang, I. G. Koh, and Y. D. Han, Phys. Lett. A 244, 489 (1998)], where all participants choose their measuring basis according to a control key. Both schemes are asymptotically 100% efficient; hence nearly all the Greenberger-Horne-Zeilinger states in a quantum-secret-sharing process are used to generate shared secret information.
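
    The parity structure mentioned above can be illustrated with a purely classical toy (not a simulation of the quantum protocol): the secret bit is recoverable only as the XOR of all participants' bits, so no proper subset of participants learns it.

```python
import secrets

def share_bit(secret_bit, n_parties):
    # Dealer hands out random bits; the last share fixes the overall parity.
    shares = [secrets.randbelow(2) for _ in range(n_parties - 1)]
    parity = secret_bit
    for b in shares:
        parity ^= b
    shares.append(parity)
    return shares

def reconstruct(shares):
    # XOR of all shares recovers the secret; any proper subset sees only random bits.
    bit = 0
    for b in shares:
        bit ^= b
    return bit

shares = share_bit(1, 5)
recovered = reconstruct(shares)   # equals the original secret bit
```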

  9. A Demonstration of the Molecular Basis of Sickle-Cell Anemia.

    ERIC Educational Resources Information Center

    Fox, Marty; Gaynor, John J.

    1996-01-01

    Describes a demonstration that permits the separation of different hemoglobin molecules within two to three hours. Introduces students to the powerful technique of gel electrophoresis and illustrates the molecular basis of sickle-cell anemia. (JRH)

  10. Comparing Different Strategies in Directed Evolution of Enzyme Stereoselectivity: Single- versus Double-Code Saturation Mutagenesis.

    PubMed

    Sun, Zhoutong; Lonsdale, Richard; Li, Guangyue; Reetz, Manfred T

    2016-10-04

    Saturation mutagenesis at sites lining the binding pockets of enzymes constitutes a viable protein engineering technique for enhancing or inverting stereoselectivity. Statistical analysis shows that oversampling in the screening step (the bottleneck) increases astronomically as the number of residues in the randomization site increases, which is the reason why reduced amino acid alphabets have been employed, in addition to splitting large sites into smaller ones. Limonene epoxide hydrolase (LEH) has previously served as the experimental platform in these methodological efforts, enabling comparisons between single-code saturation mutagenesis (SCSM) and triple-code saturation mutagenesis (TCSM); these employ either only one or three amino acids, respectively, as building blocks. In this study the comparative platform is extended by exploring the efficacy of double-code saturation mutagenesis (DCSM), in which the reduced amino acid alphabet consists of two members, chosen according to the principles of rational design on the basis of structural information. The hydrolytic desymmetrization of cyclohexene oxide is used as the model reaction, with formation of either (R,R)- or (S,S)-cyclohexane-1,2-diol. DCSM proves to be clearly superior to the likewise tested SCSM, affording both R,R- and S,S-selective mutants. These variants are also good catalysts in reactions of further substrates. Docking computations reveal the basis of enantioselectivity. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
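
    The screening bottleneck mentioned above can be made concrete with a standard back-of-the-envelope estimate: for a library of V distinct variants, roughly 3V transformants must be screened for 95% coverage. The sketch below uses illustrative parameters, counts amino acid combinations, and ignores codon degeneracy:

```python
import math

def transformants_for_coverage(alphabet_size, n_residues, coverage=0.95):
    # Number of clones to screen so that each of the V = alphabet^residues
    # variants appears with the requested probability: T = -V * ln(1 - coverage).
    variants = alphabet_size ** n_residues
    return math.ceil(-variants * math.log(1.0 - coverage))

# Full 20-amino-acid randomization vs. a two-member alphabet (DCSM) at a
# 4-residue site:
full = transformants_for_coverage(20, 4)   # roughly 4.8e5 clones to screen
dcsm = transformants_for_coverage(2, 4)    # a few dozen clones
```

    This is why reduced alphabets (and splitting large sites) keep the screening effort tractable.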

  11. The Random-Map Technique: Enhancing Mind-Mapping with a Conceptual Combination Technique to Foster Creative Potential

    ERIC Educational Resources Information Center

    Malycha, Charlotte P.; Maier, Günter W.

    2017-01-01

    Although creativity techniques are highly recommended in working environments, their effects have been scarcely investigated. Two cognitive processes are often considered to foster creative potential and are, therefore, taken as a basis for creativity techniques: knowledge activation and conceptual combination. In this study, both processes were…

  12. Material model validation for laser shock peening process simulation

    NASA Astrophysics Data System (ADS)

    Amarchinta, H. K.; Grandhi, R. V.; Langer, K.; Stargel, D. S.

    2009-01-01

    Advanced mechanical surface enhancement techniques have been used successfully to increase the fatigue life of metallic components. These techniques impart deep compressive residual stresses into the component to counter potentially damage-inducing tensile stresses generated under service loading. Laser shock peening (LSP) is an advanced mechanical surface enhancement technique used predominantly in the aircraft industry. To reduce costs and make the technique available on a large-scale basis for industrial applications, simulation of the LSP process is required. Accurate simulation of the LSP process is a challenging task, because the process has many parameters, such as laser spot size, pressure profile and material model, that must be precisely determined. This work focuses on investigating the appropriate material model that could be used in simulation and design. In the LSP process the material is subjected to strain rates of 10^6 s^-1, which is very high compared with conventional strain rates. The importance of an accurate material model increases because the material behaves significantly differently at such high strain rates. This work investigates the effect of multiple nonlinear material models for representing the elastic-plastic behavior of materials. Elastic perfectly plastic, Johnson-Cook and Zerilli-Armstrong models are used, and the performance of each model is compared with available experimental results.
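
    A minimal sketch of the Johnson-Cook model named above, with placeholder parameters (not calibrated LSP values), shows how the logarithmic rate term raises the flow stress at LSP-like strain rates:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A=520e6, B=477e6, n=0.52, C=0.025, m=1.0,
                        eps0=1.0, T_room=293.0, T_melt=1793.0):
    """Flow stress sigma = (A + B*eps^n)(1 + C*ln(edot/edot0))(1 - T*^m).

    All parameter values here are illustrative placeholders.
    """
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return (A + B * strain ** n) \
        * (1.0 + C * math.log(strain_rate / eps0)) \
        * (1.0 - T_star ** m)

# At LSP-like strain rates (~1e6 1/s) the rate term raises the flow stress
# noticeably relative to quasi-static loading at the same strain:
quasi_static = johnson_cook_stress(0.1, 1.0, 293.0)
lsp_rate = johnson_cook_stress(0.1, 1e6, 293.0)
```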

  13. The Development of Models for Carbon Dioxide Reduction Technologies for Spacecraft Air Revitalization

    NASA Technical Reports Server (NTRS)

    Swickrath, Michael J.; Anderson, Molly

    2012-01-01

    Through the respiration process, humans consume oxygen (O2) while producing carbon dioxide (CO2) and water (H2O) as byproducts. For long term space exploration, CO2 concentration in the atmosphere must be managed to prevent hypercapnia. Moreover, CO2 can be used as a source of oxygen through chemical reduction, serving to minimize the amount of oxygen required at launch. Reduction can be achieved through a number of techniques. NASA is currently exploring the Sabatier reaction, the Bosch reaction, and co-electrolysis of CO2 and H2O for this process. Proof-of-concept experiments and prototype units for all three processes have proven capable of returning useful commodities for space exploration. All three techniques have demonstrated the capacity to reduce CO2 in the laboratory, yet there is interest in understanding how all three techniques would perform at a system level within a spacecraft. Consequently, there is an impetus to develop predictive models for these processes that can be readily rescaled and integrated into larger system models. Such analysis tools provide the ability to evaluate each technique on a comparable basis with respect to processing rates. This manuscript describes the current models for the carbon dioxide reduction processes under parallel developmental efforts. Comparison to experimental data is provided where available for verification purposes.

  14. Advanced aeroservoelastic stabilization techniques for hypersonic flight vehicles

    NASA Technical Reports Server (NTRS)

    Chan, Samuel Y.; Cheng, Peter Y.; Myers, Thomas T.; Klyde, David H.; Magdaleno, Raymond E.; Mcruer, Duane T.

    1992-01-01

    Advanced high-performance vehicles, including Single-Stage-To-Orbit (SSTO) hypersonic flight vehicles, that are statically unstable require higher-bandwidth flight control systems to compensate for the instability, resulting in interactions between the flight control system, the engine/propulsion dynamics, and the low-frequency structural modes. Military specifications, such as MIL-F-9490D and MIL-F-87242, tend to limit treatment of structural modes to conventional gain-stabilization techniques. The conventional gain-stabilization techniques, however, introduce low-frequency effective time delays which can be troublesome from a flying-qualities standpoint. These time delays can be alleviated by appropriate blending of gain and phase stabilization techniques (referred to as Hybrid Phase Stabilization or HPS) for the low-frequency structural modes. The potential of using HPS for compensating structural mode interaction was previously explored. It was shown that effective time delay was significantly reduced with the use of HPS; however, the HPS design was seen to have greater residual response than a conventional gain-stabilized design. Additional work performed to advance and refine the HPS design procedure, to further develop residual response metrics as a basis for alternative structural stability specifications, and to develop strategies for validating HPS design and specification concepts in manned simulation is presented. Stabilization design sensitivity to structural uncertainties and aircraft-centered requirements are also assessed.

  15. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
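
    A sketch of the backward-elimination idea for knots (illustrative Python using SciPy, not the report's FORTRAN programs): repeatedly delete the interior knot whose removal increases the residual sum of squares the least, stopping when any further deletion would degrade the fit beyond a tolerance.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def backward_knot_elimination(x, y, knots, k=3, tol=1e-3):
    # Greedy backward elimination over interior knots; names and the
    # stopping rule are illustrative, not the report's exact procedure.
    knots = list(knots)

    def rss(t):
        spline = LSQUnivariateSpline(x, y, t, k=k)
        return float(np.sum((spline(x) - y) ** 2))

    while len(knots) > 1:
        trials = [(rss(knots[:i] + knots[i + 1:]), i) for i in range(len(knots))]
        best_rss, best_i = min(trials)
        if best_rss > rss(knots) + tol:
            break                      # every remaining knot is needed
        del knots[best_i]
    return knots

x = np.linspace(0.0, 1.0, 60)
y = np.minimum(x, 1.0 - x)             # tent data: one genuine breakpoint at 0.5
kept = backward_knot_elimination(x, y, knots=[0.2, 0.35, 0.5, 0.65, 0.8], k=1)
```

    On the tent data, the superfluous knots are eliminated and only the genuine breakpoint survives.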

  16. Ionospheric propagation correction modeling for satellite altimeters

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.

    1981-01-01

    The theoretical basis and available accuracy verifications were reviewed and compared for ionospheric correction procedures based on a global ionospheric model driven by solar flux, and a technique in which measured electron content (using Faraday rotation measurements) for one path is mapped into corrections for a hemisphere. For these two techniques, RMS errors for correcting satellite altimeter data (at 14 GHz) are estimated to be 12 cm and 3 cm, respectively. On the basis of global accuracy and reliability after implementation, the solar flux model is recommended.

  17. Fitting multidimensional splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  18. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud

    PubMed Central

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet, and it facilitates third-party infrastructure and applications. While customers have no visibility into how their data are stored on the service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make data access effective through several fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for the multiple-keyword request, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization. PMID:26380364
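
    The fuzzy-matching layer such systems build on can be sketched with the standard wildcard-based fuzzy keyword set construction (illustrative only; this is not the paper's BTree index or its encryption): each keyword is expanded into its edit-distance-1 wildcard forms, and a query matches a keyword when their wildcard sets intersect.

```python
def wildcard_set(word):
    # Edit-distance-1 wildcard forms: the word itself, every single-character
    # substitution slot, and every single-character insertion slot.
    forms = {word}
    for i in range(len(word)):
        forms.add(word[:i] + '*' + word[i + 1:])   # substitution at position i
        forms.add(word[:i] + '*' + word[i:])       # insertion before position i
    forms.add(word + '*')                          # insertion at the end
    return forms

# Toy keyword index (stand-in for the searchable index on the server)
index = {kw: wildcard_set(kw) for kw in ['cloud', 'search', 'secure']}

def fuzzy_match(query):
    q = wildcard_set(query)
    return [kw for kw, forms in index.items() if forms & q]
```

    A misspelled query such as 'serch' still resolves to 'search', because the query's insertion form 'se*rch' coincides with the keyword's substitution form.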

  19. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud.

    PubMed

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet, and it facilitates third-party infrastructure and applications. While customers have no visibility into how their data are stored on the service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make data access effective through several fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for the multiple-keyword request, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  20. Tuning oxidation level, electrical conductance and band gap structure on graphene sheets by cyclic atomic layer reduction technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Si-Yong; Hsieh, Chien-Te; Lin, Tzu-Wei

    The present work develops an atomic layer reduction (ALR) method to accurately tune the oxidation level, electrical conductance, band-gap structure, and photoluminescence (PL) response of graphene oxide (GO) sheets. The ALR route is carried out at 200 °C within an ALR cycle number of 10–100. The ALR treatment is capable of stripping surface functionalities (e.g., hydroxyl, carbonyl, and carboxylic groups), producing thermally-reduced GO sheets. The ALR cycle number serves as a controlling factor in adjusting the crystallinity, surface chemistry, and electrical and optical properties of GO sheets. With increasing ALR cycle number, ALR-GO sheets display a high crystallinity, a low oxidation level, an improved electrical conductivity, a narrow band gap, and a tunable PL response. Finally, on the basis of the results, the ALR technique offers great potential for accurately tuning the electrical and optical properties of carbon materials through the cyclic removal of oxygen functionalities, without any complicated thermal and chemical desorption processes.

  1. Tuning oxidation level, electrical conductance and band gap structure on graphene sheets by cyclic atomic layer reduction technique

    DOE PAGES

    Gu, Si-Yong; Hsieh, Chien-Te; Lin, Tzu-Wei; ...

    2018-05-12

    The present work develops an atomic layer reduction (ALR) method to accurately tune the oxidation level, electrical conductance, band-gap structure, and photoluminescence (PL) response of graphene oxide (GO) sheets. The ALR route is carried out at 200 °C within an ALR cycle number of 10–100. The ALR treatment is capable of stripping surface functionalities (e.g., hydroxyl, carbonyl, and carboxylic groups), producing thermally-reduced GO sheets. The ALR cycle number serves as a controlling factor in adjusting the crystallinity, surface chemistry, and electrical and optical properties of GO sheets. With increasing ALR cycle number, ALR-GO sheets display a high crystallinity, a low oxidation level, an improved electrical conductivity, a narrow band gap, and a tunable PL response. Finally, on the basis of the results, the ALR technique offers great potential for accurately tuning the electrical and optical properties of carbon materials through the cyclic removal of oxygen functionalities, without any complicated thermal and chemical desorption processes.

  2. Qubit Manipulations Techniques for Trapped-Ion Quantum Information Processing

    NASA Astrophysics Data System (ADS)

    Gaebler, John; Tan, Ting; Lin, Yiheng; Bowler, Ryan; Jost, John; Meier, Adam; Knill, Emanuel; Leibfried, Dietrich; Wineland, David; Ion Storage Team

    2013-05-01

    We report recent results on qubit manipulation techniques for trapped ions toward scalable quantum information processing (QIP). We demonstrate a platform-independent benchmarking protocol for evaluating the performance of Clifford gates, which form a basis for fault-tolerant QIP. We report a demonstration of an entangling gate scheme proposed by Bermudez et al. [Phys. Rev. A 85, 040302 (2012)] and achieve a fidelity of 0.974(4). This scheme takes advantage of dynamical decoupling, which protects the qubit against dephasing errors. It can be applied directly on magnetic-field-insensitive states, and provides a number of simplifications in experimental implementation compared to some other entangling gates with trapped ions. We also report preliminary results on dissipative creation of entanglement with trapped ions. Creation of an entangled pair does not require discrete logic gates and thus could reduce the level of quantum-coherent control needed for large-scale QIP. Supported by IARPA, ARO contract No. EAO139840, ONR, and the NIST Quantum Information Program.

  3. Arthroscopic trans-osseous rotator cuff repair

    PubMed Central

    Chillemi, Claudio; Mantovani, Matteo

    2017-01-01

    Summary Background: Mechanical factors are at the basis of any tendon healing process, with pressure being an aspect able to positively influence it. For this reason, transosseous rotator cuff repair represents the gold standard procedure for patients affected by a cuff tear, maximizing the tendon footprint contact area and reducing motion at the tendon-to-bone interface. Methods: The Authors present an all-arthroscopic suture-bridge-like transosseous repair with the preparation of a single transosseous tunnel performed thanks to a precise dedicated instrument (Compasso®) and one implant (Elite-SPK®), with the use of only 3 suture wires. In addition, this technique permits accurate preparation of the bony side of the lesion without any risk or complication, such as anchor pull-out and greater tuberosity bone osteolysis. Conclusions: However, even if this technique seems less demanding, the arthroscopic transosseous repair is still an advanced procedure, and should be performed only by well-prepared arthroscopic shoulder surgeons. Level of evidence: V. PMID:28717607

  4. Wavelets in electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Modisette, Jason Perry

    1997-09-01

    Ab initio calculations of the electronic structure of bulk materials and large clusters are not possible on today's computers using current techniques. The storage and diagonalization of the Hamiltonian matrix are the limiting factors in both memory and execution time. The scaling of both quantities with problem size can be reduced by using approximate diagonalization or direct minimization of the total energy with respect to the density matrix in conjunction with a localized basis. Wavelet basis members are much more localized than conventional bases such as Gaussians or numerical atomic orbitals. This localization leads to sparse matrices of the operators that arise in SCF multi-electron calculations. We have investigated the construction of the one-electron Hamiltonian, and also the effective one- electron Hamiltonians that appear in density-functional and Hartree-Fock theories. We develop efficient methods for the generation of the kinetic energy and potential matrices, the Hartree and exchange potentials, and the local exchange-correlation potential of the LDA. Test calculations are performed on one-electron problems with a variety of potentials in one and three dimensions.

  5. Techniques in Marriage and Family Counseling. Volume One. The Family Psychology and Counseling Series.

    ERIC Educational Resources Information Center

    Watts, Richard E., Ed.

    This book is designed to bridge the gap between the reality of professional practice and what is being written about it in professional publications. It is divided into three sections, focusing on the techniques of assessment, transgenerational techniques, and constructivist techniques. Section one argues that assessment is the basis of all…

  6. Evaluation of Programmed Instruction Techniques in Medical Interviewing. Final Report, June 15, 1966 to June 15, 1968.

    ERIC Educational Resources Information Center

    Adler, Leta McKinney; And Others

    Since the medical interview is usually considered to be the basis of all diagnosis and treatment in medicine, this study investigated alternative ways of improving medical interview techniques. To test the hypothesis that the visual (videotape) technique would be more effective than the lecturing or audiotape technique, 12 videotaped interviews…

  7. Emerging Techniques 2: Architectural Programming.

    ERIC Educational Resources Information Center

    Evans, Benjamin H.; Wheeler, C. Herbert, Jr.

    A selected collection of architectural programming techniques has been assembled to aid architects in building design. Several exciting and sophisticated techniques for determining a basis for environmental design have been developed in recent years. These extend to the logic of environmental design and lead to more appropriate and useful…

  8. An intertwined method for making low-rank, sum-of-product basis functions that makes it possible to compute vibrational spectra of molecules with more than 10 atoms

    PubMed Central

    Thomas, Phillip S.

    2017-01-01

    We propose a method for solving the vibrational Schrödinger equation with which one can compute spectra for molecules with more than ten atoms. It uses sum-of-product (SOP) basis functions stored in a canonical polyadic tensor format and generated by evaluating matrix-vector products. By doing a sequence of partial optimizations, in each of which the factors in a SOP basis function for a single coordinate are optimized, the rank of the basis functions is reduced as matrix-vector products are computed. This is better than using an alternating least squares method to reduce the rank, as is done in the reduced-rank block power method. Partial optimization is better because it speeds up the calculation by about an order of magnitude and allows one to significantly reduce the memory cost. We demonstrate the effectiveness of the new method by computing vibrational spectra of two molecules, ethylene oxide (C2H4O) and cyclopentadiene (C5H6), with 7 and 11 atoms, respectively. PMID:28571348
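
    The rank bookkeeping behind this method can be sketched in the two-index (matrix) case, where the analogue of rank reduction is a truncated SVD. The paper works with many-index tensors in canonical polyadic format, where an SVD is unavailable and partial optimization is used instead; this small example only shows how applying a sum-of-products operator inflates the rank of a sum-of-products function and how the result can be recompressed exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.standard_normal((20, 3)), rng.standard_normal((20, 3))
psi = u @ v.T                        # rank-3 "sum-of-products basis function"

h1, h2 = rng.standard_normal((20, 20)), rng.standard_normal((20, 20))
phi = h1 @ psi + psi @ h2.T          # applying a 2-term SOP operator: rank <= 6

U, s, Vt = np.linalg.svd(phi)
rank6 = (U[:, :6] * s[:6]) @ Vt[:6]  # keep 6 terms: exact to round-off
```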

  9. An intertwined method for making low-rank, sum-of-product basis functions that makes it possible to compute vibrational spectra of molecules with more than 10 atoms.

    PubMed

    Thomas, Phillip S; Carrington, Tucker

    2017-05-28

    We propose a method for solving the vibrational Schrödinger equation with which one can compute spectra for molecules with more than ten atoms. It uses sum-of-product (SOP) basis functions stored in a canonical polyadic tensor format and generated by evaluating matrix-vector products. By doing a sequence of partial optimizations, in each of which the factors in a SOP basis function for a single coordinate are optimized, the rank of the basis functions is reduced as matrix-vector products are computed. This is better than using an alternating least squares method to reduce the rank, as is done in the reduced-rank block power method. Partial optimization is better because it speeds up the calculation by about an order of magnitude and allows one to significantly reduce the memory cost. We demonstrate the effectiveness of the new method by computing vibrational spectra of two molecules, ethylene oxide (C2H4O) and cyclopentadiene (C5H6), with 7 and 11 atoms, respectively.

  10. Nonlinear Reduced-Order Analysis with Time-Varying Spatial Loading Distributions

    NASA Technical Reports Server (NTRS)

    Przekop, Adam

    2008-01-01

    Oscillating shocks acting in combination with high-intensity acoustic loadings present a challenge to the design of resilient hypersonic flight vehicle structures. This paper addresses some features of this loading condition and certain aspects of a nonlinear reduced-order analysis with emphasis on system identification leading to formation of a robust modal basis. The nonlinear dynamic response of a composite structure subject to the simultaneous action of locally strong oscillating pressure gradients and high-intensity acoustic loadings is considered. The reduced-order analysis used in this work has been previously demonstrated to be both computationally efficient and accurate for time-invariant spatial loading distributions, provided that an appropriate modal basis is used. The challenge of the present study is to identify a suitable basis for loadings with time-varying spatial distributions. Using a proper orthogonal decomposition and modal expansion, it is shown that such a basis can be developed. The basis is made more robust by incrementally expanding it to account for changes in the location, frequency and span of the oscillating pressure gradient.
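    The basis-identification step described above, a proper orthogonal decomposition of response snapshots, can be sketched with the SVD. The snapshot data below is synthetic with a planted low-rank structure; this is a minimal sketch of POD itself, not the paper's structural model or its incremental basis expansion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: each column is the response at one time step /
# loading condition, built from 3 underlying modes plus a little noise.
n_dof, n_snap = 200, 40
modes_true = rng.normal(size=(n_dof, 3))
X = modes_true @ rng.normal(size=(3, n_snap)) \
    + 1e-6 * rng.normal(size=(n_dof, n_snap))

# POD basis: left singular vectors of the snapshot matrix, ordered by energy.
U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1  # smallest basis w/ 99.99% energy
basis = U[:, :r]

# Project one snapshot onto the reduced basis and measure the error.
x = X[:, 0]
x_rom = basis @ (basis.T @ x)
rel_err = np.linalg.norm(x - x_rom) / np.linalg.norm(x)
assert rel_err < 1e-3   # the 3 planted modes are recovered almost exactly
```

    Loadings with time-varying spatial distributions change which snapshots are informative, which is why the paper expands the basis incrementally rather than fixing it once.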

  11. A Program Aimed at Reducing Anxiety in Pregnant Women Diagnosed With a Small-for-Gestational-Age Fetus: Evaluative Findings From a Spanish Study.

    PubMed

    Arranz Betegón, Ángela; García, Marta; Parés, Sandra; Montenegro, Gala; Feixas, Georgina; Padilla, Nelly; Camacho, Alba; Goberna, Josefina; Botet, Francesc; Gratacós, Eduard

    The objective of this study was to evaluate the effect of anxiety-reducing techniques, including music therapy, sophrology, and creative visualization, on anxiety and on fetal and neonatal weight in pregnant women with a fetus diagnosed as small for gestational age. This was a quasi-experimental study with a nonrandomized clinical trial design. We compared 2 groups of pregnant women with a fetus diagnosed as small for gestational age with no abnormalities on Doppler studies. The control group (n = 93) received standard care, and the intervention group (n = 65), in addition to standard care, underwent a program of 6 sessions led by a midwife or nurse who taught anxiety-reduction techniques. The State-Trait Anxiety Inventory (STAI), including trait and state subscales, was completed by both groups at the start of the study, and only the STAI-State subscale was completed again at the end of the study. Comparisons between the 2 groups regarding fetal weight and centile and maternal STAI scores were performed using the t test and the χ² test. There were no significant differences in the STAI-Trait scores between the 2 groups. There were statistically significant differences in the intervention group's STAI-State score percentiles between the start and the end of the study, being lower at the end of the study (P < .001). There were significant differences between the 2 groups in fetal weight trajectory: the intervention group had a larger weight gain (P < .005). The program designed to reduce anxiety in pregnant women was effective at reducing anxiety in the women in the intervention group, leading to a favorable fetal weight trajectory in this group.

  12. Dealing with Liars: Misbehavior Identification via Rényi-Ulam Games

    NASA Astrophysics Data System (ADS)

    Kozma, William; Lazos, Loukas

    We address the problem of identifying misbehaving nodes that refuse to forward packets in wireless multi-hop networks. We map the process of locating the misbehaving nodes to the classic Rényi-Ulam game of 20 questions. Compared to previous methods, our mapping allows the evaluation of node behavior on a per-packet basis, without the need for energy-expensive overhearing techniques or intensive acknowledgment schemes. Furthermore, it copes with colluding adversaries that coordinate their behavioral patterns to avoid identification and frame honest nodes. We show via simulations that our algorithms reduce the communication overhead for identifying misbehaving nodes by at least one order of magnitude compared to other methods, while increasing the identification delay logarithmically with the path size.
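    The core efficiency gain in the mapping above is that locating a misbehaving node becomes a halving search over path positions rather than a per-node audit. The sketch below shows that search idea under the simplifying assumption of honest answers; the paper's Rényi-Ulam machinery is precisely what extends this to a bounded number of lies. The `query` callback is a hypothetical audit primitive, not an API from the paper.

```python
def locate_dropper(path_len, query):
    """Binary search for a single packet-dropping node on a path.

    query(m) -> True if packets still reach node m.  The dropper is the
    first unreachable node.  Invariant: dropper index lies in (lo, hi].
    Assumes honest answers (no lies), unlike the full Renyi-Ulam game.
    """
    lo, hi = 0, path_len - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if query(mid):
            lo = mid          # packets pass node mid; dropper is further on
        else:
            hi = mid          # packets already lost by node mid
    return hi

# Node 11 on a 16-hop path silently drops traffic:
dropper = 11
found = locate_dropper(16, lambda m: m < dropper)
assert found == dropper       # O(log path_len) queries, not per-node audits
```

    Tolerating q lying (colluding) responders costs extra rounds of questions, growing the query count but keeping it logarithmic in the path size, which matches the order-of-magnitude overhead reduction reported.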

  13. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
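    As a toy illustration of the Legendre representation underlying the spatial discretization (only the per-element basis, not the full discontinuous Galerkin scheme, limiting, or time stepping), one can L2-project a smooth function onto P_0..P_p on the reference element [-1, 1] with Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial import legendre

# Project u(x) = exp(x) onto Legendre polynomials P_0..P_p on [-1, 1].
p = 6
x, w = legendre.leggauss(p + 2)       # Gauss-Legendre nodes and weights
u = np.exp(x)

coeffs = []
for k in range(p + 1):
    Pk = legendre.Legendre.basis(k)(x)
    # L2 projection: c_k = <u, P_k> / <P_k, P_k>, with <P_k, P_k> = 2/(2k+1)
    coeffs.append(np.sum(w * u * Pk) * (2 * k + 1) / 2)

# Evaluate the truncated expansion at the quadrature nodes.
u_h = sum(c * legendre.Legendre.basis(k)(x) for k, c in enumerate(coeffs))
err = np.max(np.abs(u_h - u))
assert err < 1e-4   # spectral accuracy for a smooth function
```

    In the actual DG method this expansion lives independently on each element, with numerical fluxes coupling neighbors, which is what makes the scheme local and easy to parallelize.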

  14. Efficiency and Impact of Positive and Negative Magnetic Separation on Monocyte Derived Dendritic Cell Generation.

    PubMed

    Kowalewicz-Kulbat, Magdalena; Ograczyk, Elżbieta; Włodarczyk, Marcin; Krawczyk, Krzysztof; Fol, Marek

    2016-06-01

    The immunomagnetic separation technique is the basis of monocyte isolation and further generation of monocyte-derived dendritic cells. The aim of this study was to compare the efficiency of positive and negative monocyte separation and different bead concentrations, and to assess their impact on the generated dendritic cells. Monocytes were obtained using monoclonal antibody-coated magnetic beads following Ficoll-Paque gradient separation of the mononuclear cell fraction from the peripheral blood of 6 healthy volunteers. CD14 expression was analyzed by flow cytometry. Neither type of magnetic separation, at either the recommended or reduced concentrations of beads, affected the yield and purity of monocytes or their surface CD14 expression. However, DCs originating from the "positively" separated monocytes had noticeably higher expression of CD80.

  15. Wavelet-domain de-noising of OCT images of human brain malignant glioma

    NASA Astrophysics Data System (ADS)

    Dolganova, I. N.; Aleksandrova, P. V.; Beshplav, S.-I. T.; Chernomyrdin, N. V.; Dubyanskaya, E. N.; Goryaynov, S. A.; Kurlov, V. N.; Reshetov, I. V.; Potapov, A. A.; Tuchin, V. V.; Zaytsev, K. I.

    2018-04-01

    We have proposed a wavelet-domain de-noising technique for imaging of human brain malignant glioma by optical coherence tomography (OCT). It involves OCT image decomposition using the direct fast wavelet transform, thresholding of the obtained wavelet spectrum, and an inverse fast wavelet transform for image reconstruction. By selecting both the wavelet basis and the thresholding procedure, we have found an optimal wavelet filter whose application improves differentiation of the considered brain tissue classes, i.e. malignant glioma and normal/intact tissue. Namely, it reduces the scattering noise in the OCT images while retaining the signal decrement for each tissue class. Therefore, the observed results reveal wavelet-domain de-noising as a promising tool for improved characterization of biological tissue using OCT.
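    The decompose-threshold-reconstruct pipeline described above can be sketched end to end with a one-level 2-D Haar wavelet standing in for the optimized filter the authors select. This is a minimal NumPy sketch on synthetic data, not the paper's OCT processing chain.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform (rows then columns): LL, LH, HL, HH."""
    a, d = (img[0::2] + img[1::2]) / 2, (img[0::2] - img[1::2]) / 2
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def denoise(img, thresh):
    """Hard-threshold the detail bands, keep the coarse (LL) band."""
    ll, lh, hl, hh = haar2(img)
    lh, hl, hh = (np.where(np.abs(c) < thresh, 0.0, c) for c in (lh, hl, hh))
    return ihaar2(ll, lh, hl, hh)

rng = np.random.default_rng(2)
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
out = denoise(noisy, thresh=0.1)
assert np.mean((out - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

    The paper's contribution lies in choosing the wavelet basis and threshold so that noise is suppressed without flattening the depth-dependent signal decrement that distinguishes the tissue classes.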

  16. Chameleon Coatings: Adaptive Surfaces to Reduce Friction and Wear in Extreme Environments

    NASA Astrophysics Data System (ADS)

    Muratore, C.; Voevodin, A. A.

    2009-08-01

    Adaptive nanocomposite coating materials that automatically and reversibly adjust their surface composition and morphology via multiple mechanisms are a promising development for the reduction of friction and wear over broad ranges of ambient conditions encountered in aerospace applications, such as cycling of temperature and atmospheric composition. Materials selection for these composites is based on extensive study of interactions occurring between solid lubricants and their surroundings, especially with novel in situ surface characterization techniques used to identify adaptive behavior on size scales ranging from 10⁻¹⁰ to 10⁻⁴ m. Recent insights on operative solid-lubricant mechanisms and their dependency upon the ambient environment are reviewed as a basis for a discussion of the state of the art in solid-lubricant materials.

  17. New radar-derived topography for the northern hemisphere of Mars

    NASA Technical Reports Server (NTRS)

    Downs, G. S.; Thompson, T. W.; Mouginis-Mark, P. J.; Zisk, S. H.

    1982-01-01

    Earth-based radar altimetry data for the northern equatorial belt of Mars (6 deg S-23 deg N) have recently been reduced to a common basis corresponding to the 6.1-mbar reference surface. A first look at these data indicates that the elevations of Tharsis, Elysium, and Lunae Planum are lower (by 2-5 km) than has been suggested by previous estimates. These differences show that the required amount of tectonic uplift (or constructional volcanism) for each area is less than has been previously envisioned. Atmospheric or surficial conditions are suggested which may explain the discrepancies between the radar topography and elevations measured by other techniques. The topographies of Chryse Planitia, Syrtis Major, and Valles Marineris are also described.

  18. Boiler Tube Corrosion Characterization with a Scanning Thermal Line

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Jacobstein, Ronald; Reilly, Thomas

    2001-01-01

    Wall thinning due to corrosion in utility boiler water wall tubing is a significant operational concern for boiler operators. Historically, conventional ultrasonics has been used for inspection of these tubes. Unfortunately, ultrasonic inspection is labor-intensive and slow. Therefore, thickness measurements are typically taken over a relatively small percentage of the total boiler wall, and statistical analysis is used to determine the overall condition of the boiler tubing. Other inspection techniques, such as the electromagnetic acoustic transducer (EMAT), have recently been evaluated; however, they provide only a qualitative evaluation, identifying areas or spots where corrosion has significantly reduced the wall thickness. NASA Langley Research Center, in cooperation with ThermTech Services, has developed a thermal NDE technique designed to quantitatively measure the wall thickness and thus determine the amount of material thinning present in steel boiler tubing. The technique involves the movement of a thermal line source across the outer surface of the tubing, followed by an infrared imager at a fixed distance behind the line source. Quantitative images of the material loss due to corrosion are reconstructed from measurements of the induced surface temperature variations. This paper will present a discussion of the development of the thermal imaging system as well as the techniques used to reconstruct images of flaws. The application of the thermal line source coupled with the analysis technique represents a significant improvement in inspection speed and accuracy for large structures such as boiler water walls. A theoretical basis for the technique will be presented to establish its quantitative nature. Further, a dynamic calibration system will be presented that allows the extraction of thickness information from the temperature data.
Additionally, the results of the application of this technology to actual water wall tubing samples and in-situ inspections will be presented.

  19. Trauma-focused cognitive-behavioral therapy for children and adolescents: assessing the evidence.

    PubMed

    de Arellano, Michael A Ramirez; Lyman, D Russell; Jobe-Shields, Lisa; George, Preethy; Dougherty, Richard H; Daniels, Allen S; Ghose, Sushmita Shoma; Huang, Larke; Delphin-Rittmon, Miriam E

    2014-05-01

    Trauma-Focused Cognitive-Behavioral Therapy (TF-CBT) is a conjoint parent-child treatment developed by Cohen, Mannarino, and Deblinger that uses cognitive-behavioral principles and exposure techniques to prevent and treat posttraumatic stress, depression, and behavioral problems. This review defined TF-CBT, differentiated it from other models, and assessed the evidence base. Authors reviewed meta-analyses, reviews, and individual studies (1995 to 2013). Databases surveyed were PubMed, PsycINFO, Applied Social Sciences Index and Abstracts, Sociological Abstracts, Social Services Abstracts, PILOTS, the ERIC, and the CINAHL. They chose from three levels of research evidence (high, moderate, and low) on the basis of benchmarks for number of studies and quality of their methodology. They also described the evidence of effectiveness. The level of evidence for TF-CBT was rated as high on the basis of ten RCTs, three of which were conducted independently (not by TF-CBT developers). TF-CBT has demonstrated positive outcomes in reducing symptoms of posttraumatic stress disorder, although it is less clear whether TF-CBT is effective in reducing behavior problems or symptoms of depression. Limitations of the studies include concerns about investigator bias and exclusion of vulnerable populations. TF-CBT is a viable treatment for reducing trauma-related symptoms among some children who have experienced trauma and their nonoffending caregivers. Based on this evidence, TF-CBT should be available as a covered service in health plans. Ongoing research is needed to further identify best practices for TF-CBT in various settings and with individuals from various racial and ethnic backgrounds and with varied trauma histories, symptoms, and stages of intellectual, social, and emotional development.

  20. Physical basis for altered stem elongation rates in internode length mutants of Pisum

    NASA Technical Reports Server (NTRS)

    Behringer, F. J.; Cosgrove, D. J.; Reid, J. B.; Davies, P. J.

    1990-01-01

    Biophysical parameters related to gibberellin (GA)-dependent stem elongation were examined in dark-grown stem-length genotypes of Pisum sativum L. The rate of internode expansion in these genotypes is altered due to recessive mutations which affect either the endogenous levels of, or the response to, GA. The GA-deficient dwarf L181 (ls), two GA-insensitive semierectoides dwarfs NGB5865 and NGB5862 (lka and lkb, respectively), and the 'slender' line L197 (la crys), which is tall regardless of GA content, were compared to the wild-type tall cultivar, Torsdag. Osmotic pressure, estimated by vapor pressure osmometry, and turgor pressure, measured directly with a pressure probe, did not correlate with the differences in growth rate among the genotypes. Mechanical wall properties of frozen-thawed tissue were measured using a constant force assay. GA deficiency resulted in increased wall stiffness judged both on the basis of plastic compliance and plastic extensibility normalized for equal stem circumference. Plastic compliance was not reduced in the GA-insensitive dwarfs, though lka reduced circumference-normalized plasticity. In contrast, in vivo wall relaxation, determined by the pressure-block technique, differed among genotypes in a manner which did correlate with extension rates. The wall yield threshold was 1 bar or less in the tall lines, but ranged from 3 to 6 bars in the dwarf genotypes. The results with the ls mutant indicate that GA enhances stem elongation by both decreasing the wall yield threshold and increasing the wall yield coefficient. In the GA-insensitive mutants, lka and lkb, the wall yield threshold is substantially elevated. Plants possessing lka may also possess a reduced wall yield coefficient.

  1. Trauma-Focused Cognitive Behavioral Therapy: Assessing the Evidence

    PubMed Central

    Ramirez de Arellano, Michael A.; Jobe-Shields, Lisa; George, Preethy; Dougherty, Richard H.; Daniels, Allen S.; Ghose, Sushmita Shoma; Huang, Larke; Delphin-Rittmon, Miriam E.

    2015-01-01

    Objective Trauma-Focused Cognitive-Behavioral Therapy (TF-CBT) is a conjoint parent-child treatment developed by Cohen, Mannarino, and Deblinger that uses cognitive-behavioral principles and exposure techniques to prevent and treat posttraumatic stress, depression, and behavioral problems. This review defined TF-CBT, differentiated it from other models, and assessed the evidence base. Methods Authors reviewed meta-analyses, reviews, and individual studies (1995 to 2013). Databases surveyed were PubMed, PsycINFO, Applied Social Sciences Index and Abstracts, Sociological Abstracts, Social Services Abstracts, PILOTS, the ERIC, and the CINAHL. They chose from three levels of research evidence (high, moderate, and low) on the basis of benchmarks for number of studies and quality of their methodology. They also described the evidence of effectiveness. Results The level of evidence for TF-CBT was rated as high on the basis of ten RCTs, three of which were conducted independently (not by TF-CBT developers). TF-CBT has demonstrated positive outcomes in reducing symptoms of posttraumatic stress disorder, although it is less clear whether TF-CBT is effective in reducing behavior problems or symptoms of depression. Limitations of the studies include concerns about investigator bias and exclusion of vulnerable populations. Conclusions TF-CBT is a viable treatment for reducing trauma-related symptoms among some children who have experienced trauma and their nonoffending caregivers. Based on this evidence, TF-CBT should be available as a covered service in health plans. Ongoing research is needed to further identify best practices for TF-CBT in various settings and with individuals from various racial and ethnic backgrounds and with varied trauma histories, symptoms, and stages of intellectual, social, and emotional development. PMID:24638076

  2. The Effect of Selected Cleaning Techniques on Berkshire Lee Marble: A Scientific Study at Philadelphia City Hall

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf; Fries, Terry L.; Coombs, Mary Jane; Naude, Virginia N.; Soderberg, Lisa; Wheeler, George S.

    2002-01-01

    This report describes a scientific investigation of the effects of eight different cleaning techniques on the Berkshire Lee marble component of the facade of the East Center Pavilion at Philadelphia City Hall; the study was commissioned by the city of Philadelphia. The eight cleaning techniques evaluated in this study were power wash (proprietary gel detergent followed by water rinse under pressure), misting (treatment with potable, nebulized water for 24-36 hours), gommage (proprietary Thomann-Hanry low-pressure, air-driven, small-particle, dry abrasion), combination (gommage followed by misting), Armax (sodium bicarbonate delivered under pressure in a water wash), JOS (dolomite powder delivered in a low-pressure, rotary-vortex water wash), laser (thermal ablation), and dry ice (powdered-dry-ice abrasion delivered under pressure). In our study approximately 160 cores were removed from the building for laboratory analysis. We developed a computer program to analyze scanning-electron-micrograph images for the microscale surface roughness and other morphologic parameters of the stone surface, including the near-surface fracture density of the stone. An analysis of more than 1,100 samples cut from the cores provided a statistical basis for crafting the essential elements of a reduced-form, mixed-kinetics conceptual model that represents the deterioration of calcareous stone in terms of self-organized soiling and erosion patterns. This model, in turn, provided a basis for identifying the variables that are affected by the cleaning techniques and for evaluating the extent to which such variables influence the stability of the stone. The model recognizes three classes of variables that may influence the soiling load on the stone, including such exogenous environmental variables as airborne moisture, pollutant concentrations, and local aerodynamics, and such endogenous stone variables as surface chemistry and microstructure (fracturing, roughness, and so on). 
This study showed that morphologic variables on the mesoscale to macroscale are not generally affected by the choice of a cleaning technique. The long-term soiling pattern on the building is independent of the cleaning technique applied. This study also showed that soluble salts do not play a significant role in the deterioration of Berkshire Lee marble. Although salts were evident in cracks and fissures of the heavily soiled stone, such salts did not penetrate the surface to a depth of more than a few hundred micrometers. The criteria used to differentiate the cleaning techniques were ultimately based on the ability of each technique to remove soiling without altering the texture of the stone surface. This study identified both the gommage and JOS techniques as appropriate for cleaning ashlar surfaces and the combination technique as appropriate for cleaning highly carved surfaces at the entablatures, cornices, and column capitals.

  3. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    Crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, this technique has not yet seen widespread practical use because of its heavy computational load. To reduce this load, a bandlimited CCS is presented in this paper on the basis of a subband filtering approach. A pseudoquadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments include two parts: the source localization test and the sound quality test. Analysis of variance (ANOVA) is applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably to the fullband CCS, whereas the computation loading was reduced by approximately eighty percent.

  4. Lattice design and expected performance of the Muon Ionization Cooling Experiment demonstration of ionization cooling

    NASA Astrophysics Data System (ADS)

    Bogomilov, M.; Tsenov, R.; Vankova-Kirilova, G.; Song, Y.; Tang, J.; Li, Z.; Bertoni, R.; Bonesini, M.; Chignoli, F.; Mazza, R.; Palladino, V.; de Bari, A.; Cecchet, G.; Orestano, D.; Tortora, L.; Kuno, Y.; Ishimoto, S.; Filthaut, F.; Jokovic, D.; Maletic, D.; Savic, M.; Hansen, O. M.; Ramberger, S.; Vretenar, M.; Asfandiyarov, R.; Blondel, A.; Drielsma, F.; Karadzhov, Y.; Charnley, G.; Collomb, N.; Dumbell, K.; Gallagher, A.; Grant, A.; Griffiths, S.; Hartnett, T.; Martlew, B.; Moss, A.; Muir, A.; Mullacrane, I.; Oates, A.; Owens, P.; Stokes, G.; Warburton, P.; White, C.; Adams, D.; Anderson, R. J.; Barclay, P.; Bayliss, V.; Boehm, J.; Bradshaw, T. W.; Courthold, M.; Francis, V.; Fry, L.; Hayler, T.; Hills, M.; Lintern, A.; Macwaters, C.; Nichols, A.; Preece, R.; Ricciardi, S.; Rogers, C.; Stanley, T.; Tarrant, J.; Tucker, M.; Wilson, A.; Watson, S.; Bayes, R.; Nugent, J. C.; Soler, F. J. P.; Gamet, R.; Barber, G.; Blackmore, V. J.; Colling, D.; Dobbs, A.; Dornan, P.; Hunt, C.; Kurup, A.; Lagrange, J.-B.; Long, K.; Martyniak, J.; Middleton, S.; Pasternak, J.; Uchida, M. A.; Cobb, J. H.; Lau, W.; Booth, C. N.; Hodgson, P.; Langlands, J.; Overton, E.; Robinson, M.; Smith, P. J.; Wilbur, S.; Dick, A. J.; Ronald, K.; Whyte, C. G.; Young, A. R.; Boyd, S.; Franchini, P.; Greis, J. R.; Pidcott, C.; Taylor, I.; Gardener, R. B. S.; Kyberd, P.; Nebrensky, J. J.; Palmer, M.; Witte, H.; Bross, A. D.; Bowring, D.; Liu, A.; Neuffer, D.; Popovic, M.; Rubinov, P.; DeMello, A.; Gourlay, S.; Li, D.; Prestemon, S.; Virostek, S.; Freemire, B.; Hanlet, P.; Kaplan, D. M.; Mohayai, T. A.; Rajaram, D.; Snopok, P.; Suezaki, V.; Torun, Y.; Onel, Y.; Cremaldi, L. M.; Sanders, D. A.; Summers, D. J.; Hanson, G. G.; Heidt, C.; MICE Collaboration

    2017-06-01

    Muon beams of low emittance provide the basis for the intense, well-characterized neutrino beams necessary to elucidate the physics of flavor at a neutrino factory and to provide lepton-antilepton collisions at energies of up to several TeV at a muon collider. The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate ionization cooling, the technique by which it is proposed to reduce the phase-space volume occupied by the muon beam at such facilities. In an ionization-cooling channel, the muon beam passes through a material in which it loses energy. The energy lost is then replaced using rf cavities. The combined effect of energy loss and reacceleration is to reduce the transverse emittance of the beam (transverse cooling). A major revision of the scope of the project was carried out over the summer of 2014. The revised experiment can deliver a demonstration of ionization cooling. The design of the cooling demonstration experiment will be described together with its predicted cooling performance.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogomilov, M.; Tsenov, R.; Vankova-Kirilova, G.

    Muon beams of low emittance provide the basis for the intense, well-characterized neutrino beams necessary to elucidate the physics of flavor at a neutrino factory and to provide lepton-antilepton collisions at energies of up to several TeV at a muon collider. The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate ionization cooling, the technique by which it is proposed to reduce the phase-space volume occupied by the muon beam at such facilities. In an ionization-cooling channel, the muon beam passes through a material in which it loses energy. The energy lost is then replaced using rf cavities. The combined effect of energy loss and reacceleration is to reduce the transverse emittance of the beam (transverse cooling). A major revision of the scope of the project was carried out over the summer of 2014. The revised experiment can deliver a demonstration of ionization cooling. The design of the cooling demonstration experiment will be described together with its predicted cooling performance.

  6. Parametric Cost Models for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney

    2010-01-01

    Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model; in fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
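    The shape of such a CER can be illustrated with a toy power-law model. Only the 50%-per-17-years technology factor is quoted from the abstract; the aperture exponent `alpha` and the normalization `k` below are hypothetical placeholders, not the paper's fitted values.

```python
def telescope_cost(aperture_m, years_since_ref, k=1.0, alpha=1.8):
    """Toy cost estimating relationship (CER) sketch.

    Power law in aperture diameter (alpha is a hypothetical exponent)
    times a technology factor that halves cost every 17 years, the rate
    quoted in the abstract.  Units of k are arbitrary.
    """
    return k * aperture_m ** alpha * 0.5 ** (years_since_ref / 17)

# Doubling the aperture raises cost by 2**alpha, while 17 years of
# technology maturation halves it:
ratio = telescope_cost(2.0, 17) / telescope_cost(1.0, 0)
assert abs(ratio - 2 ** 1.8 / 2) < 1e-12
```

    A sub-quadratic aperture exponent is what makes cost per square meter of collecting area fall with telescope size, consistent with the abstract's observation.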

  7. Biotechnology Towards Energy Crops.

    PubMed

    Margaritopoulou, Theoni; Roka, Loukia; Alexopoulou, Efi; Christou, Myrsini; Rigas, Stamatis; Haralampidis, Kosmas; Milioni, Dimitra

    2016-03-01

    New crops, along with their cultivation systems, are gradually becoming established to reduce reliance on depleting fossil fuel reserves and to sustain better adaptation to climate change. These biological assets could be efficiently exploited as bioenergy feedstocks. Bioenergy crops are versatile renewable sources with the potential to contribute, on a daily basis, towards covering modern society's energy demands. Biotechnology may facilitate the breeding of elite energy crop genotypes better suited for bio-processing and subsequent use, which will improve efficiency, further reduce costs, and enhance the environmental benefits of biofuels. Innovative molecular techniques may improve a broad range of important features, including biomass yield, product quality, and resistance to biotic factors such as pests or microbial diseases, or to environmental cues such as drought, salinity, freezing injury, or heat shock. The current review assesses the capacity of biotechnological applications to develop a beneficial bioenergy pipeline, extending from feedstock development to sustainable biofuel production, and provides examples of the current state of the art on future energy crops.

  8. [Abdominal ultrasound course an introduction to the ultrasound technique. Physical basis. Ultrasound language].

    PubMed

    Segura-Grau, A; Sáez-Fernández, A; Rodríguez-Lorenzo, A; Díaz-Rodríguez, N

    2014-01-01

    Ultrasound is a non-invasive, accessible, and versatile diagnostic technique that uses high-frequency ultrasound waves to outline the organs of the human body, with no ionising radiation, in real time, and with the capacity to visualize several planes. The high diagnostic yield of the technique, together with its ease of use and the characteristics mentioned above, has made it a routine method in daily medical practice, and the multidisciplinary character of the technique is being strengthened every day. Performing the technique correctly requires knowledge of the physical basis of ultrasound, the method and the equipment, as well as of human anatomy, in order to obtain the maximum information possible and to avoid diagnostic errors due to poor interpretation or lack of information.

  9. Toward a New Method of Decoding Algebraic Codes Using Groebner Bases

    DTIC Science & Technology

    1993-10-01

    variables over GF(2^m). A celebrated algorithm by Buchberger produces a reduced Groebner basis of that ideal. It turns out that, since the common roots of all the polynomials in the ideal are a set of isolated points, this reduced Groebner basis is in triangular form, and the univariate polynomial in that
  10. Construction of energy-stable Galerkin reduced order models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan

    2013-05-01

    This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolicmore » or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the %E2%80%9Csymmetry inner product%E2%80%9D. Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the %E2%80%9CLyapunov inner product%E2%80%9D, is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. 
The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
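
    The discrete-projection recipe above can be sketched in a few lines: for a stable LTI system dx/dt = Ax, solve the Lyapunov equation A^T P + P A = -Q for the weighting matrix P, then Galerkin-project in the P-weighted inner product. The following is a minimal sketch with a hypothetical toy system, not the report's code; the matrix sizes and the choice Q = I are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, k = 20, 5

# Hypothetical stable LTI system dx/dt = A x: the symmetric part of A is
# negative definite, so every eigenvalue of A has negative real part.
S = rng.standard_normal((n, n))
S = S - S.T                                   # skew-symmetric part
A = S - np.diag(1.0 + rng.random(n))

# Lyapunov inner product weight P: solve A^T P + P A = -Q with Q SPD.
Q = np.eye(n)
P = solve_continuous_lyapunov(A.T, -Q)

# Galerkin projection in the P-weighted inner product, for an ARBITRARY basis V.
V = np.linalg.qr(rng.standard_normal((n, k)))[0]
Mr = V.T @ P @ V                              # reduced "mass" matrix (SPD)
Ar = V.T @ P @ A @ V
rom_eigs = np.linalg.eigvals(np.linalg.solve(Mr, Ar))
# The real parts are strictly negative for any choice of V:
# the ROM inherits energy stability from the Lyapunov inner product.
```

    The stability follows because Ar + Ar^T = V^T (P A + A^T P) V = -V^T Q V is negative definite whenever Q is positive definite, regardless of the basis V.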

  11. Stabilization techniques for unpaved roads.

    DOT National Transportation Integrated Search

    2004-01-01

    This study presents the basis for evaluating promising soil stabilization products using the relatively new technique of deeply mixing chemical additives into unpaved roadbeds. The work is in response to an amendment to House Bill 1400, Item 490, No....

  12. A Chromosome-Scale Assembly of the Bactrocera cucurbitae Genome Provides Insight to the Genetic Basis of white pupae

    PubMed Central

    Sim, Sheina B.; Geib, Scott M.

    2017-01-01

    Genetic sexing strains (GSS) used in sterile insect technique (SIT) programs are textbook examples of how classical Mendelian genetics can be directly implemented in the management of agricultural insect pests. Although the foundations of traditionally developed GSS are single-locus, autosomal recessive traits, their genetic basis is largely unknown. With the advent of modern genomic techniques, the genetic basis of sexing traits in GSS can now be further investigated. This study is the first of its kind to integrate traditional genetic techniques with emerging genomics to characterize a GSS, using the tephritid fruit fly pest Bactrocera cucurbitae as a model. These techniques include whole-genome sequencing, the development of a mapping population and linkage map, and quantitative trait analysis. The experiment designed to map the genetic sexing trait in B. cucurbitae, white pupae (wp), also enabled the generation of a chromosome-scale genome assembly by integrating the linkage map with the assembly. Quantitative trait loci analysis revealed SNP loci near position 42 Mb on chromosome 3 to be tightly linked to wp. Gene annotation and synteny analysis show a near-perfect relationship between chromosomes in B. cucurbitae and Muller elements A–E in Drosophila melanogaster. This chromosome-scale genome assembly is complete, has high contiguity, was generated using minimal input DNA, and will be used to further characterize the genetic mechanisms underlying wp. Knowledge of the genetic basis of genetic sexing traits can be used to improve SIT in this species and expand it to other economically important Diptera. PMID:28450369

  13. Novel Histogram Based Unsupervised Classification Technique to Determine Natural Classes From Biophysically Relevant Fit Parameters to Hyperspectral Data

    DOE PAGES

    McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra; ...

    2017-05-23

    Hyperspectral image analysis has benefited from an array of methods that take advantage of the increased spectral depth compared to multispectral sensors; however, the focus of these developments has been on supervised classification methods. Lack of a priori knowledge regarding land cover characteristics can make unsupervised classification methods preferable under certain circumstances. An unsupervised classification technique is presented in this paper that utilizes physically relevant basis functions to model the reflectance spectra. The fit parameters used to generate the basis functions allow clustering based on spectral characteristics rather than spectral channels and provide both noise and data reduction. Histogram splitting of the fit parameters is then used as a means of producing an unsupervised classification. Unlike current unsupervised classification techniques that rely primarily on Euclidean distance measures to determine similarity, the unsupervised classification technique uses the natural splitting of the fit parameters associated with the basis functions, creating clusters that are similar in terms of physical parameters. The data set used in this work utilizes the publicly available data collected at Indian Pines, Indiana. This data set provides reference data allowing for comparisons of the efficacy of different unsupervised data analyses. The unsupervised histogram splitting technique presented in this paper is shown to be better than the standard unsupervised ISODATA clustering technique, with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. Finally, this improvement is also seen as an improvement of kappa before/after merging of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA.
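
    The valley-seeking idea behind histogram splitting can be illustrated on a single fit parameter: histogram the parameter values, locate interior minima of the (lightly smoothed) histogram, and cut the data into clusters at those valleys. This is a minimal one-dimensional sketch on synthetic data; the paper's multi-parameter procedure and cluster-merging step are not reproduced, and the bin count and smoothing window are assumptions.

```python
import numpy as np

def histogram_split(values, bins=64):
    """Assign cluster labels by cutting a 1-D parameter distribution at
    interior local minima of its lightly smoothed histogram."""
    counts, edges = np.histogram(values, bins=bins)
    smooth = np.convolve(counts, np.ones(3) / 3.0, mode="same")
    valleys = [i for i in range(1, bins - 1)
               if smooth[i] < smooth[i - 1] and smooth[i] <= smooth[i + 1]]
    cuts = edges[[v + 1 for v in valleys]]   # bin edges at the valleys
    return np.digitize(values, cuts)          # label = which side of each cut

rng = np.random.default_rng(1)
# Two well-separated modes of a hypothetical fit parameter.
x = np.concatenate([rng.normal(0.0, 0.3, 500), rng.normal(4.0, 0.3, 500)])
labels = histogram_split(x)
```

    Because the split points are valleys of the parameter's own distribution rather than nearest-centroid boundaries, the resulting clusters follow the natural modes of the physical parameter, which is the contrast the abstract draws with Euclidean-distance clustering.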

  15. Optimization of selected molecular orbitals in group basis sets.

    PubMed

    Ferenczy, György G; Adams, William H

    2009-04-07

    We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl3 with the group basis approximation. Retaining more basis functions allows even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10^-4 hartree. This offers a practical way to calculate wave functions with predetermined fixed core and reduced basis valence orbitals at reduced computational cost. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals.
An appropriately chosen localization and basis set assignment allowed a reproduction of the energy of n-hexane with an error of 10^-5 hartree, while the energy difference between its two conformers was reproduced with similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending over 4-5 heavy atoms and thus require solving reduced-dimension secular equations. The dimensions are not expected to increase with system size, and thus the local basis equation may find use in linear-scaling electronic structure calculations.

  16. Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrighi, Bill

    2016-03-03

    libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements two parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
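
    The core SVD step can be sketched as follows. This is a generic POD-style basis construction from a snapshot matrix, kept deliberately simple; it is not libROM's API, and the incremental variants the library implements are not shown. The energy threshold is an assumed value.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """snapshots: (n_dof, n_samples). Return the left singular vectors
    capturing the requested fraction of total snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1   # smallest r reaching the energy
    return U[:, :r]

rng = np.random.default_rng(0)
modes = rng.standard_normal((100, 3))      # three underlying "modes"
X = modes @ rng.standard_normal((3, 40))   # rank-3 snapshot matrix
V = pod_basis(X)                           # orthonormal reduced basis
```

    The incremental algorithms reach the same basis without ever holding the full snapshot matrix in memory, which is what makes them suitable for large parallel simulations.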

  17. 26 CFR 1.1502-31 - Stock basis after a group structure change.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... S's stock. If T's net asset basis is a negative amount, it reduces P's basis in S's stock and, if... § 1.1502-19 for rules treating P's excess loss account as negative basis, and treating a reference to...(a)(2)(D), and S provides an appreciated asset (e.g., stock of P) as partial consideration in the...

  18. Effects of wheat dried distillers' grains with solubles and cinnamaldehyde on in vitro fermentation and protein degradation using the Rusitec technique.

    PubMed

    Lia, Yangling; He, Maolong; Li, Chun; Forster, Robert; Beauchemin, Karen Anne; Yang, Wenzhu

    2012-04-01

    This study was conducted to evaluate the effect of wheat dried distillers' grains with solubles (DDGS) and cinnamaldehyde (CIN) on in vitro fermentation and microbial profiles using the rumen simulation technique. The control substrate (10% barley silage, 85% barley grain and 5% supplement, on a dry matter basis) and the wheat DDGS substrate (30% wheat DDGS replacing an equal portion of barley grain) were combined with 0 and 300 mg CIN/l of culture fluid. The inclusion of DDGS increased (p < 0.05) the concentration of volatile fatty acids (VFA) and the molar proportions of acetate and propionate. Dry matter disappearance (p = 0.03) and production of bacterial protein (p < 0.01) were greater, whereas the disappearances of crude protein (CP) and neutral detergent fibre were less (p < 0.01), for the DDGS than for the control substrate. With addition of CIN, the concentration of total VFA decreased and the fermentation pattern shifted to a greater acetate and a lesser propionate proportion (p < 0.01). The CIN reduced (p < 0.01) methane production and CP degradability. The copy numbers of Fibrobacter, Prevotella and Archaea were not affected by DDGS but were reduced (p < 0.05) by CIN. The results indicate that replacing barley grain with DDGS increased nutrient fermentability and potentially increased protein flow to the intestine. Supplementation of high-grain substrates with CIN reduced methane production and potentially increased the true protein reaching the small intestine; however, the overall reduction of feed fermentation may lower the feeding value of a high-grain diet.

  19. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. The Fukunaga-Koontz Transform (FKT), a supervised band reduction technique, can be used to meet this requirement. FKT achieves feature selection by transforming the data into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors most relevant to the target class, the dimension of the hyperspectral data can be reduced, which presents significant advantages for near-real-time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. Thus, we propose constructing a kernel FKT (KFKT) to perform target-oriented band reduction. The performance of the proposed KFKT-based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral datasets, and the results are reported.
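
    The linear FKT construction can be sketched as follows: whiten the summed class covariances, then eigendecompose the whitened target covariance, whose eigenvalues lam lie in [0, 1] while the clutter covariance shares the same eigenvectors with eigenvalues 1 - lam (the "complementary" property). This is a generic sample-covariance sketch on synthetic data, not the authors' code; the kernelized KFKT replaces the covariances with kernel Gram matrices and is not shown.

```python
import numpy as np

def fkt(C_target, C_clutter, k):
    """Fukunaga-Koontz transform: a shared eigenbasis in which the target
    and clutter covariances have complementary eigenvalues (lam, 1 - lam).
    Returns the k directions most relevant to the target, plus all lam."""
    C = C_target + C_clutter
    w, E = np.linalg.eigh(C)
    W = E @ np.diag(w ** -0.5) @ E.T          # whitening: W C W = I
    lam, U = np.linalg.eigh(W @ C_target @ W)  # lam in [0, 1]
    order = np.argsort(lam)[::-1]              # largest lam = target-dominant
    return W @ U[:, order[:k]], lam[order]

rng = np.random.default_rng(0)
# Synthetic 10-band data: the target class has extra variance in band 0.
Xt = rng.standard_normal((200, 10)) @ np.diag([3.0] + [1.0] * 9)
Xb = rng.standard_normal((200, 10))
T, lam = fkt(np.cov(Xt, rowvar=False), np.cov(Xb, rowvar=False), k=2)
```

    Projecting pixels onto the columns of T keeps only the directions where the target class dominates the clutter, which is the band-reduction step the abstract describes.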

  20. The Development of Models for Carbon Dioxide Reduction Technologies for Spacecraft Air Revitalization

    NASA Technical Reports Server (NTRS)

    Swickrath, Michael J.; Anderson, Molly

    2011-01-01

    Through the respiration process, humans consume oxygen (O2) while producing carbon dioxide (CO2) and water (H2O) as byproducts. For long-term space exploration, CO2 concentration in the atmosphere must be managed to prevent hypercapnia. Moreover, CO2 can be used as a source of oxygen through chemical reduction, serving to minimize the amount of oxygen required at launch. Reduction can be achieved through a number of techniques. The National Aeronautics and Space Administration (NASA) is currently exploring the Sabatier reaction, the Bosch reaction, and co-electrolysis of CO2 and H2O for this process. Proof-of-concept experiments and prototype units for all three processes have proven capable of returning useful commodities for space exploration. While all three techniques have demonstrated the capacity to reduce CO2 in the laboratory, there is interest in understanding how all three techniques would perform at a system level within a spacecraft. Consequently, there is an impetus to develop predictive models for these processes that can be readily re-scaled and integrated into larger system models. Such analysis tools provide the ability to evaluate each technique on a comparable basis with respect to processing rates. This manuscript describes the current models for the carbon dioxide reduction processes under parallel developmental efforts. Comparison to experimental data is provided where available for verification purposes.
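
    As a concrete instance of the commodity bookkeeping such system models perform, the Sabatier stoichiometry (CO2 + 4 H2 -> CH4 + 2 H2O) fixes the hydrogen demand and the methane and water return per kilogram of CO2 processed. The sketch below is a simple molar balance with rounded molar masses; it is not taken from the NASA models and assumes complete conversion.

```python
# Sabatier-reaction commodity bookkeeping: CO2 + 4 H2 -> CH4 + 2 H2O.
M_CO2, M_H2, M_CH4, M_H2O = 44.01, 2.016, 16.04, 18.015  # molar masses, g/mol

def sabatier_balance(co2_kg):
    """Return (H2 consumed, CH4 produced, H2O produced) in kg per
    co2_kg of CO2 fully reduced, assuming ideal stoichiometry."""
    mol = co2_kg * 1000.0 / M_CO2
    return (4 * mol * M_H2 / 1000.0,
            mol * M_CH4 / 1000.0,
            2 * mol * M_H2O / 1000.0)

h2, ch4, h2o = sabatier_balance(1.0)   # per kilogram of CO2 processed
```

    The returned water can be electrolyzed to recover oxygen, which is the closed-loop payoff the abstract alludes to; the system-level models add reactor kinetics and efficiencies on top of this balance.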

  1. Epileptic seizure detection in EEG signal using machine learning techniques.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2018-03-01

    Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features, and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern-based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with a Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in the decision-making process. SVM is used for classification of seizure and non-seizure EEG signals. The SVM was trained with a radial basis kernel. All the experiments have been carried out on the benchmark epilepsy EEG dataset. The entire dataset consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification have been conducted. The classification accuracy was evaluated using tenfold cross-validation. The classification results of the proposed approaches have been compared with those of existing techniques proposed in the literature to establish the claim.
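
    The subpattern idea in SpPCA can be sketched generically: partition each signal into equal segments, run PCA on each segment across samples, and concatenate the per-segment projections into one feature vector before classification. The sketch below uses synthetic data and is not the authors' implementation; the SubXPCA cross-correlation variant and the SVM stage are omitted, and the segment/component counts are arbitrary assumptions.

```python
import numpy as np

def sppca_features(X, n_sub, n_comp):
    """X: (n_samples, signal_len). Split each signal into n_sub equal
    subpatterns, PCA each subpattern across samples, keep n_comp
    components per subpattern, and concatenate the projections."""
    feats = []
    for P in np.split(X, n_sub, axis=1):      # requires signal_len % n_sub == 0
        Pc = P - P.mean(axis=0)               # center this subpattern
        _, _, Vt = np.linalg.svd(Pc, full_matrices=False)
        feats.append(Pc @ Vt[:n_comp].T)      # project onto top components
    return np.hstack(feats)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))             # 50 synthetic EEG epochs
F = sppca_features(X, n_sub=4, n_comp=2)      # feature matrix, shape (50, 8)
```

    Each subpattern gets its own principal directions, so local correlations within a segment are captured that a single global PCA would average away; the resulting feature matrix F would then be fed to an RBF-kernel SVM.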

  2. Comparison of different eigensolvers for calculating vibrational spectra using low-rank, sum-of-product basis functions

    NASA Astrophysics Data System (ADS)

    Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker

    2017-08-01

    Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed, and the numerical results are compared with those obtained with the reduced-rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.
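
    For reference, the plain block power (subspace) iteration that the reduced-rank variants build on can be sketched in dense-matrix form. No tensor format appears here; a toy diagonal matrix stands in for the Hamiltonian, and the iteration count is an assumed value chosen for full convergence.

```python
import numpy as np

def block_power(H, m, iters=500, seed=0):
    """Subspace (block power) iteration for the m dominant eigenpairs of a
    symmetric matrix H: multiply by H, re-orthogonalize, repeat."""
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((H.shape[0], m)))[0]
    for _ in range(iters):
        V = np.linalg.qr(H @ V)[0]            # power step + QR re-orthogonalization
    return np.diag(V.T @ H @ V), V            # Rayleigh quotients, converged basis

H = np.diag(np.arange(1.0, 11.0))             # toy "Hamiltonian", eigenvalues 1..10
theta, V = block_power(H, 3)                  # converges to 10, 9, 8
```

    In the SOP setting the QR step is where rank reduction must be applied to the basis vectors, which is exactly the constraint the abstract says limits the choice of eigensolver.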

  3. Development and comparison of advanced reduced-basis methods for the transient structural analysis of unconstrained structures

    NASA Technical Reports Server (NTRS)

    Mcgowan, David M.; Bostic, Susan W.; Camarda, Charles J.

    1993-01-01

    The development of two advanced reduced-basis methods, the force derivative method and the Lanczos method, and two widely used modal methods, the mode displacement method and the mode acceleration method, for transient structural analysis of unconstrained structures is presented. Two example structural problems are studied: an undamped, unconstrained beam subject to a uniformly distributed load which varies as a sinusoidal function of time and an undamped high-speed civil transport aircraft subject to a normal wing tip load which varies as a sinusoidal function of time. These example problems are used to verify the methods and to compare the relative effectiveness of each of the four reduced-basis methods for performing transient structural analyses on unconstrained structures. The methods are verified with a solution obtained by integrating directly the full system of equations of motion, and they are compared using the number of basis vectors required to obtain a desired level of accuracy and the associated computational times as comparison criteria.
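
    The mode-displacement step shared by these comparisons can be sketched for a generic spring-mass chain, a hypothetical stand-in for the example structures (the basis size, load location, and forcing frequency below are arbitrary choices, not the paper's cases): solve the free-vibration eigenproblem, keep the lowest modes as the reduced basis, and integrate the decoupled modal equations.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.integrate import solve_ivp

n, m = 20, 4
# Hypothetical fixed-fixed spring-mass chain standing in for a FE model.
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness matrix
M = np.diag(1.0 + 0.1 * np.arange(n))                  # lumped mass matrix

w2, Phi = eigh(K, M)          # free-vibration modes; columns are M-orthonormal
V = Phi[:, :m]                # reduced basis: the m lowest modes

f = np.zeros(n); f[n // 2] = 1.0                       # mid-span point load
omega = 0.05                                           # sinusoidal forcing frequency

def rhs(t, y):                # decoupled modal equations z'' + w2 z = V^T f sin(wt)
    z, zd = y[:m], y[m:]
    return np.concatenate([zd, V.T @ f * np.sin(omega * t) - w2[:m] * z])

sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(2 * m), rtol=1e-8)
x_end = V @ sol.y[:m, -1]     # approximate physical response at final time
```

    The mode-acceleration and force-derivative methods enrich exactly this picture, adding static-correction or load-derivative vectors to V so that fewer basis vectors reach a given accuracy.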

  4. On the theoretical link between LLL-reduction and Lambda-decorrelation

    NASA Astrophysics Data System (ADS)

    Lannes, A.

    2013-04-01

    The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515-534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda-decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved, the LLL-reduced basis of the LLL algorithm, or the Λ-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The Λ-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the Λ-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given Λ-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda-decorrelation algorithm. 
This point is illustrated in a concrete manner: we present a parallel Λ-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C^* conference on computer science and software engineering. ACM Int Conf P Series. ACM Press, pp 93-101, 2012).
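
    The orthogonality-defect measure used above to compare bases is straightforward to compute: it is the product of the basis-vector norms divided by the volume of the lattice cell, sqrt(det(B^T B)), and equals 1 exactly for an orthogonal basis. A minimal sketch (column-vector convention assumed; the example basis is hypothetical):

```python
import numpy as np

def orthogonality_defect(B):
    """prod_i ||b_i|| / sqrt(det(B^T B)) for a basis given as the columns
    of B; equals 1 iff the basis is orthogonal, larger otherwise."""
    norms = np.linalg.norm(B, axis=0)
    return float(np.prod(norms) / np.sqrt(np.linalg.det(B.T @ B)))

defect_id = orthogonality_defect(np.eye(3))      # orthogonal basis: defect 1.0
B = np.array([[1.0, 0.9, 0.0],
              [0.0, 1.0, 0.9],
              [0.0, 0.0, 1.0]])
defect_skewed = orthogonality_defect(B)          # > 1: a "worse" basis
```

    Both LLL reduction and Lambda decorrelation can be read as driving this defect toward 1, which is why the paper can compare the two bases on a common scale.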

  5. Single-Donor Leukophoretic Technique

    NASA Technical Reports Server (NTRS)

    Eberhardt, R. N.

    1977-01-01

    Leukocyte separation-and-retrieval device utilizes granulocyte and monocyte property of leukoadhesion to glass surfaces as basis of their separation from whole blood. Device is used with single donor technique and has application in biological and chemical processing, veterinary research and clinical care.

  6. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    PubMed

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, reducing human error in identifying diseases in brain MRIs. In this study, the fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, the fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink memory storage consumption by 99.5%. Finally, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, the LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applies k-fold stratified cross-validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, and it improves efficiency by 71%, 3%, and 4% in the feature extraction stage, feature reduction stage, and classification stage, respectively. 
These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the individual subjects, therefore, it can be used as a significant tool in clinical practice.

  7. Phenology Information Contributes to Reduce Temporal Basis Risk in Agricultural Weather Index Insurance.

    PubMed

    Dalhaus, Tobias; Musshoff, Oliver; Finger, Robert

    2018-01-08

    Weather risks are an essential and increasingly important driver of agricultural income volatility. Agricultural insurances help farmers cope with these risks. Among these insurances, weather index insurances (WII) are an innovative tool for coping with climatic risks in agriculture. Under WII, farmers are not indemnified based on actual yield reductions but are compensated based on a measured weather index, such as rainfall at a nearby weather station. The discrepancy between experienced losses and actual indemnification, known as basis risk, is a key challenge. In particular, the specifications of WII used so far do not capture critical plant growth phases adequately. Here, we contribute to reducing basis risk by proposing novel procedures for taking into account the occurrence dates of growth phases and their shifts over time and space, and we test their risk-reducing potential. Our empirical example addresses drought risk in the critical growth phase around the anthesis stage in winter wheat production in Germany. We find that spatially explicit, public and open databases of phenology reports contribute to reducing basis risk and thus improve the attractiveness of WII. In contrast, we find that growth stage modelling based on growing degree days (thermal time) does not result in significant improvements.
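
    The growing-degree-day accumulation the authors test as a growth-stage proxy is a simple running sum of daily mean temperature above a base temperature. A minimal sketch; the base temperature and the example values are assumptions for illustration, not the paper's calibration.

```python
def growing_degree_days(t_min, t_max, t_base=0.0):
    """Accumulate max(T_mean - T_base, 0) over daily min/max temperature
    records (degrees Celsius); T_mean is the daily midpoint."""
    return sum(max((lo + hi) / 2.0 - t_base, 0.0)
               for lo, hi in zip(t_min, t_max))

# Three example days of daily minima / maxima:
gdd = growing_degree_days([2.0, 4.0, -1.0], [10.0, 12.0, 5.0])  # 6 + 8 + 2 = 16
```

    A growth stage is then declared reached once the sum crosses a crop-specific threshold; the paper's finding is that observed phenology reports time the anthesis window better than this thermal-time rule.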

  8. The reduced basis method for the electric field integral equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
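
    The offline/online split can be sketched generically for an affinely parametrized linear system A(mu) x = b with A(mu) = A0 + mu*A1, a hypothetical stand-in for the EFIE/BEM operator (which is dense and complex-valued in practice); the sizes and training parameters below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical affinely parametrized system A(mu) x = b, A(mu) = A0 + mu*A1.
A0 = np.eye(n) + 0.01 * rng.standard_normal((n, n))
A1 = 0.005 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Step 1 (offline, done once): full solves at training parameters
# give snapshots, orthonormalized into a reduced basis.
train = [0.0, 0.5, 1.0, 1.5, 2.0]
S = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in train])
V = np.linalg.qr(S)[0]
A0r, A1r, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

# Step 2 (online, many-query): assemble and solve a 5x5 system per mu,
# independent of the full dimension n.
def online(mu):
    return V @ np.linalg.solve(A0r + mu * A1r, br)

x_rb = online(1.25)   # cheap approximation at an unseen parameter
```

    In the EFIE setting the output functional (e.g. the RCS) is evaluated from the reduced solution in the same way, so the many-query cost depends only on the basis size, not on the BEM discretization.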

  9. BaSi2 formation mechanism in thermally evaporated films and its application to reducing oxygen impurity concentration

    NASA Astrophysics Data System (ADS)

    Hara, Kosuke O.; Yamamoto, Chiaya; Yamanaka, Junji; Arimoto, Keisuke; Nakagawa, Kiyokazu; Usami, Noritaka

    2018-04-01

    Thermal evaporation is a simple and rapid method to fabricate semiconducting BaSi2 films. In this study, to elucidate the BaSi2 formation mechanism, the microstructure of a BaSi2 epitaxial film fabricated by thermal evaporation has been investigated by transmission electron microscopy. The BaSi2 film is found to consist of three layers with different microstructural characteristics, which is well explained by assuming two stages of film deposition. In the first stage, BaSi2 forms through the diffusion of Ba atoms from the deposited Ba-rich film to the Si substrate while in the second stage, the mutual diffusion of Ba and Si atoms in the film leads to BaSi2 formation. On the basis of the BaSi2 formation mechanism, two issues are addressed. One is the as-yet unclarified reason for epitaxial growth. It is found important to quickly form BaSi2 in the first stage for the epitaxial growth of upper layers. The other issue is the high oxygen concentration in BaSi2 films around the BaSi2-Si interface. Two routes of oxygen incorporation, i.e., oxidation of the Si substrate surface and initially deposited Ba-rich layer by the residual gas, are identified. On the basis of this knowledge, oxygen concentration is decreased by reducing the holding time of the substrate at high temperatures and by premelting of the source. In addition, X-ray diffraction results show that the decrease in oxygen concentration can lead to an increased proportion of a-axis-oriented grains.

  10. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  11. Recent advances in electronic nose techniques for monitoring of fermentation process.

    PubMed

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-12-01

    Microbial fermentation processes are often sensitive to even slight changes of conditions that may result in unacceptable end-product quality. Thus, monitoring of the process is critical for discovering unfavorable deviations as early as possible and taking the appropriate measures. However, the use of traditional analytical techniques is often time-consuming and labor-intensive. In this sense, the most effective way of developing a rapid, accurate and relatively economical method for quality assurance in microbial fermentation is the use of novel chemical sensor systems. Electronic nose techniques have particular advantages for non-invasive monitoring of microbial fermentation processes. Therefore, in this review, we present an overview of the most important contributions dealing with quality control in microbial fermentation using electronic nose techniques. After a brief description of the fundamentals of the sensor techniques, some examples of potential applications of electronic nose monitoring are provided, including the implementation of control strategies and the combination with other monitoring tools (i.e. sensor fusion). Finally, on the basis of the review, the electronic nose techniques are critically assessed, with their strengths and weaknesses highlighted. In addition, on the basis of the observed trends, we also discuss the technical challenges and future outlook for electronic nose techniques.

  12. Propylparaben reduces the excitability of hippocampal neurons by blocking sodium channels.

    PubMed

    Lara-Valderrábano, Leonardo; Rocha, Luisa; Galván, Emilio J

    2016-12-01

    Propylparaben (PPB) is an antimicrobial preservative widely used in food, cosmetics, and pharmaceutics. Virtual screening methodologies predicted anticonvulsant activity of PPB that was confirmed in vivo. Thus, we explored the effects of PPB on the excitability of hippocampal neurons by using standard patch clamp techniques. Bath perfusion of PPB reduced the fast-inactivating sodium current (INa) amplitude, causing a hyperpolarizing shift in the inactivation curve of the INa, and markedly delayed the sodium channel recovery from the inactivation state. Also, PPB effectively suppressed the riluzole-sensitive, persistent sodium current (INaP). PPB perfusion also modified the action potential kinetics, and higher concentrations of PPB suppressed the spike activity. Nevertheless, the modulatory effects of PPB did not occur when PPB was internally applied by whole-cell dialysis. These results indicate that PPB reduces the excitability of CA1 pyramidal neurons by modulating voltage-dependent sodium channels. The mechanistic basis of this effect is a marked delay in the recovery from inactivation state of the voltage-sensitive sodium channels. Our results indicate that similar to local anesthetics and anticonvulsant drugs that act on sodium channels, PPB acts in a use-dependent manner. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both the database structures and mass storage management. This issue was addressed in the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, together with descriptions of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server: because of their large size, they are stored outside the database on network devices. The database contains pointers to the image/video files and descriptions of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis; it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.

  14. The basis function approach for modeling autocorrelation in ecological data

    USGS Publications Warehouse

    Hefley, Trevor J.; Broms, Kristin M.; Brost, Brian M.; Buderman, Frances E.; Kay, Shannon L.; Scharf, Henry; Tipton, John; Williams, Perry J.; Hooten, Mevin B.

    2017-01-01

    Analyzing ecological data often requires modeling the autocorrelation created by spatial and temporal processes. Many seemingly disparate statistical methods used to account for autocorrelation can be expressed as regression models that include basis functions. Basis functions also enable ecologists to modify a wide range of existing ecological models in order to account for autocorrelation, which can improve inference and predictive accuracy. Furthermore, understanding the properties of basis functions is essential for evaluating the fit of spatial or time-series models, detecting a hidden form of collinearity, and analyzing large data sets. We present important concepts and properties related to basis functions and illustrate several tools and techniques ecologists can use when modeling autocorrelation in ecological data.
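
    The regression-with-basis-functions idea can be sketched as follows. This is a minimal illustration on synthetic data, not an example from the paper: a Fourier basis is added as extra covariates in an ordinary least-squares fit so it can absorb smooth temporal autocorrelation.

```python
import numpy as np

# Illustrative sketch: model a temporally autocorrelated signal by adding
# Fourier basis functions as extra regression covariates.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

# True process: linear trend plus a smooth seasonal (autocorrelated) component.
y = 2.0 + 3.0 * t + np.sin(2 * np.pi * 3 * t) + rng.normal(0.0, 0.1, t.size)

def fourier_basis(t, n_pairs):
    """Columns: intercept, t, then sin/cos pairs at integer frequencies."""
    cols = [np.ones_like(t), t]
    for k in range(1, n_pairs + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    return np.column_stack(cols)

X = fourier_basis(t, n_pairs=4)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
resid = y - X @ beta

# The basis should absorb most of the smooth autocorrelated structure,
# leaving residuals close to the white-noise level (std 0.1).
print(f"residual std: {resid.std():.3f}")
```

    The same device underlies many of the seemingly disparate methods the paper surveys: what changes is the choice of basis (splines, wavelets, eigenvectors of a spatial covariance), not the regression machinery.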

  15. Spatial variability of soil available phosphorous and potassium at three different soils located in Pannonian Croatia

    NASA Astrophysics Data System (ADS)

    Bogunović, Igor; Pereira, Paulo; Đurđević, Boris

    2017-04-01

    Information on the spatial distribution of soil nutrients in agroecosystems is critical for improving productivity and reducing environmental pressures in intensively farmed soils. In this context, spatial prediction of soil properties should be accurate. In this study we analysed 704 measurements of soil available phosphorus (AP) and potassium (AK); the data derive from soil samples collected across three arable fields in the Baranja region (Croatia) on different soil types: Cambisols (169 samples), Chernozems (131 samples) and Gleysols (404 samples). The samples were collected on a regular sampling grid (225 x 225 m spacing). Several deterministic interpolation techniques (Inverse Distance Weighting (IDW) with powers of 1, 2 and 3; Radial Basis Functions (RBF): Inverse Multiquadratic (IMT), Multiquadratic (MTQ), Completely Regularized Spline (CRS), Spline with Tension (SPT) and Thin Plate Spline (TPS); and Local Polynomial (LP) with powers of 1 and 2) and two geostatistical techniques (Ordinary Kriging (OK) and Simple Kriging (SK)) were tested in order to identify the most accurate spatial variability maps, using the lowest RMSE under cross-validation as the criterion. Soil parameters varied considerably throughout the studied fields; their coefficients of variation ranged from 31.4% to 37.7% for AP and from 19.3% to 27.1% for AK. The experimental variograms indicate a moderate spatial dependence for AP and a strong spatial dependence for AK at all three locations. The best spatial predictor for AP at the Chernozem field was Simple Kriging (RMSE = 61.711), and for AK the Inverse Multiquadratic (RMSE = 44.689); the least accurate techniques were Thin Plate Spline (AP) and IDW with a power of 1 (AK). Radial basis function models (Spline with Tension for AP at the Gleysol and Cambisol fields, and Completely Regularized Spline for AK at the Gleysol field) were the best predictors, while Thin Plate Spline models were the least accurate in all three cases. The best interpolator for AK at the Cambisol field was the Local Polynomial with a power of 2 (RMSE = 33.943), while the least accurate was Thin Plate Spline (RMSE = 39.572).
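
    The model-selection criterion used above can be sketched as follows. This is a minimal illustration on synthetic data (not the study's soil samples): inverse distance weighting at three powers, each scored by leave-one-out cross-validation RMSE.

```python
import numpy as np

# Illustrative sketch: IDW interpolation scored by leave-one-out RMSE,
# the criterion used to compare interpolators in the study.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 900.0, size=(60, 2))                 # sample coordinates (m)
vals = 50.0 + 0.02 * pts[:, 0] + rng.normal(0.0, 2.0, 60)   # nutrient-like values

def idw(x0, pts, vals, power):
    d = np.linalg.norm(pts - x0, axis=1)
    if d.min() < 1e-9:                  # exact hit: return the sample value
        return vals[d.argmin()]
    w = 1.0 / d ** power
    return np.sum(w * vals) / np.sum(w)

def loo_rmse(pts, vals, power):
    """Leave each sample out, predict it from the rest, collect the error."""
    errs = []
    for i in range(len(vals)):
        mask = np.arange(len(vals)) != i
        pred = idw(pts[i], pts[mask], vals[mask], power)
        errs.append(pred - vals[i])
    return float(np.sqrt(np.mean(np.square(errs))))

for p in (1, 2, 3):
    print(f"IDW power {p}: LOO RMSE = {loo_rmse(pts, vals, p):.3f}")
```

    The kriging and RBF alternatives compared in the study plug into the same loop: only the `idw` predictor changes, the cross-validation scoring stays identical.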

  16. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction (OPC) is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. Model-based OPC requires a lithographic model that predicts the edge position (contour) of patterns on the wafer after lithographic processing. Generally, edges are segmented prior to the correction: pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, guided by the lithographic model, until they settle at the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time grows with the number of iterations required. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool able to capture and represent complex input/output relationships; via a learning procedure, the network can accurately predict the behavior of a system. A radial basis function (RBF) network, a variant of the artificial neural network, is an efficient function approximator. In this paper, an RBF network was used to build a mapping from segment characteristics to the edge shift from the drawn position. This network provides a good initial guess for each segment before OPC is carried out; the good initial guess reduces the required iterations, so cycle time can be shortened effectively. The RBF network for this system was optimized by a genetic algorithm, an artificially intelligent optimization method with a high probability of reaching the global optimum. In preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shaped layout.
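
    The RBF-network approximator at the heart of this approach can be sketched as follows. This is a generic illustration with synthetic data, not the authors' OPC model: the input features, target function, center placement, and width are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of an RBF network as a function approximator:
# Gaussian hidden units with fixed random centers, output-layer weights
# fitted by linear least squares (segment features -> edge shift, in the
# paper's terms).
rng = np.random.default_rng(2)

X = rng.uniform(-1.0, 1.0, size=(300, 2))        # stand-in segment features
y = np.sin(np.pi * X[:, 0]) * X[:, 1]            # stand-in target "edge shift"

centers = rng.uniform(-1.0, 1.0, size=(40, 2))   # fixed random RBF centers
width = 0.5                                      # common Gaussian width

def rbf_features(X):
    """Hidden-layer activations: one Gaussian bump per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

H = rbf_features(X)
w, *_ = np.linalg.lstsq(H, y, rcond=None)        # output-layer weights

pred = rbf_features(X) @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```

    In the paper, the free parameters of such a network (centers, widths) are what the genetic algorithm tunes; the linear output layer makes each fitness evaluation cheap.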

  17. A Comparative of business process modelling techniques

    NASA Astrophysics Data System (ADS)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article investigates the differences between business process modeling techniques: for each technique, the definition and the structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: the notation and how the technique works when implemented for Somerleyton Animal Park. Each technique is discussed together with its advantages and disadvantages. The final conclusion recommends business process modeling techniques that are easy to use and serves as the basis for evaluating further modelling techniques.

  18. Automatic detection of malaria parasite in blood images using two parameters.

    PubMed

    Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong

    2015-01-01

    Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to be cured properly. Microscope-based malaria diagnosis requires much labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to measure malaria parasite infection quickly and accurately, studies have investigated automated classification techniques using various parameters. In this study, by measuring classification performance as two parameters were varied, we determined the parameter values that best distinguish normal from plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion was used, and as parameters we used the malaria-infected area and the threshold value used in binarization. The parameter values with the best classification performance were determined by selecting the malaria threshold value (72) corresponding to the lowest error rate, given a cell threshold value of 128, for detecting plasmodium-infected red blood cells.
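
    The two-parameter idea might look roughly like the following sketch. This is a loose illustration on a synthetic image, not the paper's pipeline: the PCA grayscale conversion, the two thresholds (cell 128, parasite 72), and the minimum-area rule are used schematically, and all image data below are invented.

```python
import numpy as np

# Illustrative sketch: PCA-based grayscale conversion of an RGB cell image,
# then binarization with a cell threshold and a stricter parasite threshold;
# a cell is flagged as infected if the dark (stained) area is large enough.
rng = np.random.default_rng(3)

def pca_gray(rgb):
    """Project RGB pixels onto the first principal component, on a 0-255 scale."""
    flat = rgb.reshape(-1, 3).astype(float)
    centered = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    g = centered @ vt[0]
    lum = flat.mean(axis=1)
    if np.corrcoef(g, lum)[0, 1] < 0:     # fix the arbitrary PC sign
        g = -g
    gray = np.clip(g + lum.mean(), 0.0, 255.0)
    return gray.reshape(rgb.shape[:2])

def infected(rgb, cell_thr=128, parasite_thr=72, min_area=20):
    gray = pca_gray(rgb)
    cell = gray < cell_thr                # pixels belonging to the stained cell
    parasite = gray < parasite_thr        # much darker pixels: candidate parasite
    return int(np.count_nonzero(cell & parasite)) >= min_area

# Synthetic 64x64 "cell": mid-tone disc on a light background, plus a small
# dark blob for the infected case.
img = np.full((64, 64, 3), 230, dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = (150, 120, 160)
clean = img.copy()
img[(yy - 30) ** 2 + (xx - 30) ** 2 < 5 ** 2] = (60, 30, 70)

print("clean:", infected(clean), "infected:", infected(img))
```

    The study's contribution is the choice of the two threshold values; the sketch only shows where those two numbers enter the decision rule.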

  19. State of the art: wastewater management in the beverage industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joyce, M.E.; Scaief, J.F.; Cochrane, M.W.

    The water pollution impact caused by wastes from the beverage industry and the methods available to combat the associated problems were studied. The size of each industry is discussed along with production processes, wastewater sources and effluent characteristics. Wastewater management techniques are described in terms of in-plant recycling, by-product recovery and end-of-pipe treatment, along with the economics of treatment. The malt liquor, malting, soft drinks and flavoring industries primarily dispose of their effluents in municipal sewers; in-plant recycling and by-product recovery techniques have been developed in these industries to reduce their raw waste load. The wine, brandy and distilled spirits industries in many cases must treat their own effluents, so they have developed wastewater management systems, including industry-owned treatment plants, that yield good effluents. The technology to adequately treat rum distillery wastewater has not been demonstrated. The information basis for this study was a literature search, an effluent guidelines report done for EPA, limited site visits, personal communications, and an unpublished report conducted for EPA that included questionnaire surveys of the industries.

  20. Developing an Intelligent System for Diagnosis of Asthma Based on Artificial Neural Network.

    PubMed

    Alizadeh, Behrouz; Safdari, Reza; Zolnoori, Maryam; Bashiri, Azadeh

    2015-08-01

    Lack of proper diagnosis and inadequate treatment of asthma leads to physical and financial complications. This study aimed to use data mining techniques to create a neural network intelligent system for the diagnosis of asthma. The study population consists of patients who had visited one of the lung clinics in Tehran. Data were analyzed using SPSS, and Pearson's chi-square coefficient was the basis for data-ranking decisions. The neural network was trained using the back-propagation learning technique. Based on the SPSS analysis, 13 effective factors were selected; the data were then mixed in various forms to create different modes for training and testing the networks, and in all modes the network correctly predicted 100% of the cases. Using data mining methods before designing the system structure, in order to reduce the data dimension and choose the data optimally, leads to a more accurate system; considering data mining approaches is therefore necessary, given the nature of medical data.
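
    The chi-square ranking step might look like the following sketch on synthetic data. The factors, sample size, and association strength are invented; only the statistic itself matches the method described above.

```python
import numpy as np

# Illustrative sketch: rank candidate binary factors by Pearson's chi-square
# statistic against the diagnosis label; a higher statistic means a stronger
# association, so the factor is kept as a network input.
rng = np.random.default_rng(4)
n = 400
asthma = rng.integers(0, 2, n)                                 # diagnosis label
wheezing = np.where(rng.random(n) < 0.8, asthma, 1 - asthma)   # informative factor
eye_color = rng.integers(0, 2, n)                              # uninformative factor

def chi_square(x, y):
    """Pearson chi-square statistic for two binary variables (2x2 table)."""
    obs = np.zeros((2, 2))
    for xi, yi in zip(x, y):
        obs[xi, yi] += 1
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    exp = row * col / obs.sum()            # expected counts under independence
    return float(((obs - exp) ** 2 / exp).sum())

scores = {"wheezing": chi_square(wheezing, asthma),
          "eye_color": chi_square(eye_color, asthma)}
print(scores)   # the informative factor should score far higher
```

    In the study this ranking ran in SPSS; the point of the sketch is only that the selection criterion is a simple contingency-table statistic, computed per factor.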

  1. Cloud-based adaptive exon prediction for DNA analysis.

    PubMed

    Putluri, Srinivasareddy; Zia Ur Rahman, Md; Fathima, Shaik Yasmeen

    2018-02-01

    Cloud computing offers significant research and economic benefits to healthcare organisations. Cloud services provide a safe place for storing and managing large amounts of sensitive data. Under the conventional flow of gene information, gene sequencing laboratories send raw and inferred information via the Internet to several sequence libraries. DNA sequencing storage costs can be minimised by the use of cloud services. In this study, the authors put forward a novel genomic informatics system using Amazon Cloud Services, where genomic sequence information is stored and accessed for processing. True identification of exon regions in a DNA sequence is a key task in bioinformatics, which helps in disease identification and drug design. The three-base periodicity property of exons forms the basis of all exon identification techniques, and adaptive signal processing techniques have proved promising in comparison with several other methods. Several adaptive exon predictors (AEPs) are developed using the variable normalised least mean square algorithm and its maximum normalised variants to reduce computational complexity. Finally, the performance of the various AEPs is evaluated in terms of sensitivity, specificity and precision on standard genomic datasets taken from the National Center for Biotechnology Information genomic sequence database.
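
    The normalised least mean square (NLMS) update at the core of such adaptive predictors can be sketched as follows. As an illustrative assumption, the filter here identifies a toy FIR system from a random input rather than tracking the three-base periodicity of a numerically mapped DNA sequence.

```python
import numpy as np

# Illustrative sketch of the NLMS adaptive-filter update: the normalisation
# by the input energy makes the step size scale-invariant, which is what
# distinguishes NLMS from plain LMS.
rng = np.random.default_rng(5)

h_true = np.array([0.5, -0.3, 0.2])       # unknown system to identify
x = rng.standard_normal(4000)             # input signal
d = np.convolve(x, h_true, mode="full")[: x.size]   # desired signal

mu, eps = 0.5, 1e-8                       # step size, regulariser
w = np.zeros(3)                           # adaptive filter weights
for n in range(3, x.size):
    u = x[n - 2 : n + 1][::-1]            # last 3 input samples, newest first
    e = d[n] - w @ u                      # a-priori error
    w += mu * e * u / (eps + u @ u)       # NLMS weight update

print("estimated taps:", np.round(w, 3))
```

    The paper's variable and maximum-normalised variants modify how the normalisation term `u @ u` is computed to cut the number of multiplications per update; the recursion above is otherwise the common core.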

  2. Integrated flight/propulsion control design for a STOVL aircraft using H-infinity control design techniques

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Ouzts, Peter J.

    1991-01-01

    Results are presented from an application of H-infinity control design methodology to a centralized integrated flight/propulsion control (IFPC) system design for a supersonic Short Takeoff and Vertical Landing (STOVL) fighter aircraft in transition flight. The emphasis is on formulating the H-infinity control design problem such that the resulting controller provides robustness to modeling uncertainties and to model parameter variations with flight condition. Experience gained from a preliminary H-infinity-based IFPC design study performed earlier is used as the basis to formulate the robust H-infinity control design problem and improve upon the previous design. Detailed evaluation results are presented for a reduced-order controller obtained from the improved H-infinity design, showing that it meets the specified nominal performance objectives and provides stability robustness against variations in plant dynamics with changes in aircraft trim speed within the transition flight envelope. A controller scheduling technique that accounts for changes in plant control effectiveness with variation in trim conditions is developed, and off-design performance results are presented.

  3. Magnetic studies of nickel ferrite nanoparticles prepared by sol-gel technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anumol, C. N.; Chithra, M.; Sahoo, Subasa C., E-mail: subasa@cukerala.ac.in

    2016-05-06

    Ni-ferrite nanoparticles were synthesized by the sol–gel technique with varying solvent concentration. X-ray diffraction studies confirmed the phase purity of the samples. The lattice constant and grain size were found to be in the ranges of 0.833-0.834 nm and 14-26 nm, respectively. There was no systematic variation of the magnetization value with solvent concentration and grain size. The highest magnetization, remanence and coercivity values of 60 emu/g, 12 emu/g and 180 Oe, respectively, were observed at 300 K for the sample prepared in 75 ml of solvent. The observed magnetization value is 20% higher than the bulk value of 50 emu/g. The magnetization, coercivity and remanence values were enhanced at 60 K compared with those at 300 K. The high magnetization value observed in the nanoparticles can be explained on the basis of a modified cation distribution in the lattice sites, and the enhanced magnetic properties at 60 K may be attributed to reduced thermal fluctuation and increased anisotropy at low temperature.

  4. Investigation of Dynamic Properties of Water-Saturated Sand by the Results of the Inverse Experiment Technique

    NASA Astrophysics Data System (ADS)

    Bragov, A. M.; Balandin, Vl. V.; Kotov, V. L.; Balandin, Vl. Vl.

    2018-04-01

    We present new experimental results on the dynamic properties of sandy soil obtained by the inverse experiment technique using a measuring rod with a flat front-end face. The method that corrects the shape of the deformation pulse for dispersion during its propagation in the measuring rod is shown to have limited applicability. Estimates of the pulse maximum have been obtained, and numerical calculations are compared with experimental data. Sufficient accuracy in determining the drag force during the quasi-stationary stage of penetration has been established. The parameters of dynamic compressibility and shear resistance of water-saturated sand were determined through an experimental-theoretical analysis of the maximum values of the drag force and of its values at the quasi-stationary stage of penetration. It is shown that with almost complete water saturation of sand, its shear properties are reduced but remain significant in the practically important range of penetration rates.

  5. Absence of evidence or evidence of absence: reflecting on therapeutic implementations of attentional bias modification.

    PubMed

    Clarke, Patrick J F; Notebaert, Lies; MacLeod, Colin

    2014-01-15

    Attentional bias modification (ABM) represents one of a number of cognitive bias modification techniques which are beginning to show promise as therapeutic interventions for emotional pathology. Numerous studies with both clinical and non-clinical populations have now demonstrated that ABM can reduce emotional vulnerability. However, some recent studies have failed to achieve change in either selective attention or emotional vulnerability using ABM methodologies, including a recent randomised controlled trial by Carlbring et al. Some have sought to represent such absence of evidence as a sound basis not to further pursue ABM as an online intervention. While these findings obviously raise questions about the specific conditions under which ABM procedures will produce therapeutic benefits, we suggest that the failure of some studies to modify selective attention does not challenge the theoretical and empirical basis of ABM. The present paper seeks to put these ABM failures in perspective within the broader context of attentional bias modification research. In doing so it is apparent that the current findings and future prospects of ABM are in fact very promising, suggesting that more research in this area is warranted, not less.

  6. [Groundwater organic pollution source identification technology system research and application].

    PubMed

    Wang, Xiao-Hong; Wei, Jia-Hua; Cheng, Zhi-Neng; Liu, Pei-Bin; Ji, Yi-Qun; Zhang, Gan

    2013-02-01

    Groundwater organic pollution is found at a large number of sites, and once it begins the pollution spreads widely, making it hard to identify and control. The key to controlling and remediating groundwater pollution is controlling the pollution sources and reducing the danger to groundwater. Taking typical contaminated sites as an example, this paper carries out source identification studies, establishes a groundwater organic pollution source identification system, and applies the system to the identification of typical contaminated sites. First, the geological and hydrogeological conditions of the contaminated sites are characterized, and the characteristic pollutant of the sites, carbon tetrachloride, is determined from a large body of groundwater analysis and test data; then a solute transport model of the contaminated sites is built and compound-specific isotope techniques are applied. Finally, using the groundwater solute transport model and compound-specific isotope technology, the distribution of organic pollution sources and the pollution status of the typical site are determined; the identified potential pollution sources are investigated and soil samples are taken for analysis. The results show that the two identified historical pollution sources and the pollutant concentration distribution are reliable, providing a basis for the treatment of groundwater pollution.

  7. Computational Morphometry for Detecting Changes in Brain Structure Due to Development, Aging, Learning, Disease and Evolution

    PubMed Central

    Mietchen, Daniel; Gaser, Christian

    2009-01-01

    The brain, like any living tissue, is constantly changing in response to genetic and environmental cues and their interaction, leading to changes in brain function and structure, many of which are now in reach of neuroimaging techniques. Computational morphometry on the basis of Magnetic Resonance (MR) images has become the method of choice for studying macroscopic changes of brain structure across time scales. Thanks to computational advances and sophisticated study designs, both the minimal extent of change necessary for detection and, consequently, the minimal periods over which such changes can be detected have been reduced considerably during the last few years. On the other hand, the growing availability of MR images of more and more diverse brain populations also allows more detailed inferences about brain changes that occur over larger time scales, way beyond the duration of an average research project. On this basis, a whole range of issues concerning the structures and functions of the brain are now becoming addressable, thereby providing ample challenges and opportunities for further contributions from neuroinformatics to our understanding of the brain and how it changes over a lifetime and in the course of evolution. PMID:19707517

  8. Matrix basis for plane and modal waves in a Timoshenko beam.

    PubMed

    Claeyssen, Julio Cesar Ruiz; Tolfo, Daniela de Rosso; Tonetto, Leticia

    2016-11-01

    Plane waves and modal waves of the Timoshenko beam model are characterized in closed form by introducing robust matrix bases that behave according to the nature of the frequency and the wave or modal numbers. These new characterizations are given in terms of a finite number of coupling matrices and closed-form generating scalar functions; through Liouville's technique, the latter are well behaved at critical or static situations. Eigenanalysis is formulated for exponential and modal waves. Modal waves are superpositions of four plane waves, but there are plane waves that cannot be modal waves. Reflected and transmitted waves at an interface point are formulated in matrix terms, regardless of whether the situation is conservative or dissipative. The matrix representation of modal waves is used in a crack problem for determining the reflection and transmission matrices, whose Euclidean norms are seen to be dominated by certain components at low and high frequencies. The matrix basis technique is also applied to a non-local Timoshenko model and to wave interaction with a boundary; the matrix basis allows reflected and transmitted waves to be characterized in spectral and non-spectral form.

  9. Determination of gas volume trapped in a closed fluid system

    NASA Technical Reports Server (NTRS)

    Hunter, W. F.; Jolley, J. E.

    1971-01-01

    The technique involves extracting a known volume of fluid and measuring the system pressure before and after extraction; the volume of entrapped gas is then computed from a formula derived from the ideal gas laws. The technique is applicable to thermodynamic cycles and hydraulic systems.
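
    Assuming isothermal conditions, the computation the record describes follows from Boyle's law: withdrawing a known fluid volume dV lets the trapped gas expand from Vg to Vg + dV, so P1*Vg = P2*(Vg + dV), hence Vg = P2*dV / (P1 - P2). This is a sketch of that formula, not the NASA procedure itself; the example numbers are invented.

```python
# Illustrative sketch of the ideal-gas (Boyle's law) computation:
#   P1 * Vg = P2 * (Vg + dV)   =>   Vg = P2 * dV / (P1 - P2)

def trapped_gas_volume(p1, p2, dv):
    """Gas volume before extraction, from absolute pressures p1 > p2
    and the extracted fluid volume dv (any consistent volume unit)."""
    if p2 >= p1:
        raise ValueError("pressure must drop after extraction (p1 > p2)")
    return p2 * dv / (p1 - p2)

# Example: pressure drops from 200 kPa to 160 kPa after extracting 50 mL.
print(trapped_gas_volume(200e3, 160e3, 50.0))  # prints 200.0 (mL of gas)
```

    The sensitivity of the result to the pressure difference p1 - p2 is why the extraction volume must be large enough to produce a measurable drop.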

  10. Detection of micro gap weld joint by using magneto-optical imaging and Kalman filtering compensated with RBF neural network

    NASA Astrophysics Data System (ADS)

    Gao, Xiangdong; Chen, Yuquan; You, Deyong; Xiao, Zhenlin; Chen, Xiaohui

    2017-02-01

    An approach to seam tracking of micro gap welds whose width is less than 0.1 mm, based on the magneto-optical (MO) imaging technique, is investigated for butt-joint laser welding of steel plates. Kalman filtering (KF) combined with a radial basis function (RBF) neural network was applied to MO-sensor weld detection to track the weld center position. Because the process noises of the laser welding system and the measurement noises of the MO sensor were colored, the estimation accuracy of traditional KF for seam tracking was degraded: the system model has extreme nonlinearities that cannot be captured by a linear state-space model, and the noise statistics could not be accurately obtained during actual welding. Thus, an RBF neural network was combined with the KF technique to compensate for the weld tracking errors; the neural network restrains filter divergence and improves system robustness. Compared with the traditional KF algorithm, the RBF-compensated KF not only improved the weld tracking accuracy more effectively but also reduced noise disturbance. Experimental results showed that the magneto-optical imaging technique can detect micro gap welds accurately, which provides a novel approach for micro gap seam tracking.
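
    The idea of compensating a linear filter with a learned nonlinear correction can be sketched roughly as follows. This is not the authors' algorithm: a scalar random-walk Kalman filter tracks a synthetic weld-center signal, and an RBF model fitted on a calibration pass with known ground truth removes the position-dependent measurement bias the linear filter cannot model. All signals and parameters are invented.

```python
import numpy as np

# Illustrative sketch: scalar Kalman filter + RBF bias compensation.
rng = np.random.default_rng(6)

T = 300
true_pos = 0.05 * np.sin(np.linspace(0, 4 * np.pi, T))   # weld center (mm)
bias = 0.02 * np.tanh(20 * true_pos)                     # nonlinear sensor bias
z = true_pos + bias + rng.normal(0, 0.005, T)            # MO-sensor measurement

def kalman(z, q=1e-5, r=0.005 ** 2):
    """Scalar KF with a random-walk state model."""
    x, p, out = 0.0, 1.0, []
    for zk in z:
        p += q                              # predict
        k = p / (p + r)                     # Kalman gain
        x += k * (zk - x)                   # update
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

est = kalman(z)

# Calibration pass: fit Gaussian RBFs mapping the KF estimate to its
# systematic error (requires ground truth, e.g. from an offline scan).
centers = np.linspace(-0.06, 0.06, 9)
Phi = np.exp(-(est[:, None] - centers[None, :]) ** 2 / (2 * 0.02 ** 2))
w, *_ = np.linalg.lstsq(Phi, est - true_pos, rcond=None)

corrected = est - Phi @ w                   # subtract the learned bias
rmse_raw = float(np.sqrt(np.mean((est - true_pos) ** 2)))
rmse_comp = float(np.sqrt(np.mean((corrected - true_pos) ** 2)))
print("RMSE raw KF:", rmse_raw, "  RMSE compensated:", rmse_comp)
```

    Because the least-squares fit can always choose zero weights, the compensated error is never worse than the raw KF error on the calibration data; the paper's contribution is doing this compensation online, within the filter loop.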

  11. The Study on Flood Reduction and Securing Instreamflow by applying Decentralized Rainwater Retention Facilities for Chunggyechun in Seoul of Korea

    NASA Astrophysics Data System (ADS)

    Park, J. H.; Jun, S. M.; Park, C. G.

    2014-12-01

    Recently, abnormal climate phenomena and urbanization have been changing the hydrological environment. To restore the hydrological cycle in urban areas, fundamental solutions such as decentralized rainwater management systems and Low Impact Development (LID) techniques may be chosen. In this study, SWMM 5 was used to analyze the effects of decentralized stormwater retention on preventing urban floods and securing the instreamflow. The Chunggyechun stream watershed (21.29 km²), located in Seoul (Korea) and fully developed as an urban area, was selected as the study watershed, and the runoff characteristics of the urban stream under various LID techniques (permeable pavement, small rainwater storage tanks, large rainwater storage tanks) were analyzed. According to the simulation results, the permeability of the pavement materials and the detention storage in the surface soil layer had a strong effect on the flood discharge, and initial rainfall retention in the rainwater storage tanks helped reduce the flood peak. The peak discharge was decreased by 22% for the design precipitation, and the instreamflow was increased by 55% by using adequate LID techniques. These data could serve as basis data for designing urban flood prevention facilities and for urban regeneration planning within integrated watershed management.

  12. The critical role of volcano monitoring in risk reduction

    USGS Publications Warehouse

    Tilling, R.I.

    2008-01-01

    Data from volcano-monitoring studies constitute the only scientifically valid basis for short-term forecasts of a future eruption, or of possible changes during an ongoing eruption. Thus, in any effective hazards-mitigation program, a basic strategy in reducing volcano risk is the initiation or augmentation of volcano monitoring at historically active volcanoes and also at geologically young, but presently dormant, volcanoes with potential for reactivation. Beginning with the 1980s, substantial progress in volcano-monitoring techniques and networks - ground-based as well as space-based - has been achieved. Although some geochemical monitoring techniques (e.g., remote measurement of volcanic gas emissions) are being increasingly applied and show considerable promise, seismic and geodetic methods to date remain the techniques of choice and are the most widely used. Availability of comprehensive volcano-monitoring data was a decisive factor in the successful scientific and governmental responses to the reawakening of Mount St. Helens (Washington, USA) in 1980 and, more recently, to the powerful explosive eruptions at Mount Pinatubo (Luzon, Philippines) in 1991. However, even with the ever-improving state-of-the-art in volcano monitoring and predictive capability, the Mount St. Helens and Pinatubo case histories unfortunately still represent the exceptions, rather than the rule, in successfully forecasting the most likely outcome of volcano unrest.

  13. Straightforward and accurate technique for post-coupler stabilization in drift tube linac structures

    NASA Astrophysics Data System (ADS)

    Khalvati, Mohammad Reza; Ramberger, Suitbert

    2016-04-01

    The axial electric field of Alvarez drift tube linacs (DTLs) is known to be susceptible to variations due to static and dynamic effects like manufacturing tolerances and beam loading. Post-couplers are used to stabilize the accelerating fields of DTLs against tuning errors. Tilt sensitivity and its slope have been introduced as measures for the stability right from the invention of post-couplers but since then the actual stabilization has mostly been done by tedious iteration. In the present article, the local tilt-sensitivity slope TSn' is established as the principal measure for stabilization instead of tilt sensitivity or some visual slope, and its significance is developed on the basis of an equivalent-circuit diagram of the DTL. Experimental and 3D simulation results are used to analyze its behavior and to define a technique for stabilization that allows finding the best post-coupler settings with just four tilt-sensitivity measurements. CERN's Linac4 DTL Tank 2 and Tank 3 have been stabilized successfully using this technique. The final tilt-sensitivity error has been reduced from ±100 %/MHz down to ±3 %/MHz for Tank 2 and down to ±1 %/MHz for Tank 3. Finally, an accurate procedure for tuning the structure using slug tuners is discussed.

  14. Timing analysis by model checking

    NASA Technical Reports Server (NTRS)

    Naydich, Dimitri; Guaspari, David

    2000-01-01

    The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.

  15. [Dynamic metabolic monitoring as a basis of nutritional support in acute cerebral insufficiency of vascular genesis].

    PubMed

    Leĭderman, I N; Gadzhieva, N Sh; Gromov, V S

    2008-01-01

    Within the framework of a prospective controlled study, the metabolic status was evaluated in 148 patients with stroke, using a dynamic metabolic monitoring technique comprising the calculation of the real daily calorie consumption; the assessment of the degree of hypermetabolism, protein hypercatabolism, nutritional disorders, and nutrient needs; and the daily evaluation of nutritional support. As a result, the authors provide evidence that dynamic metabolic monitoring rapidly and adequately reflects changes in the degree of hypercatabolism and hypermetabolism in patients with lesions of the central nervous system and of the structures responsible for the regulation of metabolism, and that nutritional support guided by the monitoring data makes it possible to enhance the efficiency of intensive care and to reduce the frequency of neurotrophic complications.

  16. The use of FAME analyses to discriminate between different strains of Geotrichum klebahnii with different viabilities.

    PubMed

    Schwarzenauer, Thomas; Lins, Philipp; Reitschuler, Christoph; Illmer, Paul

    2012-02-01

    A considerable decline in the viability of spray-dried cells of Geotrichum klebahnii was observed and was attributed to an undefined alteration of the strain used. As common techniques were not able to distinguish the altered from the still-viable strains, we used fatty acid methyl ester (FAME) analysis. On the basis of the FAME data we were able to discriminate the three strains under investigation. In particular, the cis/trans and saturated/unsaturated fatty acid ratios were significantly reduced in the less viable strain, pointing to an increased stress level in this strain. These findings clearly show the applicability of FAME analysis for detecting strain alterations and that this method is therefore a suitable, fast and feasible tool for quality assurance.

  17. Use of 35-mm color aerial photography to acquire mallard sex ratio data

    USGS Publications Warehouse

    Ferguson, Edgar L.; Jorde, Dennis G.; Sease, John L.

    1981-01-01

    A conventional 35-mm camera equipped with an f2.8 135-mm lens and ASA 64 color film was used to acquire sex ratio data on mallards (Anas platyrhynchos) wintering in the Platte River Valley of south-central Nebraska. Preflight focusing for a distance of 30.5 metres and setting the shutter speed at 1/2000 of a second eliminated focusing problems, reduced image motion, and resulted in high-resolution, large-scale aerial photography of small targets. This technique has broad application to the problem of determining sex ratios of various species of waterfowl concentrated on wintering and staging areas. The aerial photographic method was cheaper than the ground ocular method when costs were compared on a per-100 bird basis.

  18. Health promotion: what's in it for business and industry?

    PubMed

    Brennan, A J

    1982-01-01

    Health promotion has been linked to improved morale, increased productivity, reduced absenteeism and turnover, more appropriate utilization of medical services and decreased disability and premature death claims due to unhealthy lifestyles. Preliminary data in favor of HPPs are being accumulated. Final proof is not available to "sell" myopic bottom line managers on the concept, however, as Immanuel Kant stated, "It is often necessary to make a decision on the basis of knowledge sufficient for action but insufficient to satisfy the intellect." If techniques can be developed to quantify in economic terms the impact of health promotion in these areas, business and industry will have a profound, hard line reason beyond their genuine interest in the health of their employees, for providing health promotion to employee populations--MONEY.

  19. Archaeometric study of black-coated pottery from Pompeii by different analytical techniques.

    PubMed

    Scarpelli, Roberta; Clark, Robin J H; De Francesco, Anna Maria

    2014-01-01

    Complementary spectroscopic methods were used to characterize ceramic body and black coating of fine pottery found at Pompeii (Italy). This has enabled us to investigate local productions and to clarify the technological changes over the 4th-1st centuries BC. Two different groups of ceramics were originally distinguished on the basis of macroscopic observations. Optical microscopy (OM), X-ray diffraction (XRD) and X-ray fluorescence (XRF) seem to indicate the usage of the same raw materials for the production of black-coated ceramics at Pompeii for about three centuries. Raman microscopy (RM) and micro-analysis (SEM/EDS) suggest different production treatments for both raw material processing and firing practice (duration of the reducing step and the cooling rate). Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Wearable PWV technologies to measure Blood Pressure: eliminating brachial cuffs.

    PubMed

    Solá, J; Proença, M; Chételat, O

    2013-01-01

    The clinical demand for technologies to monitor Blood Pressure (BP) in ambulatory scenarios with minimal use of inflation cuffs is strong: the new generation of BP monitors is expected to be not only accurate, but also non-occlusive. In this paper we review recent advances in the use of so-called Pulse Wave Velocity (PWV) technologies to estimate BP on a beat-by-beat basis. After introducing the working principle and underlying methodological limitations, two implementation examples are provided. Pilot studies have demonstrated that novel PWV-based BP monitors achieve accuracy scores falling within the limits of the British Hypertension Society (BHS) Grade A standard. The reported techniques pave the way towards ambulatory-compliant, continuous and non-occlusive BP monitoring devices, in which the use of inflation cuffs is drastically reduced.
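    A beat-by-beat PWV-to-BP chain can be sketched as follows. The 0.5 m arterial path length, the quadratic calibration model and its coefficients are all hypothetical stand-ins for a device's per-subject calibration against a reference cuff, not the methods reviewed in the paper.

```python
import numpy as np

def pulse_wave_velocity(path_length_m, pulse_transit_time_s):
    """PWV as path length over pulse transit time (PTT)."""
    return path_length_m / pulse_transit_time_s

def estimate_bp(pwv, a=0.5, b=60.0):
    """Hypothetical calibrated model: BP grows with PWV^2.
    Coefficients a, b must come from a per-subject calibration."""
    return a * pwv**2 + b

ptt = np.array([0.180, 0.170, 0.160])   # seconds, three consecutive beats
pwv = pulse_wave_velocity(0.5, ptt)     # assumed 0.5 m measurement path
print(np.round(estimate_bp(pwv), 1))    # -> [63.9 64.3 64.9]
```

    The point of the sketch is the trend: shorter transit times mean higher PWV and hence a higher estimated pressure, updated on every beat without occluding the artery.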

  1. Analysis of memory use for improved design and compile-time allocation of local memory

    NASA Technical Reports Server (NTRS)

    Mcniven, Geoffrey D.; Davidson, Edward S.

    1986-01-01

    Trace analysis techniques are used to study memory referencing behavior for the purpose of designing local memories and determining how to allocate them for data and instructions. In an attempt to assess the inherent behavior of the source code, the trace analysis system described here reduced the effects of the compiler and host architecture on the trace by using a technique called flattening. The variables in the trace, their associated single-assignment values, and references are histogrammed on the basis of various parameters describing memory referencing behavior. Bounds are developed specifying the amount of memory space required to store all live values in a particular histogram class. The reduction achieved in main memory traffic by allocating local memory is specified for each class.

  2. Weighted graph based ordering techniques for preconditioned conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Clift, Simon S.; Tang, Wei-Pai

    1994-01-01

    We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of RCM is shown to generally improve the quality of incomplete factorization preconditioners.
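    The flavor of such graph-based orderings can be illustrated with a plain reverse Cuthill-McKee (RCM) pass, the classical bandwidth-reducing ordering that the paper's variants build on; the weighted heuristics themselves are not reproduced here.

```python
import numpy as np
from collections import deque

def reverse_cuthill_mckee(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph given as an
    adjacency dict {node: set(neighbours)}: BFS from minimum-degree seeds,
    visiting neighbours in increasing-degree order, then reverse."""
    visited, order = set(), []
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v] - visited, key=lambda u: len(adj[u])):
                visited.add(w)
                queue.append(w)
    return order[::-1]                      # reversal gives RCM

def bandwidth(A):
    i, j = np.nonzero(A)
    return int(np.max(np.abs(i - j)))

# A path graph labelled in scrambled order has a needlessly wide band;
# RCM relabels it back to a tridiagonal (bandwidth-1) pattern.
path = [3, 0, 6, 1, 5, 2, 7, 4]
A = np.eye(8)
for a, b in zip(path, path[1:]):
    A[a, b] = A[b, a] = 1
adj = {v: {u for u in range(8) if A[v, u] and u != v} for v in range(8)}
p = reverse_cuthill_mckee(adj)
print(bandwidth(A), bandwidth(A[np.ix_(p, p)]))   # prints: 6 1
```

    A narrower band generally means less fill-in during incomplete factorization, which is why ordering quality matters for the preconditioner.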

  3. [The evaluation of costs: standards of medical care and clinical statistic groups].

    PubMed

    Semenov, V Iu; Samorodskaia, I V

    2014-01-01

    The article presents a comparative analysis of techniques for evaluating the costs of hospital treatment using medical economic standards of medical care and clinical statistical groups. The technique of cost evaluation on the basis of clinical statistical groups was developed almost fifty years ago and is widely applied in a number of countries. Nowadays, in Russia the payment for a completed case of treatment on the basis of medical economic standards is the main mode of payment for medical care in hospital. It is only loosely a Russian analogue of the internationally prevalent system of diagnosis-related groups. The tariffs for these cases of treatment, as opposed to clinical statistical groups, are calculated on the basis of the standards of provision of medical care approved by the Minzdrav of Russia; information derived from the generalization of cases of treatment of real patients is not used.

  4. Raising Spelling Scores through Peer Tutoring and Cooperative Groups.

    ERIC Educational Resources Information Center

    Fowler, Elaine D.

    Two techniques can be used to improve spelling scores and make spelling more interesting. The first technique is a combination of peer tutoring and the corrected-test technique. Students are paired as tutors and tutees on the basis of past spelling performance. The tutor gives a series of bi-weekly spelling tests to the tutee and helps with error…

  5. Evaluation of Automated Natural Language Processing in the Further Development of Science Information Retrieval. String Program Reports No. 10.

    ERIC Educational Resources Information Center

    Sager, Naomi

    This investigation matches the emerging techniques in computerized natural language processing against emerging needs for such techniques in the information field to evaluate and extend such techniques for future applications and to establish a basis and direction for further research toward these goals. An overview describes developments in the…

  6. SU-C-209-06: Improving X-Ray Imaging with Computer Vision and Augmented Reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDougall, R.D.; Scherrer, B; Don, S

    Purpose: To determine the feasibility of using a computer vision algorithm and augmented reality interface to reduce repeat rates and improve consistency of image quality and patient exposure in general radiography. Methods: A prototype device, designed for use with commercially available hardware (Microsoft Kinect 2.0) capable of depth sensing and high resolution/frame rate video, was mounted to the x-ray tube housing as part of a Philips DigitalDiagnost digital radiography room. Depth data and video were streamed to a Windows 10 PC. Proprietary software created an augmented reality interface where overlays displayed selectable information projected over real-time video of the patient. The information displayed prior to and during x-ray acquisition included: recognition and position of ordered body part, position of image receptor, thickness of anatomy, location of AEC cells, collimated x-ray field, degree of patient motion and suggested x-ray technique. Pre-clinical data was collected in a volunteer study to validate patient thickness measurements; x-ray images were not acquired. Results: Proprietary software correctly identified ordered body part, measured patient motion, and calculated thickness of anatomy. Pre-clinical data demonstrated accuracy and precision of body part thickness measurement when compared with other methods (e.g. laser measurement tool). Thickness measurements provided the basis for developing a database of thickness-based technique charts that can be automatically displayed to the technologist. Conclusion: The utilization of computer vision and commercial hardware to create an augmented reality view of the patient and imaging equipment has the potential to drastically improve the quality and safety of x-ray imaging by reducing repeats and optimizing technique based on patient thickness. Society of Pediatric Radiology Pilot Grant; Washington University Bear Cub Fund.
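    A thickness-based technique chart of the kind mentioned in the conclusion can be sketched as a simple lookup. The break points and kVp/mAs values below are invented for illustration; real charts are commissioned per x-ray system and body part.

```python
# Hypothetical technique chart: measured patient thickness (from the depth
# camera) selects exposure factors. Entries are (max thickness cm, kVp, mAs).
TECHNIQUE_CHART = [
    (10, 60, 1.6),
    (15, 66, 2.0),
    (20, 70, 2.5),
    (25, 75, 3.2),
]

def suggest_technique(thickness_cm):
    """Return (kVp, mAs) for the first bracket covering the thickness."""
    for max_cm, kvp, mas in TECHNIQUE_CHART:
        if thickness_cm <= max_cm:
            return kvp, mas
    return TECHNIQUE_CHART[-1][1:]        # fall back to the thickest bracket

print(suggest_technique(13.5))            # prints: (66, 2.0)
```

    Displaying such a suggestion in the augmented-reality overlay, keyed to the live thickness measurement, is what would standardize technique selection across technologists.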

  7. Using diffusion k-means for simple stellar population modeling of low S/N quasar host galaxy spectra

    NASA Astrophysics Data System (ADS)

    Mosby, Gregory; Tremonti, Christina A.; Hooper, Eric; Wolf, Marsha J.; Sheinis, Andrew; Richards, Joseph

    2016-01-01

    Quasar host galaxies (QHGs) represent a unique stage in galaxy evolution that can provide a glimpse into the relationship between an active supermassive black hole (SMBH) and its host galaxy. However, observing the hosts of high luminosity, unobscured quasars in the optical is complicated by the large ratio of quasar to host galaxy light. One strategy in optical spectroscopy is to use offset longslit observations of the host galaxy. This method allows the centers of QHGs to be analyzed apart from other regions of their host galaxies. But light from the accreting black hole's point spread function still enters the host galaxy observations, and where the contrast between the host and intervening quasar light is favorable, the host galaxy is faint, producing low signal-to-noise (S/N) data. This stymies traditional stellar population methods that might rely on high S/N features in galaxy spectra to recover key galaxy properties such as the star formation history (SFH). In response to this challenge, we have developed a method of stellar population modeling using diffusion k-means (DFK) that can recover SFHs from rest frame optical data with S/N ~ 5 Å^-1. Specifically, we use DFK to cultivate a reduced stellar population basis set. This DFK basis set of four broad age bins is able to recover a range of SFHs. With an analytic description of the seeing, we can use this DFK basis set to simultaneously model the SFHs and the intervening quasar light of QHGs as well. We compare the results of this method with previous techniques using synthetic data and find that our new method has a clear advantage in recovering SFHs from QHGs. On average, the DFK basis set is just as accurate and decisively more precise. This new technique could be used to analyze other low S/N galaxy spectra like those from higher redshift or integral field spectroscopy surveys. This material is based upon work supported by the National Science Foundation under grant no. DGE-0718123 and the Advanced Opportunity fellowship program at the University of Wisconsin-Madison. This research was performed using the computer resources and assistance of the UW-Madison Center For High Throughput Computing (CHTC) in the Department of Computer Sciences.
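    The basis-reduction step can be sketched in miniature: cluster a grid of template spectra into a few broad age bins and average each cluster. Plain k-means stands in for the paper's diffusion k-means, and the "spectra" below are synthetic (continuum slope scaling with log age), not real stellar population templates.

```python
import numpy as np

wave = np.linspace(3800, 7000, 200)
ages = np.array([0.01, 0.02, 0.03, 0.3, 0.4, 0.5, 2, 3, 4, 9, 11, 13])  # Gyr
# Toy templates: a single continuum shape scaled by log age (illustrative).
templates = np.outer(np.log10(ages) + 2.5, (wave - 3800) / 3200)

def kmeans(X, k, iters=20):
    """Plain k-means with a deterministic every-(n/k)-th-row init."""
    centers = X[:: len(X) // k][:k].copy()
    for _ in range(iters):
        dist = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(templates, k=4)
basis = np.stack([templates[labels == j].mean(axis=0) for j in range(4)])
print(basis.shape)          # prints: (4, 200) -- four broad-age basis spectra
```

    Fitting an observed spectrum against the four averaged templates instead of the full grid is what makes the inversion tractable at S/N ~ 5.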

  8. Improved Algorithm For Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1989-01-01

    Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.

  9. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
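    The proper-orthogonal-decomposition step that the selection procedures build on can be sketched minimally: SVD a snapshot matrix of the response history and keep the modes holding most of the energy. The toy response and the 99.9% energy cut-off are illustrative; the paper's smooth-orthogonal-decomposition refinements are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
x = np.linspace(0.0, np.pi, 50)
phi1, phi2 = np.sin(x), np.sin(2 * x)               # two dominant shapes
# Snapshot matrix: 50 DOFs x 400 time steps, rank-2 signal + sensor noise.
snapshots = (np.outer(phi1, np.sin(8 * t))
             + 0.3 * np.outer(phi2, np.cos(13 * t))
             + 0.005 * rng.standard_normal((50, 400)))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1          # modes for 99.9% energy
basis = U[:, :k]                                     # reduced modal basis
print(k)                                             # prints: 2
```

    Projecting the equations of motion onto `basis` yields the small reduced-order system whose random response is then simulated in place of the full model.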

  10. Application of Fourier transform infrared (FTIR) spectroscopy for the identification of wheat varieties.

    PubMed

    Amir, Rai Muhammad; Anjum, Faqir Muhammad; Khan, Muhammad Issa; Khan, Moazzam Rafiq; Pasha, Imran; Nadeem, Muhammad

    2013-10-01

    Quality characteristics of wheat are determined by different physicochemical and rheological analyses using different AACC methods. AACC methods are expensive, time consuming and destroy the samples. Fourier transform infrared (FTIR) spectroscopy is an important emerging tool for analyzing wheat for different quality parameters. The technique is rapid and sensitive, with a great variety of sampling techniques. In the present study different wheat varieties were analyzed for quality assessment and were also characterized by using AACC methods and the FTIR technique. The straight grade flour was analyzed for physical, chemical and rheological properties by standard methods. FTIR works on the basis of functional groups and provides information in the form of peaks. On the basis of these peaks the values of moisture, protein, fat, ash, carbohydrates and grain hardness were determined. Peaks for water were observed near 1,640 cm(-1) and 3,300 cm(-1) on the basis of the H and OH functional groups. Protein was observed in the ranges 1,600 cm(-1) to 1,700 cm(-1) and 1,550 cm(-1) to 1,570 cm(-1) on the basis of the amide I and amide II bonds respectively. Fat was also observed within these ranges but on the basis of the C-H bond, and starch was observed in the range 2,800 to 3,000 cm(-1) (C-H stretch region) and 3,000 to 3,600 cm(-1) (O-H stretch region). As FTIR is a fast tool, it can easily be employed for wheat variety identification according to a set criterion.
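    The peak-to-constituent mapping described above can be sketched as a band lookup using the wavenumber windows quoted in the abstract; the band limits are approximate and instrument-dependent, and overlapping bands (e.g. water near 1,640 cm^-1 inside the amide I window) return multiple matches in a real spectrum.

```python
# FTIR band windows (cm^-1) taken from the abstract; limits are approximate.
BANDS = [
    ((1600, 1700), "protein (amide I)"),
    ((1550, 1570), "protein (amide II)"),
    ((2800, 3000), "starch/fat (C-H stretch)"),
    ((3000, 3600), "starch (O-H stretch)"),
]

def assign(peak_cm1):
    """Return every band window containing the observed peak position."""
    return [name for (lo, hi), name in BANDS if lo <= peak_cm1 <= hi]

for peak in (1652, 1558, 2920, 3300):
    print(peak, assign(peak))
```

    A classifier for varieties would then be built on the relative intensities at these assigned peaks rather than on the raw spectrum.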

  11. Molecular Properties by Quantum Monte Carlo: An Investigation on the Role of the Wave Function Ansatz and the Basis Set in the Water Molecule

    PubMed Central

    Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo

    2014-01-01

    Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited for modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929

  12. Surface-Enhanced Raman Spectroscopy.

    ERIC Educational Resources Information Center

    Garrell, Robin L.

    1989-01-01

    Reviews the basis for the technique and its experimental requirements. Describes a few examples of the analytical problems to which surface-enhanced Raman spectroscopy (SERS) has been and can be applied. Provides a perspective on the current limitations and frontiers in developing SERS as an analytical technique. (MVL)

  13. Modeling Lexical Borrowability.

    ERIC Educational Resources Information Center

    van Hout, Roeland; Muysken, Pieter

    1994-01-01

    Develops analytical techniques to determine "borrowability," the ease with which a lexical item or category of lexical items can be borrowed by one language from another. These techniques are then applied to Spanish borrowings in Bolivian Quechua on the basis of a set of bilingual texts. (29 references) (MDM)

  14. Statistical Inference for Big Data Problems in Molecular Biophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Arvind; Savol, Andrej; Burger, Virginia

    2012-01-01

    We highlight the role of statistical inference techniques in providing biological insights from analyzing long time-scale molecular simulation data. Technological and algorithmic improvements in computation have brought molecular simulations to the forefront of techniques applied to investigating the basis of living systems. While these longer simulations, increasingly complex and presently reaching petabyte scales, promise a detailed view into microscopic behavior, teasing out the important information has now become a true challenge on its own. Mining this data for important patterns is critical to automating therapeutic intervention discovery, improving protein design, and fundamentally understanding the mechanistic basis of cellular homeostasis.

  15. [A method for inducing standardized spiral fractures of the tibia in the animal experiment].

    PubMed

    Seibold, R; Schlegel, U; Cordey, J

    1995-07-01

    A method for the deliberate weakening of cortical bone has been developed on the basis of an already established technique for creating butterfly fractures. It enables one to create the same type of fracture, i.e., a spiral fracture, every time. The fracturing process is recorded as a force-strain curve. The results of the in vitro investigations form a basis for the preparation of experimental tasks aimed at demonstrating internal fixation techniques and their influence on the vascularity of the bone in simulated fractures. Animal protection law lays down that this fracture model must not fail in animal experiments.

  16. Economical and accurate protocol for calculating hydrogen-bond-acceptor strengths.

    PubMed

    El Kerdawy, Ahmed; Tautermann, Christofer S; Clark, Timothy; Fox, Thomas

    2013-12-23

    A series of density functional/basis set combinations and second-order Møller-Plesset calculations have been used to test their ability to reproduce the trends observed experimentally for the strengths of hydrogen-bond acceptors in order to identify computationally efficient techniques for routine use in the computational drug-design process. The effects of functionals, basis sets, counterpoise corrections, and constraints on the optimized geometries were tested and analyzed, and recommendations (M06-2X/cc-pVDZ and X3LYP/cc-pVDZ with single-point counterpoise corrections or X3LYP/aug-cc-pVDZ without counterpoise) were made for suitable moderately high-throughput techniques.

  17. Structural reliability assessment of the Oman India Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Sharif, A.M.; Preston, R.

    1996-12-31

    Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use, since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.

  18. Increasing farmers' adoption of agricultural index insurance: The search for a better index

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, C. P.

    2015-12-01

    Weather index insurance promises to provide farmers with financial resilience when they are struck by adverse weather conditions, owing to its minimal moral hazard, low transaction cost, and swift compensation. Despite these advantages, index insurance has so far seen a low level of adoption. One of the major causes is the presence of "basis risk": the risk of receiving an insurance payoff that falls short of the actual losses. One source of this basis risk is production basis risk, the probability that the selected weather indexes and their thresholds do not correspond to actual damages. Here, we investigate how to reduce this production basis risk, using current knowledge in non-linear analysis and stochastic modeling from the fields of ecology and hydrology. We demonstrate how the inclusion of rainfall stochasticity can reduce production basis risk while identifying events that do not need to be insured. Through these findings, we show how much farmers' adoption of agricultural index insurance can be improved under different design contexts.
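    Production basis risk can be made concrete with a small Monte Carlo experiment: the probability that the index-triggered payout falls short of a farmer's actual loss. The rainfall distribution, loss function and contract numbers below are all invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
rain = rng.gamma(shape=4.0, scale=50.0, size=n)            # seasonal rain, mm
crop_loss = 10.0 * np.clip(150.0 - rain, 0.0, None)        # drought loss, $
crop_loss += np.clip(rng.normal(0.0, 40.0, n), 0.0, None)  # uninsured damage
payout = np.where(rain < 120.0, 800.0, 0.0)                # index contract, $

# Basis risk: fraction of seasons where the payout falls short of the loss,
# either because the trigger missed a loss or the lump sum was too small.
basis_risk = float(np.mean(crop_loss > payout))
print(round(basis_risk, 3))
```

    Improving the index then amounts to choosing a trigger and payout schedule that shrink this probability, which is exactly what the rainfall-stochasticity modeling in the paper targets.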

  19. The basis function approach for modeling autocorrelation in ecological data.

    PubMed

    Hefley, Trevor J; Broms, Kristin M; Brost, Brian M; Buderman, Frances E; Kay, Shannon L; Scharf, Henry R; Tipton, John R; Williams, Perry J; Hooten, Mevin B

    2017-03-01

    Analyzing ecological data often requires modeling the autocorrelation created by spatial and temporal processes. Many seemingly disparate statistical methods used to account for autocorrelation can be expressed as regression models that include basis functions. Basis functions also enable ecologists to modify a wide range of existing ecological models in order to account for autocorrelation, which can improve inference and predictive accuracy. Furthermore, understanding the properties of basis functions is essential for evaluating the fit of spatial or time-series models, detecting a hidden form of collinearity, and analyzing large data sets. We present important concepts and properties related to basis functions and illustrate several tools and techniques ecologists can use when modeling autocorrelation in ecological data. © 2016 by the Ecological Society of America.
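    The basis-function idea can be sketched directly: absorb a smooth, autocorrelated trend into an ordinary regression by expanding time in basis functions (Gaussian radial basis functions below; splines or Fourier terms are common alternatives). The data are synthetic, not from an ecological study.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 200)
y = np.sin(t) + 0.2 * rng.standard_normal(200)   # smooth signal + noise

knots = np.linspace(0.0, 10.0, 12)               # basis-function centres
X = np.exp(-0.5 * (t[:, None] - knots[None, :]) ** 2)   # Gaussian RBF matrix
X = np.column_stack([np.ones_like(t), X])               # add an intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)            # ordinary regression
fitted = X @ beta

# The fitted surface tracks the true smooth trend closely.
print(round(float(np.corrcoef(fitted, np.sin(t))[0, 1]), 3))
```

    Because the smooth structure is captured by the columns of `X`, the residuals are far closer to independent noise, which is the sense in which basis functions "account for" autocorrelation.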

  20. Evaluation of culture-based techniques and 454 pyrosequencing for the analysis of fungal diversity in potting media and organic fertilizers.

    PubMed

    Al-Sadi, A M; Al-Mazroui, S S; Phillips, A J L

    2015-08-01

    Potting media and organic fertilizers (OFs) are commonly used in agricultural systems. However, there is a lack of studies on the efficiency of culture-based techniques in assessing the level of fungal diversity in these products. A study was conducted to investigate the efficiency of seven culture-based techniques and pyrosequencing for characterizing fungal diversity in potting media and OFs. Fungal diversity was evaluated using serial dilution, direct plating and baiting with carrot slices, potato slices, radish seeds, cucumber seeds and cucumber cotyledons. Identity of all the isolates was confirmed on the basis of the internal transcribed spacer region of the ribosomal RNA (ITS rRNA) sequence data. The direct plating technique was found to be superior over other culture-based techniques in the number of fungal species detected. It was also found to be simple and the least time consuming technique. Comparing the efficiency of direct plating with 454 pyrosequencing revealed that pyrosequencing detected 12 and 15 times more fungal species from potting media and OFs respectively. Analysis revealed that there were differences between potting media and OFs in the dominant phyla, classes, orders, families, genera and species detected. Zygomycota (52%) and Chytridiomycota (60%) were the predominant phyla in potting media and OFs respectively. The superiority of pyrosequencing over cultural methods could be related to the ability to detect obligate fungi, slow growing fungi and fungi that exist at low population densities. The evaluated methods in this study, especially direct plating and pyrosequencing, may be used as tools to help detect and reduce movement of unwanted fungi between countries and regions. © 2015 The Society for Applied Microbiology.

  1. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Libera, D.

    2017-12-01

    Given the labor requirements, water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique in improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast based on split-sample validation. The approach is a dimension reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT model simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
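    The simpler of the two approaches can be sketched as follows: an ordinary least squares bias correction that regresses observations on raw model output and then adjusts the simulation. (The CCA variant extends this to the joint streamflow-load vector; the data here are synthetic, not SWAT output.)

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 50.0, 365)                     # "observed" daily flow
sim = 0.6 * obs + 20.0 + rng.normal(0, 15.0, 365)   # biased model output

A = np.column_stack([np.ones_like(sim), sim])       # intercept + raw output
b, *_ = np.linalg.lstsq(A, obs, rcond=None)         # obs ~ b0 + b1 * sim
corrected = A @ b                                   # bias-corrected series

# With an intercept, OLS matches the observed mean exactly.
print(round(abs(float(obs.mean() - sim.mean())), 1),
      round(abs(float(obs.mean() - corrected.mean())), 1))
```

    This removes the mean bias by construction; the appeal of CCA in the study is that it additionally preserves the observed cross-correlation between streamflow and loads, which independent univariate corrections cannot.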

  2. Use of a microwave diagnostics technique to measure the temperature of an axisymmetric ionized gas flow

    NASA Astrophysics Data System (ADS)

    Tsel'Sov, Iu. G.; Kondrat'ev, A. S.

    1990-12-01

    A method is developed for determining the temperature of an ionized gas on the basis of electron-density sounding. This technique is used to measure the cross-sectional temperature distribution of an axisymmetric ionized gas flow using microwave diagnostics.

  3. Voice Therapy Techniques Adapted to Treatment of Habit Cough: A Pilot Study.

    ERIC Educational Resources Information Center

    Blager, Florence B.; And Others

    1988-01-01

    Individuals with long-standing habit cough having no organic basis can be successfully treated with a combination of psychotherapy and speech therapy. Techniques for speech therapy are adapted from those used with hyperfunctional voice disorders to fit this debilitating laryngeal disorder. (Author)

  4. Suppression of Arabidopsis genes by terminator-less transgene constructs

    USDA-ARS?s Scientific Manuscript database

    Transgene-mediated gene silencing is an important biotechnological and research tool. There are several RNAi-mediated techniques available for silencing genes in plants. The basis of all these techniques is to generate double stranded RNA precursors in the cell, which are recognized by the cellula...

  5. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    PubMed

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the linear system involved. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared with direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems; the sparsity implicit in MRIs is exploited to reconstruct images from significantly undersampled k-space. The challenge, however, is that incoherent artifacts resulting from the random undersampling add noise-like interference to the sparsely represented image, and the recovery algorithms in the literature are not capable of fully removing these artifacts, so a denoising procedure must be introduced to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which further improves the quality of image reconstruction by removing noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It has been illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
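
    The core of the SVT denoising step is a soft threshold on the singular values of the image matrix. A minimal sketch of that operator, assuming the singular values have already been computed by an SVD routine (the threshold value tau below is illustrative, not one used in the paper):

```python
def shrink_singular_values(sigmas, tau):
    """Soft-threshold singular values: values <= tau vanish, which lowers
    the effective rank of the reconstructed matrix (nuclear-norm shrinkage)."""
    return [max(s - tau, 0.0) for s in sigmas]

def effective_rank(sigmas, tau):
    """Model order retained after thresholding."""
    return sum(1 for s in sigmas if s > tau)

# Dominant singular values survive; small, noise-dominated ones are removed.
shrunk = shrink_singular_values([10.0, 4.0, 1.0, 0.3], 0.5)
```

    Rebuilding the image from only the surviving singular values and vectors yields the lower-rank, denoised reconstruction the abstract describes.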

  6. Massage--the scientific basis of an ancient art: Part 1. The techniques.

    PubMed Central

    Goats, G C

    1994-01-01

    Manual massage is a long established and effective therapy used for the relief of pain, swelling, muscle spasm and restricted movement. Latterly, various mechanical methods have appeared to complement the traditional manual techniques. Both manual and mechanical techniques are described systematically, together with a review of indications for use in sports medicine. PMID:8000809

  7. Integrand-level reduction of loop amplitudes by computational algebraic geometry methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yang

    2012-09-01

    We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts via polynomial fitting techniques. The basis-determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, with the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2ɛ dimensions, and we present some two- and three-loop examples of applications of this algorithm.

  8. Reduced Order Methods for Prediction of Thermal-Acoustic Fatigue

    NASA Technical Reports Server (NTRS)

    Przekop, A.; Rizzi, S. A.

    2004-01-01

    The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing random nonlinear vibrations in the presence of thermal loading. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.
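
    The rainflow counting and linear damage accumulation steps can be sketched as follows. The counting rule is the simplified three-point form of ASTM E1049, and the S-N curve constants in the damage sum are hypothetical placeholders, not values from the study.

```python
def rainflow(series):
    """Simplified three-point rainflow count of a stress history.
    Returns (full_cycle_ranges, half_cycle_ranges)."""
    # Reduce the history to its turning points (local extrema).
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x              # still moving the same way: extend
        else:
            tp.append(x)
    full, half, stack = [], [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            if len(stack) == 3:
                half.append(y)      # range touching the start: half cycle
                stack.pop(0)
            else:
                full.append(y)      # interior range closes a full cycle
                del stack[-3:-1]
    # Leftover residue counts as half cycles.
    half.extend(abs(b - a) for a, b in zip(stack, stack[1:]))
    return full, half

def miner_damage(full, half, s_f=1000.0, k=5.0):
    """Linear damage accumulation with a hypothetical Basquin curve
    N(S) = (s_f / S)**k; failure is predicted when damage reaches 1."""
    cycles = [(r, 1.0) for r in full] + [(r, 0.5) for r in half]
    return sum(n / (s_f / s) ** k for s, n in cycles if s > 0)

# Classic illustrative stress history from the cycle-counting literature.
full, half = rainflow([-2, 1, -3, 5, -1, 3, -4, 4, -2])
```

    In the study, the series fed to the counter is the recovered physical stress history, and the counted ranges enter an S-N relation fitted to the material rather than the placeholder above.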

  9. A Nonlinear Reduced Order Method for Prediction of Acoustic Fatigue

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.

    2006-01-01

    The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing geometrically nonlinear random vibrations. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.

  10. Non-Destructive Evaluation of Grain Structure Using Air-Coupled Ultrasonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belvin, A. D.; Burrell, R. K.; Cole, E.G.

    2009-08-01

    Cast material has a grain structure that is relatively non-uniform. There is a desire to evaluate the grain structure of this material non-destructively. Traditionally, grain size measurement is a destructive process involving the sectioning and metallographic imaging of the material. Generally, this is performed on a representative sample on a periodic basis. Sampling is inefficient and costly. Furthermore, the resulting data may not provide an accurate description of the entire part's average grain size or grain size variation. This project is designed to develop a non-destructive acoustic scanning technique, using Chirp waveforms, to quantify average grain size and grain size variation across the surface of a cast material. A Chirp is a signal in which the frequency increases or decreases over time (frequency modulation). As a Chirp passes through a material, the material's grains reduce the signal (attenuation) by absorbing the signal energy. Geophysics research has shown a direct correlation with Chirp wave attenuation and mean grain size in geological structures. The goal of this project is to demonstrate that Chirp waveform attenuation can be used to measure grain size and grain variation in cast metals (uranium and other materials of interest). An off-axis ultrasonic inspection technique using air-coupled ultrasonics has been developed to determine grain size in cast materials. The technique gives a uniform response across the volume of the component. This technique has been demonstrated to provide generalized trends of grain variation over the samples investigated.
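
    The measurement principle, attenuation inferred from the energy a chirp loses in transit, can be sketched as follows. The sweep parameters, path length, and loss coefficient are illustrative, not values from the project, and a uniform loss factor stands in for the real frequency-dependent, grain-size-correlated attenuation.

```python
import math

def linear_chirp(f0_hz, f1_hz, duration_s, fs_hz):
    """Sample a linear chirp whose frequency sweeps f0 -> f1 (FM signal)."""
    k = (f1_hz - f0_hz) / duration_s            # sweep rate, Hz per second
    return [math.sin(2 * math.pi * (f0_hz * t + 0.5 * k * t * t))
            for t in (i / fs_hz for i in range(int(duration_s * fs_hz)))]

def energy(signal):
    return sum(s * s for s in signal)

def attenuation_np_per_mm(sent, received, path_mm):
    """Attenuation coefficient (nepers/mm) from the sent/received energy ratio."""
    return 0.5 * math.log(energy(sent) / energy(received)) / path_mm

# Simulate a chirp crossing 50 mm of material with a uniform 0.02 Np/mm loss.
sent = linear_chirp(100e3, 400e3, 1e-3, 2e6)
received = [s * math.exp(-0.02 * 50) for s in sent]
alpha = attenuation_np_per_mm(sent, received, 50)
```

    Scanning such a measurement across the part surface gives the map of attenuation, and hence grain-size variation, that the project aims for.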

  11. The contemporary mindfulness movement and the question of nonself.

    PubMed

    Samuel, Geoffrey

    2015-08-01

    Mindfulness-based stress reduction (MBSR), mindfulness-based cognitive therapy (MBCT), and other "mindfulness"-based techniques have rapidly gained a significant presence within contemporary society. Clearly these techniques, which derive or are claimed to derive from Buddhist meditational practices, meet genuine human needs. However, questions are increasingly raised regarding what these techniques meant in their original context(s), how they have been transformed in relation to their new Western and global field of activity, what might have been lost (or gained) on the way, and how the entire contemporary mindfulness phenomenon might be understood. The article points out that first-generation mindfulness practices, such as MBSR and MBCT, derive from modernist versions of Buddhism, and omit or minimize key aspects of the Buddhist tradition, including the central Buddhist philosophical emphasis on the deconstruction of the self. Nonself (or no self) fits poorly into the contemporary therapeutic context, but is at the core of the Buddhist enterprise from which contemporary "mindfulness" has been abstracted. Instead of focussing narrowly on the practical efficacy of the first generation of mindfulness techniques, we might see them as an invitation to explore the much wider range of practices available in the traditions from which they originate. Rather, too, than simplifying and reducing these practices to fit current Western conceptions of knowledge, we might seek to incorporate more of their philosophical basis into our Western adaptations. This might lead to a genuine and productive expansion of both scientific knowledge and therapeutic possibilities. © The Author(s) 2014.

  12. Reduced-cost linear-response CC2 method based on natural orbitals and natural auxiliary functions

    PubMed Central

    Mester, Dávid

    2017-01-01

    A reduced-cost density fitting (DF) linear-response second-order coupled-cluster (CC2) method has been developed for the evaluation of excitation energies. The method is based on the simultaneous truncation of the molecular orbital (MO) basis and the auxiliary basis set used for the DF approximation. For the reduction of the size of the MO basis, state-specific natural orbitals (NOs) are constructed for each excited state using the average of the second-order Møller–Plesset (MP2) and the corresponding configuration interaction singles with perturbative doubles [CIS(D)] density matrices. After removing the NOs of low occupation number, natural auxiliary functions (NAFs) are constructed [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], and the NAF basis is also truncated. Our results show that, for a triple-zeta basis set, about 60% of the virtual MOs can be dropped, while the size of the fitting basis can be reduced by a factor of five. This results in a dramatic reduction of the computational costs of the solution of the CC2 equations, which are in our approach about as expensive as the evaluation of the MP2 and CIS(D) density matrices. All in all, an average speedup of more than an order of magnitude can be achieved at the expense of a mean absolute error of 0.02 eV in the calculated excitation energies compared to the canonical CC2 results. Our benchmark calculations demonstrate that the new approach enables the efficient computation of CC2 excitation energies for excited states of all types of medium-sized molecules composed of up to 100 atoms with triple-zeta quality basis sets. PMID:28527453
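
    The orbital-selection step, dropping natural orbitals of low occupation number, amounts to a simple threshold on the NO occupations. A sketch, where the occupation numbers and cutoff are illustrative rather than taken from the paper:

```python
def truncate_natural_orbitals(occupations, cutoff=1e-4):
    """Keep NOs whose occupation number exceeds the cutoff; the discarded
    fraction of the virtual space drives the cost reduction of the CC2 step."""
    kept = [n for n in occupations if n > cutoff]
    return kept, 1.0 - len(kept) / len(occupations)

# Illustrative virtual-space occupations spanning several orders of magnitude.
occs = [3e-2, 1e-2, 4e-3, 9e-4, 2e-4, 8e-5, 3e-5, 1e-5, 4e-6, 1e-6]
kept, dropped_fraction = truncate_natural_orbitals(occs)
```

    The same thresholding idea is applied a second time to the natural auxiliary functions, which is how the fitting basis shrinks by the reported factor of five.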

  13. Lattice design and expected performance of the Muon Ionization Cooling Experiment demonstration of ionization cooling

    DOE PAGES

    Bogomilov, M.; Tsenov, R.; Vankova-Kirilova, G.; ...

    2017-06-19

    Muon beams of low emittance provide the basis for the intense, well-characterized neutrino beams necessary to elucidate the physics of flavor at a neutrino factory and to provide lepton-antilepton collisions at energies of up to several TeV at a muon collider. The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate ionization cooling, the technique by which it is proposed to reduce the phase-space volume occupied by the muon beam at such facilities. In an ionization-cooling channel, the muon beam passes through a material in which it loses energy. The energy lost is then replaced using rf cavities. The combined effect of energy loss and reacceleration is to reduce the transverse emittance of the beam (transverse cooling). A major revision of the scope of the project was carried out over the summer of 2014. The revised experiment can deliver a demonstration of ionization cooling. The design of the cooling demonstration experiment will be described together with its predicted cooling performance.

  14. Circumcision-incision orchidopexy: A novel technique for palpable, low inguinal undescended testis.

    PubMed

    Chua, Michael E; Silangcruz, Jan Michael A; Gomez, Odina; Dy, Jun S; Morales, Marcelino L

    2017-11-01

    Given that both orchidopexy and circumcision are commonly done in a single operative setting, we adopted a technique of combined orchidopexy and circumcision using a single circumcision incision. We applied this new technique to boys with palpable, low inguinal cryptorchidism. Here we describe a case series of 7 boys who underwent concurrent orchidopexy via the circumcision site. We present this novel technique and discuss our preliminary outcomes, including the anatomic basis and feasibility. The technique appears to be an alternative for concurrent circumcision and cryptorchid cases with palpable, low inguinal testes.

  15. Spaced antenna drift

    NASA Technical Reports Server (NTRS)

    Royrvik, O.

    1983-01-01

    It has been suggested that the spaced antenna drift (SAD) technique could be successfully used by VHF radars and that it would be superior to a Doppler-beam-swinging (DBS) technique because it would take advantage of the aspect sensitivity of the scattered signal, and might also benefit from returns from single meteors. It appears, however, that the technique suffers from several limitations. On the basis of one SAD experiment performed at the very large Jicamarca radar, it is concluded that the SAD technique can be compared in accuracy to the DBS technique only if small antenna dimensions are used.

  16. Herramientas y tecnicas para corregir composiciones electronicamente (Tools and Techniques for Correcting Compositions Electronically).

    ERIC Educational Resources Information Center

    Larsen, Mark D.

    2001-01-01

    Although most teachers use word processors and electronic mail on a daily basis, they still depend on paper and pencil for correcting their students' compositions. This article suggests some tools and techniques for submitting, editing, and returning written work electronically. (BD) (Author/VWL)

  17. Listening as a Basis for Painting.

    ERIC Educational Resources Information Center

    Kulianin, Anatoly F.

    1980-01-01

    The author, formerly a Soviet art teacher, describes his technique for combining music and painting. After teaching children the fundamentals of music technique and color, he has them experience a piece of music and paint their reactions. One of several articles in this issue on art teaching in other countries. (SJL)

  18. Retrograde pyelogram using the flexible cystoscope.

    PubMed

    Reddy, P K; Hulbert, J C

    1986-12-01

    A retrograde pyelogram was performed on 2 men with the flexible choledochonephroscope and a 5F whistle-tip ureteral catheter. The procedure was done on an outpatient basis with topical anesthesia and patient tolerance was good. The technique is simple and is a useful alternative to the classical rigid cystoscopic technique.

  19. Syntactic Processing in Bilinguals: An fNIRS Study

    ERIC Educational Resources Information Center

    Scherer, Lilian Cristine; Fonseca, Rochele Paz; Amiri, Mahnoush; Adrover-Roig, Daniel; Marcotte, Karine; Giroux, Francine; Senhadji, Noureddine; Benali, Habib; Lesage, Frederic; Ansaldo, Ana Ines

    2012-01-01

    The study of the neural basis of syntactic processing has greatly benefited from neuroimaging techniques. Research on syntactic processing in bilinguals has used a variety of techniques, including mainly functional magnetic resonance imaging (fMRI) and event-related potentials (ERP). This paper reports on a functional near-infrared spectroscopy…

  20. Vis-A-Plan /visualize a plan/ management technique provides performance-time scale

    NASA Technical Reports Server (NTRS)

    Ranck, N. H.

    1967-01-01

    Vis-A-Plan is a bar-charting technique for representing and evaluating project activities on a performance-time basis. This rectilinear method presents the logic diagram of a project as a series of horizontal time bars. It may be used supplementary to PERT or independently.
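
    The rectilinear bar-chart representation can be sketched in a few lines of text rendering; the activity names and times below are hypothetical, with one character per time unit.

```python
def vis_a_plan(activities):
    """Render each (name, start, end) activity as a horizontal time bar."""
    rows = []
    for name, start, end in activities:
        rows.append(f"{name:<10}" + " " * start + "#" * (end - start))
    return "\n".join(rows)

chart = vis_a_plan([("design", 0, 4), ("build", 3, 9), ("test", 8, 12)])
print(chart)
```

    Reading down a column shows which activities overlap at a given time, which is the performance-time view the technique provides alongside, or instead of, a PERT network.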

  1. Reduced nicotine product standards for combustible tobacco: Building an empirical basis for effective regulation

    PubMed Central

    Donny, Eric C.; Hatsukami, Dorothy K.; Benowitz, Neal L.; Sved, Alan F.; Tidey, Jennifer W.; Cassidy, Rachel N.

    2014-01-01

    Introduction: Both the Tobacco Control Act in the U.S. and Article 9 of the Framework Convention on Tobacco Control enable governments to directly address the addictiveness of combustible tobacco by reducing nicotine through product standards. Although nicotine may have some harmful effects, the detrimental health effects of smoked tobacco are primarily due to non-nicotine constituents. Hence, the health effects of nicotine reduction would likely be determined by changes in behavior that result in changes in smoke exposure. Methods: Herein, we review the current evidence on nicotine reduction and discuss some of the challenges in establishing the empirical basis for regulatory decisions. Results: To date, research suggests that very low nicotine content cigarettes produce a desirable set of outcomes, including reduced exposure to nicotine, reduced smoking, and reduced dependence, without significant safety concerns. However, much is still unknown, including the effects of gradual versus abrupt changes in nicotine content, effects in vulnerable populations, and impact on youth. Discussion: A coordinated effort must be made to provide the best possible scientific basis for regulatory decisions. The outcome of this effort may provide the foundation for a novel approach to tobacco control that dramatically reduces the devastating health consequences of smoked tobacco. PMID:24967958

  2. Reduced nicotine product standards for combustible tobacco: building an empirical basis for effective regulation.

    PubMed

    Donny, Eric C; Hatsukami, Dorothy K; Benowitz, Neal L; Sved, Alan F; Tidey, Jennifer W; Cassidy, Rachel N

    2014-11-01

    Both the Tobacco Control Act in the U.S. and Article 9 of the Framework Convention on Tobacco Control enable governments to directly address the addictiveness of combustible tobacco by reducing nicotine through product standards. Although nicotine may have some harmful effects, the detrimental health effects of smoked tobacco are primarily due to non-nicotine constituents. Hence, the health effects of nicotine reduction would likely be determined by changes in behavior that result in changes in smoke exposure. Herein, we review the current evidence on nicotine reduction and discuss some of the challenges in establishing the empirical basis for regulatory decisions. To date, research suggests that very low nicotine content cigarettes produce a desirable set of outcomes, including reduced exposure to nicotine, reduced smoking, and reduced dependence, without significant safety concerns. However, much is still unknown, including the effects of gradual versus abrupt changes in nicotine content, effects in vulnerable populations, and impact on youth. A coordinated effort must be made to provide the best possible scientific basis for regulatory decisions. The outcome of this effort may provide the foundation for a novel approach to tobacco control that dramatically reduces the devastating health consequences of smoked tobacco. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Low radiation dose in computed tomography: the role of iodine

    PubMed Central

    Aschoff, Andrik J; Catalano, Carlo; Krix, Martin; Albrecht, Thomas

    2017-01-01

    Recent approaches to reducing radiation exposure during CT examinations typically utilize automated dose modulation strategies on the basis of lower tube voltage combined with iterative reconstruction (IR) and other dose-saving techniques. Less clearly appreciated is the potentially substantial role that iodinated contrast media (CM) can play in low-radiation-dose CT examinations. Herein we discuss the role of iodinated CM in low-radiation-dose examinations and describe approaches for the optimization of CM administration protocols to further reduce radiation dose and/or CM dose while maintaining image quality for accurate diagnosis. Similar to the higher iodine attenuation obtained at low-tube-voltage settings, high-iodine-signal protocols may permit radiation dose reduction by allowing a lowering of mAs while maintaining the signal-to-noise ratio. This is particularly feasible in first-pass examinations, where a high iodine signal can be achieved by injecting iodine more rapidly. The combination of low tube voltage and IR can also be used to reduce the iodine dose. Here, in optimum contrast injection protocols, the volume of CM administered rather than the iodine concentration should be reduced, since with high-iodine-concentration CM further reductions of iodine dose are achievable for modern first-pass examinations. Moreover, higher concentrations of CM more readily allow reductions of both flow rate and volume, thereby improving the tolerability of contrast administration. PMID:28471242
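
    The trade-off between CM concentration and injection flow rate reduces to simple arithmetic on the iodine delivery rate; the target rate and concentrations below are illustrative, not protocol recommendations.

```python
def iodine_delivery_rate_g_per_s(flow_ml_per_s, conc_mgI_per_ml):
    """Iodine delivery rate in grams of iodine per second."""
    return flow_ml_per_s * conc_mgI_per_ml / 1000.0

def flow_for_target(target_g_per_s, conc_mgI_per_ml):
    """Flow rate (mL/s) needed to reach a target delivery rate."""
    return target_g_per_s * 1000.0 / conc_mgI_per_ml

# For the same illustrative 1.6 gI/s target, a 400 mgI/mL agent needs a
# lower flow rate than a 300 mgI/mL agent.
flow_300 = flow_for_target(1.6, 300.0)
flow_400 = flow_for_target(1.6, 400.0)
```

    Because injection duration is fixed by the scan, the lower flow rate at higher concentration also translates into a smaller injected volume, which is the tolerability argument made above.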

  4. Applying the Lean principles of the Toyota Production System to reduce wait times in the emergency department.

    PubMed

    Ng, David; Vail, Gord; Thomas, Sophia; Schmidt, Nicki

    2010-01-01

    In recognition of patient wait times, and deteriorating patient and staff satisfaction, we set out to improve these measures in our emergency department (ED) without adding any new funding or beds. In 2005 all staff in the ED at Hôtel-Dieu Grace Hospital began a transformation, employing Toyota Lean manufacturing principles to improve ED wait times and quality of care. Lean techniques such as value-stream mapping, just-in-time delivery techniques, workplace organization, reduction of systemic wastes, use of the worker as the source of quality improvement and ongoing refinement of our process steps formed the basis of our project. Our ED has achieved major improvements in departmental flow without adding any additional ED or inpatient beds. The mean registration to physician time has decreased from 111 minutes to 78 minutes. The number of patients who left without being seen has decreased from 7.1% to 4.3%. The length of stay (LOS) for discharged patients has decreased from a mean of 3.6 to 2.8 hours, with the largest decrease seen in our patients triaged at levels 4 or 5 using the Canadian Emergency Department Triage and Acuity Scale. We noted an improvement in ED patient satisfaction scores following the implementation of Lean principles. Lean manufacturing principles can improve the flow of patients through the ED, resulting in greater patient satisfaction along with reduced time spent by the patient in the ED.

  5. Measurement techniques and instruments suitable for life-prediction testing of photovoltaic arrays

    NASA Technical Reports Server (NTRS)

    Noel, G. T.; Sliemers, F. A.; Deringer, G. C.; Wood, V. E.; Wilkes, K. E.; Gaines, G. B.; Carmichael, D. C.

    1978-01-01

    Array failure modes, relevant materials property changes, and primary degradation mechanisms are discussed as a prerequisite to identifying suitable measurement techniques and instruments. Candidate techniques and instruments are identified on the basis of extensive reviews of published and unpublished information. These methods are organized into six measurement categories: chemical, electrical, optical, thermal, mechanical, and other physical. Using specified evaluation criteria, the most promising techniques and instruments for use in life-prediction tests of arrays were selected.

  6. Ultra high speed image processing techniques. [electronic packaging techniques

    NASA Technical Reports Server (NTRS)

    Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.

    1981-01-01

    Packaging techniques for ultra-high-speed image processing were developed. These techniques involve the development of a signal feedthrough technique through LSI/VLSI sapphire substrates. This allows the stacking of LSI/VLSI circuit substrates in a three-dimensional package with greatly reduced length of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.

  7. Effects of replacing dietary starch with neutral detergent-soluble fibre on ruminal fermentation, microbial synthesis and populations of ruminal cellulolytic bacteria using the rumen simulation technique (RUSITEC).

    PubMed

    Zhao, X H; Liu, C J; Liu, Y; Li, C Y; Yao, J H

    2013-12-01

    A rumen simulation technique (RUSITEC) apparatus with eight 800 ml fermenters was used to investigate the effects of replacing dietary starch with neutral detergent-soluble fibre (NDSF) by inclusion of sugar beet pulp in diets on ruminal fermentation, microbial synthesis and populations of ruminal cellulolytic bacteria. Experimental diets contained 12.7, 16.4, 20.1 or 23.8% NDSF substituted for starch on a dry matter basis. The experiment was conducted over two independent 15-day incubation periods with the last 8 days used for data collection. There was a tendency that 16.4% NDSF in the diet increased the apparent disappearance of organic matter (OM) and neutral detergent fibre (NDF). Increasing dietary NDSF level increased carboxymethylcellulase and xylanase activity in the solid fraction and apparent disappearance of acid detergent fibre (ADF) but reduced the 16S rDNA copy numbers of Ruminococcus albus in both liquid and solid fractions and R. flavefaciens in the solid fraction. The apparent disappearance of dietary nitrogen (N) was reduced by 29.6% with increased dietary NDSF. Substituting NDSF for starch appeared to increase the ratios of acetate/propionate and methane/volatile fatty acids (VFA) (mol/mol). Replacing dietary starch with NDSF reduced the daily production of ammonia-N and increased the growth of the solid-associated microbial pellets (SAM). Total microbial N flow and efficiency of microbial synthesis (EMS), expressed as g microbial N/kg OM fermented, tended to increase with increased dietary NDSF, but the numerical increase did not continue as dietary NDSF exceeded 20.1% of diet DM. Results suggested that substituting NDSF for starch up to 16.4% of diet DM increased digestion of nutrients (except for N) and microbial synthesis, and further increases (from 16.4% to 23.8%) in dietary NDSF did not repress microbial synthesis but did significantly reduce digestion of dietary N. © 2012 Blackwell Verlag GmbH.

  8. Optogenetics in the Teaching Laboratory: Using Channelrhodopsin-2 to Study the Neural Basis of Behavior and Synaptic Physiology in "Drosophila"

    ERIC Educational Resources Information Center

    Pulver, Stefan R.; Hornstein, Nicholas J.; Land, Bruce L.; Johnson, Bruce R.

    2011-01-01

    Here we incorporate recent advances in "Drosophila" neurogenetics and "optogenetics" into neuroscience laboratory exercises. We used the light-activated ion channel channelrhodopsin-2 (ChR2) and tissue-specific genetic expression techniques to study the neural basis of behavior in "Drosophila" larvae. We designed and implemented exercises using…

  9. Materials Toward the Comparative Analysis of Presentation Techniques. Project TACT Working Paper 2.

    ERIC Educational Resources Information Center

    Bossert, William H.; Oettinger, Anthony G.

    One of the objectives of Project TACT is to determine the potential of a gamut of educational media. The working papers in this set have a basis in pictorial information produced through computer graphics. These papers are intended to serve as a basis for sharpening questions, delineating the context within which the answers might be significant,…

  10. A coherent discrete variable representation method on a sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Hua -Gen

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  11. A coherent discrete variable representation method on a sphere

    DOE PAGES

    Yu, Hua -Gen

    2017-09-05

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  12. 36 CFR 223.64 - Appraisal on a lump-sum value or rate per unit of measure basis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Parks, Forests, and Public Property; Forest Service; Contracts, Appraisal and Pricing; § 223.64 Appraisal on a lump-sum value or rate per unit of measure basis: "... costs or selling values subsequent to the rate redetermination which reduce conversion value to less..."

  13. Matrix basis for plane and modal waves in a Timoshenko beam

    PubMed Central

    Tolfo, Daniela de Rosso; Tonetto, Leticia

    2016-01-01

Plane waves and modal waves of the Timoshenko beam model are characterized in closed form by introducing a robust matrix basis that behaves according to the nature of the frequency and of the wave or modal numbers. These new characterizations are given in terms of a finite number of coupling matrices and closed-form generating scalar functions. Through Liouville's technique, the latter are well behaved at critical or static situations. Eigenanalysis is formulated for exponential and modal waves. Modal waves are superpositions of four plane waves, but there are plane waves that cannot be modal waves. Reflected and transmitted waves at an interface point are formulated in matrix terms, regardless of whether the situation is conservative or dissipative. The matrix representation of modal waves is used in a crack problem for determining the reflected and transmitted matrices. Their Euclidean norms are seen to be dominated by certain components at low and high frequencies. The matrix basis technique is also used with a non-local Timoshenko model and with the wave interaction with a boundary. The matrix basis allows reflected and transmitted waves to be characterized in spectral and non-spectral form. PMID:28018668

  14. Utilization of the Space Vision System as an Augmented Reality System For Mission Operations

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Bowen, Charles

    2003-01-01

Augmented reality is a technique whereby computer-generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer-generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA-funded project entitled "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the question will remain whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for developing symbology (graphics and annotation) optimal for human performance and for developing optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, the registration problem is reduced and the computer model of that reality is improved, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process: the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. That link is the basis for this proposed Space Human Factors Engineering project: the determination of the display symbology, within the performance limits of the Space Vision System, that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard the shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.

  15. Improved techniques for outgoing wave variational principle calculations of converged state-to-state transition probabilities for chemical reactions

    NASA Technical Reports Server (NTRS)

    Mielke, Steven L.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    Improved techniques and well-optimized basis sets are presented for application of the outgoing wave variational principle to calculate converged quantum mechanical reaction probabilities. They are illustrated with calculations for the reactions D + H2 yields HD + H with total angular momentum J = 3 and F + H2 yields HF + H with J = 0 and 3. The optimization involves the choice of distortion potential, the grid for calculating half-integrated Green's functions, the placement, width, and number of primitive distributed Gaussians, and the computationally most efficient partition between dynamically adapted and primitive basis functions. Benchmark calculations with 224-1064 channels are presented.

  16. Fast online generalized multiscale finite element method using constraint energy minimization

    NASA Astrophysics Data System (ADS)

    Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat

    2018-02-01

Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms and boundary conditions. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced in which multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desirable to need only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve on this. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting the oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, which is better than that of our previous methods. We also develop an adaptive algorithm that enriches the basis in selected regions with large residuals. For our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.

  17. [Anti-aging medicine: science or marketing ?].

    PubMed

    Cogan, E

    2015-09-01

Anti-aging medicine is self-defined as a preventive medicine combining nutritional recommendations, dietary supplements, hormone prescriptions, and various aesthetic techniques. Its essential aim is to reduce the risks of aging, psychically, physically, and aesthetically. Although many scientific studies in animals or in vitro models have demonstrated the deleterious role of oxidative stress and of hormonal, vitamin, or trace-element deficiencies, the transposition of these findings to humans is marginal and does not justify the therapeutic proposals advocated by anti-aging medicine. These practices mostly lack any scientific basis, in both the diagnostic and therapeutic fields. These approaches are particularly costly for gullible patients in search of well-being, who are abused by carefully organized marketing involving the tacit complicity of doctors, laboratories, and firms producing hormones, dietary supplements, and various substances devoted to aesthetic purposes.

  18. 3D-printed guiding templates for improved osteosarcoma resection

    NASA Astrophysics Data System (ADS)

    Ma, Limin; Zhou, Ye; Zhu, Ye; Lin, Zefeng; Wang, Yingjun; Zhang, Yu; Xia, Hong; Mao, Chuanbin

    2016-03-01

Osteosarcoma resection is challenging due to the variable location of tumors and their proximity to surrounding tissues. It also carries a high risk of postoperative complications. To overcome the challenge of precise osteosarcoma resection, computer-aided design (CAD) was used to design patient-specific guiding templates for osteosarcoma resection on the basis of computed tomography (CT) scans and magnetic resonance imaging (MRI) of the osteosarcomas of human patients. A 3D printing technique was then used to fabricate the guiding templates. The guiding templates were used to guide the osteosarcoma surgery, leading to more precise resection of the tumorous bone and implantation of the bone implants, less blood loss, shorter operation time, and reduced radiation exposure during the operation. Follow-up studies show that the patients recovered well, reaching a mean Musculoskeletal Tumor Society score of 27.125.

  19. Large-area multiplexed sensing using MEMS and fiber optics

    NASA Astrophysics Data System (ADS)

    Miller, Michael B.; Clark, Richard L., Jr.; Bell, Clifton R.; Russler, Patrick M.

    2000-06-01

Micro-electro-mechanical (MEMS) technology offers the ability to implement local and independent sensing and actuation functions through the coordinated response of discrete micro-electro-mechanical 'basis function' elements. The small size of micromechanical components, coupled with the ability to reduce costs through volume manufacturing techniques, opens up significant potential not only in military applications such as flight and engine monitoring and control, but also in autonomous vehicle control, smart munitions, airborne reconnaissance, LADAR, missile guidance, and even in intelligent transportation systems and automotive guidance applications. In this program, Luna Innovations is developing a flexible, programmable interface which can be integrated directly with different types of MEMS sensors and then used to multiplex many sensors on a single optical fiber, providing a unique combination of functions that will allow larger quantities of sensory input with better resolution than ever before possible.

  20. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    NASA Technical Reports Server (NTRS)

    Henze, Chris

    1999-01-01

Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.
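
    The idea of a condition query as a locus in a derived space can be sketched in a few lines of NumPy; the vector field and threshold below are our own illustration, not taken from the paper:

```python
import numpy as np

# Illustrative 2-D vector field v = (u, w) sampled on a grid.
ys, xs = np.mgrid[-2:2:201j, -2:2:201j]
u = xs**2 - 1.0                     # vanishes along x = +/- 1
w = ys                              # vanishes along y = 0

# Derived space: map every grid point to its field magnitude.
speed = np.hypot(u, w)

# Geometric condition query: the half-space "speed < eps" in the derived
# space selects candidate critical points of the field.
mask = speed < 0.05
pts = np.column_stack([xs[mask], ys[mask]])
```

    Here the selected locus clusters around the field's two zeros at (+/-1, 0); richer queries (on vorticity, helicity, etc.) simply add coordinates to the derived space and partition it with other simple geometric primitives.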

  1. Psychological intervention for a child exposed to murder.

    PubMed

    Rupa, Megha; Hirisave, Uma; Srinath, Shoba

    2014-05-01

This report describes the process of psychotherapy for a 7-y-old boy who witnessed the gruesome murder of his mother by his father. Expressive therapy techniques such as play, art, and storytelling were used to help the child emote and achieve independence and emotional maturity. The child was seen as an in-patient for 3 mo on a daily basis, followed by weekly and subsequently bimonthly follow-ups. During the ward stay, aggression towards other children and the grandmother reduced significantly. The child was able to verbalize the irreversibility and inevitability of death and had developed healthy ways to resolve his grief. In subsequent follow-ups, although some behavior problems persisted, gains from therapy generalized to help him deal with challenges of real life, such as a constantly lurking fear of the father returning from prison.

  2. "Idiopathic" mental retardation and new chromosomal abnormalities

    PubMed Central

    2010-01-01

Mental retardation is a heterogeneous condition, affecting 1-3% of the general population. In the last few years, several emerging clinical entities have been described, thanks to the advent of the newest genetic techniques, such as array Comparative Genomic Hybridization. The detection of cryptic microdeletion/microduplication abnormalities has allowed genotype-phenotype correlations, delineating the recognizable syndromic conditions that are herein reviewed. With the aim of providing Paediatricians with a combined clinical and genetic approach to the child with cognitive impairment, a practical diagnostic algorithm is also illustrated. The use of microarray platforms has further reduced the percentage of "idiopathic" forms of mental retardation, which previously accounted for about half of all cases. We discuss the putative pathways at the basis of the remaining "pure idiopathic" forms of mental retardation, highlighting possible environmental and epigenetic mechanisms as causes of altered cognition. PMID:20152051

  3. Effect of climate change on marine ecosystems

    NASA Astrophysics Data System (ADS)

    Vikebo, F. B.; Sundby, S.; Aadlandsvik, B.; Fiksen, O.

    2003-04-01

As part of the INTEGRATION project, headed by the Potsdam Institute for Climate Impact Research and funded by the German Research Council, the impact of climate change scenarios on marine fish populations will be addressed on a specific population basis, focusing on fish populations in the northern North Atlantic with special emphasis on cod. The approach taken will mainly be a modelling study supported by analysis of existing data on fish stocks and climate. Through down-scaling and nesting techniques, various climate change scenarios with reduced THC in the North Atlantic will be investigated at higher spatial resolution for selected shelf areas. The hydrodynamical model used for the regional ocean modeling is ROMS (http://marine.rutgers.edu/po/models/roms/). An individual-based model will be implemented in the larval drift module to simulate growth of the larvae along their drift paths.

  4. Symmetrized density matrix renormalization group algorithm for low-lying excited states of conjugated carbon systems: Application to 1,12-benzoperylene and polychrysene

    NASA Astrophysics Data System (ADS)

    Prodhan, Suryoday; Ramasesha, S.

    2018-05-01

    The symmetry adapted density matrix renormalization group (SDMRG) technique has been an efficient method for studying low-lying eigenstates in one- and quasi-one-dimensional electronic systems. However, the SDMRG method had bottlenecks involving the construction of linearly independent symmetry adapted basis states as the symmetry matrices in the DMRG basis were not sparse. We have developed a modified algorithm to overcome this bottleneck. The new method incorporates end-to-end interchange symmetry (C2) , electron-hole symmetry (J ) , and parity or spin-flip symmetry (P ) in these calculations. The one-to-one correspondence between direct-product basis states in the DMRG Hilbert space for these symmetry operations renders the symmetry matrices in the new basis with maximum sparseness, just one nonzero matrix element per row. Using methods similar to those employed in the exact diagonalization technique for Pariser-Parr-Pople (PPP) models, developed in the 1980s, it is possible to construct orthogonal SDMRG basis states while bypassing the slow step of the Gram-Schmidt orthonormalization procedure. The method together with the PPP model which incorporates long-range electronic correlations is employed to study the correlated excited-state spectra of 1,12-benzoperylene and a narrow mixed graphene nanoribbon with a chrysene molecule as the building unit, comprising both zigzag and cove-edge structures.

  5. Communication: A novel implementation to compute MP2 correlation energies without basis set superposition errors and complete basis set extrapolation.

    PubMed

    Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario

    2017-06-07

    By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.

  6. A Study on Gröbner Basis with Inexact Input

    NASA Astrophysics Data System (ADS)

    Nagasaka, Kosaku

The Gröbner basis is one of the most important tools in recent symbolic algebraic computation. However, computing a Gröbner basis for a given polynomial ideal is not easy, and it is not numerically stable if the polynomials have inexact coefficients. In this paper, we study what we should obtain when computing a Gröbner basis with inexact coefficients, and we introduce a naive method to compute a Gröbner basis by reduced row echelon form, for the ideal generated by a given polynomial set having a priori errors on its coefficients.
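
    For generators that happen to be linear, the row-echelon view of Gröbner basis computation coincides with Gaussian elimination, which makes the connection easy to sketch in SymPy (the example polynomials are our own, not from the paper):

```python
import sympy as sp

x, y = sp.symbols("x y")
f1, f2 = x + y - 3, x - y - 1

# Symbolic Groebner basis (lexicographic order) for reference.
G = sp.groebner([f1, f2], x, y, order="lex")

# Linear-algebra view: stack coefficients over the monomials (x, y, 1)
# and take the reduced row echelon form; each nonzero row of the RREF
# corresponds to a basis polynomial, here {x - 2, y - 1}.
M = sp.Matrix([[1, 1, -3], [1, -1, -1]])
R, pivots = M.rref()
basis_from_rref = [R[r, 0] * x + R[r, 1] * y + R[r, 2] for r in range(R.rows)]
```

    With inexact coefficients the RREF step is exactly where numerical instability enters, which is what motivates the paper's analysis of what a "Gröbner basis" should mean in that setting.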

  7. Application of Multi-Criteria Decision Making (MCDM) Technique for Gradation of Jute Fibres

    NASA Astrophysics Data System (ADS)

    Choudhuri, P. K.

    2014-12-01

Multi-Criteria Decision Making (MCDM) is a branch of Operations Research (OR) with a comparatively short history of about 40 years. It is widely used in engineering, banking, and policy-making, and it can also be applied to everyday decisions such as selecting a car to purchase or choosing a bride or groom. Various MCDM methods, namely the Weighted Sum Model (WSM), Weighted Product Model (WPM), Analytic Hierarchy Process (AHP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and Elimination and Choice Translating Reality (ELECTRE), are available for solving decision-making problems, each with its own limitations, and it is very difficult to decide which MCDM method is best. MCDM methods are prospective quantitative approaches for solving decision problems involving a finite number of alternatives and criteria. Very few research works in textiles have used this technique, particularly where choosing among several alternatives is the main problem and the criteria are conflicting in nature. Grading jute fibres on the basis of criteria such as strength, root content, defects, colour, density, and fineness is an important task. The MCDM technique provides ample scope for grading jute fibres, or for ranking several varieties, with a particular objective in view and on the basis of selected criteria and their relative weights. The present paper attempts to apply the multiplicative AHP method of multi-criteria decision making to determine the quality values of selected jute fibres on the basis of the important criteria stated above and to rank them accordingly. Good agreement in ranking is observed between the existing Bureau of Indian Standards (BIS) grading and the proposed method.
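
    The multiplicative (weighted product) scoring at the heart of this kind of AHP variant is straightforward to sketch; the fibre lots, criterion values, and weights below are invented for illustration only:

```python
import numpy as np

# Rows: jute fibre lots; columns: strength, root content, defects, fineness.
X = np.array([
    [48.0, 2.0, 1.5, 2.5],   # lot A
    [42.0, 4.5, 3.0, 3.1],   # lot B
    [45.0, 3.0, 2.0, 2.8],   # lot C
])
w = np.array([0.40, 0.25, 0.20, 0.15])           # criterion weights, sum to 1
benefit = np.array([True, False, False, False])  # higher-is-better flags

# Invert cost criteria so that larger always means better, then scale
# each criterion to (0, 1] by its column maximum.
Xn = np.where(benefit, X, 1.0 / X)
Xn = Xn / Xn.max(axis=0)

scores = np.prod(Xn ** w, axis=1)   # weighted product model score
ranking = np.argsort(-scores)       # best lot first
```

    In this toy data, lot A is best or tied-best on every criterion, so it ranks first with a normalized score of 1, while lot B, worst on every criterion, ranks last; conflicting criteria would make the weights decisive.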

  8. The Reduced Basis Method in Geosciences: Practical examples for numerical forward simulations

    NASA Astrophysics Data System (ADS)

    Degen, D.; Veroy, K.; Wellmann, F.

    2017-12-01

Because of the highly heterogeneous character of the earth's subsurface, the complex coupling of thermal, hydrological, mechanical, and chemical processes, and the limited accessibility of the subsurface, we face high-dimensional problems associated with large uncertainties in the geosciences. Performing the necessary uncertainty quantification with a reasonable number of parameters is often not possible because of the high-dimensional character of the problem. We therefore present the reduced basis (RB) method, a model order reduction (MOR) technique that constructs low-order approximations to, for instance, the finite element (FE) space. We use the RB method to address these computationally challenging simulations because it significantly reduces the number of degrees of freedom. The RB method is decomposed into an offline and an online stage, allowing the expensive pre-computations to be made beforehand so that real-time results are available during field campaigns. Generally, the RB approach is most beneficial in the many-query and real-time contexts. We illustrate the advantages of the RB method for the geosciences through two examples of numerical forward simulations. The first example, a geothermal conduction problem, demonstrates the implementation of the RB method for a steady-state case. The second example, a Darcy flow problem, shows the benefits for transient scenarios. In both cases, a quality evaluation of the approximations is given, and the runtimes of the FE and RB simulations are compared. We emphasize the advantages of this method for repetitive simulations by showing the speed-up of the RB solution relative to the FE solution. Finally, we demonstrate how the implementation can be used on high-performance computing (HPC) infrastructures and evaluate its performance there, pointing out in particular its scalability, which yields optimal usage on both HPC infrastructures and normal workstations.
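
    A minimal offline/online reduced basis cycle for an affinely parameterized problem can be sketched as follows; the 1-D conduction-like system, parameter range, and basis size are our own illustration, not the paper's geothermal or Darcy models:

```python
import numpy as np

# Affine parametric system A(mu) = A0 + mu*A1, with A(mu) u = f: a 1-D
# conduction-like stiffness matrix whose right half has conductivity mu.
n = 200
A0 = (np.diag(2.0 * np.ones(n))
      - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
A1 = np.zeros((n, n))
A1[n // 2:, n // 2:] = A0[n // 2:, n // 2:]
f = np.ones(n)

def solve_full(mu):                      # the expensive "FE" solve
    return np.linalg.solve(A0 + mu * A1, f)

# Offline stage: snapshots over a training set, POD basis via SVD,
# and pre-projected operators (done once, beforehand).
S = np.column_stack([solve_full(mu) for mu in np.linspace(0.1, 10.0, 20)])
V = np.linalg.svd(S, full_matrices=False)[0][:, :5]   # 5 basis vectors
A0r, A1r, fr = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f

def solve_rb(mu):                        # online stage: only a 5x5 solve
    return V @ np.linalg.solve(A0r + mu * A1r, fr)

mu_test = 3.3
err = (np.linalg.norm(solve_rb(mu_test) - solve_full(mu_test))
       / np.linalg.norm(solve_full(mu_test)))
```

    The online solve touches only 5x5 reduced operators, which is what makes the many-query and real-time settings described in the abstract tractable.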

  9. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement and a preconditioned conjugate-gradient (PCG) linear solver with a multilevel preconditioner, for finding several eigenvalues of generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. The presented results of numerical experiments confirm that the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities whose spatial dimensions are large with respect to the wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance compared to methods that rely on exact linear solves, indicating the tremendous potential of the 'inexact solve' concept. Finally, the scheme that generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
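
    The shift-invert idea (factor K - sigma*M once, then run Lanczos on the inverted operator so that eigenvalues nearest the shift converge fastest) is available off the shelf in SciPy; a small generalized eigenproblem sketch with our own 1-D FEM matrices, not the paper's cavity problems:

```python
import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import eigsh

# Generalized problem K x = lambda M x from linear FEM on (0, 1) with
# homogeneous Dirichlet ends; the continuous eigenvalues are (k*pi)^2.
n = 500
h = 1.0 / (n + 1)
K = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h
M = sparse.diags([h / 6.0, 4.0 * h / 6.0, h / 6.0], [-1, 0, 1],
                 shape=(n, n), format="csc")

# Passing sigma triggers shift-invert mode: (K - sigma*M) is factored
# once and Lanczos then finds the eigenvalues nearest the shift.
vals = np.sort(eigsh(K, k=4, M=M, sigma=0.0, which="LM",
                     return_eigenvectors=False))
```

    Exact factorization is used here; the paper's contribution is precisely to replace those exact inner solves with inexact, preconditioned iterative ones while keeping the outer Lanczos process convergent.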

  10. Computing single step operators of logic programming in radial basis function neural networks

    NASA Astrophysics Data System (ADS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function Tp: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we establish a new technique to compute the single-step operators of logic programs in radial basis function neural networks. To do so, we propose a new technique to generate the training data sets of single-step operators; the training data sets are used to build the neural networks. We use recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we use the particle swarm optimization algorithm to train the networks.
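
    The Tp operator itself is specific to logic programs, but the core mechanic (train an RBF network on an operator's graph, then iterate the trained network recurrently to its fixed point) can be sketched with a generic contractive map; here cos is our stand-in operator and least squares replaces the paper's particle swarm training:

```python
import numpy as np

# Gaussian RBF network trained by linear least squares to mimic a
# contractive operator g = cos, whose fixed point is ~0.7390851.
centers = np.linspace(-2.0, 2.0, 25)
width = 0.4

def phi(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.exp(-((x[:, None] - centers) ** 2) / (2.0 * width**2))

X = np.linspace(-2.0, 2.0, 200)                       # training inputs
weights = np.linalg.lstsq(phi(X), np.cos(X), rcond=None)[0]

def net(x):                       # the trained network as the operator
    return float(phi(x) @ weights)

x = 1.0
for _ in range(100):              # recurrent iteration to the fixed point
    x = net(x)
```

    Because the trained network closely approximates the contractive operator, iterating it converges to (nearly) the operator's fixed point, which mirrors how the recurrent RBF network reaches the steady state of Tp.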

11. Development of a surface insolation estimation technique suitable for application of polar orbiting satellite data

    NASA Technical Reports Server (NTRS)

    Davis, P. A.; Penn, L. M. (Principal Investigator)

    1981-01-01

A technique is developed for estimating total daily insolation on the basis of data derivable from operational polar-orbiting satellites. Although surface insolation and meteorological observations are used in the development, the algorithm's application is constrained by the infrequent daytime polar-orbiter coverage.

  12. Three Techniques to Help Students Teach Themselves Concepts in Environmental Geochemistry.

    ERIC Educational Resources Information Center

    Brown, I. Foster

    1984-01-01

    Describes techniques in which students learn to: (1) create elemental "fairy tales" based on the geochemical behavior of elements and on imagination to integrate concepts; (2) to visually eliminate problems of bias; and (3) to utilize multiple working hypotheses as a basis for testing concepts of classification and distinguishing…

  13. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
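
    The exterior penalty approach mentioned above (add r * max(g, 0)^2 to the objective and increase r over successive unconstrained solves) is easy to sketch on a toy constrained minimum; the objective and constraint below are illustrative, not the stiffened-cylinder model:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f = (x1 - 2)^2 + (x2 - 1)^2 subject to g = x1 + x2 - 2 <= 0.
# The unconstrained minimum (2, 1) is infeasible; the constrained
# optimum sits on the boundary at (1.5, 0.5).
def penalized(x, r):
    g = x[0] + x[1] - 2.0
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2 + r * max(g, 0.0) ** 2

# Exterior penalty: warm-start each unconstrained solve from the last,
# increasing the penalty weight r so iterates approach the constraint
# boundary from the infeasible side.
x = np.array([3.0, 3.0])
for r in [1.0, 1e2, 1e4, 1e6]:
    x = minimize(penalized, x, args=(r,), method="Nelder-Mead",
                 options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000}).x
```

    An interior penalty method would instead approach the boundary from the feasible side via a barrier term, which matches the paper's observation that the two behave differently on this class of problems.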

  14. A regional ionospheric TEC mapping technique over China and adjacent areas on the basis of data assimilation

    NASA Astrophysics Data System (ADS)

    Aa, Ercha; Huang, Wengeng; Yu, Shimei; Liu, Siqing; Shi, Liqin; Gong, Jiancun; Chen, Yanhong; Shen, Hua

    2015-06-01

    In this paper, a regional total electron content (TEC) mapping technique over China and adjacent areas (70°E-140°E and 15°N-55°N) is developed on the basis of a Kalman filter data assimilation scheme driven by Global Navigation Satellite Systems (GNSS) data from the Crustal Movement Observation Network of China and International GNSS Service. The regional TEC maps can be generated accordingly with the spatial and temporal resolution being 1°×1° and 5 min, respectively. The accuracy and quality of the TEC mapping technique have been validated through the comparison with GNSS observations, the International Reference Ionosphere model values, the global ionosphere maps from Center for Orbit Determination of Europe, and the Massachusetts Institute of Technology Automated Processing of GPS TEC data from Madrigal database. The verification results indicate that great systematic improvements can be obtained when data are assimilated into the background model, which demonstrates the effectiveness of this technique in providing accurate regional specification of the ionospheric TEC over China and adjacent areas.
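
    A single Kalman analysis step of the kind underlying such TEC assimilation schemes can be sketched as follows; the grid size, covariances, and observation values are invented for illustration, not taken from the paper's configuration:

```python
import numpy as np

# One Kalman analysis step: blend a background TEC field (e.g. from a
# climatological model) with sparse GNSS-derived TEC observations.
n = 50                                        # grid points along one arc
idx = np.arange(n)
xb = np.full(n, 20.0)                         # background TEC (TECU)
# Background error covariance with Gaussian spatial correlation.
B = 4.0 * np.exp(-((idx[:, None] - idx[None, :]) / 5.0) ** 2)
obs_idx = np.array([10, 25, 40])              # observed grid points
H = np.zeros((len(obs_idx), n))
H[np.arange(len(obs_idx)), obs_idx] = 1.0     # observation operator
R = np.eye(len(obs_idx))                      # observation error covariance
y = np.array([26.0, 24.0, 27.0])              # observed TEC values (TECU)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman gain
xa = xb + K @ (y - H @ xb)                    # analysis state
```

    The analysis pulls the background toward the observations at and near the observed points, while far-away grid points stay close to the background, which is the "systematic improvement over the background model" behavior the validation in the paper reports.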

  15. A probabilistic technique for the assessment of complex dynamic system resilience

    NASA Astrophysics Data System (ADS)

    Balchanos, Michael Gregory

    In the presence of operational uncertainty, one of the greatest challenges in systems engineering is to ensure system effectiveness, mission capability and survivability for large scale, complex system architectures. Historic events such as the 2003 Northeastern Blackout, and the 2005 Hurricane Katrina, have underlined the great importance of system safety, and survivability. With safety management currently applied on a reactive basis to emerging incidents and risk challenges, there is a paradigm shift from passive, reactive and diagnosis-based approaches to the development of architectures that will autonomously manage safety and survivability through active, proactive and prognosis-based engineering solutions. The shift aims to bring safety considerations early in the engineering design process, in order to reduce retrofitting and additional safety certification costs, increase flexibility in risk management, and essentially make safety be "built-in" the design. As a possible enabling research direction, resilience engineering is an emerging discipline, pertinent to safety management, which offers alternative insights on the design of more safe and survivable system architectures. Conceptually, resilience engineering brings new perspectives on the understanding of system safety, accidents, failures, performance degradations and risk. A resilient system can "absorb" the impact of change due to unexpected disturbances, while it "adapts" to change, in order to maintain the system's physical integrity and capability to carry on with its mission. The leading hypothesis advocates that if a complex dynamic system is more resilient, then it would be more survivable, thus more effective, despite the unexpected disturbances that could affect its normal operating conditions. For investigating the impact of more resilient systems on survivability and safety, a framework for theoretical resilience estimations has been formulated. 
It constitutes the basis for quantitative techniques for total system resilience evaluation, based on scenario-based, dynamic system simulations. Physics-based Modeling and Simulation (M&S) is applied for dynamical system behavior analysis, which includes system performance, health monitoring, damage propagation and overall mission capability. For the development of the assessment framework and testing of a resilience assessment technique, a small-scale canonical problem has been formulated, involving a computational model of a degradable and reconfigurable spring-mass-damper SDOF system, in a multiple main and redundant spring configuration. A rule-based feedback controller is responsible for system performance recovery, through the application of different reconfiguration strategies and strategic activation of the necessary main or redundant springs. Uncertainty effects on system operation are introduced through disturbance factors, such as external forces with varying magnitude, input frequency, event duration and occurrence time. Such factors are the basis for scenario formulation, in support of a Monte Carlo simulation analysis. Case studies with varying levels of damping and different reconfiguration strategies, involve the investigation of operational uncertainty effects on system performance, mission capability, and system survivability. These studies furthermore explore uncertainty effects on resilience functions that describe the system's capacities on "restoring" mission capability, on "absorbing" the effects of changing conditions, and on "adapting" to the occurring change. The proposed resilience assessment technique or the Topological Investigation for Resilient and Effective Systems, through Increased Architecture Survivability (TIRESIAS) is then applied and demonstrated for a naval system application, in the form of a reduced scale, reconfigurable cooling network of a naval combatant. 
Uncertainty effects are modeled through combinations of different numbers of network fluid leaks. The TIRESIAS approach on the system baseline (32-control-valve configuration) has allowed for the investigation of leak effects on survival times, mission capability degradations, as well as the resilience function capacities. As part of the technique demonstration, case studies were conducted for different architecture configurations, generated for different total numbers of control valves and valve locations on the topology.

  16. Text, photo, and line extraction in scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2012-07-01

    We propose a page layout analysis algorithm to classify a scanned document into different regions such as text, photo, or strong lines. The proposed scheme consists of five modules. The first module performs several image preprocessing techniques such as image scaling, filtering, color space conversion, and gamma correction to enhance the scanned image quality and reduce the computation time in later stages. Text detection is applied in the second module, wherein wavelet transform and run-length encoding are employed to generate and validate text regions, respectively. The third module uses a Markov random field based block-wise segmentation that employs a basis vector projection technique with maximum a posteriori probability optimization to detect photo regions. In the fourth module, methods for edge detection, edge linking, line-segment fitting, and Hough transform are utilized to detect strong edges and lines. In the last module, the resultant text, photo, and edge maps are combined to generate a page layout map using K-Means clustering. The proposed algorithm has been tested on several hundred documents that contain simple and complex page layout structures and contents such as articles, magazines, business cards, dictionaries, and newsletters, and compared against state-of-the-art page-segmentation techniques with benchmark performance. The results indicate that our methodology achieves an average of ~89% classification accuracy in text, photo, and background regions.
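The run-length-based validation of text regions in the second module can be illustrated with a toy sketch. This is not the authors' implementation: the thresholds, the `looks_like_text` rule, and the simulated rows below are all illustrative assumptions. The idea is simply that a row of text alternates between short ink runs and gaps, whereas a photo region produces long uniform runs.

```python
import numpy as np

def run_lengths(row):
    """Run-length encode a 1-D binary array into (value, length) pairs."""
    change = np.flatnonzero(np.diff(row)) + 1
    bounds = np.concatenate(([0], change, [len(row)]))
    return [(int(row[b]), int(e - b)) for b, e in zip(bounds[:-1], bounds[1:])]

def looks_like_text(row, max_run=20, min_transitions=6):
    """Hypothetical rule: text rows alternate often and have only short ink runs."""
    runs = run_lengths(row)
    ink = [length for value, length in runs if value == 1]
    return len(runs) >= min_transitions and bool(ink) and max(ink) <= max_run

text_row = np.zeros(120, dtype=int)
for start in range(5, 115, 12):        # simulated character strokes
    text_row[start:start + 4] = 1
photo_row = np.ones(120, dtype=int)    # a solid block, e.g. inside a photo
```

With these (assumed) thresholds, `looks_like_text(text_row)` accepts the stroke pattern while `looks_like_text(photo_row)` rejects the solid block.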

  17. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software 'GENOA' is dedicated to parallel and high speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in this development were: (1) utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material, and processing of high temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics, and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and to increase convergence rates through high- and low-level processor assignment; (4) creation of a framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation types of computers; and (5) market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.

  18. X-ray imaging for security applications

    NASA Astrophysics Data System (ADS)

    Evans, J. Paul

    2004-01-01

    The X-ray screening of luggage by aviation security personnel may be badly hindered by the lack of visual cues to depth in an image that has been produced by transmitted radiation. Two-dimensional "shadowgraphs" with "organic" and "metallic" objects encoded using two different colors (usually orange and blue) are still in common use. In the context of luggage screening there are no reliable cues to depth present in individual shadowgraph X-ray images. Therefore, the screener is required to convert the 'zero depth resolution' shadowgraph into a three-dimensional mental picture to be able to interpret the relative spatial relationship of the objects under inspection. Consequently, additional cognitive processing is required e.g. integration, inference and memory. However, these processes can lead to serious misinterpretations of the actual physical structure being examined. This paper describes the development of a stereoscopic imaging technique enabling the screener to utilise binocular stereopsis and kinetic depth to enhance their interpretation of the actual nature of the objects under examination. Further work has led to the development of a technique to combine parallax data (to calculate the thickness of a target material) with the results of a basis material subtraction technique to approximate the target's effective atomic number and density. This has been achieved in preliminary experiments with a novel spatially interleaved dual-energy sensor which reduces the number of scintillation elements required by 50% in comparison to conventional sensor configurations.

  19. Determination of the Accommodation Coefficient Using Vapor/gas Bubble Dynamics in an Acoustic Field

    NASA Technical Reports Server (NTRS)

    Gumerov, Nail A.; Hsiao, Chao-Tsung; Goumilevski, Alexei G.; Allen, Jeff (Technical Monitor)

    2001-01-01

    Nonequilibrium liquid/vapor phase transformations can occur in superheated or subcooled liquids in fast processes such as in evaporation in a vacuum. The rate at which such a phase transformation occurs depends on the "condensation" or "accommodation" coefficient, Beta, which is a property of the interface. Existing measurement techniques for Beta are complex and expensive. The development of a relatively inexpensive and reliable technique for measurement of Beta for a wide range of substances and temperatures is of great practical importance. The dynamics of a bubble in an acoustic field strongly depends on the value of Beta. It is known that near the saturation temperature, small vapor bubbles grow under the action of an acoustic field due to "rectified heat transfer." This finding can be used as the basis for an effective measurement technique of Beta. We developed a theory of vapor bubble behavior in an isotropic acoustic wave and in a plane standing acoustic wave. A numerical code was developed which enables simulation of a variety of experimental situations and accurately takes into account slowly evolving temperature. A parametric study showed that the measurement of Beta can be made over a broad range of frequencies and bubble sizes. We found several interesting regimes and conditions which can be efficiently used for measurements of Beta. Measurements of Beta can be performed in both reduced and normal gravity environments.

  20. A practical radial basis function equalizer.

    PubMed

    Lee, J; Beach, C; Tepedelenlioglu, N

    1999-01-01

    A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially fewer than conventionally required. The reduction of centers is accomplished in two steps. First, an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are the determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than a 2000-to-1 reduction in centers performed as well as the RBF equalizer without center reduction, and better than a conventional linear equalizer.
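The two-step center reduction can be sketched on a tiny example. This is an illustrative reconstruction, not the authors' algorithm: the 2-tap channel, the distance threshold for "close to the decision boundary", and the greedy grouping radius are all assumptions made up for the demonstration.

```python
import numpy as np

def pairwise_dist(A, B):
    """Euclidean distances between rows of A and rows of B."""
    return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def reduce_centers(states, labels, boundary_thresh, group_radius):
    """Two-step reduction: (1) keep only channel states whose nearest
    opposite-class state is within boundary_thresh (near the decision
    boundary); (2) greedily merge kept states within group_radius into a
    single averaged representative center."""
    pos, neg = states[labels > 0], states[labels < 0]
    d = pairwise_dist(pos, neg)
    kept = [(pos[d.min(axis=1) <= boundary_thresh], +1.0),
            (neg[d.min(axis=0) <= boundary_thresh], -1.0)]
    centers, center_labels = [], []
    for pts, lab in kept:
        used = np.zeros(len(pts), dtype=bool)
        for i in range(len(pts)):
            if used[i]:
                continue
            group = (pairwise_dist(pts[i:i + 1], pts)[0] <= group_radius) & ~used
            used |= group
            centers.append(pts[group].mean(axis=0))  # average position
            center_labels.append(lab)
    return np.array(centers), np.array(center_labels)

# All 8 noiseless states of a hypothetical 2-tap channel h = [1.0, 0.5]
# seen through a 2-tap equalizer; the label is the current symbol s_k.
h0, h1 = 1.0, 0.5
states, labels = [], []
for s0 in (+1.0, -1.0):          # s_k (decided symbol)
    for s1 in (+1.0, -1.0):      # s_{k-1}
        for s2 in (+1.0, -1.0):  # s_{k-2}
            states.append([h0 * s0 + h1 * s1, h0 * s1 + h1 * s2])
            labels.append(s0)
states, labels = np.array(states), np.array(labels)

centers, center_labels = reduce_centers(states, labels, 1.5, 1.2)
```

With these assumed thresholds the 8 candidate centers collapse to one representative per class; the paper's 2000-to-1 reductions arise the same way on much larger state sets.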

  1. Circumcision-incision orchidopexy: A novel technique for palpable, low inguinal undescended testis

    PubMed Central

    Silangcruz, Jan Michael A.; Gomez, Odina; Dy, Jun S.; Morales, Marcelino L.

    2017-01-01

    Given that both orchidopexy and circumcision are commonly done in a single operative setting, we adopted a technique of combined orchidopexy and circumcision using a single circumcision incision. We applied this new technique to boys with palpable, low inguinal cryptorchidism. Here we describe a case series of 7 boys who underwent concurrent orchidopexy via the circumcision site. We present this novel technique and discuss our preliminary outcomes, including the anatomic basis and feasibility. The technique appears to be an alternative for concurrent circumcision and cryptorchid cases with palpable, low inguinal testes. PMID:29124248

  2. Synthesis and characterization of barium silicide (BaSi2) nanowire arrays for potential solar applications.

    PubMed

    Pokhrel, Ankit; Samad, Leith; Meng, Fei; Jin, Song

    2015-11-07

    In order to utilize nanostructured materials for potential solar and other energy-harvesting applications, scalable synthetic techniques for these materials must be developed. Herein we use a vapor phase conversion approach to synthesize nanowire (NW) arrays of semiconducting barium silicide (BaSi2) in high yield for the first time for potential solar applications. Dense arrays of silicon NWs obtained by metal-assisted chemical etching were converted to single-crystalline BaSi2 NW arrays by reacting with Ba vapor at about 930 °C. Structural characterization by X-ray diffraction and high-resolution transmission electron microscopy confirms that the converted NWs are single-crystalline BaSi2. The optimal conversion reaction conditions allow the phase-pure synthesis of BaSi2 NWs that maintain the original NW morphology, and tuning the reaction parameters led to a controllable synthesis of BaSi2 films on silicon substrates. Optical bandgap and electrochemical measurements of these BaSi2 NWs reveal a bandgap and carrier concentrations comparable to previously reported values for BaSi2 thin films.

  3. A theoretical basis for the analysis of redundant software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques, known as fault-tolerant software, is an understanding of the impact of multiple joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and the question is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
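The role of the coincident-error intensity can be illustrated with a small Monte Carlo sketch. Here the intensity is modeled, purely for illustration, as a Beta-distributed per-input failure probability: conditioned on the input, versions fail independently, but because the intensity varies over inputs, versions tend to fail together on the hard inputs — the mechanism of coincident errors in this class of models.

```python
import numpy as np

rng = np.random.default_rng(1)

def majority_failure_prob(n_versions, theta):
    """Monte Carlo estimate of the failure probability of an N-version,
    majority-voted system. Conditioned on each input, every version fails
    independently with probability theta (the per-input intensity of
    coincident errors); theta itself varies from input to input."""
    fails = rng.random((len(theta), n_versions)) < theta[:, None]
    return (fails.sum(axis=1) > n_versions // 2).mean()

# Illustrative intensity distribution (an assumption, not from the paper):
# most inputs are easy (theta near 0), a few are hard for *every* version.
theta = rng.beta(0.2, 8.0, size=200_000)

p1 = theta.mean()                     # single-version failure probability
p3 = majority_failure_prob(3, theta)
p5 = majority_failure_prob(5, theta)
```

For this intensity distribution, voting helps and failure probability falls with N; concentrating the intensity at large values instead can reverse the ordering, which is exactly the kind of limiting condition the paper studies.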

  4. Integration of GMR Sensors with Different Technologies

    PubMed Central

    Cubells-Beltrán, María-Dolores; Reig, Càndid; Madrenas, Jordi; De Marcellis, Andrea; Santos, Joana; Cardoso, Susana; Freitas, Paulo P.

    2016-01-01

    Less than thirty years after the giant magnetoresistance (GMR) effect was described, GMR sensors are the preferred choice in many applications demanding the measurement of low magnetic fields in small volumes. This rapid deployment from theoretical basis to market and state-of-the-art applications can be explained by the combination of excellent inherent properties with the feasibility of fabrication, allowing real integration with many other standard technologies. In this paper, we present a review focusing on how this capability of integration has allowed the improvement of the inherent capabilities and, therefore, the range of application of GMR sensors. After briefly describing the phenomenological basis, we discuss the benefits of low temperature deposition techniques regarding the integration of GMR sensors with flexible (plastic) substrates and pre-processed CMOS chips. In this way, the limit of detection can be improved by increasing the sensitivity or reducing the noise. We also report on novel fields of application of GMR sensors by recapitulating a number of successful cases of their integration with different heterogeneous complementary elements. We finally describe three fully functional systems, two of them in the bio-technology world, as proof of how this integrability has been instrumental in the meteoric development of GMR sensors and their applications. PMID:27338415

  5. A facility monitoring system: The single most valuable and cost-effective tool available to an energy manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmes, W.A.

    Energy engineering and management combines engineering problem-solving and financial management techniques to reduce utility costs. At present, substantial amounts of time and money are being spent attempting to quantify energy consumption and costs and to define opportunities for savings. Unfortunately, accurate verification of results is often overlooked. Advances in technology during the last few years have made the installation of a permanent, PC-based monitoring system possible for any facility, often for no more than the cost of a detailed study. By investing initially in a monitoring system rather than audits or studies, the actual consumption and cost data will be available on a continuing basis and can be used to produce immediate operational savings, to analyze more accurately opportunities requiring capital investments, and to verify actual savings resulting from changes. A permanent monitoring system, installed as the first step in a utility cost reduction effort to identify where and how energy is used in a facility on a dynamic and real-time basis, can be the most valuable and cost-effective tool available to an energy manager. The resulting data allow energy consumption patterns and utility costs to be understood and managed in the same manner as all other costs within a facility.

  6. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of the EKM-based MKL; (3) this paper adopts Gauss Elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
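The Gauss Elimination step and the dot-product-preserving (isomorphism) claim can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the dimensions, tolerance, and use of QR for the final orthonormalization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# n empirical feature vectors that live in a rank-r subspace of a
# high-dimensional feature space (a synthetic stand-in for EKM output).
n, dim, r = 40, 200, 5
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, dim))

def pivot_rows(X, tol=1e-10):
    """Gaussian elimination with partial pivoting on X^T; the pivot columns
    identify a maximal set of linearly independent rows (feature vectors)."""
    A = X.T.astype(float).copy()       # columns of A are the feature vectors
    m, ncols = A.shape
    pivots, row = [], 0
    for col in range(ncols):
        if row >= m:
            break
        p = row + np.argmax(np.abs(A[row:, col]))
        if abs(A[p, col]) < tol:
            continue                   # dependent vector, no pivot here
        A[[row, p]] = A[[p, row]]
        A[row] /= A[row, col]
        A[row + 1:] -= np.outer(A[row + 1:, col], A[row])
        pivots.append(col)
        row += 1
    return pivots

piv = pivot_rows(X)
B = X[piv]                             # r independent feature vectors
Q, _ = np.linalg.qr(B.T)               # orthonormal basis of their span
Z = X @ Q                              # coordinates in the reduced subspace

# Isomorphism check: all pairwise dot products are preserved, even though
# each vector now has only r coordinates instead of dim.
assert np.allclose(Z @ Z.T, X @ X.T)
```

Since every row of `X` lies in the span of the extracted basis, projecting onto the orthonormal basis `Q` preserves all dot products exactly, which is the storage and computation saving the abstract describes.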

  7. Design and comparative performance analysis of different chirping profiles of tanh apodized fiber Bragg grating and comparison with the dispersion compensation fiber for long-haul transmission system

    NASA Astrophysics Data System (ADS)

    Dar, Aasif Bashir; Jha, Rakesh Kumar

    2017-03-01

    Various dispersion compensation units are presented and evaluated in this paper. These dispersion compensation units include dispersion compensation fiber (DCF), DCF merged with fiber Bragg grating (FBG) (the joint technique), and linear, square root, and cube root chirped tanh apodized FBG. For performance evaluation, a 10 Gb/s NRZ transmission system over a 100-km-long single-mode fiber is used. The three chirped FBGs are optimized individually to yield pulse width reduction percentages (PWRP) of 86.66, 79.96, and 62.42% for linear, square root, and cube root chirp, respectively. The DCF and the joint technique provide remarkable PWRPs of 94.45 and 96.96%, respectively. The performance of the optimized linear chirped tanh apodized FBG and the DCF is compared for a long-haul transmission system on the basis of the quality factor of the received signal. For both systems, the maximum transmission distance is calculated such that the quality factor is ≥ 6 at the receiver; the results show that the performance of the FBG is comparable to that of the DCF, with the advantages of very low cost, small size, and reduced nonlinear effects.

  8. Cloud-based adaptive exon prediction for DNA analysis

    PubMed Central

    Putluri, Srinivasareddy; Fathima, Shaik Yasmeen

    2018-01-01

    Cloud computing offers significant research and economic benefits to healthcare organisations. Cloud services provide a safe place for storing and managing large amounts of sensitive data. Under the conventional flow of gene information, gene sequence laboratories send out raw and inferred information via the Internet to several sequence libraries. DNA sequencing storage costs can be minimised by use of a cloud service. In this study, the authors put forward a novel genomic informatics system using Amazon Cloud Services, where genomic sequence information is stored and accessed for processing. True identification of exon regions in a DNA sequence is a key task in bioinformatics, which helps in disease identification and drug design. The three-base periodicity property of exons forms the basis of all exon identification techniques. Adaptive signal processing techniques were found to be promising in comparison with several other methods. Several adaptive exon predictors (AEPs) are developed using the variable normalised least mean square algorithm and its maximum normalised variants to reduce computational complexity. Finally, performance evaluation of the various AEPs is done based on measures such as sensitivity, specificity, and precision, using various standard genomic datasets taken from the National Center for Biotechnology Information genomic sequence database. PMID:29515813
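The normalized least mean square update underlying the AEP variants above can be sketched on a system-identification toy problem. This is a generic NLMS filter, not the authors' exon predictor: the 4-tap system, step size, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nlms(x, d, order, mu=0.5, eps=1e-6):
    """Normalized least mean square: adapt filter taps w so that w @ x_k
    tracks d_k. Normalizing the step by the instantaneous input power keeps
    the update stable for 0 < mu < 2 regardless of the signal scale."""
    w = np.zeros(order)
    err = np.zeros(len(d))
    for k in range(order - 1, len(d)):
        xk = x[k - order + 1:k + 1][::-1]      # [x_k, x_{k-1}, ...]
        e = d[k] - w @ xk
        w += (mu / (eps + xk @ xk)) * e * xk
        err[k] = e
    return w, err

# Identify a hypothetical 4-tap system from noisy input/output data.
h = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, err = nlms(x, d, order=4)
```

After convergence the taps `w` approach `h` and the error power drops to roughly the noise floor; in the paper the same adaptation drives a predictor of the three-base periodicity instead of an unknown channel.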

  9. Mathematical modeling of wastewater-derived biodegradable dissolved organic nitrogen.

    PubMed

    Simsek, Halis

    2016-11-01

    Wastewater-derived dissolved organic nitrogen (DON) typically constitutes the majority of total dissolved nitrogen (TDN) discharged to surface waters from advanced wastewater treatment plants (WWTPs). Considering the stringent regulations on nitrogen discharge limits in sensitive receiving waters, DON becomes problematic and needs to be reduced. Biodegradable DON (BDON) is the portion of DON that is biologically degradable by bacteria when optimum environmental conditions are met. BDON in a two-stage trickling filter WWTP was estimated using artificial intelligence techniques, such as adaptive neuro-fuzzy inference systems, multilayer perceptron, radial basis neural networks (RBNN), and generalized regression neural networks. Nitrite, nitrate, ammonium, TDN, and DON data were used as input neurons. Wastewater samples were collected from four different locations in the plant. Model performances were evaluated using root mean square error, mean absolute error, mean bias error, and coefficient of determination statistics. Modeling results showed that the R(2) values were higher than 0.85 in all four models for all wastewater samples, except that the R(2) for the final effluent sample in the RBNN model was low (0.52). Overall, it was found that all four computing techniques could be employed successfully to predict BDON.
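A minimal radial basis network of the kind compared above can be sketched as follows. The synthetic two-predictor dataset stands in for the water-quality inputs; the centers, width, and ridge penalty are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBFNet:
    """Minimal radial basis network: Gaussian hidden units at fixed centers,
    linear output weights fitted by regularized least squares."""
    def __init__(self, centers, width, ridge=1e-6):
        self.c, self.s, self.ridge = centers, width, ridge

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.s ** 2))

    def fit(self, X, y):
        P = self._phi(X)
        A = P.T @ P + self.ridge * np.eye(P.shape[1])
        self.w = np.linalg.solve(A, P.T @ y)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Stand-in for the water-quality data: a smooth nonlinear response of two
# predictors (think DON and TDN), plus measurement noise.
X = rng.uniform(-1.0, 1.0, size=(300, 2))
y = np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(300)

net = RBFNet(centers=X[:200:10], width=0.5).fit(X[:200], y[:200])
pred = net.predict(X[200:])
ss_res = ((y[200:] - pred) ** 2).sum()
ss_tot = ((y[200:] - y[200:].mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
```

The held-out coefficient of determination `r2` is the same R(2) statistic the study reports; a well-placed set of centers and width typically pushes it close to 1 on smooth targets like this.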

  10. Spectral element method for elastic and acoustic waves in frequency domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Linlin; Zhou, Yuanguo; Wang, Jia-Min

    Numerical techniques in time domain are widespread in seismic and acoustic modeling. In some applications, however, frequency-domain techniques can be advantageous over the time-domain approach when narrow band results are desired, especially if multiple sources can be handled more conveniently in the frequency domain. Moreover, the medium attenuation effects can be more accurately and conveniently modeled in the frequency domain. In this paper, we present a spectral-element method (SEM) in frequency domain to simulate elastic and acoustic waves in anisotropic, heterogeneous, and lossy media. The SEM is based upon the finite-element framework and has exponential convergence because of the use of GLL basis functions. The anisotropic perfectly matched layer is employed to truncate the boundary for unbounded problems. Compared with the conventional finite-element method, the number of unknowns in the SEM is significantly reduced, and higher order accuracy is obtained due to its spectral accuracy. To account for the acoustic-solid interaction, the domain decomposition method (DDM) based upon the discontinuous Galerkin spectral-element method is proposed. Numerical experiments show the proposed method can be an efficient alternative for accurate calculation of elastic and acoustic waves in frequency domain.
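The exponential convergence attributed to GLL basis functions can be illustrated generically by interpolating a smooth function at Gauss-Lobatto-Legendre nodes and watching the error fall with polynomial order. This is a standalone demonstration of spectral accuracy, unrelated to the authors' solver; the test function is an arbitrary smooth choice.

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(n):
    """n Gauss-Lobatto-Legendre nodes on [-1, 1]: the endpoints plus the
    roots of P'_{n-1}, the derivative of the Legendre polynomial."""
    c = np.zeros(n)
    c[-1] = 1.0                      # Legendre-series coefficients of P_{n-1}
    interior = legendre.legroots(legendre.legder(c))
    return np.concatenate(([-1.0], interior, [1.0]))

def interp_error(f, n):
    """Max error over [-1, 1] of degree-(n-1) interpolation at n GLL nodes."""
    x = gll_nodes(n)
    coeffs = np.polyfit(x, f(x), n - 1)
    xx = np.linspace(-1.0, 1.0, 2001)
    return np.abs(np.polyval(coeffs, xx) - f(xx)).max()

f = lambda x: np.exp(np.sin(2.0 * x))   # a smooth (analytic) test function
errs = [interp_error(f, n) for n in (4, 8, 16)]
```

Doubling the number of GLL nodes drives the error down by orders of magnitude rather than by a fixed factor, which is why an SEM needs far fewer unknowns than a low-order finite-element method for the same accuracy.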

  11. Effect of different thickness of material filter on Tc-99m spectra and performance parameters of gamma camera

    NASA Astrophysics Data System (ADS)

    Nazifah, A.; Norhanna, S.; Shah, S. I.; Zakaria, A.

    2014-11-01

    This study aimed to investigate the effects of the material filter technique on Tc-99m spectra and on the performance parameters of a Philips ADAC Forte dual-head gamma camera. The thickness of the material filter was selected on the basis of the percentage attenuation of various gamma ray energies by different thicknesses of zinc. A cylindrical source tank of a NEMA single photon emission computed tomography (SPECT) Triple Line Source Phantom, filled with water and injected with the Tc-99m radionuclide, was used for spectra, uniformity, and sensitivity measurements. A vinyl plastic tube was used as a line source for spatial resolution. Images for uniformity were reconstructed by the filtered back projection method. A Butterworth filter of order 5 and cut-off frequency 0.35 cycles/cm was selected. Chang's attenuation correction method was applied with a linear attenuation coefficient of 0.13/cm. With the material filter, the count rate was decreased in the Compton region of the Tc-99m energy spectrum, and also in the photopeak region. Spatial resolution was improved. However, the uniformity of the tomographic image was equivocal, and system volume sensitivity was reduced by the material filter. Since the material filter improved the system's spatial resolution, the technique may be used for phantom studies to improve image quality.

  12. Ultra-high-Q phononic resonators on-chip at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Kharel, Prashanta; Chu, Yiwen; Power, Michael; Renninger, William H.; Schoelkopf, Robert J.; Rakich, Peter T.

    2018-06-01

    Long-lived, high-frequency phonons are valuable for applications ranging from optomechanics to emerging quantum systems. For scientific as well as technological impact, we seek high-performance oscillators that offer a path toward chip-scale integration. Confocal bulk acoustic wave resonators have demonstrated an immense potential to support long-lived phonon modes in crystalline media at cryogenic temperatures. So far, these devices have been macroscopic with cm-scale dimensions. However, as we push these oscillators to high frequencies, we have an opportunity to radically reduce the footprint as a basis for classical and emerging quantum technologies. In this paper, we present novel design principles and simple microfabrication techniques to create high-performance chip-scale confocal bulk acoustic wave resonators in a wide array of crystalline materials. We tailor the acoustic modes of such resonators to efficiently couple to light, permitting us to perform a non-invasive laser-based phonon spectroscopy. Using this technique, we demonstrate an acoustic Q-factor of 2.8 × 10^7 (6.5 × 10^6) for chip-scale resonators operating at 12.7 GHz (37.8 GHz) in crystalline z-cut quartz (x-cut silicon) at cryogenic temperatures.

  13. Implementing Capsule Representation in a Total Hip Dislocation Finite Element Model

    PubMed Central

    Stewart, Kristofer J; Pedersen, Douglas R; Callaghan, John J; Brown, Thomas D

    2004-01-01

    Previously validated hardware-only finite element models of THA dislocation have clarified how various component design and surgical placement variables contribute to resisting the propensity for implant dislocation. This body of work has now been enhanced with the incorporation of experimentally based capsule representation, and with anatomic bone structures. The current form of this finite element model provides for large deformation multi-body contact (including capsule wrap-around on bone and/or implant), large displacement interfacial sliding, and large deformation (hyperelastic) capsule representation. In addition, the modular nature of this model now allows for rapid incorporation of current or future total hip implant designs, accepts complex multi-axial physiologic motion inputs, and outputs case-specific component/bone/soft-tissue impingement events. This soft-tissue-augmented finite element model is being used to investigate the performance of various implant designs for a range of clinically-representative soft tissue integrities and surgical techniques. Preliminary results show that capsule enhancement makes a substantial difference in stability, compared to an otherwise identical hardware-only model. This model is intended to help put implant design and surgical technique decisions on a firmer scientific basis, in terms of reducing the likelihood of dislocation. PMID:15296198

  14. Inventory and mapping of flood inundation using interactive digital image analysis techniques

    USGS Publications Warehouse

    Rohde, Wayne G.; Nelson, Charles A.; Taranik, J.V.

    1979-01-01

    LANDSAT digital data and color infra-red photographs were used in a multiphase sampling scheme to estimate the area of agricultural land affected by a flood. The LANDSAT data were classified with a maximum likelihood algorithm. Stratification of the LANDSAT data, prior to classification, greatly reduced misclassification errors. The classification results were used to prepare a map overlay showing the areal extent of flooding. These data also provided the statistics required to estimate sample size in a two-phase sampling scheme, and provided quick, accurate estimates of areas flooded for the first phase. The measurements made in the second phase, based on ground data and photo-interpretation, were used with two-phase sampling statistics to estimate the area of agricultural land affected by flooding. These results show that LANDSAT digital data can be used to prepare map overlays showing the extent of flooding on agricultural land and, with two-phase sampling procedures, can provide acreage estimates with sampling errors of about 5 percent. This procedure provides a technique for rapidly assessing the areal extent of flood conditions on agricultural land and would provide a basis for designing a sampling framework to estimate the impact of flooding on crop production.
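The two-phase (double) sampling estimate can be sketched with the classical regression estimator. This is a generic textbook construction on simulated data, not the study's actual figures: the sample sizes, noise levels, and the Cochran-style variance approximation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1 (cheap): LANDSAT-classified flooded fraction on n1 land segments.
n1, n2 = 2000, 100
true_frac = rng.uniform(0.0, 1.0, n1)                    # unknown truth
landsat = np.clip(true_frac + 0.1 * rng.standard_normal(n1), 0.0, 1.0)

# Phase 2 (expensive): ground/photo-interpreted fractions on a subsample.
idx = rng.choice(n1, n2, replace=False)
x2 = landsat[idx]
y2 = true_frac[idx] + 0.02 * rng.standard_normal(n2)

# Double-sampling regression estimator of the mean flooded fraction:
# adjust the phase-2 mean by the regression on the cheap phase-1 variable.
b = np.polyfit(x2, y2, 1)[0]                             # regression slope
y_reg = y2.mean() + b * (landsat.mean() - x2.mean())

# Approximate standard error: s_y^2/n1 + (1/n2 - 1/n1) * s_e^2,
# where s_e^2 = s_y^2 (1 - r^2) is the residual variance about the line.
r = np.corrcoef(x2, y2)[0, 1]
s_y2 = y2.var(ddof=1)
s_e2 = s_y2 * (1.0 - r ** 2)
se = np.sqrt(s_y2 / n1 + (1.0 / n2 - 1.0 / n1) * s_e2)
```

Because the LANDSAT classification correlates strongly with ground truth, the regression estimator's standard error is well below that of the phase-2 mean alone, which is how sampling errors near 5 percent become reachable with only a modest ground sample.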

  15. Achieving energy efficiency during collective communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundriyal, Vaibhav; Sosonkina, Masha; Zhang, Zhao

    2012-09-13

    Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (called DVFS) and CPU clock modulation (called throttling) are often used to reduce the power consumption of the compute nodes. To avoid significant performance losses, these techniques should be used judiciously during parallel application execution. For example, its communication phases may be good candidates to apply the DVFS and CPU throttling without incurring a considerable performance loss. They are often considered as indivisible operations although little attention is being devoted to the energy saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated as to their augmentation with energy saving strategies on the per-call basis. The experiments prove the viability of such a fine-grain approach. They also validate a theoretical power consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling were switched on across the entire application run.

  16. Natural Fiber Composite Retting, Preform Manufacture and Molding (Project 18988/Agreement 16313)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, Kevin L.; Howe, Daniel T.; Laddha, Sachin

    2009-12-31

    Plant-based natural fibers can be used in place of glass in fiber reinforced automotive composites to reduce weight and cost and provide environmental benefits. Current automotive applications use natural fibers in injection molded thermoplastics for interior, non-structural applications. Compression molded natural fiber reinforced thermosets have the opportunity to extend natural fiber composite applications to structural, semi-structural, and exterior parts, realizing further vehicle weight savings. The development of low cost molding and fiber processing techniques for large volumes of natural fibers has helped in understanding the barriers of non-aqueous retting. The retting process has a significant effect on the fiber quality and its processability, which is related to the mechanical properties of the natural fiber composite. PNNL has developed a compression molded fiber reinforced composite system which is the basis for future preforming activities and fiber treatment. We are using this process to develop preforming techniques and to validate fiber treatment methods relative to OEM-provided application specifications. For the next fiscal year, we anticipate demonstrating larger quantities of SMC materials and the molding of larger, more complex components, with a more complete testing regimen in coordination with Tier suppliers under OEM guidance.

  17. Comparative factor analysis models for an empirical study of EEG data, II: A data-guided resolution of the rotation indeterminacy.

    PubMed

    Rogers, L J; Douglas, R R

    1984-02-01

    In this paper (the second in a series), we consider a (generic) pair of datasets, which have been analyzed by the techniques of the previous paper. Thus, their "stable subspaces" have been established by comparative factor analysis. The pair of datasets must satisfy two confirmable conditions. The first is the "Inclusion Condition," which requires that the stable subspace of one of the datasets is nearly identical to a subspace of the other dataset's stable subspace. On the basis of that, we have assumed the pair to have similar generating signals, with stochastically independent generators. The second verifiable condition is that the (presumed same) generating signals have distinct ratios of variances for the two datasets. Under these conditions a small elaboration of some elementary linear algebra reduces the rotation problem to several eigenvalue-eigenvector problems. Finally, we emphasize that an analysis of each dataset by the method of Douglas and Rogers (1983) is an essential prerequisite for the useful application of the techniques in this paper. Nonempirical methods of estimating the number of factors simply will not suffice, as confirmed by simulations reported in the previous paper.
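
    The reduction of the rotation problem to eigenvalue-eigenvector problems can be illustrated with a small numerical sketch. This is not the authors' code; it assumes a toy model in which both datasets share a common (hypothetical) mixing matrix `A` over independent generators whose variances differ between datasets, so that a single generalized symmetric eigenproblem simultaneously diagonalizes the two covariance matrices:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy model: both datasets share one (hypothetical) mixing matrix A over
# independent generators, but the generator variances differ per dataset.
A = rng.normal(size=(4, 4))            # unknown factor-loading matrix
d1 = np.array([4.0, 2.0, 1.0, 0.5])   # generator variances, dataset 1
d2 = np.array([1.0, 1.0, 1.0, 1.0])   # generator variances, dataset 2

C1 = A @ np.diag(d1) @ A.T             # covariance of dataset 1
C2 = A @ np.diag(d2) @ A.T             # covariance of dataset 2

# Generalized eigenproblem C1 v = lambda C2 v: the eigenvalues are the
# variance ratios d1/d2, and the eigenvectors resolve the rotation.
vals, vecs = eigh(C1, C2)

# Both covariances are diagonal in the recovered basis.
D1 = vecs.T @ C1 @ vecs
D2 = vecs.T @ C2 @ vecs
off1 = np.abs(D1 - np.diag(np.diag(D1))).max()
off2 = np.abs(D2 - np.diag(np.diag(D2))).max()
print(np.sort(vals), off1, off2)
```

    Because the eigenvalues are the variance ratios of the generators, the decomposition is unique exactly when those ratios are distinct, which mirrors the second verifiable condition above.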

  18. High-speed scanning of critical structures in aviation using coordinate measurement machine and the laser ultrasonic.

    PubMed

    Swornowski, Pawel J

    2012-01-01

    Aviation involves a great number of safety-critical sub-assemblies, in this case landing gear. The necessity of reducing production cycle times while achieving better quality compels metrologists to look for new and improved ways to inspect critical structures. This article describes the ability to determine the shape deviation and the location of defects in landing gear using coordinate measuring machines and laser ultrasonics with high-speed scanning. Nondestructive testing is the basis for monitoring microcrack and corrosion propagation in the context of a damage-tolerant design approach. This article presents an overview of the basics and of the various metrological aspects of coordinate measurement and of a nondestructive testing method in terms of high-speed scanning. The new test method (laser ultrasonics) promises to produce the necessary increase in inspection quality, but this is limited by the wide range of materials, geometries, and structures of the aeronautic parts used. A technique combining laser ultrasonics and F-SAFT (Fourier-Synthetic Aperture Focusing Technique) processing has been proposed for the detection of small defects buried in landing gear. The experimental results of landing gear inspection are also presented. © Wiley Periodicals, Inc.

  19. Diffusion MRI in early cancer therapeutic response assessment

    PubMed Central

    Galbán, C. J.; Hoff, B. A.; Chenevert, T. L.; Ross, B. D.

    2016-01-01

    Imaging biomarkers for the predictive assessment of treatment response in patients with cancer earlier than standard tumor volumetric metrics would provide new opportunities to individualize therapy. Diffusion-weighted MRI (DW-MRI), highly sensitive to microenvironmental alterations at the cellular level, has been evaluated extensively as a technique for the generation of quantitative and early imaging biomarkers of therapeutic response and clinical outcome. First demonstrated in a rodent tumor model, subsequent studies have shown that DW-MRI can be applied to many different solid tumors for the detection of changes in cellularity as measured indirectly by an increase in the apparent diffusion coefficient (ADC) of water molecules within the lesion. The introduction of quantitative DW-MRI into the treatment management of patients with cancer may aid physicians to individualize therapy, thereby minimizing unnecessary systemic toxicity associated with ineffective therapies, saving valuable time, reducing patient care costs and ultimately improving clinical outcome. This review covers the theoretical basis behind the application of DW-MRI to monitor therapeutic response in cancer, the analytical techniques used and the results obtained from various clinical studies that have demonstrated the efficacy of DW-MRI for the prediction of cancer treatment response. PMID:26773848

  20. Comparison of relaxation with counterpressure massage techniques for reduce pain first stage of labor

    NASA Astrophysics Data System (ADS)

    Lisa, U. F.; Jalina, M.; Marniati

    2017-09-01

    Interviews with mothers in the first stage of labor indicated a lack of support from health workers in efforts to reduce labor pain; hospital health workers were ineffective in implementing maternity nursing interventions to reduce first-stage pain. Pain reduction can be approached with two kinds of methods, pharmacological and non-pharmacological. Among the non-pharmacological methods, both relaxation and counterpressure massage techniques are capable of reducing pain in the first stage of labor, and non-pharmacological care, especially breathing relaxation techniques and massage, falls within the authority that midwives must exercise. The research is a quasi-experiment with a pretest-posttest design; the statistical tests were paired and unpaired t-tests. Pain levels decreased significantly after the relaxation technique was given (p < 0.001, post-treatment mean 44.00, range of values 10-90), and likewise after the counterpressure massage technique (p < 0.001, post-treatment mean 42.67, range of values 10-90). There was no significant difference between the relaxation and counterpressure massage techniques in reducing pain in the first stage of labor (p = 0.891), because both techniques are highly effective in reducing labor pain. Both techniques are useful in providing maternal care because they work effectively on the point of pain. Health workers should therefore apply relaxation and massage in maternal care, particularly for primigravidae experiencing the process of labor for the first time.

  1. High order discretization techniques for real-space ab initio simulations

    NASA Astrophysics Data System (ADS)

    Anderson, Christopher R.

    2018-03-01

    In this paper, we present discretization techniques to address numerical problems that arise when constructing ab initio approximations that use real-space computational grids. We present techniques to accommodate the singular nature of idealized nuclear and idealized electronic potentials, and we demonstrate the utility of using high order accurate grid based approximations to Poisson's equation in unbounded domains. To demonstrate the accuracy of these techniques, we present results for a Full Configuration Interaction computation of the dissociation of H2 using a computed, configuration dependent, orbital basis set.

  2. Comparison of data inversion techniques for remotely sensed wide-angle observations of Earth emitted radiation

    NASA Technical Reports Server (NTRS)

    Green, R. N.

    1981-01-01

    The shape factor, parameter estimation, and deconvolution data analysis techniques were applied to the same set of Earth emitted radiation measurements to determine the effects of different techniques on the estimated radiation field. All three techniques are defined and their assumptions, advantages, and disadvantages are discussed. Their results are compared globally, zonally, regionally, and on a spatial spectrum basis. The standard deviations of the regional differences in the derived radiant exitance varied from 7.4 W/m² to 13.5 W/m².

  3. Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics

    PubMed Central

    Petrov, Yury

    2012-01-01

    EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and the scatter of the sources (incoherency) compared to the popular minimum-norm algorithms, where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF), and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
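
    The dimensionality-reduction idea behind this approach can be sketched as a regularized least-squares problem in a reduced basis. The sketch below is illustrative only: the sizes, the random stand-in lead field `L`, and the low-frequency cosine basis `B` (standing in for the anatomical spherical-harmonic basis) are all assumptions, not the paper's actual geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources, n_basis = 32, 500, 16    # hypothetical sizes

L = rng.normal(size=(n_sensors, n_sources))    # stand-in lead-field matrix
# Low-frequency cosine basis over the source index, standing in for the
# smooth anatomical spherical-harmonic basis.
x = np.linspace(0.0, np.pi, n_sources)
B = np.cos(np.outer(x, np.arange(n_basis)))    # (n_sources, n_basis)

s_true = B @ rng.normal(size=n_basis)          # smooth source distribution
y = L @ s_true + 0.01 * rng.normal(size=n_sensors)   # sensor measurements

# Reduced inverse: solve for n_basis coefficients instead of n_sources
# dipole amplitudes (Tikhonov-regularized least squares).
G = L @ B
lam = 1e-3
c = np.linalg.solve(G.T @ G + lam * np.eye(n_basis), G.T @ y)
s_hat = B @ c

rel_err = np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true)
print(rel_err)
```

    Estimating 16 basis coefficients instead of 500 dipole amplitudes turns an underdetermined problem into an overdetermined one, which is what tames the sensitivity to sensor noise.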

  4. 3D airborne EM modeling based on the spectral-element time-domain (SETD) method

    NASA Astrophysics Data System (ADS)

    Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.

    2017-12-01

    In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: FDTD depends strongly on the grids and time steps, while FETD requires a large number of grids for complex structures. We propose a spectral-element time-domain (SETD) method based on GLL interpolation basis functions for spatial discretization and the backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted-residual technique with polynomials as vector basis functions. It can contribute to an accurate result by increasing the order of the polynomials and suppressing spurious solutions. BE is a stable time-discretization technique that has no limitation on time steps and can guarantee a higher accuracy during the iteration process. To minimize the number of non-zero entries in the sparse matrix and obtain a diagonal mass matrix, we apply the reduced-order integration technique. A direct solver, with speed independent of the condition number, is adopted for quickly solving the large-scale sparse linear system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time lapse 10⁻⁶-10⁻² s for different physical meshes and SE orders. The results show that the relative errors for the magnetic field B and the magnetic induction dB/dt are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1.
From numerical experiments for both the 1D and 3D models, we draw the following conclusions: 1) SETD delivers accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels, when the EM field diffuses fast, so that the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order brings little improvement in modeling accuracy, but the time interval plays an important role. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). Figure 1: (a) AEM system over a 3D earth model; (b) magnetic field Bz; (c) magnetic induction dBz/dt.

  5. Using a pruned, nondirect product basis in conjunction with the multi-configuration time-dependent Hartree (MCTDH) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wodraszka, Robert, E-mail: Robert.Wodraszka@chem.queensu.ca; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    In this paper, we propose a pruned, nondirect product multi-configuration time-dependent Hartree (MCTDH) method for solving the Schrödinger equation. MCTDH uses optimized 1D basis functions, called single-particle functions, but the size of the standard direct product MCTDH basis scales exponentially with D, the number of coordinates. We compare the pruned approach to standard MCTDH calculations for basis sizes small enough that the latter are possible and demonstrate that pruning the basis reduces the CPU cost of computing vibrational energy levels of acetonitrile (D = 12) by more than two orders of magnitude. Using the pruned method, it is possible to do calculations with larger bases, for which the cost of standard MCTDH calculations is prohibitive. Pruning the basis complicates the evaluation of matrix-vector products. In this paper, they are done term by term for a sum-of-products Hamiltonian. When no attempt is made to exploit the fact that matrices representing some of the factors of a term are identity matrices, one needs only to carefully constrain indices. In this paper, we develop new ideas that make it possible to further reduce the CPU time by exploiting identity matrices.
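
    The exponential-scaling argument is easy to make concrete. A minimal sketch (with hypothetical sizes, not the acetonitrile basis itself) counts the functions in a full direct-product basis against a basis pruned by a simple sum-of-indices cap:

```python
import itertools

# Direct-product basis for D coordinates with n 1D functions each: n**D members.
# A simple pruning rule keeps only index tuples whose sum stays below a cap.
D, n, cap = 6, 10, 9          # hypothetical sizes for illustration
full = n ** D
pruned = sum(1 for idx in itertools.product(range(n), repeat=D)
             if sum(idx) <= cap)
print(full, pruned)           # the pruned basis is a tiny fraction of the full one
```

    Here the pruned count is C(cap+D, D) = 5005 versus 10⁶ for the full product, a reduction of over two orders of magnitude. Actual MCTDH pruning rules are more sophisticated, but the scaling argument is the same.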

  6. A simple procedure for construction of the orthonormal basis vectors of irreducible representations of O(5) in the OT(3) ⊗ ON(2) basis

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Draayer, J. P.

    2018-06-01

    A simple and effective algebraic isospin projection procedure for constructing orthonormal basis vectors of irreducible representations of O(5) ⊃ OT(3) ⊗ ON(2) from those in the canonical O(5) ⊃ SUΛ(2) ⊗ SUI(2) basis is outlined. The expansion coefficients are components of null-space vectors of the projection matrix, which in general has four nonzero elements in each row. Explicit formulae for evaluating OT(3)-reduced matrix elements of O(5) generators are derived.

  7. [Estimation of nonpoint source pollutant loads and optimization of the best management practices (BMPs) in the Zhangweinan River basin].

    PubMed

    Xu, Hua-Shan; Xu, Zong-Xue; Liu, Pin

    2013-03-01

    One of the key steps in establishing and implementing a TMDL (total maximum daily load) is to utilize a hydrological model to quantify non-point source pollutant loads, establish BMP scenarios, and reduce non-point source pollutant loads. Non-point source pollutant loads in different year types (wet, normal, and dry years) were estimated using the SWAT model in the Zhangweinan River basin, and the spatial distribution characteristics of the loads were analyzed on the basis of the simulation results. During wet years, total nitrogen (TN) and total phosphorus (TP) accounted for 0.07% and 27.24% of the total non-point source pollutant loads, respectively. Spatially, agricultural and residential land with steep slopes are the regions that contribute the most non-point source pollutant loads in the basin. Relative to the loads during the baseline period, 47 BMP scenarios were set up to simulate the reduction efficiency for 5 kinds of pollutants (organic nitrogen, organic phosphorus, nitrate nitrogen, dissolved phosphorus, and mineral phosphorus) in the 8 priority-controlled subbasins. By comparing cost-effectiveness among the different BMP scenarios, constructing vegetated ditches was identified as the best measure to reduce TN and TP, with unit reduction costs of 16.11-151.28 yuan·kg⁻¹ for TN and 100-862.77 yuan·kg⁻¹ for TP, making it the most cost-effective of the 47 BMP scenarios. The results could provide a scientific basis and technical support for environmental protection and sustainable utilization of water resources in the Zhangweinan River basin.

  8. Zero-Profile Spacer Versus Cage-Plate Construct in Anterior Cervical Diskectomy and Fusion for Multilevel Cervical Spondylotic Myelopathy: Systematic Review and Meta-Analysis.

    PubMed

    Tong, Min-Ji; Xiang, Guang-Heng; He, Zi-Li; Chen, De-Heng; Tang, Qian; Xu, Hua-Zi; Tian, Nai-Feng

    2017-08-01

    Anterior cervical diskectomy and fusion with a plate-screw construct has gradually been applied for multilevel cervical spondylotic myelopathy in recent years. However, the long cervical plate was associated with complications including breakage or loosening of the plate and screws, tracheoesophageal injury, neurovascular injury, and postoperative dysphagia. To reduce these complications, the zero-profile spacer has been introduced. This meta-analysis was performed to compare the clinical and radiologic outcomes of the zero-profile spacer versus the cage-plate construct for the treatment of multilevel cervical spondylotic myelopathy. We systematically searched the MEDLINE, Springer, and Web of Science databases for relevant studies that compared the clinical and radiologic outcomes of zero-profile spacer versus cage and plate for multilevel cervical spondylotic myelopathy. Risk of bias in the included studies was assessed. Pooled estimates and corresponding 95% confidence intervals were calculated. On the basis of predefined inclusion criteria, 7 studies with a total of 409 patients were included in this analysis. The pooled data revealed that the zero-profile spacer was associated with a decreased dysphagia rate at 2, 3, and 6 months postoperatively when compared with the cage-plate group. Both techniques had similar perioperative outcomes, functional outcome, radiologic outcome, and dysphagia rate immediately and at >1 year after operation. On the basis of the available evidence, the zero-profile spacer was more effective in reducing the postoperative dysphagia rate for multilevel cervical spondylotic myelopathy. Both devices were safe in anterior cervical surgeries, and they had similar efficacy in improving the functional and radiologic outcomes. More randomized controlled trials are needed to compare these 2 devices. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Vertical intensity modulation for improved radiographic penetration and reduced exclusion zone

    NASA Astrophysics Data System (ADS)

    Bendahan, J.; Langeveld, W. G. J.; Bharadwaj, V.; Amann, J.; Limborg, C.; Nosochkov, Y.

    2016-09-01

    In the present work, a method to direct the X-ray beam in real time to the desired locations in the cargo to increase penetration and reduce exclusion zone is presented. Cargo scanners employ high energy X-rays to produce radiographic images of the cargo. Most new scanners employ dual-energy to produce, in addition to attenuation maps, atomic number information in order to facilitate the detection of contraband. The electron beam producing the bremsstrahlung X-ray beam is usually directed approximately to the center of the container, concentrating the highest X-ray intensity to that area. Other parts of the container are exposed to lower radiation levels due to the large drop-off of the bremsstrahlung radiation intensity as a function of angle, especially for high energies (>6 MV). This results in lower penetration in these areas, requiring higher power sources that increase the dose and exclusion zone. The capability to modulate the X-ray source intensity on a pulse-by-pulse basis to deliver only as much radiation as required to the cargo has been reported previously. This method is, however, controlled by the most attenuating part of the inspected slice, resulting in excessive radiation to other areas of the cargo. A method to direct a dual-energy beam has been developed to provide a more precisely controlled level of required radiation to highly attenuating areas. The present method is based on steering the dual-energy electron beam using magnetic components on a pulse-to-pulse basis to a fixed location on the X-ray production target, but incident at different angles so as to direct the maximum intensity of the produced bremsstrahlung to the desired locations. The details of the technique and subsystem and simulation results are presented.

  10. Analysis of mutational resistance to trimethoprim in Staphylococcus aureus by genetic and structural modelling techniques.

    PubMed

    Vickers, Anna A; Potter, Nicola J; Fishwick, Colin W G; Chopra, Ian; O'Neill, Alex J

    2009-06-01

    This study sought to expand knowledge on the molecular mechanisms of mutational resistance to trimethoprim in Staphylococcus aureus, and the fitness costs associated with resistance. Spontaneous trimethoprim-resistant mutants of S. aureus SH1000 were recovered in vitro, resistance genotypes characterized by DNA sequencing of the gene encoding the drug target (dfrA) and the fitness of mutants determined by pair-wise growth competition assays with SH1000. Novel resistance genotypes were confirmed by ectopic expression of dfrA alleles in a trimethoprim-sensitive S. aureus strain. Molecular models of S. aureus dihydrofolate reductase (DHFR) were constructed to explore the structural basis of trimethoprim resistance, and to rationalize the observed in vitro fitness of trimethoprim-resistant mutants. In addition to known amino acid substitutions in DHFR mediating trimethoprim resistance (F99Y and H150R), two novel resistance polymorphisms (L41F and F99S) were identified among the trimethoprim-resistant mutants selected in vitro. Molecular modelling of mutated DHFR enzymes provided insight into the structural basis of trimethoprim resistance. Calculated binding energies of the substrate (dihydrofolate) for the mutant and wild-type enzymes were similar, consistent with the apparent lack of fitness costs for the resistance mutations in vitro. Reduced susceptibility to trimethoprim of DHFR enzymes carrying substitutions L41F, F99S, F99Y and H150R appears to result from structural changes that reduce trimethoprim binding to the enzyme. However, the mutations conferring trimethoprim resistance are not associated with fitness costs in vitro, suggesting that the survival of trimethoprim-resistant strains emerging in the clinic may not be subject to a fitness disadvantage.

  11. Cognitive Mapping Techniques: Implications for Research in Engineering and Technology Education

    ERIC Educational Resources Information Center

    Dixon, Raymond A.; Lammi, Matthew

    2014-01-01

    The primary goal of this paper is to present the theoretical basis and application of two types of cognitive maps, concept map and mind map, and explain how they can be used by educational researchers in engineering design research. Cognitive mapping techniques can be useful to researchers as they study students' problem solving strategies…

  12. Minimizing Experimental Error in Thinning Research

    Treesearch

    C. B. Briscoe

    1964-01-01

    Many diverse approaches have been made to prescribing and evaluating thinnings on an objective basis. None of the techniques proposed has been widely accepted. Indeed, none has been proven superior to the others nor even widely applicable. There are at least two possible reasons for this: none of the techniques suggested is of any general utility and/or experimental error...

  13. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine

    2009-03-05

    In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is then boosted by the Fuzzy Min-Max technique.
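
    As an illustration of the center-selection step, here is a minimal, self-contained sketch of choosing RBF centers with Lloyd's k-means and evaluating Gaussian radial-basis activations. It is a generic sketch, not the authors' RRBF implementation, and it omits the Fuzzy Min-Max initialization (a plain random initialization is used instead).

```python
import numpy as np

def kmeans_centers(X, k, iters=50, seed=0):
    """Lloyd's k-means: return k centers that (locally) minimize the
    mean squared distance of points to their nearest center."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every point to its nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def rbf_features(X, centers, sigma=1.0):
    """Gaussian radial-basis activations for each (input, center) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Two well-separated clusters; k-means should place one center in each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(5.0, 0.3, size=(50, 2))])
centers = kmeans_centers(X, k=2)
phi = rbf_features(X, centers)          # (100, 2) design matrix
print(np.sort(centers[:, 0]))
```

    In an RBF network these activations would feed a linear output layer, so the quality of the chosen centers directly controls the mean square error being minimized.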

  14. Detection of physiological noise in resting state fMRI using machine learning.

    PubMed

    Ash, Tom; Suckling, John; Walter, Martin; Ooi, Cinly; Tempelmann, Claus; Carpenter, Adrian; Williams, Guy

    2013-04-01

    We present a technique for predicting cardiac and respiratory phase on a time point by time point basis from fMRI image data. These predictions have utility in attempts to detrend effects of the physiological cycles from fMRI image data. We demonstrate the technique both in the case where it can be trained on a subject's own data and when it cannot. The prediction scheme uses a multiclass support vector machine algorithm. Predictions are demonstrated to have a close fit to recorded physiological phase, with median Pearson correlation scores between recorded and predicted values of 0.99 for the best case scenario (cardiac cycle trained on a subject's own data) down to 0.83 for the worst case scenario (respiratory predictions trained on group data), as compared to a random chance correlation score of 0.70. When predictions were used with RETROICOR--a popular physiological noise removal tool--the effects were compared to using recorded phase values. Using Fourier transforms and seed-based correlation analysis, RETROICOR is shown to produce similar effects whether recorded physiological phase values are used or they are predicted using this technique. This was seen in similar levels of noise reduction in the same regions of the Fourier spectra and changes in seed-based correlation scores in similar regions of the brain. This technique has a use in situations where data from direct monitoring of the cardiac and respiratory cycles are incomplete or absent, but researchers still wish to reduce this source of noise in the image data. Copyright © 2011 Wiley Periodicals, Inc.

  15. Use of X-ray diffraction technique and chemometrics to aid soil sampling strategies in traceability studies.

    PubMed

    Bertacchini, Lucia; Durante, Caterina; Marchetti, Andrea; Sighinolfi, Simona; Silvestri, Michele; Cocchi, Marina

    2012-08-30

    The aim of this work is to assess the potential of the X-ray powder diffraction technique as a fingerprinting technique, i.e., as a preliminary tool to assess soil sample variability in terms of geochemical features, in the context of food geographical traceability. A correct approach to the sampling procedure is always a critical issue in scientific investigation. In particular, in food geographical traceability studies, where cause-effect relations between the soil of origin and the final foodstuff are sought, representative sampling of the territory under investigation is certainly imperative. This research concerns a pilot study investigating field homogeneity with respect to both field extension and sampling depth, also taking into account seasonal variability. Four Lambrusco production sites of the Modena district were considered. The X-ray diffraction spectra, collected on the powder of each soil sample, were treated as fingerprint profiles to be deciphered by multivariate and multi-way data analysis, namely PCA and PARAFAC. The differentiation pattern observed in the soil samples, as obtained by this fast and non-destructive analytical approach, matches well with the results obtained by characterization with other, costly analytical techniques, such as ICP/MS, GFAAS, FAAS, etc. Thus, the proposed approach furnishes a rational basis to reduce the number of soil samples to be collected for further analytical characterization (metal content, isotopic ratios of radiogenic elements, etc.) while maintaining an exhaustive description of the investigated production areas. Copyright © 2012 Elsevier B.V. All rights reserved.
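
    The fingerprinting step, treating each spectrum as a profile and deciphering it with PCA, can be sketched as follows. The data here are synthetic stand-ins: two Gaussian "diffraction peaks" mixed in random proportions, not real XRD measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for XRD profiles: every sample is a random mixture of
# two "mineralogical" peak shapes, plus measurement noise.
n_samples, n_channels = 20, 300
theta = np.linspace(0.0, 1.0, n_channels)
comp1 = np.exp(-((theta - 0.3) ** 2) / 0.002)    # pseudo diffraction peak 1
comp2 = np.exp(-((theta - 0.7) ** 2) / 0.002)    # pseudo diffraction peak 2
weights = rng.uniform(0.0, 1.0, size=(n_samples, 2))
X = weights @ np.vstack([comp1, comp2])
X += 0.01 * rng.normal(size=(n_samples, n_channels))

# PCA via SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / (S ** 2).sum()    # variance fraction per component
scores = U * S                          # per-sample fingerprint coordinates
print(explained[:3])
```

    When a few components explain nearly all of the variance, the sample scores give a low-dimensional fingerprint that can guide which soils actually need the costlier ICP/MS-style characterization.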

  16. A novel technique for fetal heart rate estimation from Doppler ultrasound signal

    PubMed Central

    2011-01-01

    Background The currently used fetal monitoring instrumentation that is based on the Doppler ultrasound technique provides the fetal heart rate (FHR) signal with limited accuracy. This is particularly noticeable as a significant decrease of a clinically important feature: the variability of the FHR signal. The aim of our work was to develop a novel efficient technique for processing the ultrasound signal which could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods We have proposed a new technique which provides true beat-to-beat values of the FHR signal through multiple measurements of a given cardiac cycle in the ultrasound signal. The method consists of three steps: dynamic adjustment of the autocorrelation window, adaptive autocorrelation peak detection, and determination of beat-to-beat intervals. The estimated fetal heart rate values and the calculated indices describing the variability of FHR were compared to reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables calculation of reliable parameters describing the variability of FHR. Comparing these results with the other method for FHR estimation, we showed that in our approach a much lower number of measured cardiac cycles was rejected as invalid. Conclusions The proposed method for fetal heart rate determination on a beat-to-beat basis offers a high accuracy of the heart interval measurement, enabling reliable quantitative assessment of FHR variability while reducing the number of invalid cardiac cycle measurements. PMID:21999764
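
    The core autocorrelation step can be illustrated with a small sketch. Everything here is a stand-in: a synthetic Doppler envelope at an assumed ~140 bpm, a fixed search window of plausible lags, and none of the paper's dynamic window adjustment or adaptive peak detection.

```python
import numpy as np

fs = 1000.0                       # Hz, assumed envelope sampling rate
rng = np.random.default_rng(3)

# Synthetic Doppler envelope: periodic cardiac activity at ~140 bpm plus noise.
true_interval = 60.0 / 140.0      # seconds per beat
t = np.arange(0.0, 4.0, 1.0 / fs)
envelope = np.maximum(np.cos(2 * np.pi * t / true_interval), 0.0) ** 4
envelope += 0.2 * rng.normal(size=t.size)

def interval_by_autocorr(x, fs, min_bpm=60.0, max_bpm=240.0):
    """Estimate the cardiac interval as the lag of the autocorrelation peak
    inside a physiologically plausible lag range."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # lags 0..N-1
    lo = int(fs * 60.0 / max_bpm)    # shortest plausible beat interval
    hi = int(fs * 60.0 / min_bpm)    # longest plausible beat interval
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs

est = interval_by_autocorr(envelope, fs)
print(est, 60.0 / est)               # estimated interval (s) and implied bpm
```

    The paper's contribution lies precisely in what this sketch omits: adjusting the autocorrelation window dynamically and detecting peaks adaptively, so that each cardiac cycle is measured several times and true beat-to-beat intervals are obtained.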

  17. Measuring bio-oil upgrade intermediates and corrosive species with polarity-matched analytical approaches

    DOE PAGES

    Connatser, Raynella M.; Lewis, Sr., Samuel Arthur; Keiser, James R.; ...

    2014-10-03

    Integrating biofuels with conventional petroleum products requires improvements in processing to increase blendability with existing fuels. This work demonstrates analysis techniques for more hydrophilic bio-oil liquids that give improved quantitative and qualitative description of the total acid content and organic acid profiles. To protect infrastructure from damage and reduce the cost associated with upgrading, accurate determination of acid content and representative chemical compound analysis are central imperatives in assessing both the corrosivity and the progress toward removing oxygen and acidity in processed biomass liquids. Established techniques form an ample basis for bio-liquids evaluation. However, early in the upgrading process, the unique physical phases and varied hydrophilicity of many pyrolysis liquids can render analytical methods originally designed for use in petroleum-derived oils inadequate. In this work, the water solubility of the organic acids present in bio-oils is exploited in a novel extraction and titration technique, followed by analysis on the water-based capillary electrophoresis (CE) platform. The modification of ASTM D664, the standard for Total Acid Number (TAN), to include aqueous carrier solvents improves the utility of that approach for quantifying acid content in hydrophilic bio-oils. Termed AMTAN (modified Total Acid Number), this technique offers 1.2% relative standard deviation and a dynamic range comparable to the conventional ASTM method. Furthermore, the results of corrosion product evaluations using several different sources of real bio-oil are discussed in the context of the unique AMTAN and CE analytical approaches developed to facilitate those measurements.

  18. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Po Jen; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw; Rapallo, Arnaldo

    Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured.
Differing from previous computational studies that employed implicit solvent, in this work we performed classical molecular dynamics simulations on a realistic model solution with the peptide embedded in an explicit water environment, and calculated its dynamic properties both directly from the simulations and by the diffusion theory in a reduced statistical-mechanical approach within the HBA, on the premise that the mode-coupling approach to the diffusion theory can give both the long-range and local dynamics starting from equilibrium averages obtained from detailed atomistic simulations.

  19. 15 CFR 923.44 - State review on a case-by-case basis of actions affecting land and water uses subject to the...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false State review on a case-by-case basis of actions affecting land and water uses subject to the management program-Technique C. 923.44 Section 923.44 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION,...

  20. 15 CFR 923.44 - State review on a case-by-case basis of actions affecting land and water uses subject to the...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false State review on a case-by-case basis of actions affecting land and water uses subject to the management program-Technique C. 923.44 Section 923.44 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION,...

  1. Novel throughput phenotyping platforms in plant genetic studies.

    PubMed

    Montes, Juan M; Melchinger, Albrecht E; Reif, Jochen C

    2007-10-01

    Unraveling the genetic basis of complex traits in plants is limited by the lack of appropriate phenotyping platforms that enable high-throughput screening of many genotypes in multilocation field trials. Near-infrared spectroscopy on agricultural harvesters and spectral reflectance of plant canopies have recently been reported as promising components of novel phenotyping platforms. Understanding the genetic basis of complex traits is now within reach with the use of these new techniques.

  2. Use of the Transcendental Meditation Technique to Reduce Symptoms of Attention Deficit Hyperactivity Disorder (ADHD) by Reducing Stress and Anxiety: An Exploratory Study

    ERIC Educational Resources Information Center

    Grosswald, Sarina J.; Stixrud, William R.; Travis, Fred; Bateh, Mark A.

    2008-01-01

    This exploratory study tested the feasibility of using the Transcendental Meditation[R] technique to reduce stress and anxiety as a means of reducing symptoms of ADHD. Students ages 11-14 were taught the technique, and practiced it twice daily in school. Common ADHD inventories and performance measures of executive function were administered at…

  3. [Acupuncture combined with medication for morning blood pressure of essential hypertension].

    PubMed

    Zhang, Yi; Du, Yuzheng

    2018-04-12

    To evaluate, on the basis of western medication, the advantages for the morning blood pressure of acupuncture at Fengchi (GB 20) and Neck-Jiaji (EX-B 2) combined with the acupuncture technique for activating blood circulation, eliminating wind and regulating the liver and spleen in patients with essential hypertension. A total of 90 patients with essential hypertension of mild or moderate degree were randomized into a medication group (30 cases, 3 dropouts), a No.1 acupuncture group (30 cases, 2 dropouts) and a No.2 acupuncture group (30 cases, 1 dropout). In the medication group, Adalat was prescribed for oral administration, 30 mg at 7 am every day, continuously for 6 weeks. In the No.1 acupuncture group, on the basis of the same treatment as the medication group, the acupuncture technique for activating blood circulation, eliminating wind and regulating the liver and spleen was applied at the acupoints Renying (ST 9), Hegu (LI 4), Taichong (LR 3), Quchi (LI 11) and Zusanli (ST 36). In the No.2 acupuncture group, on the basis of the same treatment as the No.1 acupuncture group, Fengchi (GB 20) and Neck-Jiaji (EX-B 2) were added in acupuncture. Acupuncture was given between 8 am and 10 am every day, once a day, 5 times a week, for a total of 6 weeks. Before treatment and at 2, 4 and 6 weeks of treatment, the morning blood pressure, its control rate and the symptom score were observed in the three groups; the morning blood pressure was also followed up at 3 and 6 months. Compared with the values before treatment, at 2, 4 and 6 weeks of treatment the blood pressure levels were reduced in all three groups (P<0.05, P<0.01). After 2 weeks of treatment, the differences in the morning blood pressure and its control rate among the three groups were not significant (all P>0.05).
At 4 and 6 weeks of treatment, the morning blood pressure levels in the No.2 acupuncture group were lower than those in the No.1 acupuncture group, and the results in the No.1 and No.2 acupuncture groups were both lower than those in the medication group (all P<0.05). At the 3- and 6-month follow-up visits, the differences in morning blood pressure among the three groups were not significant (all P>0.05). At 2, 4 and 6 weeks of treatment, the symptom scores were reduced in all three groups compared with those before treatment (all P<0.05). The symptom scores in the No.1 and No.2 acupuncture groups were both lower than those in the medication group (all P<0.05), while the difference between the No.1 and No.2 acupuncture groups was not significant (all P>0.05). The comprehensive treatment of acupuncture at Fengchi (GB 20) and Neck-Jiaji (EX-B 2) combined with the acupuncture technique for activating blood circulation, eliminating wind and regulating the liver and spleen reduces the morning blood pressure in patients with essential hypertension and relieves hypertension symptoms such as headache, vertigo and tinnitus, and these effects are better than those of the acupuncture technique for activating blood circulation, eliminating wind and regulating the liver and spleen alone.

  4. Direct recovery of regional tracer kinetics from temporally inconsistent dynamic ECT projections using dimension-reduced time-activity basis

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-11-01

    We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10⁵ total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
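
    The reduced-basis construction described above can be sketched in a few lines. This is an illustrative reconstruction, not the author's code: the time grid, rate-constant range, and fixed basis size are assumptions, and the convolution with a measured input function is omitted.

```python
import numpy as np

# Span the physiologically anticipated compartmental modes exp(-k*t)
# with a small orthogonal basis obtained from the SVD (illustrative values).
t = np.linspace(0.0, 60.0, 121)          # hypothetical sampling times (min)
ks = np.logspace(-2, 0, 50)              # candidate rate constants (1/min)
modes = np.exp(-np.outer(t, ks))         # each column is one decay mode

U, s, _ = np.linalg.svd(modes, full_matrices=False)
B = U[:, :8]                             # a handful of vectors suffices

# Coefficients of the reduced basis via the Moore-Penrose pseudoinverse.
y = np.exp(-0.3 * t)                     # an arbitrary mode in the range
coef = np.linalg.pinv(B) @ y
rel_err = np.linalg.norm(B @ coef - y) / np.linalg.norm(y)
print(B.shape, rel_err)
```

    The rapid decay of the singular values of families of exponentials is what makes such a small basis accurate over the whole rate range.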

  5. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
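
    The first of the three procedures, proper orthogonal decomposition (POD), can be sketched compactly. The snapshot data below are synthetic stand-ins (two spatial patterns plus noise), not the article's structural responses:

```python
import numpy as np

# POD sketch: extract a reduced modal basis from response snapshots via SVD.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
x = np.linspace(0, 1, 50)
phi1, phi2 = np.sin(np.pi * x), np.sin(2 * np.pi * x)
snapshots = (np.outer(np.sin(5 * t), phi1)
             + 0.3 * np.outer(np.cos(9 * t), phi2)
             + 0.01 * rng.standard_normal((t.size, x.size)))

# POD modes are the right singular vectors; the singular-value "energy"
# tells how many modes to keep in the reduced-order model.
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=0), full_matrices=False)
energy = s**2 / np.sum(s**2)
n_keep = int(np.sum(np.cumsum(energy) < 0.99)) + 1
basis = Vt[:n_keep].T     # columns span the dominant response subspace
print(n_keep)
```

    A nonlinear reduced-order simulation then projects the equations of motion onto `basis` instead of the physical degrees of freedom.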

  6. Novel neural control for a class of uncertain pure-feedback systems.

    PubMed

    Shen, Qikun; Shi, Peng; Zhang, Tianping; Lim, Cheng-Chew

    2014-04-01

    This paper is concerned with the problem of adaptive neural tracking control for a class of uncertain pure-feedback nonlinear systems. Using the implicit function theorem and backstepping technique, a practical robust adaptive neural control scheme is proposed to guarantee that the tracking error converges to an adjustable neighborhood of the origin by choosing appropriate design parameters. In contrast to conventional Lyapunov-based design techniques, an alternative Lyapunov function is constructed for the development of the control law and learning algorithms. Differing from the existing results in the literature, the control scheme does not need to compute the derivatives of virtual control signals at each step of the backstepping design procedure. Furthermore, the scheme requires the desired trajectory and its first derivative rather than its first n derivatives. In addition, the useful approximation property of the radial basis function, which is used in the control design, is explored. Simulation results illustrate the effectiveness of the proposed techniques.

  7. On the accuracy of explicitly correlated methods to generate potential energy surfaces for scattering calculations and clustering: application to the HCl-He complex.

    PubMed

    Ajili, Yosra; Hammami, Kamel; Jaidane, Nejm Eddine; Lanza, Mathieu; Kalugina, Yulia N; Lique, François; Hochlaf, Majdi

    2013-07-07

    We closely compare the accuracy of multidimensional potential energy surfaces (PESs) generated by the recently developed explicitly correlated coupled cluster (CCSD(T)-F12) methods in connection with the cc-pVXZ-F12 (X = D, T) and aug-cc-pVTZ basis sets and those deduced using the well-established orbital-based coupled cluster techniques employing correlation consistent atomic basis sets (aug-cc-pVXZ, X = T, Q, 5) and extrapolated to the complete basis set (CBS) limit. This work is performed on the benchmark rare gas-hydrogen halide interaction (HCl-He) system. These PESs are then incorporated into quantum close-coupling scattering dynamical calculations in order to check the impact of the accuracy of the PES on the scattering calculations. For this system, we deduced inelastic collisional data including collisional (de-)excitation and pressure broadening cross sections. Our work shows that the CCSD(T)-F12/aug-cc-pVTZ PES correctly describes the repulsive wall, the van der Waals minimum and long range internuclear distances, whereas the cc-pVXZ-F12 (X = D, T) basis sets are not diffuse enough for that purpose. Interestingly, the collision cross sections deduced from the CCSD(T)-F12/aug-cc-pVTZ PES are in excellent agreement with those obtained with the CCSD(T)/CBS methodology. The positions of the resonances and the general shapes of these cross sections almost coincide. Since the cost of the electronic structure computations is reduced by several orders of magnitude when using CCSD(T)-F12/aug-cc-pVTZ compared to CCSD(T)/CBS methodology, this approach can be recommended as an alternative for the generation of PESs of molecular clusters, for the interpretation of accurate scattering experiments, and for wide production of collisional data to be included in astrophysical and atmospheric models.
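
    The CBS limit mentioned above is typically obtained by two-point extrapolation in the basis-set cardinal number. A sketch using the common X⁻³ formula of Helgaker and co-workers; the exact scheme used by the authors is not stated in the abstract, and the energies below are made-up illustrative numbers:

```python
# Two-point X^-3 extrapolation of correlation energies to the CBS limit.
def cbs_extrapolate(e_high, e_low, x_high, x_low):
    """Helgaker-style extrapolation from cardinal numbers x_low < x_high."""
    return (x_high**3 * e_high - x_low**3 * e_low) / (x_high**3 - x_low**3)

# Hypothetical aug-cc-pVTZ (X = 3) and aug-cc-pVQZ (X = 4) correlation
# energies in hartree; the numbers are invented for illustration.
e_tz, e_qz = -0.3501, -0.3572
e_cbs = cbs_extrapolate(e_qz, e_tz, 4, 3)
print(round(e_cbs, 4))
```

    The extrapolated value lies below both finite-basis energies, reflecting the slow X⁻³ convergence of the correlation energy.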

  8. Theoretical, spectroscopic and antioxidant activity studies on (E)-2-[(2-fluorophenylimino)methyl]-4-hydroxyphenol and (E)-2-[(3-fluorophenylimino)methyl]-4-hydroxyphenol compounds

    NASA Astrophysics Data System (ADS)

    Alaşalvar, Can; Güder, Aytaç; Gökçe, Halil; Albayrak Kaştaş, Çiğdem; Çatak Çelik, Raziye

    2017-04-01

    We studied the synthesis and characterization of the title compounds by X-ray crystallography, FT-IR spectroscopy, UV-Vis spectroscopy and density functional theory (DFT). The optimized geometry, vibrational frequencies and UV-Vis parameters of the title compounds in the ground state have been calculated using B3LYP with the 6-311+G(d,p) basis set. The HOMO-LUMO energy gap and non-linear optical properties were computed at the B3LYP/6-311+G(d,p) level. The antioxidant properties of the title compounds (CMPD1 and CMPD2) were investigated using different methods, i.e. ferric reducing antioxidant power (FRAP), hydrogen peroxide scavenging activity (HPSA), free radical scavenging activity (FRSA) and ferrous ion chelating activity (FICA). In comparison with standard antioxidants (BHA, BHT, and α-tocopherol), CMPD1 and CMPD2 show effective FRAP, HPSA, FRSA and FICA.

  9. Guidelines for imaging retinoblastoma: imaging principles and MRI standardization.

    PubMed

    de Graaf, Pim; Göricke, Sophia; Rodjan, Firazia; Galluzzi, Paolo; Maeder, Philippe; Castelijns, Jonas A; Brisse, Hervé J

    2012-01-01

    Retinoblastoma is the most common intraocular tumor in children. The diagnosis is usually established by the ophthalmologist on the basis of fundoscopy and US. Together with US, high-resolution MRI has emerged as an important imaging modality for pretreatment assessment, i.e. for diagnostic confirmation, detection of local tumor extent, detection of associated developmental malformation of the brain and detection of associated intracranial primitive neuroectodermal tumor (trilateral retinoblastoma). Minimum requirements for pretreatment diagnostic evaluation of retinoblastoma or mimicking lesions are presented, based on consensus among members of the European Retinoblastoma Imaging Collaboration (ERIC). The most appropriate techniques for imaging in a child with leukocoria are reviewed. CT is no longer recommended. Implementation of a standardized MRI protocol for retinoblastoma in clinical practice may benefit children worldwide, especially those with hereditary retinoblastoma, since a decreased use of CT reduces the exposure to ionizing radiation.

  10. [Increasingly better diagnosis and treatment of endometrial cancer].

    PubMed

    Salehi, Sahar; Stålberg, Karin; Marcickiewicz, Janusz; Rosenberg, Per; Falconer, Henrik

    2015-12-08

    Endometrial cancer is the most common gynecological cancer in developed countries, and the observed rise in incidence is mainly driven by lifestyle factors including obesity and diabetes. The management of the disease has undergone major changes in the past 5-10 years. Morphological and genetic studies constitute the basis for the new classification of the disease, and data emerging from the Cancer Genome Atlas suggest that genomic patterns differ within the two types of endometrial cancer. The prognosis seems to be related to occult lymphatic spread, but the role of lymphadenectomy is heavily debated. Development of novel biomarkers, the sentinel lymph node technique and refined radiological methods may reduce the need for comprehensive staging in the future. The results from the Cancer Genome Atlas suggest that women with endometrial cancer may benefit from "targeted therapies" in the evolving era of personalised medicine.

  11. FY02 CBNP Annual Report Input: Bioinformatics Support for CBNP Research and Deployments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slezak, T; Wolinsky, M

    2002-10-31

    The events of FY01 dynamically reprogrammed the objectives of the CBNP bioinformatics support team, to meet rapidly-changing Homeland Defense needs and requests from other agencies for assistance: Use computational techniques to determine potential unique DNA signature candidates for microbial and viral pathogens of interest to CBNP researchers and to our collaborating partner agencies such as the Centers for Disease Control and Prevention (CDC), U.S. Department of Agriculture (USDA), Department of Defense (DOD), and Food and Drug Administration (FDA). Develop effective electronic screening measures for DNA signatures to reduce the cost and time of wet-bench screening. Build a comprehensive system for tracking the development and testing of DNA signatures. Build a chain-of-custody sample tracking system for field deployment of the DNA signatures as part of the BASIS project. Provide computational tools for use by CBNP Biological Foundations researchers.

  12. Space solar cell technology development - A perspective

    NASA Technical Reports Server (NTRS)

    Scott-Monck, J.

    1982-01-01

    The developmental history of photovoltaics is examined as a basis for predicting further advances to the year 2000. Transistor technology was the precursor of solar cell development. Terrestrial cells were modified for space through changes in geometry and size, as well as the use of Ag-Ti contacts and manufacture of a p-type base. The violet cell was produced for Comsat, and involved shallow junctions, new contacts, and an enhanced antireflection coating for better radiation tolerance. The driving force was the desire by private companies to reduce cost and weight for commercial satellite power supplies. Liquid phase epitaxial (LPE) GaAs cells are the latest advancement, having a 4 sq cm area and increased efficiency. GaAs cells are expected to be flight ready in the 1980s. Testing is still necessary to verify production techniques and the resistance to electron and photon damage. Research will continue in CVD cell technology, new panel technology, and ultrathin Si cells.

  13. Classification of hospital admissions into emergency and elective care: a machine learning approach.

    PubMed

    Krämer, Jonas; Schreyögg, Jonas; Busse, Reinhard

    2017-11-25

    Rising admissions from emergency departments (EDs) to hospitals are a primary concern for many healthcare systems. The issue of how to differentiate urgent admissions from non-urgent or even elective admissions is crucial. We aim to develop a model for classifying inpatient admissions based on a patient's primary diagnosis as either emergency care or elective care and predicting urgency as a numerical value. We use supervised machine learning techniques and train the model with physician-expert judgments. Our model is accurate (96%) and has a high area under the ROC curve (>0.99). We provide the first comprehensive classification and urgency categorization for inpatient emergency and elective care. This model assigns urgency values to every relevant diagnosis in the ICD catalog, and these values are easily applicable to existing hospital data. Our findings may provide a basis for policy makers to create incentives for hospitals to reduce the number of inappropriate ED admissions.
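
    The idea of learning a numeric urgency value from expert labels can be sketched with a toy logistic regression. Everything below (features, labels, dimensions) is invented for illustration; the authors' actual model and data are not described in the abstract:

```python
import numpy as np

# Toy urgency model: learn a score in [0, 1] per case from expert labels,
# then threshold it into emergency vs. elective.
rng = np.random.default_rng(1)
n = 400
X = rng.standard_normal((n, 3))        # hypothetical per-diagnosis features
true_w = np.array([2.0, -1.0, 0.5])
y = ((X @ true_w) > 0).astype(float)   # stand-in for expert judgments

w = np.zeros(3)
for _ in range(500):                   # plain gradient descent on log-loss
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

urgency = 1 / (1 + np.exp(-(X @ w)))   # numeric urgency value
accuracy = np.mean((urgency > 0.5) == y)
print(accuracy)
```

    In the article's setting, each ICD diagnosis would receive such an urgency value once, which is what makes the scores easy to apply to existing hospital data.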

  14. Ultrasonic Acoustic Emissions from the Sapwood of Thuja occidentalis Measured inside a Pressure Bomb 1

    PubMed Central

    Tyree, Melvin T.; Dixon, Michael A.; Thompson, Robert G.

    1984-01-01

    An improved method of counting acoustic emission (AE) events from water-stressed stems of cedar (Thuja occidentalis L.) is presented. Amplified AEs are analyzed on a real time basis by a microcomputer. The instrumentation counts AE events in a fashion nearly analogous to scintillation counting of radioactive materials. The technique was applied to measuring ultrasonic AEs from the stems of cedar inside a pressure bomb. The shoots were originally fully hydrated. When the shoots were dehydrated in the bomb by application of an overpressure, very few AEs were detected. When the bomb pressure was reduced after dehydration of the shoot, AE events could be detected. We conclude that ultrasonic AEs are caused by cavitation events (= structural breakdown of water columns in the tracheids of cedar) and not by the breaking of cellulose fibers in the wood. PMID:16663501

  15. Evaluation of three watering and mulching techniques on transplanted trees at Adobe Dam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, C.

    1983-06-01

    On the basis of these transplant studies, it is recommended that a minimal irrigation schedule be followed in the future for transplanted specimens. Transplanting early in the year reduces the watering requirements. Furthermore, after a one month adjustment period, trees watered once a month did well. Removal of supplemental water should be gradual, so as not to cause shock to the trees. Stone mulch appears to be both durable and effective as a mulching material, and can be cost effective if readily available on site. Fencing is a requirement for Palo Verde and Mesquite transplants but can be foregone on Creosote. Management following transplanting should include regular site inspections for signs of insect infestation and for watering problems. Inspection personnel should watch for signs that transplants have been watered adequately and the fences are intact and not restricting tree growth.

  16. Field deployment of a scope for growth assay involving Gammarus pulex, a freshwater benthic invertebrate.

    PubMed

    Maltby, L; Naylor, C; Calow, P

    1990-06-01

    Scope for growth (SfG) is a measure of the energy balance of an animal (i.e., the difference between energy intake and metabolic output). The SfG of marine invertebrates, particularly the mussel Mytilus edulis, has been successfully used as the basis of a field bioassay to detect a range of stresses both natural (temperature, food, salinity) and anthropogenic (hydrocarbons, sewage sludge). SfG of the freshwater amphipod Gammarus pulex was found to be a sensitive indicator of stress under laboratory conditions and here we describe the field deployment of this technique and present data from three field trials. In every case, SfG was reduced at the downstream polluted site compared with that at an upstream reference site. This reduction in SfG was the result of a decrease in energy intake (absorption) rather than an increase in energy expenditure (respiration).
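
    As defined above, SfG is simply energy intake minus metabolic output. A trivial sketch of the bookkeeping, with hypothetical energy budgets (the values are invented, not the trial data):

```python
# Scope for growth: energy absorbed minus energy respired.
def scope_for_growth(absorbed, respired):
    """Energy balance (e.g. J/h); positive means surplus available for growth."""
    return absorbed - respired

upstream = scope_for_growth(absorbed=12.0, respired=5.0)    # reference site
downstream = scope_for_growth(absorbed=7.5, respired=5.2)   # polluted site
print(upstream, downstream)
```

    The pattern reported in the trials corresponds to the downstream case: SfG falls mainly because absorption drops, not because respiration rises.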

  17. The use of scenario analysis in local public health departments: alternative futures for strategic planning.

    PubMed Central

    Venable, J M; Ma, Q L; Ginter, P M; Duncan, W J

    1993-01-01

    Scenario analysis is a strategic planning technique used to describe and evaluate an organization's external environment. A methodology for conducting scenario analysis using the Jefferson County Department of Health and the national, State, and county issues confronting it is outlined. Key health care and organizational issues were identified using published sources, focus groups, questionnaires, and personal interviews. The most important of these issues were selected by asking health department managers to evaluate the issues according to their probability of occurrence and likely impact on the health department. The high-probability, high-impact issues formed the basis for developing scenario logics that constitute the story line holding the scenario together. The results were a set of plausible scenarios that aided in strategic planning, encouraged strategic thinking among managers, eliminated or reduced surprise about environmental changes, and improved managerial discussion and communication. PMID:8265754

  18. [Therapy and course of recurrent odontogenic keratocyst. A case report].

    PubMed

    Schultz, Christoph B; Pajarola, Gion F; Grätz, Klaus W

    2005-01-01

    Recurrence following the surgical treatment of keratocysts of the jaws may present a major problem to the oral surgeon. Owing to the high recurrence rate, the surgical treatment of patients with odontogenic keratocysts is demanding and difficult. It has been suggested that recurrence is a consequence of microcysts remaining in the mucosa overlying the recurrent lesions. Attempts have been made to reduce this high recurrence rate by improved surgical techniques, such as removal of the superadjacent mucosa, smoothing of the osseous wall of the cystic cavity, resection of neighboring parts of the mandible, tanning of the epithelial lining of the cyst with Carnoy's solution, and marsupialisation. On the basis of a case report, the authors present the surgical treatment of recurrent odontogenic keratocysts at the Clinic for Maxillo-Facial Surgery, University Hospital Zurich, from the primary operation following the Brosch procedure in 1971 up to the latest cystectomy in 2004.

  19. [Sheng's acupuncture manipulation at bone-nearby acupoints and the academic thoughts].

    PubMed

    Sheng, Ji-li; Jin, Xiao-qing

    2014-11-01

    Sheng's acupuncture manipulation at bone-nearby acupoints is a set of needling manipulations developed by the chief TCM physician SHENG Xie-sun, summarized from his over 50 years of clinical experience and on the basis of the Internal Classic. In this manipulation, on the premise of acupoint selection based on syndrome differentiation, acupoints close to bone are preferentially selected and punctured with the needle tip directed toward the bone edge, followed by a technique to achieve the reducing purpose. Clinically, significant immediate analgesia can be achieved in pain disorders such as headache and toothache. Professor SHENG held that, corresponding to the location of needle insertion and the needling depth, special attention should be paid to the tissue layers the needle tip passes through, and that the site of needle insertion should be adjustable so as to ensure that the needle tip reaches the bone. This manipulation for analgesia provides guidance for acupuncture study, especially for research on the mechanism of acupuncture analgesia.

  20. Research on damping properties optimization of variable-stiffness plate

    NASA Astrophysics Data System (ADS)

    Wen-kai, QI; Xian-tao, YIN; Cheng, SHEN

    2016-09-01

    This paper investigates the damping optimization design of a variable-stiffness composite laminated plate, in which fibre paths can be continuously curved and fibre angles differ between regions. First, a damping prediction model is developed based on the modal dissipative energy principle and verified by comparison with modal testing results. Then, instead of fibre angles, the element stiffness and damping matrices are taken as design variables on the basis of a novel Discrete Material Optimization (DMO) formulation, greatly reducing the computation time. Finally, the modal damping capacity of arbitrary order is optimized using the MMA (Method of Moving Asymptotes) method, with a mode tracking technique employed to follow the variation of the mode shapes. The convergence of the interpolation function, the first-order specific damping capacity (SDC) optimization results, and the variation of the mode shape for different penalty factors are discussed. The results show that the damping properties of the variable-stiffness plate can be increased by 50%-70% after optimization.
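
    A DMO-style interpolation can be sketched as follows. This is a hedged illustration in the spirit of Stegmann and Lund's DMO weighting, not the authors' exact formulation; scalars stand in for the element stiffness/damping matrices, and all numbers are invented:

```python
import numpy as np

def dmo_weights(x, p):
    """DMO-style candidate weights: design variables x in [0, 1] are
    penalized with exponent p, pushing toward a single candidate."""
    xp = x**p
    w = np.array([xp[i] * np.prod(1 - np.delete(xp, i)) for i in range(x.size)])
    return w / w.sum()          # normalized weights over candidates

x = np.array([0.9, 0.3, 0.2])   # design variables for 3 candidate fibre angles
k = np.array([1.0, 0.7, 0.4])   # stand-ins for per-candidate element matrices
w_p1, w_p3 = dmo_weights(x, 1), dmo_weights(x, 3)
print(w_p1 @ k, w_p3 @ k)       # interpolated element "stiffness"
```

    Raising the penalty factor concentrates the weight on the leading candidate, which is how the optimizer is steered toward a discrete material (here, fibre-angle) choice.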

  1. Improving the efficiency of hierarchical equations of motion approach and application to coherent dynamics in Aharonov–Bohm interferometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Dong; Xu, RuiXue; Zheng, Xiao, E-mail: xz58@ustc.edu.cn

    2015-03-14

    Several recent advancements for the hierarchical equations of motion (HEOM) approach are reported. First, we propose an a priori estimate for the optimal number of basis functions for the reservoir memory decomposition. Second, we make use of the sparsity of auxiliary density operators (ADOs) and propose two ansatzes to screen out all the intrinsic zero ADO elements. Third, we propose a new truncation scheme by utilizing the time derivatives of higher-tier ADOs. These novel techniques greatly reduce the memory cost of the HEOM approach, and thus enhance its efficiency and applicability. The improved HEOM approach is applied to simulate the coherent dynamics of Aharonov–Bohm double quantum dot interferometers. Quantitatively accurate dynamics is obtained for both noninteracting and interacting quantum dots. The crucial role of the quantum phase for the magnitude of quantum coherence and quantum entanglement is revealed.

  2. NASTRAN implementation of an isoparametric doubly-curved quadrilateral shell element

    NASA Technical Reports Server (NTRS)

    Potvin, A. B.; Leick, R. D.

    1978-01-01

    A quadrilateral shell element, CQUAD4, was added to level 15.5 and subsequently to level 16.0 of NASTRAN. The element exhibited doubly curved surfaces and used biquadratic interpolation functions. Reduced integration techniques were used to improve the performance of the element in thin shell problems. The creation of several new bulk data items is discussed, along with a special module, GPNORM, to process SHLNORM bulk data cards. In addition to the theoretical basis for the element stiffness matrix, consistent mass and load matrices are presented. Several potential sources of degenerate behavior of the element were investigated. Guidelines for proper use of the element were suggested. Performance of the element on several widely published classical examples was demonstrated. The results showed a significant improvement over presently available NASTRAN shell elements for even the coarsest meshes. Potential applications to two classes of practical problems are discussed.

  3. Winds over the ocean as measured by the scatterometer on Seasat

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1981-01-01

    An analysis is presented of the relative accuracy of Seasat scatterometer measurements of the wind speeds and directions at 19.5 m altitude as compared to ground truth measurements taken by surface ships and instrumented buoys. Attention is given to the JASIN, QE II, and GOASEX surface data. The validity of 2-30 min averages taken from surface stations spread out over a wide area and serving as a basis for defining wind field averages over the 50 km resolution of SASS is examined. Satisfactory wind speeds were found to be available from SASS readings in the wind speed range 6-14 m/sec. The use of 25 SASS readings around a grid point was determined to reduce scatter to 0.25 m/sec when used in numerical weather prediction modeling. Improvements to the SASS techniques by the Seasat successor, NOSS, are discussed, and inclusion of momentum, heat, and water turbulent fluxes by NOSS is noted.
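
    The quoted scatter reduction from averaging 25 SASS readings is consistent with the 1/√N behavior of the standard error of a mean of independent readings. The per-reading scatter below is an assumed value chosen only to illustrate the arithmetic, not a figure from the study:

```python
import math

# Standard error of the mean: averaging N independent readings with
# standard deviation sigma reduces the scatter to sigma / sqrt(N).
sigma = 1.25          # assumed per-reading scatter (m/s), for illustration
n_readings = 25
scatter_of_mean = sigma / math.sqrt(n_readings)
print(scatter_of_mean)
```

    With 25 readings the scatter shrinks by a factor of 5, matching the order of the 0.25 m/sec figure quoted for numerical weather prediction use.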

  4. Ultraviolet observations of cool stars. VII - Local interstellar hydrogen and deuterium Lyman-alpha

    NASA Technical Reports Server (NTRS)

    Mcclintock, W.; Henry, R. C.; Linsky, J. L.; Moos, H. W.

    1978-01-01

    High-resolution Copernicus spectra of Epsilon Eri and Epsilon Ind containing interstellar hydrogen and deuterium L-alpha absorption lines are presented, reduced, and analyzed. Parameters of the interstellar hydrogen and deuterium toward these two stars are derived independently, without any assumptions concerning the D/H ratio. Copernicus spectra of Alpha Aur and Alpha Cen A are reanalyzed, and limits on the D/H number-density ratio consistent with the data for all four stars are considered. A comparison of the present estimates for the parameters of the local interstellar medium with those obtained by other techniques shows that there is no compelling evidence for significant variations in the hydrogen density and D/H ratio in the local interstellar medium. On this basis the hypothesis of an approaching local interstellar cloud proposed by Vidal-Madjar et al. (1978) is rejected.

  5. Age differences in decision making: a process methodology for examining strategic information processing.

    PubMed

    Johnson, M M

    1990-03-01

    This study explored the use of process tracing techniques in examining the decision-making processes of older and younger adults. Thirty-six college-age and thirty-six retirement-age participants decided which one of six cars they would purchase on the basis of computer-accessed data. They provided information search protocols. Results indicate that total time to reach a decision did not differ according to age. However, retirement-age participants used less information, spent more time viewing, and re-viewed fewer bits of information than college-age participants. Information search patterns differed markedly between age groups. Patterns of retirement-age adults indicated their use of noncompensatory decision rules which, according to decision-making literature (Payne, 1976), reduce cognitive processing demands. The patterns of the college-age adults indicated their use of compensatory decision rules, which have higher processing demands.

  6. Instrumentation for Air Pollution Monitoring

    ERIC Educational Resources Information Center

    Hollowell, Craig D.; McLaughlin, Ralph D.

    1973-01-01

    Describes the techniques which form the basis of current commercial instrumentation for monitoring five major gaseous atmospheric pollutants (sulfur dioxide, oxides of nitrogen, oxidants, carbon monoxide, and hydrocarbons). (JR)

  7. High-order cyclo-difference techniques: An alternative to finite differences

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Otto, John C.

    1993-01-01

    The summation-by-parts energy norm is used to establish a new class of high-order finite-difference techniques referred to here as 'cyclo-difference' techniques. These techniques are constructed cyclically from stable subelements, and require no numerical boundary conditions; when coupled with the simultaneous approximation term (SAT) boundary treatment, they are time asymptotically stable for an arbitrary hyperbolic system. These techniques are similar to spectral element techniques and are ideally suited for parallel implementation, but do not require special collocation points or orthogonal basis functions. The principal focus is on methods of sixth-order formal accuracy or less; however, these methods could be extended in principle to any arbitrary order of accuracy.
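    The summation-by-parts identity that underlies this energy-norm argument can be checked directly. The sketch below uses the classic second-order SBP first-derivative operator (a standard textbook operator, not the cyclo-difference scheme itself) and verifies the discrete integration-by-parts property:

```python
import numpy as np

# Classic second-order summation-by-parts (SBP) first-derivative operator:
# a diagonal norm matrix H and difference operator D satisfying
#   H @ D + (H @ D).T = B,  with  B = diag(-1, 0, ..., 0, 1),
# the discrete analogue of integration by parts behind energy-norm stability.
n, h = 12, 0.1
D = np.zeros((n, n))
for i in range(1, n - 1):                  # central differences in the interior
    D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h       # one-sided boundary closures
D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h

H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0

Q = H @ D
assert np.allclose(Q + Q.T, B)                    # summation-by-parts property
assert np.allclose(D @ (h * np.arange(n)), 1.0)   # exact derivative of linear data
```

    The same identity is what allows boundary terms to be controlled exactly in the stability proof, whatever the interior stencil.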

  8. The Role of Nuclear Medicine in the Staging and Management of Human Immune Deficiency Virus Infection and Associated Diseases.

    PubMed

    Ankrah, Alfred O; Glaudemans, Andor W J M; Klein, Hans C; Dierckx, Rudi A J O; Sathekge, Mike

    2017-06-01

    Human immune deficiency virus (HIV) is a leading cause of death. It attacks the immune system, thereby rendering the infected host susceptible to many HIV-associated infections, malignancies and neurocognitive disorders. The altered immune system affects the way the human host responds to disease, resulting in atypical presentation of these disorders. This presents a diagnostic challenge and the clinician must use all diagnostic avenues available to diagnose and manage these conditions. The advent of highly active antiretroviral therapy (HAART) has markedly reduced the mortality associated with HIV infection but has also brought in its wake problems associated with adverse effects or drug interaction and may even modulate some of the HIV-associated disorders to the detriment of the infected human host. Nuclear medicine techniques allow non-invasive visualisation of tissues in the body. By using this principle, pathophysiology in the body can be targeted and the treatment of diseases can be monitored. Being a functional imaging modality, it is able to detect diseases at the molecular level, and thus it has increased our understanding of the immunological changes in the infected host at different stages of the HIV infection. It also detects pathological changes much earlier than conventional imaging based on anatomical changes. This is important in the immunocompromised host as in some of the associated disorders a delay in diagnosis may have dire consequences. Nuclear medicine has played a huge role in the management of many HIV-associated disorders in the past and continues to help in the diagnosis, prognosis, staging, monitoring and assessing the response to treatment of many HIV-associated disorders. As our understanding of the molecular basis of disease increases nuclear medicine is poised to play an even greater role. 
In this review we highlight the functional basis of the clinicopathological correlation of HIV from a metabolic view and discuss how the use of nuclear medicine techniques, with particular emphasis of F-18 fluorodeoxyglucose, may have impact in the setting of HIV. We also provide an overview of the role of nuclear medicine techniques in the management of HIV-associated disorders.

  9. Fate of pathogen indicators in a domestic blend of food waste and wastewater through a two-stage anaerobic digestion system.

    PubMed

    Rounsefell, B D; O'Sullivan, C A; Chinivasagam, N; Batstone, D; Clarke, W P

    2013-01-01

    Anaerobic digestion is a viable on-site treatment technology for rich organic waste streams such as food waste and blackwater. In contrast to large-scale municipal wastewater treatment plants, which are typically located away from the community, the effluent from any type of on-site system is a potential pathogenic hazard because of the proximity of the system to the community. The native concentrations of the pathogen indicators Escherichia coli, Clostridium perfringens and somatic coliphage were tracked for 30 days under stable operation (organic loading rate (OLR) = 1.8 kgCOD m(-3) day(-1), methane yield = 52% on a chemical oxygen demand (COD) basis) of a two-stage laboratory-scale digester treating a mixture of food waste and blackwater. E. coli numbers were reduced by a factor of 10(6.4) in the thermophilic stage, from 10(7.5±0.3) to 10(1.1±0.1) cfu 100 mL(-1), but regenerated by a factor of 10(4) in the mesophilic stage. Neither the thermophilic nor mesophilic stages had any significant impact on C. perfringens concentrations. Coliphage concentrations were reduced by a factor of 10(1.4) across the two stages. The study shows that anaerobic digestion reduces pathogen counts only marginally, but that counts in effluent samples could readily be reduced to below detection limits by filtration through a 0.22 µm membrane, suggesting membrane filtration as a possible sanitation technique.

  10. Reducing Production Basis Risk through Rainfall Intensity Frequency (RIF) Indexes: Global Sensitivity Analysis' Implication on Policy Design

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael

    2016-04-01

    Weather index insurance promises financial resilience to farmers struck by harsh weather, with swift compensation at an affordable premium thanks to minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing creates "production basis risk": the selected weather indexes and their thresholds may not correspond to actual damages. To reduce basis risk without additional data-collection cost, we propose the use of rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that, even under similar cumulative rainfall and temperature conditions, yields can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results from global sensitivity analysis.

  11. Comparison of detailed and reduced kinetics mechanisms of silane oxidation in the basis of detonation wave structure problem

    NASA Astrophysics Data System (ADS)

    Fedorov, A. V.; Tropin, D. A.; Fomin, P. A.

    2018-03-01

    The paper deals with the problem of the structure of detonation waves in a silane-air mixture within the framework of a mathematical model of nonequilibrium gas dynamics. A detailed kinetic scheme of silane oxidation, as well as a newly developed reduced kinetic model of detonation combustion of silane, is used. On this basis, the detonation wave (DW) structure in a stoichiometric silane-air mixture and the dependence of the Chapman-Jouguet parameters on the stoichiometric ratio between the fuel (silane) and the oxidizer (air) were obtained.

  12. Modeling the suppression of sea lamprey populations by the release of sterile males or sterile females

    USGS Publications Warehouse

    Klassen, Waldemar; Adams, Jean V.; Twohey, Michael B.

    2004-01-01

    The suppressive effects of trapping adult sea lampreys, Petromyzon marinus Linnaeus, and releasing sterile males (SMRT) or females (SFRT) into a closed system were expressed in deterministic models. Suppression was modeled as a function of the proportion of the population removed by trapping, the number of sterile animals released, the reproductive rate and sex ratio of the population, and (for the SFRT) the rate of polygyny. Releasing sterile males reduced populations more quickly than did the release of sterile females. For a population in which 30% are trapped, sterile animals are initially released at a ratio of 10 sterile to 1 fertile animal, 5 adult progeny are produced per fertile mating, 60% are male, and males mate with an average of 1.65 females, the initial population is reduced 87% by SMRT and 68% by SFRT in one generation. The extent of suppression achieved is most sensitive to changes in the initial sterile release ratio. Given the current status of sea lamprey populations and trapping operations in the Great Lakes, the sterile-male-release technique has the best chance for success on a lake-wide basis if implemented in Lake Michigan. The effectiveness of the sterile-female-release technique should be investigated in a controlled study. Advancing trapping technology should be a high priority in the near term, and artificial rearing of sea lampreys to the adult stage should be a high priority in the long term. The diligent pursuit of sea lamprey suppression over a period of several decades can be expected to yield great benefits.
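    The one-generation arithmetic can be sketched under standard sterile-release assumptions (random mating, each female mating once, sterile animals released against the like-sex fertile count after trapping). This is a plausible reading of the deterministic model rather than the authors' exact formulation, but it reproduces the quoted 87% (SMRT) and 68% (SFRT) reductions:

```python
def smrt_reduction(N=1.0, trap=0.30, ratio=10.0, progeny=5.0, male_frac=0.60):
    """One-generation population reduction under sterile-male release (SMRT)."""
    M = male_frac * N * (1 - trap)            # fertile males left after trapping
    F = (1 - male_frac) * N * (1 - trap)      # fertile females left after trapping
    S = ratio * M                             # sterile males released (10:1)
    fertile_matings = F * M / (M + S)         # each female mates once, at random
    return 1 - progeny * fertile_matings / N

def sfrt_reduction(N=1.0, trap=0.30, ratio=10.0, progeny=5.0,
                   male_frac=0.60, polygyny=1.65):
    """One-generation population reduction under sterile-female release (SFRT)."""
    M = male_frac * N * (1 - trap)
    F = (1 - male_frac) * N * (1 - trap)
    Sf = ratio * F                            # sterile females released (10:1)
    matings = polygyny * M                    # male mating capacity, spread over
    fertile_matings = matings * F / (F + Sf)  # ...fertile and sterile females alike
    return 1 - progeny * fertile_matings / N

print(f"SMRT: {smrt_reduction():.1%}, SFRT: {sfrt_reduction():.1%}")
# SMRT: 87.3%, SFRT: 68.5%
```

    The calculation also makes the paper's sensitivity claim concrete: the fertile-mating fraction scales roughly as 1/(1 + ratio), so the release ratio dominates the outcome.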

  13. A fundamental study of suction for Laminar Flow Control (LFC)

    NASA Astrophysics Data System (ADS)

    Watmuff, Jonathan H.

    1992-10-01

    This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized and when implemented it will also lead to a significant reduction in the experimental run-time.

  14. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criterion. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse, then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
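    The sparsity argument can be illustrated with a toy 1-D analogue: recover a k-sparse "angiogram-like" signal from randomly undersampled Fourier (k-space) samples using iterative soft-thresholding (ISTA), one of the simplest l1 solvers. Clinical CS-MRI uses transform sparsity, incoherent variable-density sampling and far more sophisticated reconstructions, so this is only a sketch of the principle:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 8, 96                      # signal length, sparsity, measurements
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0    # k-sparse "image"

rows = rng.choice(n, size=m, replace=False)      # random k-space undersampling
F = np.fft.fft(np.eye(n), norm="ortho")[rows]    # undersampled DFT operator
y = F @ x                                        # acquired k-space data (m << n)

# ISTA: a gradient step on ||Fz - y||^2 followed by complex soft-thresholding,
# solving the l1-regularized (LASSO) reconstruction problem.
lam, z = 0.01, np.zeros(n, dtype=complex)
for _ in range(3000):
    z = z - F.conj().T @ (F @ z - y)       # step size 1: rows of F are orthonormal
    mag = np.abs(z)
    z = np.where(mag > lam, z * (1 - lam / np.maximum(mag, lam)), 0)

rel_err = np.linalg.norm(z - x) / np.linalg.norm(x)
assert rel_err < 0.1            # near-perfect recovery from 96 of 256 samples
```

    Sampling well below the Nyquist count (here 96 of 256 coefficients) still recovers the signal because the l1 penalty exploits its sparsity, which is exactly the leverage CS-MRI seeks.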

  15. A fundamental study of suction for Laminar Flow Control (LFC)

    NASA Technical Reports Server (NTRS)

    Watmuff, Jonathan H.

    1992-01-01

    This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedently large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics works stations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized and when implemented it will also lead to a significant reduction in the experimental run-time.

  16. Refined genetic algorithm -- Economic dispatch example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheble, G.B.; Brittig, K.

    1995-02-01

    A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
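    A minimal sketch of such a genetic-based dispatch is below, using the classic three-unit quadratic-cost test data, a penalty factor on the demand mismatch, tournament selection, blend crossover and elitism (illustrative operator choices, not necessarily those of the paper), benchmarked against the classical equal-incremental-cost (Lagrangian) solution:

```python
import numpy as np

# Quadratic fuel costs C_i(P) = a_i + b_i*P + c_i*P^2 (classic textbook data)
a = np.array([561.0, 310.0, 78.0])
b = np.array([7.92, 7.85, 7.97])
c = np.array([0.001562, 0.00194, 0.00482])
demand, pmax = 850.0, 600.0

def cost(P):                        # P has shape (pop, 3)
    return (a + b * P + c * P**2).sum(axis=1)

def fitness(P, penalty=1000.0):     # payoff + penalty factor on demand mismatch
    return cost(P) + penalty * np.abs(P.sum(axis=1) - demand)

rng = np.random.default_rng(1)
pop = rng.uniform(0, pmax, size=(60, 3))
for gen in range(400):
    f = fitness(pop)
    elite = pop[np.argmin(f)].copy()                 # elitism
    i, j = rng.integers(0, 60, (2, 60))
    parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])       # tournament
    w = rng.uniform(size=(60, 1))
    children = w * parents + (1 - w) * parents[rng.permutation(60)]  # blend crossover
    children += rng.normal(0, 10.0 * 0.99**gen, children.shape)      # decaying mutation
    pop = np.clip(children, 0, pmax)
    pop[0] = elite

best = pop[np.argmin(fitness(pop))]

# Classical equal-incremental-cost (Lagrangian) dispatch for comparison
lam = (demand + (b / (2 * c)).sum()) / (1 / (2 * c)).sum()
P_opt = (lam - b) / (2 * c)
assert abs(best.sum() - demand) < 5.0
assert cost(best[None])[0] < 1.05 * cost(P_opt[None])[0]
```

    The penalty factor lets infeasible individuals survive early generations while steering the population toward load balance, which is how the GA sidesteps the convexity assumptions of the Lagrangian method.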

  17. Design, fabrication and testing of a thermal diode

    NASA Technical Reports Server (NTRS)

    Swerdling, B.; Kosson, R.

    1972-01-01

    Heat pipe diode types are discussed. The design, fabrication and test of a flight qualified diode for the Advanced Thermal Control Flight Experiment (ATFE) are described. The review covers the use of non-condensable gas, freezing, liquid trap, and liquid blockage techniques. Test data and parametric performance are presented for the liquid trap and liquid blockage techniques. The liquid blockage technique was selected for the ATFE diode on the basis of small reservoir size, low reverse mode heat transfer, and apparent rapid shut-off.

  18. Vaginal Vault Suspension at Hysterectomy for Prolapse – Myths and Facts, Anatomical Requirements, Fixation Techniques, Documentation and Cost Accounting

    PubMed Central

    Graefe, F.; Marschke, J.; Dimpfl, T.; Tunn, R.

    2012-01-01

    Vaginal vault suspension during hysterectomy for prolapse is both a therapy for apical insufficiency and helps prevent recurrence. Numerous techniques exist, with different anatomical results and differing complications. The description of the different approaches together with a description of the vaginal vault suspension technique used at the Department for Urogynaecology at St. Hedwig Hospital could serve as a basis for reassessment and for recommendations by scientific associations regarding general standards. PMID:25278621

  19. Infrared thermal imaging of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Watt, David; Mchugh, John

    1990-01-01

    A technique for analyzing infrared atmospheric images to obtain cross-wind measurement is presented. The technique is based on Taylor's frozen turbulence hypothesis and uses cross-correlation of successive images to obtain a measure of the cross-wind velocity in a localized focal region. The technique is appealing because it can possibly be combined with other IR forward look capabilities and may provide information about turbulence intensity. The current research effort, its theoretical basis, and its applicability to windshear detection are described.
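    The frozen-turbulence cross-correlation idea can be sketched on synthetic frames: advect (circularly shift) a random field by a known displacement, locate the FFT cross-correlation peak, and convert the recovered pixel shift to a velocity. The grid size, frame interval and pixel scale below are made-up values:

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, pixel = 128, 1.0, 0.5          # grid size, frame interval [s], pixel size [m]

# Synthetic "frozen turbulence": a smooth random field that is purely advected
# (here, circularly shifted) between two successive infrared frames.
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
frame1 = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n)))
                              * np.exp(-300.0 * (kx**2 + ky**2))))
true_shift = (3, -5)                  # pixels of advection between the frames
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

# Cross-correlate the frames via FFT; the correlation peak marks the displacement.
corr = np.real(np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)))
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = [p if p <= n // 2 else p - n for p in peak]   # unwrap circular indices

velocity = np.array(shift) * pixel / dt               # cross-wind estimate [m/s]
assert tuple(shift) == true_shift                     # here: [1.5, -2.5] m/s
```

    Real imagery adds evolution of the turbulence between frames and sensor noise, which broaden the correlation peak; the localized-focal-region averaging described above mitigates both.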

  20. Perceptions of Teachers towards Assessment Techniques at Secondary Level Private School of Karachi

    ERIC Educational Resources Information Center

    Fatemah, Henna

    2015-01-01

    This paper sets out to explore the perceptions of teachers towards assessment techniques at a secondary level private school of Karachi. This was conjectured on the basis of the circumstances of parallel boards in the education system of Pakistan and its effectiveness within the context with respect to the curriculum. This was gauged in line with…

  1. Factors of Compliance of a Child with Rules in a Russian Cultural Context

    ERIC Educational Resources Information Center

    Bayanova, Larisa F.; Mustafin, Timur R.

    2016-01-01

    The article covers the analysis of the child's psychology compliance with culture rules--the cultural congruence. The description of the technique aimed to detect the cultural congruence of five- to six-year-old children is presented. The technique is made on the basis of the revealed range of rules of a child's and adult's interaction in a social…

  2. Radar observations of asteroids and comets

    NASA Technical Reports Server (NTRS)

    Ostro, S. J.

    1985-01-01

    Radar techniques for the observation of asteroids and comets are reviewed, emphasizing the logical basis for inferring physical properties from radar measurements. Results to date are reviewed, focusing on some recent highlights of the research to demonstrate the synergism between radar and other ground-based techniques. Particular attention is given to the asteroids 2 Pallas, 16 Psyche, 2101 Adonis, and the comet IRAS-Araki-Alcock.

  3. Effective Management Selection: The Analysis of Behavior by Simulation Techniques.

    ERIC Educational Resources Information Center

    Jaffee, Cabot L.

    This book presents a system by which feedback might be generated and used as a basis for organizational change. The major areas covered consist of the development of a rationale for the use of simulation in the selection of supervisors, a description of actual techniques, and a method for training individuals in the use of the material. The…

  4. Development Of Educational Programs In Renewable And Alternative Energy Processing: The Case Of Russia

    NASA Astrophysics Data System (ADS)

    Svirina, Anna; Shindor, Olga; Tatmyshevsky, Konstantin

    2014-12-01

    The paper deals with the main problems of Russian energy system development, which demonstrate the need for educational programs in the field of renewable and alternative energy. It describes the process of developing curricula and defining teaching techniques on the basis of expert opinion evaluation, and proposes a competence model for master's students in renewable and alternative energy processing. Data for statistical analysis were obtained from a distributed questionnaire and in-depth interviews. On this basis, the curricula structure was optimized and three models for optimizing teaching techniques were developed. The resulting educational program structure, which was adopted by employers, is presented in the paper. The findings include a quantitative estimate of the importance of systemic thinking and professional skills and knowledge as basic competences of a master's program graduate; a statistical case for a practice-based learning approach; and optimization models for structuring curricula in renewable and alternative energy processing. These findings provide a platform for the development of educational programs.

  5. The comet assay: Reflections on its development, evolution and applications.

    PubMed

    Singh, Narendra P

    2016-01-01

    The study of DNA damage and its repair is critical to our understanding of human aging and cancer. This review reflects on the development of a simple technique, now known as the comet assay, to study the accumulation of DNA damage and its repair. It describes my journey into aging research and the need for a method that sensitively quantifies DNA damage on a cell-by-cell basis and on a day-by-day basis. My inspirations, obstacles and successes on the path to developing this assay and improving its reliability and sensitivity are discussed. Recent modifications, applications, and the process of standardizing the technique are also described. What was once untried and unknown has become a technique used around the world for understanding and monitoring DNA damage. The comet assay's use has grown exponentially in the new millennium, as emphasis on studying biological phenomena at the single-cell level has increased. I and others have applied the technique across cell types (including germ cells) and species (including bacteria). As it enters new realms and gains clinical relevance, the comet assay may very well illuminate human aging and its prevention. Copyright © 2016. Published by Elsevier B.V.

  6. A Historical Perspective on the Identification of Cell Types in Pancreatic Islets of Langerhans by Staining and Histochemical Techniques

    PubMed Central

    2015-01-01

    Before the middle of the previous century, cell types of the pancreatic islets of Langerhans were identified primarily on the basis of their color reactions with histological dyes. At that time, the chemical basis for the staining properties of islet cells in relation to the identity, chemistry and structure of their hormones was not fully understood. Nevertheless, the definitive islet cell types that secrete glucagon, insulin, and somatostatin (A, B, and D cells, respectively) could reliably be differentiated from each other with staining protocols that involved variations of one or more tinctorial techniques, such as the Mallory-Heidenhain azan trichrome, chromium hematoxylin and phloxine, aldehyde fuchsin, and silver impregnation methods, which were popularly used until supplanted by immunohistochemical techniques. Before antibody-based staining methods, the most bona fide histochemical techniques for the identification of islet B cells were based on the detection of sulfhydryl and disulfide groups of insulin. The application of the classical islet tinctorial staining methods for pathophysiological studies and physiological experiments was fundamental to our understanding of islet architecture and the physiological roles of A and B cells in glucose regulation and diabetes. PMID:26216133

  7. A diagnostic analysis of the VVP single-doppler retrieval technique

    NASA Technical Reports Server (NTRS)

    Boccippio, Dennis J.

    1995-01-01

    A diagnostic analysis of the VVP (volume velocity processing) retrieval method is presented, with emphasis on understanding the technique as a linear, multivariate regression. Similarities and differences to the velocity-azimuth display and extended velocity-azimuth display retrieval techniques are discussed, using this framework. Conventional regression diagnostics are then employed to quantitatively determine situations in which the VVP technique is likely to fail. An algorithm for preparation and analysis of a robust VVP retrieval is developed and applied to synthetic and actual datasets with high temporal and spatial resolution. A fundamental (but quantifiable) limitation to some forms of VVP analysis is inadequate sampling dispersion in the n space of the multivariate regression, manifest as a collinearity between the basis functions of some fitted parameters. Such collinearity may be present either in the definition of these basis functions or in their realization in a given sampling configuration. This nonorthogonality may cause numerical instability, variance inflation (decrease in robustness), and increased sensitivity to bias from neglected wind components. It is shown that these effects prevent the application of VVP to small azimuthal sectors of data. The behavior of the VVP regression is further diagnosed over a wide range of sampling constraints, and reasonable sector limits are established.
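    The collinearity failure on small azimuthal sectors is easy to demonstrate numerically. The sketch below uses a simplified constant-plus-first-harmonic design matrix as a stand-in for the VVP basis functions and compares its condition number over a full scan against a narrow 20-degree sector:

```python
import numpy as np

def design_matrix(az_deg):
    """Constant + first-harmonic azimuth basis, a simplified stand-in for
    the VVP/VAD basis functions fitted to radial-velocity data."""
    az = np.radians(az_deg)
    return np.column_stack([np.ones_like(az), np.sin(az), np.cos(az)])

full   = design_matrix(np.linspace(0.0, 360.0, 360, endpoint=False))
sector = design_matrix(np.linspace(40.0, 60.0, 360))   # narrow 20-degree sector

# Over a full scan the basis columns are orthogonal; over a narrow sector
# cos(az) is nearly an affine function of sin(az), so the columns become
# collinear and the regression is ill-conditioned (variance inflation).
assert np.linalg.cond(sector) > 50 * np.linalg.cond(full)
print(np.linalg.cond(full), np.linalg.cond(sector))
```

    The inflated condition number is exactly the numerical-instability and variance-inflation mechanism described above: small data perturbations map to large swings in the fitted wind parameters.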

  8. Situation awareness - A critical but ill-defined phenomenon

    NASA Technical Reports Server (NTRS)

    Sarter, Nadine B.; Woods, David D.

    1991-01-01

    The significance of the temporal dimension of situation awareness is examined. Its study requires the staging of complex dynamic situations and the development of less intrusive in-flight probing techniques to assess the pilot's ability to adequately and rapidly retrieve and integrate flight-related information. The cognitive basis of the concept is analyzed, embedding it in the context of related psychological concepts. Methodological approaches to the investigation of situation awareness are discussed on this basis.

  9. Report of Freshwater Mollusks Workshop 19-20 May 1981.

    DTIC Science & Technology

    1982-05-01

    biomes. If discrete groups of individuals may be recognized on the basis of ERM features, and no intergrades occur even in areas of sympatry, it is...the Canadian Interior Basis," Malacologia, Vol 13, pp 1-509. 1979. "Polymorphism in Marine Mollusks and Biome Development," Smithsonian Contributions...includes data on range, life history, ecological requirements, and identifying features. c. Analyze techniques used to relocate or create habitat for

  10. Validating the Kinematic Wave Approach for Rapid Soil Erosion Assessment and Improved BMP Site Selection to Enhance Training Land Sustainability

    DTIC Science & Technology

    2014-02-01

    installation based on a Euclidean distance allocation and assigned that installation's threshold values. The second approach used a thin-plate spline...installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the...the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND

  11. Photoelectron Spectroscopy for Identification of Chemical States

    NASA Technical Reports Server (NTRS)

    Novakov, T.

    1971-01-01

    The technique of X-ray photoelectron spectroscopy and the fundamental electronic interactions constituting the basis of the method will be discussed. The method provides information about chemical states ("oxidation states") of atoms in molecules. In addition, quantitative elemental analysis can be performed using the same method. On the basis of this information identification of chemical species is possible. Examples of applications are discussed with particular references to the study of smog particulate matter.

  12. Beam localization in HIFU temperature measurements using thermocouples, with application to cooling by large blood vessels.

    PubMed

    Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R

    2011-02-01

    Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error which must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty regarding the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400 ml/min showed that blood flow can reduce temperature elevations by more than 10%, when the HIFU focus is within a 2 mm distance from the vessel wall. At acoustic power levels of 17.3 and 24.8 W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise during the initial few seconds of sonication.
Copyright © 2010 Elsevier B.V. All rights reserved.
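The record above describes the localization technique only in outline. One simple way a fixed thermocouple pair can localize a narrow beam is to assume a Gaussian temperature-rise profile across the beam and take the log-ratio of the two measured rises, which cancels the unknown amplitude. This is purely an illustrative sketch under that assumption (the 1-D geometry, the function name, and the known beam width `sigma` are all hypothetical, not the paper's actual method):

```python
import math

def locate_beam(x1, x2, dT1, dT2, sigma):
    """Infer the beam-axis position c from temperature rises dT1, dT2
    measured by two thermocouples at known positions x1, x2, assuming a
    Gaussian profile dT(x) = A * exp(-(x - c)**2 / (2 * sigma**2))."""
    # The log-ratio eliminates the unknown amplitude A:
    #   ln(dT1/dT2) = (x2**2 - x1**2 - 2*c*(x2 - x1)) / (2*sigma**2)
    r = math.log(dT1 / dT2)
    return (x2**2 - x1**2 - 2.0 * sigma**2 * r) / (2.0 * (x2 - x1))
```

With two junctions straddling the beam, the same idea extends to fitting more measurement points when the beam width is itself uncertain.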

  13. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose: The prospect of spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifact in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method: The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, spectral data acquisitions with an energy-resolving PCD were simulated using a Monte Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the object in this study. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data were first decomposed into three basis functions: photoelectric absorption, Compton scattering, and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input to the reconstruction, while the spatial information of the gold implant was used as a prior. The results from the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results: The decomposition results illustrate that a gold implant of any shape can be distinguished from the other components of the phantom. Additionally, the results from the penalized maximum likelihood iterative reconstruction show that artifacts are significantly reduced in SPIR-reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to the other algorithms.
Conclusion: It is demonstrated that the combination of the additional information from spectral CT and statistical reconstruction can significantly improve image quality, in particular by reducing the streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019

  14. Adaptive inferential sensors based on evolving fuzzy models.

    PubMed

    Angelov, Plamen; Kordon, Arthur

    2010-04-01

    A new approach to the design and use of inferential sensors in the process industry is proposed in this paper, based on the recently introduced concept of evolving fuzzy models (EFMs). EFMs address a challenge that the modern process industry faces today, namely, developing adaptive and self-calibrating online inferential sensors that reduce maintenance costs while retaining high precision and interpretability/transparency. The proposed methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort for their maintenance. This is achieved by the adaptive and flexible open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with an evolving, self-developing structure learned from data streams; (2) a new methodology for online automatic selection of the input variables that are most relevant for the prediction; (3) a technique to detect automatically a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this approach to several real-life industrial processes from the chemical industry (evolving inferential sensors, namely eSensors, were used for predicting the chemical properties of different products in The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that interpretable, simply structured inferential sensors which predict various process variables of interest can be designed automatically from the data stream in real time.
The proposed approach can be used as a basis for the development of a new generation of adaptive and evolving inferential sensors that can address the challenges of the modern advanced process industry.
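The abstract mentions an online standardization technique used by the learning procedure but does not spell it out. A common streaming choice, shown here purely as an illustrative sketch (not the paper's actual eSensor code), is Welford's single-pass mean/variance update, which needs no stored data window:

```python
class OnlineStandardizer:
    """Streaming z-score standardization via Welford's online
    mean/variance update; suitable for data-stream learning."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        """Fold one new sample into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def standardize(self, x):
        """Return the z-score of x under the statistics seen so far."""
        if self.n < 2:
            return 0.0
        std = (self.m2 / (self.n - 1)) ** 0.5
        return (x - self.mean) / std if std > 0 else 0.0
```

Each incoming process measurement is first passed through `update`, so the standardization adapts along with the evolving model itself.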

  15. A Literature Review of Renal Surgical Anatomy and Surgical Strategies for Partial Nephrectomy

    PubMed Central

    Klatte, Tobias; Ficarra, Vincenzo; Gratzke, Christian; Kaouk, Jihad; Kutikov, Alexander; Macchi, Veronica; Mottrie, Alexandre; Porpiglia, Francesco; Porter, James; Rogers, Craig G.; Russo, Paul; Thompson, R. Houston; Uzzo, Robert G.; Wood, Christopher G.; Gill, Inderbir S.

    2016-01-01

    Context A detailed understanding of renal surgical anatomy is necessary to optimize preoperative planning and operative technique and provide a basis for improved outcomes. Objective To evaluate the literature regarding pertinent surgical anatomy of the kidney and related structures, nephrometry scoring systems, and current surgical strategies for partial nephrectomy (PN). Evidence acquisition A literature review was conducted. Evidence synthesis Surgical renal anatomy fundamentally impacts PN surgery. The renal artery divides into anterior and posterior divisions, from which approximately five segmental terminal arteries originate. The renal veins are not terminal. Variations in the vascular and lymphatic channels are common; thus, concurrent lymphadenectomy is not routinely indicated during PN for cT1 renal masses in the setting of clinically negative lymph nodes. Renal-protocol contrast-enhanced computed tomography or magnetic resonance imaging is used for standard imaging. Anatomy-based nephrometry scoring systems allow standardized academic reporting of tumor characteristics and predict PN outcomes (complications, remnant function, possibly histology). Anatomy-based novel surgical approaches may reduce ischemic time during PN; these include early unclamping, segmental clamping, tumor-specific clamping (zero ischemia), and unclamped PN. Cancer cure after PN relies on complete resection, which can be achieved by thin margins. Post-PN renal function is impacted by kidney quality, remnant quantity, and ischemia type and duration. Conclusions Surgical renal anatomy underpins imaging, nephrometry scoring systems, and vascular control techniques that reduce global renal ischemia and may impact post-PN function. A contemporary ideal PN excises the tumor with a thin negative margin, delicately secures the tumor bed to maximize vascularized remnant parenchyma, and minimizes global ischemia to the renal remnant with minimal complications. 
Patient summary In this report we review renal surgical anatomy. Renal mass imaging allows detailed delineation of the anatomy and vasculature and permits nephrometry scoring, and thus precise, patient-specific surgical planning. Novel off-clamp techniques have been developed that may lead to improved outcomes. PMID:25911061

  16. Spot-Scanning Proton Arc (SPArc) Therapy: The First Robust and Delivery-Efficient Spot-Scanning Proton Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Xuanfeng, E-mail: Xuanfeng.ding@beaumont.org; Li, Xiaoqiang; Zhang, J. Michele

    Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of the technique was evaluated using sample patients: 1 patient with locally advanced oropharyngeal head and neck cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc multi-field) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for the oropharyngeal and lung cancer cases, respectively. Furthermore, for the lung cancer plan compared with IMPT, the maximum skin dose, the mean lung dose, and the maximum dose to the ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for the lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with the Arc multi-field plans.
Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, and it could potentially be implemented into routine clinical practice.
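The delivery-time parameters quoted above (gantry speed, spot and layer switching times) lend themselves to a back-of-the-envelope estimate. The sketch below is a hypothetical simplification, not the study's actual timing model: it assumes a fixed machine dose rate (the `mu_rate` parameter is invented for illustration) and takes the delivery time as the larger of the gantry's arc time and the total beam-on time plus switching overhead:

```python
def estimate_arc_delivery_time(n_layers, n_spots, total_mu,
                               mu_rate=100.0,        # MU/s, hypothetical dose rate
                               spot_switch_s=0.002,  # 2 ms spot switching
                               layer_switch_s=0.5,   # energy-layer switching (s)
                               gantry_rpm=1.0,       # 1 revolution per minute
                               arc_deg=360.0):
    """Crude lower-bound estimate of arc delivery time in seconds."""
    beam_on = total_mu / mu_rate
    switching = n_spots * spot_switch_s + n_layers * layer_switch_s
    # Time for the gantry to sweep the arc at its fixed rotation speed.
    gantry = (arc_deg / 360.0) * 60.0 / gantry_rpm
    # Delivery can finish no sooner than the gantry sweep, and the gantry
    # cannot outrun the beam-on plus switching overhead.
    return max(gantry, beam_on + switching)
```

Under these assumptions the benefit of SPArc's energy-layer filtration and redistribution is visible directly: reducing `n_layers` and `n_spots` shrinks the switching term, which typically dominates for step-and-shoot multi-field arcs.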

  17. A Literature Review of Renal Surgical Anatomy and Surgical Strategies for Partial Nephrectomy.

    PubMed

    Klatte, Tobias; Ficarra, Vincenzo; Gratzke, Christian; Kaouk, Jihad; Kutikov, Alexander; Macchi, Veronica; Mottrie, Alexandre; Porpiglia, Francesco; Porter, James; Rogers, Craig G; Russo, Paul; Thompson, R Houston; Uzzo, Robert G; Wood, Christopher G; Gill, Inderbir S

    2015-12-01

    A detailed understanding of renal surgical anatomy is necessary to optimize preoperative planning and operative technique and provide a basis for improved outcomes. To evaluate the literature regarding pertinent surgical anatomy of the kidney and related structures, nephrometry scoring systems, and current surgical strategies for partial nephrectomy (PN). A literature review was conducted. Surgical renal anatomy fundamentally impacts PN surgery. The renal artery divides into anterior and posterior divisions, from which approximately five segmental terminal arteries originate. The renal veins are not terminal. Variations in the vascular and lymphatic channels are common; thus, concurrent lymphadenectomy is not routinely indicated during PN for cT1 renal masses in the setting of clinically negative lymph nodes. Renal-protocol contrast-enhanced computed tomography or magnetic resonance imaging is used for standard imaging. Anatomy-based nephrometry scoring systems allow standardized academic reporting of tumor characteristics and predict PN outcomes (complications, remnant function, possibly histology). Anatomy-based novel surgical approaches may reduce ischemic time during PN; these include early unclamping, segmental clamping, tumor-specific clamping (zero ischemia), and unclamped PN. Cancer cure after PN relies on complete resection, which can be achieved by thin margins. Post-PN renal function is impacted by kidney quality, remnant quantity, and ischemia type and duration. Surgical renal anatomy underpins imaging, nephrometry scoring systems, and vascular control techniques that reduce global renal ischemia and may impact post-PN function. A contemporary ideal PN excises the tumor with a thin negative margin, delicately secures the tumor bed to maximize vascularized remnant parenchyma, and minimizes global ischemia to the renal remnant with minimal complications. In this report we review renal surgical anatomy. 
Renal mass imaging allows detailed delineation of the anatomy and vasculature and permits nephrometry scoring, and thus precise, patient-specific surgical planning. Novel off-clamp techniques have been developed that may lead to improved outcomes. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  18. BiVO4/N-rGO nanocomposites as highly efficient visible-active photocatalysts for the degradation of dyes and antibiotics in the ecosystem.

    PubMed

    Appavu, Brindha; Thiripuranthagan, Sivakumar; Ranganathan, Sudhakar; Erusappan, Elangovan; Kannan, Kathiravan

    2018-04-30

    Herein, we report the synthesis of a novel nitrogen-doped reduced graphene oxide/BiVO4 photocatalyst by a single-step hydrothermal method. The physicochemical properties of the catalysts were characterized using XRD, N2 adsorption-desorption, Raman, XPS, SEM, TEM, DRS-UV and EIS techniques. The synthesized catalysts were tested for their catalytic activity in the photodegradation of some harmful textile dyes (methylene blue and Congo red) and antibiotics (metronidazole and chloramphenicol) under visible light irradiation. Reduced charge recombination and enhanced photocatalytic activity were observed due to the concerted effect between BiVO4 and nitrogen-rGO. The degradation efficiency of BiVO4/N-rGO was remarkably high: 95% for CR and 98% for MB under visible light irradiation. Similarly, 95% of MTZ and 93% of CAP were degraded under visible light irradiation. HPLC studies implied that both the dyes and the antibiotics were degraded to the maximum extent. A plausible photocatalytic mechanism was suggested on the basis of the experimental results. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Host cell reactivation of gamma-irradiated adenovirus 5 in human cell lines of varying radiosensitivity.

    PubMed Central

    Eady, J. J.; Peacock, J. H.; McMillan, T. J.

    1992-01-01

    DNA repair processes play an important role in the determination of radiation response in both normal and tumour cells. We have investigated one aspect of DNA repair in a number of human cell lines of varying radiosensitivity using the adenovirus 5 host cell reactivation assay (HCR). In this technique, gamma-irradiated virions are used to infect cells and the ability of the cellular repair systems to process this damage is assayed by a convenient immunoperoxidase method recognising viral structural antigen expression on the cell membrane 48 h after infection. Reduced HCR was exhibited by radioresistant HeLa cells and by a radiosensitive neuroblastoma cell line, HX142. In contrast, an ataxia telangiectasia cell line, AT5 BIVA, did not show reduced HCR. On the basis of these results we can make no general conclusions about the relevance of HCR to cellular radiosensitivity. We have extended these studies to determine whether our cell lines exhibited enhanced viral reactivation (ER) following a small priming dose of gamma-radiation given to the cells before viral infection. No evidence for this phenomenon was found either in normal or tumour cell lines. PMID:1637659

  20. The impact of adsorption on the localization of spins in graphene oxide and reduced graphene oxide, observed with electron paramagnetic resonance

    NASA Astrophysics Data System (ADS)

    Kempiński, Mateusz; Florczak, Patryk; Jurga, Stefan; Śliwińska-Bartkowiak, Małgorzata; Kempiński, Wojciech

    2017-08-01

    We report observations of the electronic properties of graphene oxide and reduced graphene oxide, performed with the electron paramagnetic resonance technique over a broad temperature range. Both materials were examined in pure form and saturated with air, helium, and heavy water molecules. We show that spin localization strongly depends on the type and amount of molecules adsorbed at the graphene layer edges (and possible in-plane defects). The physical and chemical states of the edges play a crucial role in electrical transport within graphene-based materials, with hopping as the leading mechanism of charge carrier transport. The presented results are a good basis for understanding the electronic properties of other carbon structures made of graphene-like building blocks. Most active carbons show some degree of functionalization and are known to have good adsorptive properties; thus, controlling both phenomena is important for many applications. Sample treatment with temperature, vacuum, and various adsorbents allowed for the observation of a possible metal-insulator transition and sorption pumping effects. The influence of adsorption on localization phenomena in graphene would be very important when considering graphene-based materials as candidates for future spintronics operating in ambient conditions.

  1. The use of telemedicine in obstetrics: a review of the literature.

    PubMed

    Magann, Everett F; McKelvey, Samantha S; Hitt, Wilbur C; Smith, Michael V; Azam, Ghazala A; Lowery, Curtis L

    2011-03-01

    Telemedicine has been promoted as a means of increasing efficiency, extending the scope of obstetric practice, improving pregnancy outcomes, and reducing costs in the healthcare system. The extent of telemedicine use in obstetrics was identified with a literature search. A total of 268 articles were identified, of which 60 form the basis for this review. Telemedicine has been used to read ultrasounds, interpret nonstress tests, counsel patients, manage diabetes, manage postpartum depression, and support parents and children postpartum from remote sites. Reductions in time lost from work, transportation costs, and medical costs, as well as greater efficiency for health care providers, have all been suggested as benefits of telemedicine. Despite the information published about telemedicine in obstetrics, this technology has not been shown to have adverse effects in obstetrics, but neither has it demonstrated unequivocal benefits. Properly structured and powered investigations will be needed to determine the role of telemedicine in the future. Target audience: Obstetricians & Gynecologists. After completing this CME activity, physicians should be better able to diagnose and treat diabetes using telemedicine techniques; assess the current scope of research in telemedicine in obstetrics; implement clinical telemedicine consultations based on the interaction and the needs of the participants; and identify opportunities for further research in telemedicine in obstetrics.

  2. Poroelastic Trailing Edge Noise and the Silent Flight of the Owl

    NASA Astrophysics Data System (ADS)

    Jaworski, Justin; Peake, Nigel

    2012-11-01

    Many species of owl rely on specialised plumage to reduce their self-noise levels and enable hunting in acoustic stealth. One such plumage arrangement, a compliant array of feathers at the wing trailing edge, is believed to mitigate the scattering of boundary layer turbulence, which is the predominant source of airframe noise. The owl noise problem is modelled analytically by the diffraction of a quadrupole source by a semi-infinite porous and elastic edge, and the resulting set of equations is solved exactly using the Wiener-Hopf technique to identify important dimensionless parameters and their scaling behaviour with respect to the aerodynamic noise produced. Special attention is paid to the limiting cases of elastic-impermeable and rigid-porous plate conditions, the latter of which is compared against available experimental measurements in the literature. Results from this analysis and comparison seek to validate the weaker sixth-power dependence of far-field acoustic power on flow velocity for porous trailing edges, develop a rigorous basis for the aeroacoustic tailoring of poroelastic edges to reduce airframe noise, and help explain one of the mechanisms of aerodynamic noise suppression by owls.

  3. Organ-specific defence strategies of pepper (Capsicum annuum L.) during early phase of water deficit.

    PubMed

    Sziderics, Astrid Heide; Oufir, Mouhssin; Trognitz, Friederike; Kopecky, Dieter; Matusíková, Ildikó; Hausman, Jean-Francois; Wilhelm, Eva

    2010-03-01

    Drought is one of the major factors that limit crop production and reduce yield. To understand the early response of plants under nearly natural conditions, pepper plants (Capsicum annuum L.) were grown in a greenhouse and stressed by withholding water for 1 week. Plants adapted to the decreasing water content of the soil by adjusting the osmotic potential in root tissue. As a consequence of drought, strong accumulation of raffinose, glucose, galactinol and proline was detected in the roots. In contrast, in leaves the levels of fructose, sucrose and also galactinol increased. Due to the water deficit, cadaverine, putrescine, spermidine and spermine accumulated in leaves, whereas the concentration of polyamines was reduced in roots. To study the molecular basis of these responses, a combined approach of suppression subtractive hybridisation and microarray techniques was applied to the same material. A total of 109 unique ESTs were detected as responsive to drought, while an additional 286 ESTs were selected from the bulk of rare transcripts on the array. The metabolic profiles of stressed pepper plants are discussed with respect to the transcriptomic changes detected, with attention given to the differences between the defence strategies of roots and leaves.

  4. Grassystatins A–C from Marine Cyanobacteria, Potent Cathepsin E Inhibitors that Reduce Antigen Presentation

    PubMed Central

    Kwan, Jason C.; Eksioglu, Erika A.; Liu, Chen; Paul, Valerie J.; Luesch, Hendrik

    2009-01-01

    In our efforts to explore marine cyanobacteria as a source of novel bioactive compounds we discovered a statine unit-containing linear decadepsipeptide, grassystatin A (1), which we screened against a diverse set of 59 proteases. We describe the structure determination of 1 and two natural analogs, grassystatins B (2) and C (3), using NMR, MS, and chiral HPLC techniques. Compound 1 selectively inhibited cathepsins D and E with IC50s of 26.5 nM and 886 pM, respectively. Compound 2 showed similar potency and selectivity against cathepsins D and E (IC50s 7.27 nM and 354 pM, respectively), whereas the truncated peptide analog grassystatin C (3), which consists of two fewer residues than 1 and 2, was less potent against both but still selective for cathepsin E. The selectivity of compounds 1–3 for cathepsin E over D (20- to 38-fold) suggests that these natural products may be useful tools to probe cathepsin E function. We investigated the structural basis of this selectivity using molecular docking. We also show that 1 can reduce antigen presentation by dendritic cells, a process thought to rely on cathepsin E. PMID:19715320

  5. A study on synthesis of energy fuel from waste plastic and assessment of its potential as an alternative fuel for diesel engines.

    PubMed

    Kaimal, Viswanath K; Vijayabalan, P

    2016-05-01

    The demand for plastic is ever increasing and has produced a huge amount of plastic waste. The management and disposal of plastic waste have become a major concern, especially in developing cities. The idea of waste-to-energy recovery is one of the promising techniques used for managing waste plastic. This paper assesses the potential of using Waste Plastic Oil (WPO), synthesized by pyrolysis of waste plastic, as an alternative to diesel fuel. In this research work, the performance and emission characteristics of a single-cylinder diesel engine fuelled with WPO and its blends with diesel are studied. In addition to neat plastic oil, three blends (PO25, PO50 and PO75) were prepared on a volumetric basis, and the engine was able to run on neat plastic oil. The brake thermal efficiency of the blends was lower compared to diesel, but PO25 showed performance similar to that of diesel. The emissions were reduced considerably when using blends compared to neat plastic oil. Smoke and NOx were reduced by 22% and 17.8%, respectively, for PO25 relative to neat plastic oil. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Exact exchange-correlation potentials of singlet two-electron systems

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.

    2017-10-01

    We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (the helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.

  7. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    PubMed

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring [Formula: see text] operations and [Formula: see text] memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time, and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  8. Reconstructing biochemical pathways from time course data.

    PubMed

    Srividhya, Jeyaraman; Crampin, Edmund J; McSharry, Patrick E; Schnell, Santiago

    2007-03-01

    Time series data on biochemical reactions reveal transient behavior, away from chemical equilibrium, and contain information on the dynamic interactions among reacting components. However, this information can be difficult to extract using conventional analysis techniques. We present a new method to infer biochemical pathway mechanisms from time course data using a global nonlinear modeling technique to identify the elementary reaction steps which constitute the pathway. The method involves the generation of a complete dictionary of polynomial basis functions based on the law of mass action. Using these basis functions, there are two approaches to model construction, namely the general to specific and the specific to general approach. We demonstrate that our new methodology reconstructs the chemical reaction steps and connectivity of the glycolytic pathway of Lactococcus lactis from time course experimental data.
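The dictionary-plus-fitting idea described above can be sketched in a few lines: build all mass-action monomials of the measured concentrations up to a chosen order, then regress each species' derivative onto that dictionary (the "general" model; the general-to-specific approach would then prune small terms). This is an illustrative reconstruction under those assumptions, not the authors' code:

```python
import itertools

def mass_action_dictionary(X, max_order=2):
    """Candidate mass-action rate terms (constant, x_i, x_i*x_j, ...)
    evaluated at each sample. X is a list of concentration vectors."""
    m = len(X[0])
    index_sets = [()]
    for order in range(1, max_order + 1):
        index_sets += list(itertools.combinations_with_replacement(range(m), order))
    names = ["*".join(f"x{i}" for i in idx) or "1" for idx in index_sets]
    rows = []
    for x in X:
        row = []
        for idx in index_sets:
            v = 1.0
            for i in idx:
                v *= x[i]
            row.append(v)
        rows.append(row)
    return rows, names

def solve_least_squares(A, b):
    """Normal-equations least squares via Gaussian elimination with
    partial pivoting; fine for the small dictionaries used here."""
    n = len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    v = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (v[i] - sum(M[i][j] * c[j] for j in range(i + 1, n))) / M[i][i]
    return c
```

For a single species obeying dx/dt = -2x, fitting the sampled derivative against the dictionary [1, x0, x0*x0] recovers a coefficient of about -2 on the linear term and near-zero elsewhere, identifying a first-order decay step.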

  9. A partitioned correlation function interaction approach for describing electron correlation in atoms

    NASA Astrophysics Data System (ADS)

    Verdebout, S.; Rynkun, P.; Jönsson, P.; Gaigalas, G.; Froese Fischer, C.; Godefroid, M.

    2013-04-01

    The traditional multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) methods are based on a single orthonormal orbital basis. For atoms with many closed core shells, or complicated shell structures, a large orbital basis is needed to saturate the different electron correlation effects such as valence, core-valence and correlation within the core shells. The large orbital basis leads to massive configuration state function (CSF) expansions that are difficult to handle, even on large computer systems. We show that it is possible to relax the orthonormality restriction on the orbital basis and break down the originally very large calculations into a series of smaller calculations that can be run in parallel. Each calculation determines a partitioned correlation function (PCF) that accounts for a specific correlation effect. The PCFs are built on optimally localized orbital sets and are added to a zero-order multireference (MR) function to form a total wave function. The expansion coefficients of the PCFs are determined from a low dimensional generalized eigenvalue problem. The interaction and overlap matrices are computed using a biorthonormal transformation technique (Verdebout et al 2010 J. Phys. B: At. Mol. Phys. 43 074017). The new method, called partitioned correlation function interaction (PCFI), converges rapidly with respect to the orbital basis and gives total energies that are lower than the ones from ordinary MCHF and CI calculations. The PCFI method is also very flexible when it comes to targeting different electron correlation effects. Focusing our attention on neutral lithium, we show that by dedicating a PCF to the single excitations from the core, spin- and orbital-polarization effects can be captured very efficiently, leading to highly improved convergence patterns for hyperfine parameters compared with MCHF calculations based on a single orthogonal radial orbital basis. 
By collecting separately optimized PCFs to correct the MR function, the variational degrees of freedom in the relative mixing coefficients of the CSFs building the PCFs are inhibited. The constraints on the mixing coefficients lead to small offsets in computed properties such as hyperfine structure, isotope shift and transition rates with respect to the correct values. By (partially) deconstraining the mixing coefficients one converges to the correct limits while keeping the tremendous advantage in convergence rate that comes from the use of several orbital sets. Reducing ultimately each PCF to a single CSF with its own orbital basis leads to a non-orthogonal CI approach. Various perspectives on the new method are given.

  10. A comparison of cord gingival displacement with the gingitage technique.

    PubMed

    Tupac, R G; Neacy, K

    1981-11-01

    Fifteen young adult dogs were divided into three groups representing 0, 7- and 21-day healing periods. Randomly selected cuspid teeth were used to compare cord gingival displacement and gingitage techniques for subgingival tooth preparation and impression making. Clinical and histologic measurements were used as a basis for comparison. Results indicate that (1) the experimental teeth were clinically healthy at the beginning of the experiment, (2) clinical health of the gingival tissues was controlled throughout the course of the experiment, and (3) within this experimental setting, there was no significant difference between the cord gingival displacement technique and the gingitage technique.

  11. Safety assessment of discharge chute isolation barrier preparation and installation. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meichle, R.H.

    1994-10-10

    This revision responds to RL comments and expands the discussion of the "effective hazard categorization" and the readiness review basis. The safety assessment is made for the activities involved in the preparation and installation of the discharge chute isolation barriers. The safety assessment includes a hazard assessment and a comparison of potential accidents/events to those addressed by the current safety basis documentation. No significant hazards were identified. An evaluation against the USQ evaluation questions was made, and it was determined that the activities do not represent a USQ. Hazard categorization techniques were used to provide a basis for readiness review classification.

  12. Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, Alana; Gordon, Deborah; Moore, Roseanne

Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full-bolus technique. The new technique uses VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include improved patient experience and safety, better dose conformity, better organ-at-risk sparing, decreased therapist time, and a reduction in therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to improve dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full-bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas in the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. This technique was dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.

  13. Reducing the Risk of Human Space Missions with INTEGRITY

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Dillon-Merill, Robin L.; Tri, Terry O.; Henninger, Donald L.

    2003-01-01

The INTEGRITY Program will design and operate a test bed facility to help prepare for future beyond-LEO missions. The purpose of INTEGRITY is to enable future missions by developing, testing, and demonstrating advanced human space systems. INTEGRITY will also implement and validate advanced management techniques, including risk analysis and mitigation. One important way INTEGRITY will help enable future missions is by reducing their risk. A risk analysis of human space missions is important in defining the steps that INTEGRITY should take to mitigate risk. This paper describes how a Probabilistic Risk Assessment (PRA) of human space missions will help support the planning and development of INTEGRITY to maximize its benefits to future missions. PRA is a systematic methodology that decomposes the system into subsystems and components and quantifies the failure risk as a function of the design elements and their corresponding probabilities of failure. PRA provides a quantitative estimate of the probability of failure of the system, including an assessment and display of the degree of uncertainty surrounding that probability. PRA provides a basis for understanding the impacts of decisions that affect safety, reliability, performance, and cost. Risks with both high probability and high impact are identified as top priority. The PRA of human missions beyond Earth orbit will help indicate how the risk of future human space missions can be reduced by integrating and testing systems in INTEGRITY.
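    The decompose-and-quantify step of a PRA can be illustrated with a minimal Monte Carlo sketch. The subsystem list, point estimates, and lognormal error factor below are hypothetical illustrations, not values from the INTEGRITY study:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical per-subsystem failure probabilities with uncertainty, modeled
    # as lognormal distributions around point estimates (a common PRA choice).
    point = np.array([1e-3, 5e-4, 2e-3])   # e.g. propulsion, ECLSS, avionics
    err_factor = 3.0                        # assumed 95th/50th percentile ratio
    sigma = np.log(err_factor) / 1.645

    # Propagate the uncertainty through the system model by sampling.
    samples = np.exp(np.log(point) + sigma * rng.standard_normal((100000, point.size)))

    # Series system: loss of any subsystem is loss of mission.
    p_mission_loss = 1 - np.prod(1 - samples, axis=1)

    # The PRA output is a distribution, not a single number.
    print(np.median(p_mission_loss), np.percentile(p_mission_loss, 95))
    ```

    The median and 95th percentile together display the degree of uncertainty surrounding the failure probability, as described in the abstract.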

  14. Tensor hypercontraction density fitting. I. Quartic scaling second- and third-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Hohenstein, Edward G.; Parrish, Robert M.; Martínez, Todd J.

    2012-07-01

Many approximations have been developed to help deal with the O(N^4) growth of the electron repulsion integral (ERI) tensor, where N is the number of one-electron basis functions used to represent the electronic wavefunction. Of these, the density fitting (DF) approximation is currently the most widely used, despite the fact that it is often incapable of altering the underlying scaling of computational effort with respect to molecular size. We present a method for exploiting sparsity in three-center overlap integrals through tensor decomposition to obtain a low-rank approximation to density fitting (tensor hypercontraction density fitting, or THC-DF). This new approximation reduces the fourth-order ERI tensor to a product of five matrices, simultaneously reducing the storage requirement as well as increasing the flexibility to regroup terms and reduce scaling behavior. As an example, we demonstrate such a scaling reduction for second- and third-order perturbation theory (MP2 and MP3), showing that both can be carried out in O(N^4) operations. This should be compared to the usual scaling behavior of O(N^5) and O(N^6) for MP2 and MP3, respectively. The THC-DF technique can also be applied to other methods in electronic structure theory, such as coupled-cluster and configuration interaction, promising significant gains in computational efficiency and storage reduction.
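    The five-matrix factored form can be sketched as a single tensor contraction. The dimensions and random factors below are purely illustrative (this is the shape of the factorization, not the actual THC-DF fitting procedure):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 10, 30   # N one-electron basis functions, M auxiliary "grid" points

    # Hypothetical THC factors: (pq|rs) ~ sum_{PQ} X[p,P] X[q,P] Z[P,Q] X[r,Q] X[s,Q]
    X = rng.standard_normal((N, M))
    Z = rng.standard_normal((M, M))
    Z = 0.5 * (Z + Z.T)   # symmetric core preserves (pq|rs) = (rs|pq)

    # Reconstruct the full fourth-order ERI tensor from the factored form.
    eri = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X, optimize=True)

    # The factored form keeps the permutational symmetries of a real ERI tensor.
    assert np.allclose(eri, eri.transpose(1, 0, 2, 3))   # (pq|rs) = (qp|rs)
    assert np.allclose(eri, eri.transpose(2, 3, 0, 1))   # (pq|rs) = (rs|pq)

    # Storage: O(N^4) entries dense vs O(NM + M^2) entries factored.
    print(eri.size, X.size + Z.size)
    ```

    The scaling reduction in MP2/MP3 comes from contracting the small factors in a different order instead of ever forming `eri` explicitly, which the `einsum` subscript notation makes easy to express.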

  15. Temporal and spatial resolution required for imaging myocardial function

    NASA Astrophysics Data System (ADS)

    Eusemann, Christian D.; Robb, Richard A.

    2004-05-01

4-D functional analysis of myocardial mechanics is an area of significant interest and research in cardiology and vascular/interventional radiology. Current multidimensional analysis is limited by the insufficient temporal resolution of x-ray and magnetic resonance based techniques, but recent improvements in system design hold hope for faster and higher resolution scans to improve imaging of moving structures, allowing more accurate functional studies, such as of the heart. This paper provides a basis for the requisite temporal and spatial resolution for useful imaging during individual segments of the cardiac cycle. Multiple sample rates during systole and diastole are compared to determine an adequate sample frequency to reduce regional myocardial tracking errors. Concurrently, out-of-plane resolution has to be sufficiently high to minimize the partial volume effect. Temporal resolution and out-of-plane spatial resolution are related factors that must be considered together. The data used for this study are a DSR dynamic volume image dataset with high temporal and spatial resolution, with implanted fiducial markers used to track myocardial motion. The results of this study suggest reduced exposure and scan time for x-ray and magnetic resonance imaging methods, since a lower sample rate during systole is sufficient, whereas the period of rapid filling during diastole requires higher sampling. This could potentially reduce the cost of these procedures and allow higher patient throughput.

  16. Antarctic Mass Loss from GRACE from Space- and Time-Resolved Modeling with Slepian Functions

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Harig, C.

    2013-12-01

The melting of polar ice sheets is a major contributor to global sea-level rise. Antarctica is of particular interest since most of the mass loss has occurred in West Antarctica; however, updated glacial isostatic adjustment (GIA) models and recent mass gains in East Antarctica have reduced the continent-wide integrated decadal trend of mass loss. Here we present a spatially and temporally resolved estimate of the Antarctic ice mass change using Slepian localization functions. With a Slepian basis built specifically for Antarctica, the basis functions maximize their energy on the continent, and we can project the geopotential fields onto a sparse set of orthogonal coefficients. By fitting polynomial functions to the limited basis coefficients we maximize signal-to-noise levels and need not apply the smoothing or destriping filters common to other approaches. In addition, we determine an empirical noise covariance matrix from the GRACE data to estimate the uncertainty of the mass estimates. When applied to large ice sheets, as in our own recent Greenland work, this technique is able to resolve both the overall continental integrated mass trend and the spatial distribution of the mass changes over time. Using CSR-RL05 GRACE data between Jan. 2003 and Jan. 2013, we estimate the regional accelerations in mass change for several sub-regions and examine how the spatial pattern of mass has changed. The Amundsen Sea coast of West Antarctica has experienced a large acceleration in mass loss (-26 Gt/yr^2). While mass loss is concentrated near Pine Island and Thwaites glaciers, it has also increased along the coast further towards the Ross ice shelf.
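    The polynomial-fitting step that replaces smoothing/destriping filters can be sketched on a synthetic coefficient series. The numbers below are invented for illustration, chosen to loosely mimic the magnitude of the reported acceleration:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Monthly time series of one (hypothetical) Slepian mass coefficient, in Gt:
    # linear trend + acceleration term + observational noise.
    t = np.arange(120) / 12.0                  # ten years of monthly solutions
    mass = -80.0 * t - 13.0 * t ** 2 + rng.normal(0, 30.0, t.size)

    # Fit a low-order polynomial directly to the basis coefficient; trend and
    # acceleration are then read off the fitted coefficients.
    accel2, trend, offset = np.polyfit(t, mass, deg=2)   # highest degree first
    print(trend, 2 * accel2)                  # Gt/yr and Gt/yr^2
    ```

    Fitting in the sparse Slepian coefficient domain, rather than filtering spherical-harmonic fields, is what lets the noise average out without spatial smoothing.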

  17. Optimal systems of geoscience surveying: A preliminary discussion

    NASA Astrophysics Data System (ADS)

    Shoji, Tetsuya

    2006-10-01

In any geoscience survey, each survey technique must be applied effectively, and many techniques are often combined. An important task is to obtain information that is necessary and sufficient to meet the requirements of the survey. A prize-penalty function quantifies the effectiveness of the survey, and hence can be used to determine the best survey technique. On the other hand, an information-cost function can be used to determine the optimal combination of survey techniques on the basis of the geoinformation obtained. Entropy can be used to evaluate geoinformation. A simple model suggests that low-resolvability techniques are generally applied at early stages of a survey, and that higher-resolvability techniques should alternate with lower-resolvability ones as the survey progresses.

  18. Technique optimization of orbital atherectomy in calcified peripheral lesions of the lower extremities: the CONFIRM series, a prospective multicenter registry.

    PubMed

    Das, Tony; Mustapha, Jihad; Indes, Jeffrey; Vorhies, Robert; Beasley, Robert; Doshi, Nilesh; Adams, George L

    2014-01-01

The purpose of the CONFIRM registry series was to evaluate the use of orbital atherectomy (OA) in peripheral lesions of the lower extremities, as well as to optimize the technique of OA. Methods of treating calcified arteries (historically a strong predictor of treatment failure) have improved significantly over the past decade and now include minimally invasive endovascular treatments such as OA, which has unique versatility in modifying calcific lesions above and below the knee. Patients (3135) undergoing OA by more than 350 physicians at over 200 US institutions were enrolled on an "all-comers" basis, resulting in registries that provided site-reported patient demographics, ABI, Rutherford classification, co-morbidities, lesion characteristics, plaque morphology, device usage parameters, and procedural outcomes. Treatment with OA reduced pre-procedural stenosis from an average of 88% to 35%. Final residual stenosis after adjunctive treatments, typically low-pressure percutaneous transluminal angioplasty (PTA), averaged 10%. Plaque removal was most effective for severely calcified lesions and least effective for soft plaque. Shorter spin times and smaller crown sizes significantly lowered procedural complications, which included slow flow (4.4%), embolism (2.2%), and spasm (6.3%), emphasizing the importance of treatment regimens that focus on plaque modification over maximizing luminal gain. The OA technique optimization, which resulted in a change of device usage across the CONFIRM registry series, corresponded to a lower incidence of adverse events irrespective of calcium burden or co-morbidities. Copyright © 2013 The Authors. Wiley Periodicals, Inc.

  19. Combining LCT tools for the optimization of an industrial process: material and energy flow analysis and best available techniques.

    PubMed

    Rodríguez, M T Torres; Andrade, L Cristóbal; Bugallo, P M Bello; Long, J J Casares

    2011-09-15

Life cycle thinking (LCT) is one of the philosophies that have recently appeared in the context of sustainable development. Some already existing tools and methods, as well as some recently emerged ones, which seek to understand, interpret and design the life of a product, can be included within the scope of the LCT philosophy. That is the case for material and energy flow analysis (MEFA), a tool derived from the industrial metabolism concept. This paper proposes a methodology combining MEFA with another technique derived from sustainable development that also fits the LCT philosophy: BAT (best available techniques) analysis. This methodology, applied to an industrial process, uses MEFA to identify so-called improvable flows, so that appropriate candidate BATs can be selected by BAT analysis. Material and energy inputs, outputs and internal flows are quantified, and sustainable solutions are provided on the basis of industrial metabolism. The methodology was applied to an exemplary roof tile manufacturing plant for validation. Fourteen improvable flows were identified and seven candidate BATs were proposed to reduce them. The proposed methodology provides a way to detect improvable material or energy flows in a process and to select the most sustainable options for improving them. Solutions are proposed for the detected improvable flows, taking into account their effectiveness in improving such flows. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Rigid-body transformation of list-mode projection data for respiratory motion correction in cardiac PET.

    PubMed

    Livieratos, L; Stegger, L; Bloomfield, P M; Schafers, K; Bailey, D L; Camici, P G

    2005-07-21

    High-resolution cardiac PET imaging with emphasis on quantification would benefit from eliminating the problem of respiratory movement during data acquisition. Respiratory gating on the basis of list-mode data has been employed previously as one approach to reduce motion effects. However, it results in poor count statistics with degradation of image quality. This work reports on the implementation of a technique to correct for respiratory motion in the area of the heart at no extra cost for count statistics and with the potential to maintain ECG gating, based on rigid-body transformations on list-mode data event-by-event. A motion-corrected data set is obtained by assigning, after pre-correction for detector efficiency and photon attenuation, individual lines-of-response to new detector pairs with consideration of respiratory motion. Parameters of respiratory motion are obtained from a series of gated image sets by means of image registration. Respiration is recorded simultaneously with the list-mode data using an inductive respiration monitor with an elasticized belt at chest level. The accuracy of the technique was assessed with point-source data showing a good correlation between measured and true transformations. The technique was applied on phantom data with simulated respiratory motion, showing successful recovery of tracer distribution and contrast on the motion-corrected images, and on patient data with C15O and 18FDG. Quantitative assessment of preliminary C15O patient data showed improvement in the recovery coefficient at the centre of the left ventricle.
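    The event-by-event re-assignment can be sketched as a rigid-body transform applied to both endpoints of a line of response (LOR). For brevity the rotation below is about a single axis and all coordinates are invented; a full implementation would use the complete transform estimated from the gated-image registration:

    ```python
    import numpy as np

    def rigid_transform(points, angle_deg, translation):
        """Apply a rigid-body transform (rotation about z, then translation)
        to an array of 3D points, one point per row."""
        a = np.deg2rad(angle_deg)
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
        return points @ R.T + translation

    # A LOR is defined by its two detection points (mm, made-up values); moving
    # both endpoints by the respiratory motion re-assigns the event to a new
    # detector pair in the motion-corrected frame.
    lor = np.array([[-300.0, 10.0, 5.0],
                    [ 295.0, -8.0, 5.0]])
    moved = rigid_transform(lor, angle_deg=3.0, translation=np.array([0.0, 12.0, 4.0]))

    # Sanity check: a rigid transform preserves the LOR length.
    print(np.linalg.norm(lor[1] - lor[0]), np.linalg.norm(moved[1] - moved[0]))
    ```

    In the actual method the transformed line must then be re-binned to the nearest physical detector pair, which is why the pre-correction for detector efficiency and attenuation mentioned above is done first.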

  1. Ultrasonic technique for measuring porosity of plasma-sprayed alumina coatings

    NASA Astrophysics Data System (ADS)

    Parthasarathi, S.; Tittmann, B. R.; Onesto, E. J.

    1997-12-01

Porosity is an important factor in plasma-sprayed coatings, especially ceramic coatings. Excessive porosity can adversely affect the performance of the coated component in various ways. An ultrasonic nondestructive measurement technique has been developed to measure porosity in plasma-sprayed alumina coatings. The technique is generic and can be extended to other ceramic coating systems. To test the technique, freestanding alumina coatings with varying levels of porosity were fabricated via plasma spray. Samples with varying porosity, obtained through innovative fabrication techniques, were used to generate a calibration curve. The ultrasonic velocity in the low-frequency range was found to depend on the density of the freestanding coatings (measured via Archimedean techniques). This dependence is the basis for the development of a technique to measure the density of coatings.
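    The calibration-curve idea reduces to a least-squares fit that is then inverted on new measurements. The velocity-density pairs below are synthetic stand-ins, not the paper's data, and the fully dense reference density is only an assumed value for alumina:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic calibration set: coating density (g/cm^3) vs low-frequency
    # longitudinal velocity (km/s), with a little measurement noise.
    density = np.array([3.2, 3.4, 3.5, 3.7, 3.8])
    velocity = 2.0 + 1.5 * (density - 3.0) + rng.normal(0, 0.02, density.size)

    # Calibration curve: least-squares line, density as a function of velocity.
    slope, intercept = np.polyfit(velocity, density, deg=1)

    def density_from_velocity(v):
        return slope * v + intercept

    # Porosity follows from an assumed fully dense reference (~3.95 g/cm^3).
    rho_theoretical = 3.95
    rho = density_from_velocity(2.9)            # a new ultrasonic measurement
    print(rho, 100 * (1 - rho / rho_theoretical))   # density and % porosity
    ```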

  2. [Use of stimulation techniques in pain treatment].

    PubMed

    Rosted, Palle; Andersen, Claus

    2006-05-15

Stimulation techniques include manipulation, acupuncture, acupressure, physiotherapy, transcutaneous electrical nerve stimulation, reflexotherapy, laser treatment and the epidural stimulation technique. The purpose of this paper is to investigate the scientific evidence for these techniques. The Cochrane Library and Medline were searched for all techniques from 2000 to date. Only randomised controlled studies written in English were included. Search terms such as "acupuncture and neck pain", "shoulder pain", etc. were used. In total, 587 papers were identified for the following conditions: headache, neck pain, shoulder pain, elbow pain, low back pain and knee pain. Of these, 415 papers were excluded; the remaining 172 papers, covering a total of 20,431 patients, form the basis of this study. The effect of acupuncture and of the epidural stimulation technique is scientifically well supported. For the remaining techniques, the scientific evidence is dubious.

  3. Fast online inverse scattering with Reduced Basis Method (RBM) for a 3D phase grating with specific line roughness

    NASA Astrophysics Data System (ADS)

    Kleemann, Bernd H.; Kurz, Julian; Hetzler, Jochen; Pomplun, Jan; Burger, Sven; Zschiedrich, Lin; Schmidt, Frank

    2011-05-01

Finite element methods (FEM) for the rigorous electromagnetic solution of Maxwell's equations are known to be very accurate. They possess a high convergence rate for the determination of near-field and far-field quantities in the scattering and diffraction of light by structures with feature sizes in the range of the light wavelength. We use FEM software for 3D scatterometric diffraction calculations, which allows the application of a brilliant and extremely fast solution method: the reduced basis method (RBM). The RBM constructs a reduced model of the scattering problem from precalculated snapshot solutions, guided self-adaptively by an error estimator. Using the RBM, we achieve an accuracy of about 10^-4 relative to the direct problem with a reduced basis dimension of only 35 precalculated snapshots. This speeds up the calculation of diffraction amplitudes by a factor of about 1000 compared to the conventional solution of Maxwell's equations by FEM. This allows us to reconstruct the three geometrical parameters of our phase grating from "measured" scattering data in a 3D parameter manifold online, within a minute, with the full FEM accuracy available. A sensitivity analysis or the choice of robust measurement strategies, for example, can likewise be carried out online in a few minutes.
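    The offline/online split behind snapshot-based reduced basis methods can be sketched on a toy parameterized linear system standing in for the Maxwell FEM problem (matrices, dimensions, and parameter range below are invented; real RBM also adds greedy snapshot selection and error estimation):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200                                   # full-order ("FEM") dimension
    A0 = np.diag(np.linspace(1.0, 2.0, n))    # toy affine parameterized operator
    A1 = np.diag(np.linspace(0.0, 1.0, n))
    b = rng.standard_normal(n)

    def solve_full(mu):
        return np.linalg.solve(A0 + mu * A1, b)

    # Offline: snapshot solutions at a few parameters, orthonormalized via SVD.
    snapshots = np.column_stack([solve_full(mu) for mu in np.linspace(0.1, 1.0, 8)])
    V = np.linalg.svd(snapshots, full_matrices=False)[0]   # reduced basis, n x 8

    # Online: Galerkin projection yields a tiny 8x8 system for any new parameter.
    def solve_reduced(mu):
        Ar = V.T @ (A0 + mu * A1) @ V
        return V @ np.linalg.solve(Ar, V.T @ b)

    mu_test = 0.37
    u_full = solve_full(mu_test)
    err = np.linalg.norm(u_full - solve_reduced(mu_test)) / np.linalg.norm(u_full)
    print(err)   # small: the parametric solution manifold is low-dimensional
    ```

    The online solve touches only 8x8 objects, which is the source of the orders-of-magnitude speedups reported above.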

  4. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE, to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of the Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
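    The multilevel idea of shifting Monte Carlo work onto cheap correlated approximations can be sketched with a toy model hierarchy in place of the reduced-basis/HDG pair (the "discretization error" model below is invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def P(z, l):
        # Toy level-l approximation of a quantity of interest: the exact model
        # sin(z) plus a "numerical error" term that halves at each level.
        return np.sin(z) + 2.0 ** (-l) * np.cos(z)

    L = 6
    # Many cheap samples on the coarse level, few on the expensive fine levels:
    # the level corrections have small variance, so few samples suffice there.
    n_per_level = [40000 // 4 ** l + 100 for l in range(L + 1)]

    # Level 0 is plain Monte Carlo; levels 1..L add coupled corrections
    # E[P_l - P_{l-1}], evaluated on the SAME random samples at both levels.
    z0 = rng.standard_normal(n_per_level[0])
    estimate = P(z0, 0).mean()
    for l in range(1, L + 1):
        z = rng.standard_normal(n_per_level[l])
        estimate += (P(z, l) - P(z, l - 1)).mean()

    # Approximates E[sin Z] = 0, up to the residual level-L bias and noise.
    print(estimate)
    ```

    Using the same samples for both levels in each correction is what creates the statistical correlation that the variance reduction exploits.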

  5. Early-onset scoliosis: current treatment.

    PubMed

    Cunin, V

    2015-02-01

Early-onset scoliosis, which appears before the age of 10, can be due to congenital vertebral anomalies, neuromuscular diseases, scoliosis-associated syndromes, or idiopathic causes. It can have serious consequences for lung development and significantly reduce life expectancy compared to adolescent scoliosis. Extended posterior fusion must be avoided to prevent the crankshaft phenomenon, uneven growth of the trunk and especially restrictive lung disease. Conservative (non-surgical) treatment is used first. If this fails, fusionless surgery can be performed to delay the final fusion procedure until the patient is older. The gold-standard delaying surgical treatment is the implantation of growing rods, as described by Moe and colleagues in the mid-1980s. These rods, which are lengthened during short surgical procedures at regular intervals, curb the scoliosis progression until the patient reaches an age at which fusion can be performed. Knowledge of this technique and its complications has led to several mechanical improvements, namely rods that can be distracted magnetically on an outpatient basis, without the need for anesthesia. Devices based on the same principle have been designed that preferentially attach to the ribs to specifically address chest wall and spine dysplasia. The second category of surgical devices consists of rods used to guide spinal growth that do not require repeated surgical procedures. The third type of fusionless surgical treatment involves slowing the growth of the scoliosis convexity to help reduce the Cobb angle. The indications are constantly changing. Improvements in surgical techniques and greater surgeon experience may help to reduce the number of complications and make this lengthy treatment acceptable to patients and their families. Long-term effects of surgery on the Cobb angle have not been compared to those of conservative "delaying" treatments. Because the latter have fewer complications associated with them than surgery, they should be the first-line treatment for most cases of early-onset scoliosis. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  6. Technique of Automated Control Over Cardiopulmonary Resuscitation Procedures

    NASA Astrophysics Data System (ADS)

    Bureev, A. Sh; Kiseleva, E. Yu; Kutsov, M. S.; Zhdanov, D. S.

    2016-01-01

The article describes a technique for automated control over cardiopulmonary resuscitation procedures on the basis of acoustic data. The research findings made it possible to determine the most important characteristics of the acoustic signals (sounds of blood circulation in the carotid artery and respiratory sounds) and to propose a method for monitoring the performance of resuscitation procedures. This method can be implemented as part of specialized hardware systems.

  7. Study of teeth phosphorescence detection technique

    NASA Astrophysics Data System (ADS)

    Cai, De-Fang; Wang, Shui-ping; Yang, Zhen-jiang; An, Yuying; Huang, Li-Zi; Liang, Yan

    1995-05-01

On the basis of research into and analysis of the optical properties of teeth, this paper introduces techniques for transforming teeth phosphorescence excited by ultraviolet light into electric signals, together with the subsequent steps of data collection, analysis and processing. Also presented are methods for diagnosing pulp vitality, decayed teeth and, especially, infant caries and pre-caries conditions. By measuring a tooth's temperature, other oral diseases can also be diagnosed.

  8. Comparing rainfall patterns between regions in Peninsular Malaysia via a functional data analysis technique

    NASA Astrophysics Data System (ADS)

    Suhaila, Jamaludin; Jemain, Abdul Aziz; Hamdan, Muhammad Fauzee; Wan Zin, Wan Zawiah

    2011-12-01

Normally, rainfall data are collected on a daily, monthly or annual basis in the form of discrete observations. The aim of this study is to convert these rainfall values into a smooth curve or function that represents the continuous rainfall process in each region, via a technique known as functional data analysis. Since rainfall data show a periodic pattern in each region, a Fourier basis is introduced to capture these variations. Eleven basis functions with five harmonics are used to describe the unimodal rainfall pattern for stations in the East, while five basis functions representing two harmonics are needed to describe the rainfall pattern in the West. Based on the fitted smooth curve, the wet and dry periods as well as the maximum and minimum rainfall values can be determined. Different rainfall patterns are observed among the studied regions based on the smooth curves. Using functional analysis of variance, the test results indicate that significant differences exist in the functional means between the regions. The largest differences in the functional means are found between the East and Northwest regions; these differences are probably due to the effects of topography and geographical location and are mostly influenced by the monsoons. Therefore, the same inputs or approaches might not be useful in modeling the hydrological process for different regions.
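    The smoothing step can be sketched as an ordinary least-squares fit in a Fourier basis; the daily "rainfall" series below is synthetic, and 11 basis functions (a constant plus five sine/cosine harmonic pairs) mirrors the count used for the East-coast stations:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic daily series over one year: annual cycle plus noise.
    t = np.arange(365) / 365.0
    y = 5 + 3 * np.sin(2 * np.pi * t) + 1.5 * np.cos(4 * np.pi * t) \
        + rng.normal(0, 0.8, t.size)

    def fourier_design(t, n_harmonics):
        """Constant plus sine/cosine pairs: 2*n_harmonics + 1 basis functions."""
        cols = [np.ones_like(t)]
        for k in range(1, n_harmonics + 1):
            cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
        return np.column_stack(cols)

    B = fourier_design(t, 5)                    # 11 basis functions
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    smooth = B @ coef                           # the fitted smooth curve

    # Wet/dry extremes are read off the smooth curve, not the noisy data.
    print(int(np.argmax(smooth)), smooth.min(), smooth.max())
    ```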

  9. A new basis set for molecular bending degrees of freedom.

    PubMed

    Jutier, Laurent

    2010-07-21

We present a new basis set as an alternative to Legendre polynomials for the variational treatment of bending vibrational degrees of freedom, in order to greatly reduce the number of basis functions. This basis set is inspired by the harmonic oscillator eigenfunctions but is defined for a bending angle theta in the range [0, pi]. The aim is to bring the basis functions closer to the nature of the final (ro)vibronic wave functions. Our methodology is extended to complicated potential energy surfaces, such as quasilinear or multiequilibrium geometries, by using several free parameters in the basis functions. These parameters allow several density maxima, linear or not, around which the basis functions will be mainly located. Divergences at linearity in integral computations are resolved as for generalized Legendre polynomials. All integral computations required for the evaluation of molecular Hamiltonian matrix elements are given for both the discrete variable representation and the finite basis representation. Convergence tests for the low-energy vibronic states of HCCH(++), HCCH(+), and HCCS are presented.

  10. Decoy-state quantum key distribution with biased basis choice

    PubMed Central

    Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng

    2013-01-01

We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with an optimal basis choice. This scheme simplifies the system and reduces the random-number consumption. From simulation results that take statistical fluctuations into account, we find that in a typical experimental setup the proposed scheme can increase the key rate by at least 45% compared with the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of the signal states. PMID:23948999
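    The sifting gain from a biased basis choice can be seen with a two-line calculation. This covers only the basis-matching probability, not the full decoy-state key-rate analysis in the paper:

    ```python
    # A pulse contributes to the sifted key only when Alice's and Bob's basis
    # choices match. With the Z basis chosen with probability p_z on both
    # sides, the matching probability is p_z^2 + (1 - p_z)^2, which is 1/2 for
    # the symmetric protocol and approaches 1 as p_z -> 1.
    def sift_fraction(p_z):
        return p_z ** 2 + (1 - p_z) ** 2

    print(sift_fraction(0.5))   # 0.5: standard unbiased BB84 sifting
    print(sift_fraction(0.9))   # 0.82: most pulses survive sifting
    ```

    A heavily biased choice also needs far fewer random bits per pulse, which is the random-number saving mentioned in the abstract.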

  11. Decoy-state quantum key distribution with biased basis choice.

    PubMed

    Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng

    2013-01-01

We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with an optimal basis choice. This scheme simplifies the system and reduces the random-number consumption. From simulation results that take statistical fluctuations into account, we find that in a typical experimental setup the proposed scheme can increase the key rate by at least 45% compared with the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of the signal states.

  12. Evaluation of Techniques for Reducing In-Use Automotive Fuel Consumption

    DOT National Transportation Integrated Search

    1981-04-01

    This report presents an assessment of proposed techniques for reducing fuel consumption in the in-use light duty road vehicle fleet. Three general classes of techniques are treated: (1) modification of vehicles, (2) modification of traffic flow, and ...

  13. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu

    2017-07-15

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.

  14. Sparse dynamics for partial differential equations

    PubMed Central

    Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D.; Osher, Stanley

    2013-01-01

    We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms. PMID:23533273
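    A minimal sketch of the approach for the 1-D heat equation in a Fourier basis, with invented parameters: each exact diffusion step is followed by soft thresholding of the coefficients, so only the essential modes stay active:

    ```python
    import numpy as np

    n = 256
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u0 = np.exp(-10 * (x - np.pi) ** 2) + 0.05 * np.sin(40 * x)  # bump + fine detail
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers, FFT ordering

    def soft(c, lam):
        """Soft thresholding of complex coefficients: shrink magnitudes by lam,
        zeroing anything that falls below the threshold."""
        mag = np.abs(c)
        return np.where(mag > lam, c * (1 - lam / np.maximum(mag, 1e-300)), 0.0)

    nu, dt, lam, steps = 0.005, 0.05, 0.02, 100
    c = np.fft.fft(u0)
    for _ in range(steps):
        c = c * np.exp(-nu * k ** 2 * dt)   # exact heat-equation step per mode
        c = soft(c, lam)                    # enforce sparsity in the basis

    u_sparse = np.real(np.fft.ifft(c))
    active = np.count_nonzero(c)
    print(active, 'of', n, 'modes remain active')
    ```

    The Fourier basis is the natural sparsity-promoting basis for constant-coefficient diffusion; the paper's examples use analogous natural bases for convection, shocks, and vorticity.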

  15. Eat dirt and avoid atopy: the hygiene hypothesis revisited.

    PubMed

    Patki, Anil

    2007-01-01

    The explosive rise in the incidence of atopic diseases in the Western developed countries can be explained on the basis of the so-called "hygiene hypothesis". In short, it attributes the rising incidence of atopic dermatitis to reduced exposure to various childhood infections and bacterial endotoxins. Reduced exposure to dirt in the clean environment results in a skewed development of the immune system which results in an abnormal allergic response to various environmental allergens which are otherwise innocuous. This article reviews the historical aspects, epidemiological and immunological basis of the hygiene hypothesis and implications for Indian conditions.

  16. Reduced randomness in quantum cryptography with sequences of qubits encoded in the same basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamoureux, L.-P.; Cerf, N. J.; Bechmann-Pasquinucci, H.

    2006-03-15

    We consider the cloning of sequences of qubits prepared in the states used in the BB84 or six-state quantum cryptography protocol, and show that the single-qubit fidelity is unaffected even if entire sequences of qubits are prepared in the same basis. This result is only valid provided that the sequences are much shorter than the total key. It is of great importance for practical quantum cryptosystems because it reduces the need for high-speed random number generation without impairing the security against finite-size cloning attacks.

  17. Sparse dynamics for partial differential equations.

    PubMed

    Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D; Osher, Stanley

    2013-04-23

    We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms.

  18. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    NASA Astrophysics Data System (ADS)

    Tsilifis, Panagiotis; Ghanem, Roger G.

    2017-07-01

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
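    The rotation idea can be illustrated for the simplest case, a linear quantity of interest in independent standard Gaussians. The QR-based construction below is a hypothetical sketch of a single adapted rotation, not the paper's domain-dependent family of rotations:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
a = rng.standard_normal(d)                 # sensitivity vector of the QoI

# Orthogonal rotation whose first row is a/||a||, built by QR factorization.
Q, _ = np.linalg.qr(np.column_stack([a, rng.standard_normal((d, d - 1))]))
A = Q.T                                    # rows orthonormal; row 0 is ±a/||a||
if A[0] @ a < 0:
    A = -A                                 # fix the sign ambiguity of QR

xi = rng.standard_normal((5000, d))        # i.i.d. standard Gaussian inputs
q = xi @ a                                 # linear quantity of interest
eta = xi @ A.T                             # rotated Gaussian coordinates

# After rotation, the QoI depends (here, exactly) on eta[:, 0] alone, so the
# induced probability measure is concentrated in a one-dimensional subspace.
corr = np.corrcoef(q, eta[:, 0])[0, 1]
```

    For a nonlinear QoI the rotated first coordinate captures most, rather than all, of the variability, which is what motivates the reduced chaos representation.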

  19. 76 FR 72382 - Atlantic Highly Migratory Species; Electronic Dealer Reporting System Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... tuna data on a more real-time basis and more efficiently, which will reduce duplicative data... a more real-time basis, allowing for timely and efficient data collection for management of Atlantic HMS. In order to give sufficient time for dealers to adjust to implementation of the new system and...

  20. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...

  1. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...

  2. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...

  3. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...

  4. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...

  5. Intelligent Traffic Quantification System

    NASA Astrophysics Data System (ADS)

    Mohanty, Anita; Bhanja, Urmila; Mahapatra, Sudipta

    2017-08-01

    Currently, monitoring and controlling city traffic is a major problem in almost all cities worldwide. The vehicular ad hoc network (VANET) technique is an efficient tool to mitigate this problem. Usually, different types of on-board sensors are installed in vehicles to generate messages characterized by different vehicle parameters. In this work, an intelligent system based on a fuzzy clustering technique is developed to reduce the number of individual messages by extracting important features from the messages of a vehicle. The proposed fuzzy clustering technique therefore reduces the traffic load of the network; it also quantifies and reduces congestion.
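    As an illustration of the clustering machinery, here is a minimal fuzzy c-means sketch on hypothetical two-feature message data; the feature names, cluster count, and all parameters are assumptions for illustration, not the paper's system:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership
    matrix U (n_samples x c), with each row summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                     # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                  # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Hypothetical vehicle-message features, e.g. (speed, inter-vehicle gap):
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.5, (50, 2)),
               np.random.default_rng(2).normal(5.0, 0.5, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
```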

  6. Roothaan's approach to solve the Hartree-Fock equations for atoms confined by soft walls: Basis set with correct asymptotic behavior.

    PubMed

    Rodriguez-Bautista, Mariano; Díaz-García, Cecilia; Navarrete-López, Alejandra M; Vargas, Rubicelia; Garza, Jorge

    2015-07-21

    In this report, we use a new basis set for Hartree-Fock calculations on many-electron atoms confined by soft walls. One- and two-electron integrals were implemented in a code based on parallel programming techniques. The results obtained with this proposal for hydrogen and helium atoms were contrasted with other proposals for studying one- and two-electron confined atoms, where we have reproduced or improved the results previously reported. Usually, an atom enclosed by hard walls has been used as a model to study confinement effects on orbital energies; the main conclusion reached with this model is that orbital energies always go up when the confinement radius is reduced. However, such an observation is not necessarily valid for atoms confined by penetrable walls. The main reason behind this result is that for atoms with large polarizability, like beryllium or potassium, the external orbitals are delocalized when the confinement is imposed and, consequently, the internal orbitals behave as if they were in an ionized atom. Naturally, the shell structure of these atoms is modified drastically when they are confined. Delocalization was an argument proposed for atoms confined by hard walls, but it was never verified. In this work, the confinement imposed by soft walls allows the delocalization concept to be analyzed in many-electron atoms.

  7. Roothaan’s approach to solve the Hartree-Fock equations for atoms confined by soft walls: Basis set with correct asymptotic behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez-Bautista, Mariano; Díaz-García, Cecilia; Navarrete-López, Alejandra M.

    2015-07-21

    In this report, we use a new basis set for Hartree-Fock calculations on many-electron atoms confined by soft walls. One- and two-electron integrals were implemented in a code based on parallel programming techniques. The results obtained with this proposal for hydrogen and helium atoms were contrasted with other proposals for studying one- and two-electron confined atoms, where we have reproduced or improved the results previously reported. Usually, an atom enclosed by hard walls has been used as a model to study confinement effects on orbital energies; the main conclusion reached with this model is that orbital energies always go up when the confinement radius is reduced. However, such an observation is not necessarily valid for atoms confined by penetrable walls. The main reason behind this result is that for atoms with large polarizability, like beryllium or potassium, the external orbitals are delocalized when the confinement is imposed and, consequently, the internal orbitals behave as if they were in an ionized atom. Naturally, the shell structure of these atoms is modified drastically when they are confined. Delocalization was an argument proposed for atoms confined by hard walls, but it was never verified. In this work, the confinement imposed by soft walls allows the delocalization concept to be analyzed in many-electron atoms.

  8. Characterization of Ar/N2/H2 middle-pressure RF discharge and application of the afterglow region for nitridation of GaAs

    NASA Astrophysics Data System (ADS)

    Raud, J.; Jõgi, I.; Matisen, L.; Navrátil, Z.; Talviste, R.; Trunec, D.; Aarik, J.

    2017-12-01

    This work characterizes the production and destruction of nitrogen and hydrogen atoms in an RF capacitively coupled middle-pressure discharge in argon/nitrogen/hydrogen mixtures. Input power, electron concentration, electric field strength and mean electron energy were determined on the basis of electrical measurements. Gas temperature and the concentration of Ar atoms in 1s states were determined from spectral measurements. On the basis of the experimentally determined plasma characteristics, the main production and loss mechanisms of H and N atoms are discussed. The plasma-produced radicals were applied for the nitridation and oxide reduction of gallium arsenide in the afterglow region of the discharge. After plasma treatment the GaAs samples were analyzed using the x-ray photoelectron spectroscopy (XPS) technique. Successful nitridation of the GaAs sample was obtained with the Ar/5% N2 discharge. In this gas mixture the N atoms were generated via dissociative recombination of N2+ created by charge transfer from Ar+. Treatment in the Ar/5% N2/1% H2 mixture resulted in a reduction of the oxide signals in the XPS spectra. The negligible formation of GaN in the latter mixture was attributed to the reduced concentration of N atoms, which was, in turn, due to a less efficient mechanism of N-atom production (electron impact dissociation of N2 molecules) and an additional loss channel in the reaction with H2.

  9. Tongue acupuncture in treatment of post-stroke dysphagia

    PubMed Central

    Cai, Haiyan; Ma, Benxu; Gao, Xia; Gao, Huanmin

    2015-01-01

    Tongue acupuncture is a technique that treats illness through acupuncture applied to the tongue. This study was designed to assess its therapeutic effects in the treatment of post-stroke dysphagia. A controlled clinical study was conducted with 180 randomly selected patients with post-stroke dysphagia. The patients were assigned to 2 groups: 90 in the tongue acupuncture group received tongue acupuncture on the basis of conventional medication, and 90 in the conventional acupuncture group received acupuncture on the neck and wrist. The acupoints on the tongue were Juanquan (EX-HN10) (at the midpoint of the dorsal raphe of the tongue) and Haiquan (EX-HN11) (at the midpoint of the sublingual frenulum); the acupoints on the body were Fengchi (GB20) and Neiguan (PC6). The effective rate, the National Institutes of Health Stroke Scale (NIHSS), X-ray videofluoroscopic swallowing function (VFSS) and the incidence rate of pneumonia were used to evaluate efficacy after 4 weeks of treatment. The NIHSS and VFSS scores of the tongue acupuncture group improved significantly more than those of the conventional group (P < 0.01, respectively), the incidence rate of pneumonia decreased (P < 0.01), and the effective rate of the tongue acupuncture group was higher than that of the conventional group (96.67% vs. 66.67%, P < 0.01). On the basis of conventional medication, tongue acupuncture can effectively improve swallowing function, decrease neurological deficit and reduce the incidence of pneumonia in patients with post-stroke dysphagia. PMID:26550374

  10. Routine Chest X-ray: Still Valuable for the Assessment of Left Ventricular Size and Function in the Era of Super Machines?

    PubMed Central

    Morales, Maria-Aurora; Prediletto, Renato; Rossi, Giuseppe; Catapano, Giosuè; Lombardi, Massimo; Rovai, Daniele

    2012-01-01

    Objectives: The development of technologically advanced, expensive techniques has progressively reduced the value of chest X-ray in clinical practice for the assessment of left ventricular (LV) dilatation and dysfunction. Although controversial data are reported on the role of this widely available technique in cardiac assessment, it is known that the cardio-thoracic ratio is predictive of risk of progression in the NYHA Class, hospitalization, and outcome in patients with LV dysfunction. This study aimed to evaluate the reliability of the transverse diameter of heart shadow [TDH] by chest X-ray for detecting LV dilatation and dysfunction as compared to Magnetic Resonance Imaging (MRI) performed for different clinical reasons. Materials and Methods: In 101 patients, TDH was measured in digital chest X-ray and LV volumes and ejection fraction (EF) by MRI, both exams performed within 2 days. Results: A direct correlation between TDH and end-diastolic volumes (r = .75, P<0.0001) was reported. TDH cut-off values of 14.5 mm in females identified LV end-diastolic volumes >150 mL (sensitivity: 82%, specificity: 69%); in males a cut-off value of 15.5 mm identified LV end-diastolic volumes >210 mL (sensitivity: 84%; specificity: 72%). A negative relation was found between TDH and LVEF (r = -.54, P<0.0001). The above cut-off values of TDH discriminated patients with LV systolic dysfunction – LVEF <35% (sensitivity and specificity: 67% and 57% in females; 76% and 59% in males, respectively). Conclusions: Chest X-ray may still be considered a reliable technique in predicting LV dilatation by the accurate measurement of TDH as compared to cardiac MRI. Technologically advanced, expensive, and less available imaging techniques should be performed on the basis of sound clinical requests. PMID:22754739

  11. Routine Chest X-ray: Still Valuable for the Assessment of Left Ventricular Size and Function in the Era of Super Machines?

    PubMed

    Morales, Maria-Aurora; Prediletto, Renato; Rossi, Giuseppe; Catapano, Giosuè; Lombardi, Massimo; Rovai, Daniele

    2012-01-01

    The development of technologically advanced, expensive techniques has progressively reduced the value of chest X-ray in clinical practice for the assessment of left ventricular (LV) dilatation and dysfunction. Although controversial data are reported on the role of this widely available technique in cardiac assessment, it is known that the cardio-thoracic ratio is predictive of risk of progression in the NYHA Class, hospitalization, and outcome in patients with LV dysfunction. This study aimed to evaluate the reliability of the transverse diameter of heart shadow [TDH] by chest X-ray for detecting LV dilatation and dysfunction as compared to Magnetic Resonance Imaging (MRI) performed for different clinical reasons. In 101 patients, TDH was measured in digital chest X-ray and LV volumes and ejection fraction (EF) by MRI, both exams performed within 2 days. A direct correlation between TDH and end-diastolic volumes (r = .75, P<0.0001) was reported. TDH cut-off values of 14.5 mm in females identified LV end-diastolic volumes >150 mL (sensitivity: 82%, specificity: 69%); in males a cut-off value of 15.5 mm identified LV end-diastolic volumes >210 mL (sensitivity: 84%; specificity: 72%). A negative relation was found between TDH and LVEF (r = -.54, P<0.0001). The above cut-off values of TDH discriminated patients with LV systolic dysfunction - LVEF <35% (sensitivity and specificity: 67% and 57% in females; 76% and 59% in males, respectively). Chest X-ray may still be considered a reliable technique in predicting LV dilatation by the accurate measurement of TDH as compared to cardiac MRI. Technologically advanced, expensive, and less available imaging techniques should be performed on the basis of sound clinical requests.
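    The cut-off logic in this study reduces to a simple threshold rule; the sketch below shows how sensitivity and specificity follow from such a rule. The TDH readings and dilatation labels are hypothetical, purely for illustration, not the study data:

```python
import numpy as np

def cutoff_performance(values, diseased, cutoff):
    """Sensitivity and specificity of the rule 'value > cutoff => positive'."""
    pos = values > cutoff
    sens = float(np.mean(pos[diseased]))      # true-positive rate
    spec = float(np.mean(~pos[~diseased]))    # true-negative rate
    return sens, spec

# Hypothetical TDH readings and left-ventricular dilatation labels:
tdh = np.array([13.0, 14.0, 15.0, 16.0, 17.0, 18.0])
dilated = np.array([False, False, False, True, True, True])
sens, spec = cutoff_performance(tdh, dilated, cutoff=14.5)
```

    Sweeping the cutoff over a range of values and plotting sensitivity against 1-specificity is the usual way such thresholds are selected.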

  12. Modeling Complex Chemical Systems: Problems and Solutions

    NASA Astrophysics Data System (ADS)

    van Dijk, Jan

    2016-09-01

    Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRTs). The idea of such CRTs is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT to be widely used in plasma physics was developed in the 1960s and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques previously developed in other fields of science are being adapted to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we provide an overview of the most common reduction techniques and then critically assess the pros and cons of the methods that have gained the most popularity in recent years. Examples are provided for plasmas in argon and carbon dioxide.
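    Of the general-purpose tools mentioned, PCA is the easiest to sketch: given snapshots of species concentrations, it identifies how many directions carry essentially all of the variance. The data below are synthetic (eight "fast" species slaved to two "slow" ones), purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
slow = rng.random((200, 2))                       # two independent slow species
fast = slow @ rng.standard_normal((2, 8)) \
       + 0.01 * rng.standard_normal((200, 8))     # fast species slaved to slow
snapshots = np.hstack([slow, fast])               # 200 states x 10 "species"

Xc = snapshots - snapshots.mean(axis=0)           # center the snapshot matrix
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)       # cumulative variance fraction
n_keep = int(np.searchsorted(energy, 0.999)) + 1  # components for 99.9%
```

    Here PCA recovers that the ten-species system effectively lives on a two-dimensional manifold, which is exactly the kind of reduction a CRT exploits.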

  13. Transportation Network Analysis and Decomposition Methods

    DOT National Transportation Integrated Search

    1978-03-01

    The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...

  14. Spherical space Bessel-Legendre-Fourier localized modes solver for electromagnetic waves.

    PubMed

    Alzahrani, Mohammed A; Gauthier, Robert C

    2015-10-05

    Maxwell's vector wave equations are solved for dielectric configurations that match the symmetry of a spherical computational domain. The electric or magnetic field components and the inverse of the dielectric profile are series expansions defined using basis functions composed of the lowest-order spherical Bessel function, single-index polar-angle-dependent Legendre polynomials, and azimuthal complex exponentials (BLF). The series expressions and the non-traditional form of the basis functions result in an eigenvalue matrix formulation of Maxwell's equations that is relatively compact and accurately solvable on a desktop PC. The BLF matrix returns the frequencies and field profiles of the steady-state modes. The key steps leading to the matrix-populating expressions are provided. The validity of the numerical technique is confirmed by comparing the results of computations to those published using complementary techniques.
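    A one-dimensional analogue conveys the structure of such a formulation: expand the field in a spectral basis, let the dielectric profile enter as a coefficient-coupling matrix, and solve the resulting generalized eigenproblem. The sketch below uses a periodic 1-D Helmholtz problem in a plane-wave basis as an illustrative stand-in for the spherical BLF basis; it is not the authors' formulation:

```python
import numpy as np

N = 16                                      # plane-wave cutoff: k = -N..N
k = np.arange(-N, N + 1)

# Periodic dielectric eps(x) = 3 + cos(x): Fourier coefficients are
# eps_0 = 3 and eps_(+-1) = 0.5, all others zero.
def eps_hat(m):
    return {0: 3.0, 1: 0.5, -1: 0.5}.get(int(m), 0.0)

A = np.diag(k.astype(float) ** 2)           # -d^2/dx^2 in the plane-wave basis
B = np.array([[eps_hat(i - j) for j in k] for i in k])  # multiplication by eps

# Modes satisfy A e = (w/c)^2 B e; since B is symmetric positive definite,
# reduce to a standard symmetric problem via its Cholesky factor.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
w2 = np.linalg.eigvalsh(Linv @ A @ Linv.T)  # (w/c)^2 values, real and >= 0
```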

  15. An empirically derived basis for calculating the area, rate, and distribution of water-drop impingement on airfoils

    NASA Technical Reports Server (NTRS)

    Bergrun, Norman R

    1952-01-01

    An empirically derived basis for predicting the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The concepts involved represent an initial step toward the development of a calculation technique which is generally applicable to the design of thermal ice-prevention equipment for airplane wing and tail surfaces. It is shown that sufficiently accurate estimates, for the purpose of heated-wing design, can be obtained by a few numerical computations once the velocity distribution over the airfoil has been determined. The calculation technique presented is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer.

  16. Dynamic lens and monovision 3D displays to improve viewer comfort.

    PubMed

    Johnson, Paul V; Parnell, Jared Aq; Kim, Joohwan; Saunter, Christopher D; Love, Gordon D; Banks, Martin S

    2016-05-30

    Stereoscopic 3D (S3D) displays provide an additional sense of depth compared to non-stereoscopic displays by sending slightly different images to the two eyes. But conventional S3D displays do not reproduce all natural depth cues. In particular, focus cues are incorrect, causing mismatches between accommodation and vergence: the eyes must accommodate to the display screen to create sharp retinal images even when binocular disparity drives the eyes to converge to other distances. This mismatch causes visual discomfort and reduces visual performance. We propose and assess two new techniques that are designed to reduce the vergence-accommodation conflict and thereby decrease discomfort and increase visual performance. These techniques are much simpler to implement than previous conflict-reducing techniques. The first proposed technique uses variable-focus lenses between the display and the viewer's eyes; the power of the lenses is yoked to the expected vergence distance, thereby reducing the mismatch between vergence and accommodation. The second proposed technique uses a fixed lens in front of one eye and relies on the binocularly fused percept being determined by one eye and then the other, depending on simulated distance. We conducted performance tests and discomfort assessments with both techniques and compared the results to those of a conventional S3D display. The first proposed technique, but not the second, yielded clear improvements in performance and reductions in discomfort. The dynamic-lens approach therefore offers an easily implemented technique for reducing the vergence-accommodation conflict and thereby improving the viewer experience.
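    Yoking lens power to vergence distance is, at heart, thin-lens arithmetic; the sketch below shows the diopter bookkeeping, with the sign convention an assumption for illustration rather than the paper's specification:

```python
def lens_power_diopters(screen_dist_m, vergence_dist_m):
    """Variable-lens power (diopters) that shifts the accommodative demand
    from the physical screen distance to the simulated vergence distance,
    assuming demand = 1/d_screen - P. Positive means a converging lens
    (simulated object beyond the screen relaxes accommodation)."""
    return 1.0 / screen_dist_m - 1.0 / vergence_dist_m

# Screen at 0.5 m (2 D of demand), simulated object at 1 m (1 D): +1 D lens.
p = lens_power_diopters(0.5, 1.0)
```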

  17. Edible holography: the application of holographic techniques to food processing

    NASA Astrophysics Data System (ADS)

    Begleiter, Eric

    1991-07-01

    Reports on current research efforts in the application of holographic techniques to food processing. Through a simple and inexpensive production process, diffractive and holographic effects of color, depth, and motion can be transferred to edible products. Processes are discussed which can provide a competitive advantage in the marketing of a diverse group of sugar- and non-sugar-based consumable products, e.g., candies, chocolates, lollipops, snacks, cereals and pharmaceuticals. Techniques, applications, and products are investigated involving the shift from a chemical to a physical basis for the production of food coloring and decorating.

  18. The Coordinate Orthogonality Check (corthog)

    NASA Astrophysics Data System (ADS)

    Avitabile, P.; Pechinsky, F.

    1998-05-01

    A new technique referred to as the coordinate orthogonality check (CORTHOG) helps to identify how each physical degree of freedom contributes to the overall orthogonality relationship between analytical and experimental modal vectors on a mass-weighted basis. Using the CORTHOG technique together with the pseudo-orthogonality check (POC) clarifies where potential discrepancies exist between the analytical and experimental modal vectors. CORTHOG improves the understanding of the correlation (or lack of correlation) that exists between modal vectors. The CORTHOG theory is presented along with the evaluation of several cases to show the use of the technique.
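    The relationship between the POC and the per-DOF CORTHOG contributions can be sketched as follows; the 3-DOF system, mass and stiffness matrices, and perturbation level are all hypothetical:

```python
import numpy as np

def poc(phi_test, phi_fem, M):
    """Pseudo-orthogonality check: mass-weighted product of test and FEM
    mode shapes; ideally the identity matrix."""
    return phi_test.T @ M @ phi_fem

def corthog_terms(phi_test, phi_fem, M, i, j):
    """Per-DOF contributions to POC entry (i, j); summing them recovers the
    entry, showing which physical DOFs drive agreement or discrepancy."""
    return phi_test[:, i] * (M @ phi_fem[:, j])

# Hypothetical 3-DOF system with a diagonal mass matrix:
M = np.diag([2.0, 1.0, 3.0])
K = np.array([[4.0, -1.0, 0.0], [-1.0, 5.0, -2.0], [0.0, -2.0, 6.0]])
Mh = np.sqrt(M)                               # M^(1/2) for a diagonal mass
_, Y = np.linalg.eigh(np.linalg.inv(Mh) @ K @ np.linalg.inv(Mh))
phi_fem = np.linalg.inv(Mh) @ Y               # mass-normalized analytical modes

# "Experimental" modes: the analytical ones plus small measurement noise.
phi_test = phi_fem + 0.02 * np.random.default_rng(0).standard_normal((3, 3))

P = poc(phi_test, phi_fem, M)
terms = corthog_terms(phi_test, phi_fem, M, 0, 1)
```

    Inspecting `terms` DOF by DOF is the CORTHOG step: a single large contribution flags the physical location responsible for an off-diagonal POC entry.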

  19. State criminal justice telecommunications (STACOM). Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Fielding, J. E.; Frewing, H. K.; Lee, J. J.; Leflang, W. G.; Reilly, N. B.

    1977-01-01

    Techniques for identifying user requirements and network designs for criminal justice networks on a statewide basis are discussed. Topics covered include: methods for determining the data required; data collection and survey; data organization procedures; and methods for forecasting network traffic volumes. The network design techniques developed center on a computerized topology program which enables the user to generate least-cost network topologies that satisfy network traffic requirements, response-time requirements and other specified functional requirements. The techniques were applied in Texas and Ohio, and the results of these studies are presented.

  20. Guidelines and techniques for obtaining water samples that accurately represent the water chemistry of an aquifer

    USGS Publications Warehouse

    Claassen, Hans C.

    1982-01-01

    Obtaining ground-water samples that accurately represent the water chemistry of an aquifer is a complex task. Before a ground-water sampling program can be started, an understanding of the kind of chemical data needed and the potential changes in water chemistry resulting from various drilling, well-completion, and sampling techniques is needed. This report provides a basis for such an evaluation and permits a choice of techniques that will result in obtaining the best possible data for the time and money allocated.
